Ontology · Explainability · Knowledge Representation

Planning Ontology & Explainable AI: PO, maPO, and OMEGA

Bharath Muppasani, Ritirupa Dey, Biplav Srivastava, Vignesh Narayanan · University of South Carolina

95.2% user preference · 4.40/5 clarity rating · 94.39% CQ coverage · 3 published tools

What

This project develops a family of OWL 2 ontologies and supporting tools for making automated planning decisions explainable. The core contribution is the Planning Ontology (PO) — a formal vocabulary covering planning domains, problem instances, planners, plans, and their provenance — that enables SPARQL-queryable explanations for any automated planner.

The project has evolved across three generations: PO (single-agent), maPO (multi-agent extension), and OMEGA (interactive explanation platform for MAPF). Each generation adds new expressive power while remaining backward-compatible with prior tools.


Why

Automated planners can compute optimal plans, but rarely explain why a plan was chosen over alternatives — which actions were necessary, which were optional, why a particular goal ordering was used. This "black box" nature limits trust and adoption, especially in safety-critical domains like robotics and healthcare.


How

PO: Planning Ontology (Single-Agent)

PO is an OWL 2 ontology aligned with the PDDL standard, covering:

Domain & Problem: types, predicates, actions, and preconditions/effects modeled as OWL classes and properties.
Plan Structure: action sequences, temporal orderings, and goal-achievement paths represented as RDF triples.
Planner Metadata: which planner was used, with what parameters, runtime, and search strategy.
PROV-O Provenance: W3C PROV-O integration tracking derivation, attribution, and activity for each plan step.

The PO Tool exposes a web interface where users load a PDDL plan, annotate it with PO metadata, and query explanations in natural language (backed by SPARQL). The tool answers competency questions like "Which actions are causally required?" and "What alternative orderings exist?".
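To make the triple-and-query idea concrete, here is a minimal, self-contained sketch of a plan encoded as subject-predicate-object triples with a SPARQL-style pattern match over them. The `po:` terms and the namespace URI are hypothetical stand-ins (PO's real vocabulary lives in the published OWL file); the `prov:` terms (`wasGeneratedBy`, `wasAssociatedWith`, `wasDerivedFrom`) are genuine W3C PROV-O properties.

```python
# Illustrative sketch only: the po: terms below are hypothetical stand-ins
# for PO's published vocabulary; the prov: terms are real PROV-O properties.
PO = "https://example.org/po#"      # hypothetical namespace
PROV = "http://www.w3.org/ns/prov#"

# A plan and its provenance as subject-predicate-object triples.
triples = {
    (PO + "plan1", PO + "solvesProblem", PO + "logistics-p01"),
    (PO + "plan1", PROV + "wasGeneratedBy", PO + "plannerRun1"),
    (PO + "plannerRun1", PROV + "wasAssociatedWith", PO + "FastDownward"),
    (PO + "plannerRun1", PO + "runtimeSeconds", "0.42"),
    (PO + "plan1", PO + "hasStep", PO + "step1"),
    (PO + "step1", PO + "usesAction", PO + "load-truck"),
    (PO + "step1", PROV + "wasDerivedFrom", PO + "logistics-p01"),
}

def match(s=None, p=None, o=None):
    """Basic graph pattern: None plays the role of a SPARQL variable."""
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# Competency question: "Which planner generated plan1, and how long did it take?"
run = match(PO + "plan1", PROV + "wasGeneratedBy")[0][2]
planner = match(run, PROV + "wasAssociatedWith")[0][2]
runtime = match(run, PO + "runtimeSeconds")[0][2]
print(planner.split("#")[-1], runtime)  # FastDownward 0.42
```

In the actual tool the same chain is a single SPARQL query over the PO-annotated graph; the point here is only that a competency question reduces to a handful of graph patterns.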

Planning Ontology OWL 2 diagram
PO: OWL 2 planning ontology with PROV-O provenance — covers planners, plans, domains, and SPARQL-queryable explanations

maPO: Multi-Agent Extension

maPO extends PO to multi-agent settings, adding conflict and dependency modeling between agents so that explanations can reference how agents' plans interact.

maPO architecture
maPO: multi-agent extension with conflict and dependency modeling
OMEGA system architecture
OMEGA: system architecture connecting MAPF execution to natural-language explanations

OMEGA: Ontology-Driven MAPF Explanation Platform

OMEGA is an interactive demo platform that combines HI-MAPF execution with maPO-grounded explanations: given a multi-agent path finding (MAPF) run, it grounds the execution trace in the ontology and surfaces natural-language explanations of agent behavior.
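The conflict grounding that maPO enables can be sketched in a few lines. The sketch below is hypothetical, not OMEGA's actual API: it detects vertex conflicts (two agents occupying the same cell at the same timestep) in a MAPF run and emits an ontology-style fact plus a templated explanation; the `mapo:` terms are illustrative stand-ins.

```python
# Hypothetical sketch of run-to-explanation grounding: detect a vertex
# conflict in a MAPF run and record it as an ontology-style fact that a
# natural-language template can verbalize. Names are illustrative only.
def vertex_conflicts(paths):
    """paths: {agent: [cell at t=0, cell at t=1, ...]}.
    Returns (time, cell, agent_a, agent_b) for each co-occupied cell."""
    conflicts = []
    horizon = max(len(p) for p in paths.values())
    agents = sorted(paths)
    for t in range(horizon):
        for i, a in enumerate(agents):
            for b in agents[i + 1:]:
                # An agent waits at its final cell once its path ends.
                ca = paths[a][min(t, len(paths[a]) - 1)]
                cb = paths[b][min(t, len(paths[b]) - 1)]
                if ca == cb:
                    conflicts.append((t, ca, a, b))
    return conflicts

paths = {"a1": [(0, 0), (0, 1), (0, 2)],
         "a2": [(1, 1), (0, 1), (1, 1)]}

for t, cell, a, b in vertex_conflicts(paths):
    # Ontology-style fact plus a templated explanation.
    print(f"(:{a} mapo:conflictsWith :{b} mapo:atTime {t})")
    print(f"Explanation: {a} and {b} both occupy cell {cell} at step {t}.")
```

Facts of this shape, once in the graph, become answerable by the same SPARQL machinery used for single-agent explanations.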


Results

A user study (n=25) compared OMEGA explanations against two baselines: raw plan traces and natural-language summaries without ontology grounding. Participants preferred OMEGA's explanations 95.2% of the time and rated their clarity 4.40/5.

PO ontology coverage was evaluated against a curated set of 47 competency questions spanning plan structure, planner choice, and provenance. PO answered 44/47 questions via SPARQL, with the remaining 3 requiring natural-language post-processing.


Publications