XPlain – AAMAS – XAI

How XPlain, AAMAS, and XAI Fit Together

XPlain is the governance-and-assurance layer for real-world AI; AAMAS supplies the agentic constructs and coordination norms; XAI provides interpretability of model and decision behavior. Together they cover what is permitted, how agents act, and why results occur.

Legend: ✓ = primary role · ↔ = complements/joins · — = not a focus

| Concern / Layer | XPlain (Governance & Assurance) | AAMAS (Agentic Constructs) | XAI (Interpretability) |
|---|---|---|---|
| Purpose | ✓ Define rules, evidence gates, pause points, assurance checks | ✓ Define agents' goals/commitments, norms, protocols | ✓ Explain model/decision behavior to humans |
| Goals & Commitments | ↔ Encodes objectives and thresholds in Primers | ✓ BDI (Beliefs–Desires–Intentions), institutional norms | ↔ Explains chosen goals/plan rationale |
| Reasoning & Execution | ✓ Romer governs lenses, rules, escalation, and trace | ✓ Agent plans, interactions, coordination patterns | ↔ Provides step/rule/model explanations |
| Coordination & Protocols | ↔ Process handoffs and audited interactions | ✓ Protocols (e.g., contract-net, norms, messaging) | — Not a protocol framework |
| Human-in-the-Loop | ✓ Mandatory review triggers (confidence, boundary, ethics) | ↔ Escalation/commitment rules integrate with HiTL | ✓ Surfaces reasons so reviewers can act |
| Assurance & Audit | ✓ Confidence floors, drift checks, Romer Trace, reports | ↔ Verifiable commitments, compliance with norms | ✓ Evidence-backed explanations (local/global) |
| Data & Evidence | ✓ Evidence gates, provenance, change budgets | ↔ Beliefs can reference governed evidence | ✓ Shows which inputs/features influenced results |
| Model Runtime / Vendor | ✓ Vendor/runtime-agnostic governance wrapper | ↔ Conceptual layer; can sit over many runtimes | ↔ Techniques span many model types |
| Primary Output | Assured decision + audit trace | Coordinated agent behavior | Human-readable explanations |
| Non-Goals | — Not a model or protocol spec | — Not an assurance or XAI toolkit | — Not a governance or agency framework |
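To make the governance row concrete, here is a minimal sketch of what a confidence-floor gate with a human-in-the-loop pause point and an audit trace could look like. All names here (`governed_decide`, `AuditEvent`, the 0.8 floor) are hypothetical illustrations, not XPlain's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    """One entry in the decision's audit trace (hypothetical schema)."""
    step: str
    detail: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def governed_decide(decide, inputs, confidence_floor=0.8):
    """Wrap an agent's decision function with a confidence-floor gate.

    `decide` returns (decision, confidence); below the floor the wrapper
    pauses for human review instead of acting autonomously, and every
    step is recorded in the trace.
    """
    trace = [AuditEvent("input", repr(inputs))]
    decision, confidence = decide(inputs)
    trace.append(AuditEvent("decision", f"{decision} (confidence={confidence:.2f})"))
    if confidence < confidence_floor:
        trace.append(AuditEvent("escalation", "below floor; human review required"))
        return {"status": "needs_review", "decision": decision, "trace": trace}
    trace.append(AuditEvent("assured", "confidence floor met"))
    return {"status": "assured", "decision": decision, "trace": trace}

# Example: a toy decision function returning a label and a confidence score.
result = governed_decide(lambda x: ("approve", 0.65), {"claim_id": 123})
print(result["status"])  # needs_review
```

The key design point is that the wrapper is runtime-agnostic: any callable that returns a decision plus a confidence score can be governed the same way, which mirrors the "vendor/runtime-agnostic governance wrapper" row above.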

Takeaway: XPlain governs and proves what should happen, AAMAS structures how agents behave and coordinate, and XAI explains why each result is produced, so you can scale from API calls to full agent ecosystems without losing accountability.
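As a concrete example of the coordination-protocol layer, the table mentions contract-net: a manager announces a task, contractors bid, and the manager awards the task to the best bid. A minimal cost-based round can be sketched as follows (the bidding functions and cost semantics are illustrative assumptions, not a full FIPA implementation).

```python
def contract_net(task, contractors):
    """One contract-net round: announce `task`, collect bids, award
    to the lowest-cost bidder. A bid of None means the contractor
    declines. Returns (winner, cost) or None if nobody bids."""
    bids = {}
    for name, bid_fn in contractors.items():
        cost = bid_fn(task)  # each contractor evaluates the announcement
        if cost is not None:
            bids[name] = cost
    if not bids:
        return None
    winner = min(bids, key=bids.get)  # award to the cheapest bid
    return winner, bids[winner]

# Contractors bid a cost for the announced task, or decline with None.
contractors = {
    "agent_a": lambda t: 5.0,
    "agent_b": lambda t: 3.0,
    "agent_c": lambda t: None,  # declines this task
}
print(contract_net("translate report", contractors))  # ('agent_b', 3.0)
```

In the layered picture above, a governance wrapper would sit around the award step (pausing or auditing the allocation), while XAI techniques would explain why a given bid won.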