The Problem

Across most organisations today, AI shows up at three levels: API AI (tightly scoped model-as-a-service calls for a single task), ad-hoc “browser AI” (one-off prompts and copilots embedded in apps), and Agentic AI (goal-driven systems that plan, act, and learn across steps and tools).

Each level raises distinct risks: API AI needs input/output controls and provenance; ad-hoc use needs consistency, auditability, and safe UX affordances; agentic systems need explicit goals, commitments, boundaries, and coordination norms.

XPlain supplies the governance layer: primers, evidence gates, pause points, and assurance rules that make any of the three levels auditable and human-supervised by design.
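The gate-and-pause idea can be sketched in a few lines. This is a toy illustration, not XPlain's actual API: the names `EvidenceGate`, `PausePoint`, and `govern` are assumptions introduced here to show how an evidence check and a human pause point might compose.

```python
from dataclasses import dataclass

@dataclass
class EvidenceGate:
    """Hypothetical gate: an action proceeds only if required evidence is attached."""
    required: set[str]

    def check(self, evidence: dict[str, str]) -> bool:
        return self.required.issubset(evidence)

@dataclass
class PausePoint:
    """Hypothetical pause point: actions above a risk threshold wait for human sign-off."""
    risk_threshold: float

    def needs_human(self, risk: float) -> bool:
        return risk >= self.risk_threshold

def govern(action: str, risk: float, evidence: dict[str, str],
           gate: EvidenceGate, pause: PausePoint) -> str:
    # Evidence first, then risk: a blocked action never reaches the human queue.
    if not gate.check(evidence):
        return "blocked: missing evidence"
    if pause.needs_human(risk):
        return "paused: awaiting human approval"
    return "approved"

gate = EvidenceGate(required={"source", "model_version"})
pause = PausePoint(risk_threshold=0.7)
print(govern("send_refund", 0.4,
             {"source": "ticket#123", "model_version": "v2"}, gate, pause))
# -> approved
```

The ordering is the design point: provenance failures are hard stops, while high-risk but well-evidenced actions are escalated rather than rejected.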

XAI contributes the interpretability layer: model- and decision-level explanations (rationales, feature attributions, traces) so stakeholders can understand why an answer or action emerged.
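A minimal decision-level explanation of the kind XAI covers is the additive attribution of a linear scorer, where each feature's contribution is simply its weight times its value. The feature names and weights below are invented for illustration.

```python
def explain_linear(weights: dict[str, float], features: dict[str, float],
                   bias: float = 0.0):
    """Per-feature additive attributions for a linear scorer:
    score = bias + sum(w_i * x_i). Returns the score and the
    contributions ranked by absolute magnitude (most influential first)."""
    contributions = {name: w * features.get(name, 0.0) for name, w in weights.items()}
    score = bias + sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

# Hypothetical dispute-risk scorer.
weights = {"overdue_days": 0.05, "prior_disputes": 0.4, "account_age_years": -0.1}
features = {"overdue_days": 30, "prior_disputes": 2, "account_age_years": 5}
score, why = explain_linear(weights, features)
print(score)  # 1.8
print(why)    # overdue_days dominates, account age pulls the score down
```

For non-linear models the same "ranked contributions" shape survives, but the attributions must come from a method such as SHAP or integrated gradients rather than raw weights.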

AAMAS provides the agency-and-coordination layer: concepts like BDI (beliefs, desires, intentions), norms/institutions, and protocols that keep multi-step or multi-agent behavior purposeful and compliant.
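The BDI cycle can be shown with a toy deliberation loop: beliefs filter which desires become committed intentions, and only intentions are acted on. Everything here (the `can_*` belief convention, the class shape) is a simplified assumption, not a real AAMAS framework API.

```python
from dataclasses import dataclass, field

@dataclass
class BDIAgent:
    beliefs: dict[str, bool]       # what the agent currently holds true
    desires: list[str]             # candidate goals
    intentions: list[str] = field(default_factory=list)  # goals committed to

    def deliberate(self) -> None:
        # Toy filter: commit only to desires the beliefs say are feasible.
        self.intentions = [d for d in self.desires
                           if self.beliefs.get(f"can_{d}", False)]

    def act(self) -> list[str]:
        # Execute committed intentions in order, dropping each once done.
        done, self.intentions = self.intentions, []
        return [f"did:{g}" for g in done]

agent = BDIAgent(beliefs={"can_reserve_slot": True, "can_send_invoice": False},
                 desires=["reserve_slot", "send_invoice"])
agent.deliberate()
print(agent.act())  # ['did:reserve_slot'] under these toy beliefs
```

The separation matters for governance: intentions, not raw desires, are the commitments a supervisor audits, and the deliberate/act boundary is a natural place to insert the pause points described above.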

Put together, XPlain governs what is permitted and how it’s proven, XAI clarifies why decisions happen, and AAMAS structures how agents commit, coordinate, and escalate: a practical trio that scales from single API calls to fully agentic workflows without losing accountability.