Meta-Primer Artifact
The Meta-Primer guides the subject expert through creating a purpose-specific Topical Primer.
| Version | Date | Change Notes |
|---|---|---|
| 4.0 | – | 1. Introduce STAR-XAI as the core explanation framework: all analyses adopt Situation → Task → Action → Result, standard across GRC, SCEAS, TIOC, tariffs, UNECE, and VE outputs. 2. Update the Primer to enforce structured explainability: add a Structured Explainability Mandate; the Action layer is restricted to authorised interpretive mechanisms; Results must provide a reproducible trace back to Action steps. 3. Revise the Method Statement Primer: add a STAR-aligned preamble; reformat scoring and maturity models into STAR blocks; include confidence, alternative interpretations, and learning capture; integrate TIOC context into the Situation layer. 4. Add STAR requirements to Virtual Experts: each VE must define its STAR role, and VE outputs must provide concise STAR traces. 5. Standardise STAR formatting across all outputs: applies to SCEAS scoring, management system datasets, tariff reports, UNECE surveys, and Substack papers; mandatory sections are Situation, Task, Action, Result. 6. Update test data and evaluation frameworks: dummy datasets gain STAR-ready metadata; metrics are mapped for explicit Action-layer usage. 7. Strengthen legal integrity and traceability: STAR is used to surface source validation and rule usage, aligning with the new Source Validation & Legal Integrity principle. 8. Incorporate Romer logic: Romer becomes the official Action-layer interpretive method, documented formally in the V4 notes. 9. Enhance "Capture the Learning": every Result includes a learning statement, indexed for future retrieval. 10. Prepare Version 4 rollout documentation: overview page, diagrams, glossary updates, and a STAR-XAI quick reference for executive audiences. |
| 3.0 | – | Planned change note: Meta-Primer v2.x → 3.1.y (blockchain integration). 1. Add a distributed ledger tracking layer: an optional blockchain-backed registry that records Primer versions, romer paths, agent interpretive deltas, heuristic update events, and cross-agent consensus checkpoints, ensuring tamper-evident, append-only traceability across all agents. 2. Add a "Proof of Interpretation" mechanism: a lightweight cryptographic stamp confirming that a specific agent used a specific Primer version, applied a specific romer path, and produced an output trace consistent with core invariants; useful for audits, compliance, and multi-agent trust. 3. Add a cross-agent state commit protocol: defines how agents commit shared facts or convergences to the ledger, ensuring stability in negotiations, rule resolution, and system-level coherence events; this becomes the "shared memory" of the agentic environment. 4. Add privacy & redaction guardrails: allow agents to selectively redact sensitive interpretive artefacts while still committing hashes, signatures, and minimal proofs; maintains transparency without leaking internals. 5. Extend the version split (Recursive / IFAAMAS): both branches get blockchain tracking; XPlain-Recursive logs interpretive reasoning cycles, while XPlain-A supports multi-agent coordination guarantees (same ledger format, different usage patterns). 6. Optional: add a "Primer Registry" concept, a global ledger entry listing authorised Primer versions, deprecation notices, and the lineage of forks and variants; this prevents "Interpretation Drift" across large agent networks. |
| 2.0 | 26/11/25 | 1. Agentic Reasoning Framework Added: a new section defines how an agent interprets goals, constraints, and situational context using the Primer, including formal clarification of interpretive autonomy and structured action selection. 2. Multi-Agent Interaction Model Introduced: the Primer now supports scenarios involving multiple agents; new elements include cooperation protocols, coordination expectations, conflict-resolution rules, and a system-level coherence requirement ensuring shared constraints are honoured across agents. 3. IFAAMAS Terminology Integrated: core IFAAMAS terms have been incorporated and mapped to existing XPlain concepts, with definitions added for "agent," "policy," "environment," "local state," and "global constraint," alongside their interpretive equivalents within the Primer. 4. Romer Mechanism Extended for Agentics: the romer is now defined as the agent's interpretive pathfinding mechanism; additional notes explain how different agents may derive distinct, valid interpretive paths while still operating under the same Primer. 5. Interpretive Autonomy vs System Consistency Framework: a new balancing rule set establishes what is permitted as agent-level variation and what must remain invariant across all agents, preventing drift into incompatible interpretive dialects. 6. Introduction of the Agent Loop: the methodology now includes a formal agent loop consisting of sensing, interpreting, justifying, acting, and internal heuristic updating; the Primer remains stable, only agent heuristics evolve. 7. Version Split Structure Defined: the Primer now recognises two downstream compilations, XPlain-Recursive (native reasoning) and XPlain-A (IFAAMAS-aligned reasoning layer); both share the same core Primer invariants but apply different reasoning scaffolds. 8. Traceability and Logging Requirements Added: agents must now record their romer path, interpretive deltas, and internal heuristic updates, enabling reproducibility, auditability, and multi-agent consistency inspection. 9. Agentic Variation Rules Added: new rules explicitly allow local hypothesis generation and controlled interpretive diversity; convergence mechanisms specify when agents must align with shared system constraints. 10. New Multi-Agent Examples Added: additional examples illustrate divergent but valid interpretations by two agents, negotiation mechanisms for resolving ambiguity, and the Primer operating as a constitutional reference point across agents. |
| 1.0 (was 0.6) | 12/11/25 | H2: Define Stakes & Reversibility Framework. Section: new section "III.A Stakes & Reversibility Classification" (before "IV. Interaction Protocol"). Required content (markdown), a Stakes Classification table: Low (exploratory reasoning, no binding decisions; single source acceptable; rollback optional), Medium (informational decisions, limited scope impact; multiple sources required; rollback recommended), High (binding decisions, significant scope/resource impact; verified sources + expert review; rollback mandatory), Critical (irreversible decisions, safety/compliance impact; audited sources + attestation; rollback mandatory + monitoring). Reversibility Thresholds: Fully Reversible (1.0), decision can be undone without cost or consequence; Mostly Reversible (0.7–0.9), undone with minor cost; Partially Reversible (0.4–0.6), requires significant effort to reverse; Minimally Reversible (0.1–0.3), lasting consequences but some aspects can be mitigated; Irreversible (0.0), cannot be undone. Relationship to reasoning process: higher stakes + lower reversibility → deeper evidence requirements + mandatory rollback planning. Rationale: these fields appear in romer_trace_spec but are never defined, causing interpretation inconsistency. |
| 0.5 and 0.5.1 | – | High priority (should fix before scale testing). H1: Add Model Compatibility Matrix. Section: new section after "II. Conceptual Model". Action: create a "Model Compatibility & Multi-Provider Operations" section. |
| 0.4 | – | Critical fixes: clarified the Romer concept (explicitly states it interprets outputs, it doesn't generate predictions); governance accountability (clear responsibility chains); conflict-resolution processes for reviewer disagreements; early-warning systems for detecting anti-patterns. |
| 0.3.1 | – | v0.3.1 strengthens governance by mandating human oversight in defined cases; extends analytic flexibility beyond AHP/ANP while preserving explainability; embeds self-audit and assurance protocols for continuous trust, supporting second and third lines of defence; positions the Xplain Meta-Primer as more operational, auditable, and standards-aligned than v0.3.0. |
| 0.3 | – | Improvements to sections 5.7, 5.8, and 7.5. |
| 0.2 | – | Incorporates low-level feedback from the readers' panel. |
| 0.1 | – | Assembles all previous versions into a single artifact. |
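The STAR block mandated in v4.0 (Situation, Task, Action, Result, plus confidence and a learning statement, with the Result traceable back to Action steps) can be sketched as a record type. This is a minimal illustration, not part of the specification: the class name, field names, and `trace` method are assumptions for the sketch.

```python
from dataclasses import dataclass


@dataclass
class StarTrace:
    """One STAR-XAI block (illustrative field names, not the official schema)."""
    situation: str        # context, source classification, TIOC inputs
    task: str             # what the job was asked to do
    action: list          # ordered interpretive steps (list of str) forming the trace
    result: str           # outcome, with alternatives noted where relevant
    confidence: float     # required by the v4.0 mandate, here assumed 0.0-1.0
    learning: str = ""    # "Capture the Learning" statement, indexed for retrieval

    def trace(self) -> str:
        """Render the Result's reproducible trace back to the Action steps."""
        steps = "\n".join(f"  {i + 1}. {s}" for i, s in enumerate(self.action))
        return (
            f"Situation: {self.situation}\nTask: {self.task}\n"
            f"Action:\n{steps}\nResult: {self.result} "
            f"(confidence {self.confidence:.2f})\nLearning: {self.learning}"
        )
```

Rendering a `StarTrace` yields the four mandatory sections in order, with each Result line carrying its confidence and learning statement, which is the reproducibility property the v4.0 notes require.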
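The v3.0 "Proof of Interpretation" stamp binds an agent, a Primer version, a romer path, and an output trace into one tamper-evident token. A minimal sketch using a plain SHA-256 digest follows; the function name, argument names, and the choice of canonical JSON encoding are assumptions, and a production version would presumably use signatures rather than a bare hash.

```python
import hashlib
import json


def proof_of_interpretation(agent_id: str, primer_version: str,
                            romer_path: list, output_digest: str) -> str:
    """Return a lightweight stamp binding agent, Primer version, romer path,
    and output trace digest (sketch only; real scheme may differ)."""
    record = json.dumps(
        {
            "agent": agent_id,
            "primer": primer_version,
            "romer_path": romer_path,
            "output": output_digest,
        },
        sort_keys=True,  # canonical key order so the same facts hash identically
    )
    return hashlib.sha256(record.encode("utf-8")).hexdigest()
```

Because the encoding is canonical, two agents committing identical facts produce identical stamps, while any change to the romer path or output trace changes the stamp, which is what makes the ledger entries audit-friendly.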
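The v1.0 stakes and reversibility rule ("higher stakes + lower reversibility → deeper evidence requirements + mandatory rollback planning") combines the two tables into a single decision. A sketch of that combination is below; the function name, the 0.3 escalation threshold, and the exact escalation behaviour are assumptions layered on the published tables.

```python
def rollback_requirement(stakes: str, reversibility: float) -> str:
    """Map a stakes level and a reversibility score (0.0-1.0) to a rollback
    requirement, per the v1.0 tables (escalation rule is an assumption)."""
    levels = ("Low", "Medium", "High", "Critical")
    if stakes not in levels:
        raise ValueError(f"unknown stakes level: {stakes}")

    # Base requirement straight from the Stakes Classification table.
    if stakes == "Critical":
        base = "Mandatory + monitoring"
    elif stakes == "High":
        base = "Mandatory"
    elif stakes == "Medium":
        base = "Recommended"
    else:
        base = "Optional"

    # Assumed escalation: minimally reversible or irreversible decisions
    # (<= 0.3 on the thresholds scale) promote softer requirements to Mandatory.
    if reversibility <= 0.3 and base in ("Optional", "Recommended"):
        base = "Mandatory"
    return base
```

The escalation clause is the interesting design point: the stakes table alone would let a low-stakes but irreversible decision skip rollback planning, and the combined rule closes that gap.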
Primer Assurance Artifact
Effectively a meta-primer that assists an AI Agent in carrying out a separate assurance check on a new Meta-Primer or Topical Primer to confirm its validity.
| Version | Date | Change Note |
|---|---|---|
| 1.0 | November 26th 2025 | Separate assurance check on a new Meta-Primer or Topical Primer to confirm validity. |
Agent Directive Statement
This artifact is used solely by the AI Agent using Meta-Primers to generate Topical Primers; it contains 'absolute rules' for the AI's behaviour.
| Version | Date | Change Notes |
|---|---|---|
| 0.7 | – | 1. Embed Full STAR-XAI Compliance – Operationalises STAR across Situation, Task, Action, Result. – Mandatory uniformity across all domains. 2. Introduce Directive Operating Modes – Strict Mode (literal rule execution) – Hermeneutic Mode (interpretive, romer-enabled) – Directives specify or allow auto-selection. 3. Tighten the Romer Protocol – Rules defining when romer is required, optional, or prohibited. – Action layer must show romer transparency. 4. Add Directive Interaction Pattern (DIP) Rules – Precedence rules for multiple directives. – Clarifies when directives override or defer to the Primer. 5. Add the Source Classifier Requirement – Classes: Authoritative, Permitted, Advisory, Prohibited. – Situation layer must declare source class. 6. Establish the Traceability Envelope – Every output records rules applied, directive references, primer principles, source classifications, and task framing. 7. Integrate Learning Capsules as Mandatory – All jobs generate indexed learning capsules. 8. Add Cognitive Risk Guardrails – Identification and mitigation of cognitive biases: confirmation bias, anchoring, halo effect, etc. 9. Expand Human–AI Role Definitions – Human defines intent, constraints, domains. – System defines method, interpretive path, STAR compliance. 10. Introduce Directive Metadata v2 – Adds assurance level, interpretive freedom, legal risk class. |
| 0.6 | 27/11/25 | Improved user identification and control. |
| 0.5 | 7/11/25 | None |
Artifacts Listing
This artifact is used by AI Agents working with XPlain, as part of the protections regime.
| Version | Date |
|---|---|
| 1.1 | November |
| 1.0 | November |
Terminology
| Version | Date | Change Notes |
|---|---|---|
| 1.2 | 19/11/25 | Errors and omissions. |
| 1.0 | 10/10/25 | None |
Two active meta-primer families:
- XPlain-Recursive — the canonical, theory-agnostic version.
- XPlain-A — the AAMAS-aligned fork.
Both share core principles but differ in world assumptions and interpretive framing.
7. 2025 Onwards (see version tables above)
6. Forking Strategy (XPlain-Recursive + XPlain-A)
The moment we recognised that different theoretical ecosystems need their own meta-primers.
- Native (XPlain-Recursive): general, patent-grade, clean.
- AAMAS (XPlain-A): adds agent-theoretic assumptions.
This establishes the version families.
5. Canonical Meta-Primer (XPlain-Recursive)
The fully structured, principle-driven form.
Features include:
- Rules, guidance, interpretive requirements
- Hermeneutical Reasoning Matrix
- Source validation and legal integrity
- Recursive interpretation requirement
- Learning-capture principle
This is the baseline for all future forks.
4. Meta-Primer v0.1 — The Recursive Insight
Birth of true meta-structure.
Key innovations:
- Self-interpretation
- Recursive flows (“romer” behaviour)
- Interpretive checkpoints
- Primer applies to itself
This unlocked generality and consistency across jobs.
3. Hybrid Primer Protocol (HPP)
Period when the Primer became a protocol rather than a list.
Introduced:
- Interpretive hierarchy
- Rule precedence
- Definition of a “job”
- Plural-source handling
- Early hermeneutical matrix
First time the framework became generalisable beyond GRC.
2. Primer v1 — First Structured Version
The first moment the primer existed as a recognisable object.
It introduced:
- A deliberate layout
- Rules for interpretation
- Output expectations
- Role boundaries
Still mostly tied to GRC, but structurally stable.
1. Skeleton Primer (Unnamed Rules)
Focused collections of recurring behaviours, without formal naming:
- Interpretation habits
- Cross-checking
- Reasoning structure
- Learning capture
This was the “shape” of the primer before the word existed.
0. Proto-Forms (Pre-Primer State)
The earliest phase — intuitive, unstructured.
The emphasis was on patterns of how to think about the work, not what to do.
Foundations included:
- Implicit rules
- Early hermeneutic behaviours
- Pattern extraction
This is where the primer’s DNA originated.
History
| Year | Milestone |
|---|---|
| 2025 (Q4) | Operational version 1.0 completed; portal integration ready for open testing of the meta-primer and topical primers. |
| 2025 (Q4) | Several independent evaluations classed the research as 'experimental' and interesting, highlighting the need for widespread real-world evaluation. |
| 2025 (JUNE) | Provisional Patent granted by US Patent Office for XPlain-R. |
| 2025 (Q2) | Live trials with complex data sets from a leading GRC cloud service provider showed 30–40% improvement in reasoning consistency and traceability. |
| 2025 (Q1) | Formation of five AI readers that interacted as team members, not as competitors. |
| 2025 (Q1) | Ideas of dual-reader documents for human-AI use emerge. |
| 2025 (Q1) | The Meta-Primer is formalised as a knowledge-elicitation framework. |
| 2024 (December) | Underlying framework and use of the Analytic Hierarchy Process (AHP) for judgement-forming over time presented at the ISAHP 2024 conference for scrutiny. |
| 2024 (Q4) | The Primer–Romer model is born: a structured approach to AI reasoning. |
| 2024 (Q3) | Ideas of Reasoning Equivalence emerged. |
| 2023 (Q2) | Some twenty-plus real-world tests and evaluations around bias, reliability, method statements, and processes showed the framework was viable. |
| 2023 (Q1) | Experiments with populating and analysing complex Governance, Risk and Controls (GRC) systems demonstrated the capabilities AI brings to management systems. |
| 2023 (Q1) | Early GRC data experiments expose the “Black Box” / “White Box” problem — how AI can answer but not explain. |