How to use

The meta-primer does the heavy lifting by guiding your chosen AI through every stage of topical (working) primer construction. Once you state and define your task, it runs an elicitation process that identifies the domain objects, maps their relationships, defines the question space, selects suitable reasoning patterns, proposes scoring scales, and drafts the guardrails. It performs the structural work automatically, asking short confirmation questions only where human judgment is needed. Instead of building a primer by hand, you simply answer what the system asks; the meta-primer assembles, validates, and packages the full topical primer for you.

Here’s a practical, repeatable way to build a topical primer from it.


1. Define the job & use-case

Use the meta-primer’s Job Header section.

  • What is the topic? (e.g. “Internal Audit of GRC Systems”, “UN Trade Corridors”, “Blockchain traceability in supply chains”)
  • What decisions will this primer support?
  • Who is the audience?
  • What outputs are expected? (scores, narratives, red/amber/green flags, recommendations, etc.)
  • Time frame: one-off analysis, recurring monthly, continuous agent?

Fill those in first: they anchor everything else.
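The Job Header can be sketched as a small structure with a completeness check. This is a minimal illustration in Python; the field names (`topic`, `cadence`, etc.) are assumptions, not the meta-primer's actual schema.

```python
# Illustrative Job Header for a topical primer (field names are assumptions).
job_header = {
    "topic": "Internal Audit of GRC Systems",
    "decisions_supported": ["prioritise remediation", "report maturity to the board"],
    "audience": "Head of Internal Audit",
    "expected_outputs": ["scores", "narratives", "RAG flags", "recommendations"],
    "cadence": "recurring monthly",  # one-off | recurring monthly | continuous agent
}

def validate_job_header(header: dict) -> list:
    """Return the required fields that are missing or empty."""
    required = ["topic", "decisions_supported", "audience",
                "expected_outputs", "cadence"]
    return [f for f in required if not header.get(f)]
```

Running the check before anything else enforces the "fill those in first" rule: an empty return list means the header anchors the rest of the build.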


2. Instantiate the domain objects

Use the meta-primer’s Domain / Ontology slots.

For this topic:

  • List the core objects: e.g. Risks, Controls, Actions, Suppliers, Tariffs, Nodes, Agents.
  • For each object define:
    • Essential fields (name, ID, type, status, dates, links…)
    • Relationships (e.g. “Actions link to Risks and Controls”, “Suppliers link to Nodes and Routes”)
  • Note any canonical scales: maturity levels, risk levels, criticality, etc.

This gives the topical primer its “things in the world”.
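One way to capture objects, fields, and relationships is as plain dataclasses, with links expressed as lists of IDs. The classes below are a hypothetical GRC-flavoured sketch, not a prescribed ontology.

```python
from dataclasses import dataclass, field

# Hypothetical domain objects for a GRC-style primer; fields are illustrative.
@dataclass
class Risk:
    id: str
    name: str
    level: str            # canonical risk scale, e.g. "low" | "medium" | "high"

@dataclass
class Control:
    id: str
    name: str
    maturity: int         # canonical 0-4 maturity scale

@dataclass
class Action:
    id: str
    name: str
    status: str
    risk_ids: list = field(default_factory=list)     # "Actions link to Risks..."
    control_ids: list = field(default_factory=list)  # "...and Controls"
```

Keeping relationships as ID lists (rather than object references) makes the ontology easy to serialise and hand to an agent as data.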


3. Lock in sources & constraints

Use the Source Validation & Integrity section.

  • List authorised sources: standards, laws, policies, datasets, prior reports.
  • For each:
    • Status: allowed / reference only / forbidden
    • Licensing or copyright constraints (especially for ISO / UNECE etc.)
  • State hierarchy of authority: e.g. “Local law > Contract > Company policy > Industry guidance”.

This prevents the agent from inventing legal requirements or drawing on arbitrary, unvetted sources.
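A source register plus a single gate function is enough to make this enforceable. The statuses and authority ranking below mirror the examples above; the code shape itself is an assumption.

```python
# Illustrative source register; entry names are examples only.
SOURCES = {
    "ISO 31000": {"status": "reference only",
                  "constraint": "copyright: do not reproduce text"},
    "Company Risk Policy v3": {"status": "allowed", "constraint": None},
    "Random blog posts": {"status": "forbidden", "constraint": None},
}

# Lower index = higher authority:
# Local law > Contract > Company policy > Industry guidance.
AUTHORITY_ORDER = ["local law", "contract", "company policy", "industry guidance"]

def may_cite(source: str) -> bool:
    """A source may be cited only if it is registered and not forbidden."""
    entry = SOURCES.get(source)
    return entry is not None and entry["status"] != "forbidden"
```

Note the default: an unregistered source fails the gate, which is what stops the agent reaching for random material.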


4. Define the question space

Use the Question & Task Classes section.

  • Cluster the topic into 4–8 question families, e.g.:
    • Data quality & completeness
    • Maturity & effectiveness
    • Risk alignment
    • Trade impact & scenario analysis
  • For each family specify:
    • What is in scope (“Assess control maturity using X scale”)
    • What is out of scope (“Do not give legal advice on liability”)
    • Required inputs (datasets, fields, thresholds)
    • Expected form of answer (score + narrative, diagnosis + recommendation, etc.)

This is where the topical primer tells the agent what it’s for.
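Each question family can be written down as a structured spec, so an agent can check its own readiness before answering. The keys below are illustrative, not the meta-primer's terminology.

```python
# One question family expressed as a structured spec (keys are assumptions).
question_families = {
    "maturity_and_effectiveness": {
        "in_scope": ["Assess control maturity using the 0-4 scale"],
        "out_of_scope": ["Do not give legal advice on liability"],
        "required_inputs": ["controls dataset", "maturity field", "thresholds"],
        "answer_form": "score + narrative",
    },
}

def ready_to_answer(family: str, available_inputs: set) -> bool:
    """A family may be answered only when all required inputs are present."""
    spec = question_families[family]
    return set(spec["required_inputs"]) <= available_inputs
```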


5. Bind to reasoning patterns from the meta-primer

Use the Method / Reasoning Templates section.

For each question family, select one or more reasoning patterns already defined in the meta-primer, e.g.:

  • Descriptive profiling (summaries, distributions)
  • Quality scoring (e.g. Incomplete → Improving → Functional → Professional)
  • Comparative trend analysis (Period 1 vs Period 2 vs Period 3…)
  • AHP/ANP style structuring for trade-offs
  • TIOC / TTIOC style temporal pattern analysis
  • Counterfactual or scenario analysis (“What changes if tariff X is added?”)

Then, for each pattern, specify:

  • When it should be used (pre-conditions)
  • What to output (numbers, charts, narrative)
  • Any built-in sense checks (“If data coverage < 60%, flag low confidence and stop escalation”).
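The binding of pre-conditions and sense checks to a pattern can be sketched as a single function. The 60% coverage rule comes from the example above; everything else (record shape, scoring) is a stand-in.

```python
# A quality-scoring pattern with its built-in sense check wired in.
def run_quality_scoring(records: list) -> dict:
    """Score records, flagging low confidence when data coverage < 60%."""
    scored = [r for r in records if r.get("score") is not None]
    coverage = len(scored) / len(records) if records else 0.0
    return {
        "coverage": coverage,
        "mean_score": (sum(r["score"] for r in scored) / len(scored)
                       if scored else None),
        # Sense check: below 60% coverage, flag and stop escalation.
        "low_confidence": coverage < 0.60,
    }
```

The point is that the sense check lives inside the method binding, so no downstream agent can forget to apply it.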

6. Define scoring, scales, and thresholds

Use the Scoring & Evaluation section.

  • Choose or define scales (0–4, 1–5, RAG, quartiles, etc.).
  • Map scales to meaning:
    • e.g. 0 = Not Present, 1 = Incomplete, 2 = Improving, 3 = Functional, 4 = Professional.
  • Set thresholds for concern and praise:
    • e.g. “If more than 30% of actions are overdue, raise a priority flag.”
  • Add data-quality modifiers:
    • e.g. “If linkage density < 0.5, downgrade confidence one level.”

This is where “what good looks like” lives for that topic.
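The scales, thresholds, and modifiers above translate directly into small helper functions. The 0-4 labels, the 30% overdue rule, and the 0.5 linkage-density rule are taken from the examples; the function shapes are an assumption.

```python
# Canonical scale mapping from the examples above.
MATURITY_LABELS = {0: "Not Present", 1: "Incomplete", 2: "Improving",
                   3: "Functional", 4: "Professional"}

def overdue_flag(actions: list) -> bool:
    """Raise a priority flag if more than 30% of actions are overdue."""
    if not actions:
        return False
    overdue = sum(1 for a in actions if a.get("overdue"))
    return overdue / len(actions) > 0.30

def adjust_confidence(confidence: int, linkage_density: float) -> int:
    """Downgrade confidence one level when linkage density < 0.5 (floor 0)."""
    return max(0, confidence - 1) if linkage_density < 0.5 else confidence
```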


7. Wire in agentic behaviour

Use the Agent Roles & Handoffs section (our IFAAMAS/agentics glue).

  • Define roles, e.g.:
    • Ingestor: checks and cleans data
    • Profiler: builds descriptive baseline
    • Analyst: applies scoring & methods
    • Critic: challenge & bias checks
    • Reporter: crafts output for humans
  • Specify:
    • Allowed tools for each agent (code, web, DBs, simulations)
    • Handoff conditions (“Only send to Reporter once Critic passes”)
    • Decision points where the agent may choose between alternative methods based on data quality or job type.

This is how the topical primer becomes agent-ready, not just a static rubric.
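The role chain and the handoff rule can be prototyped as a toy pipeline. The role names follow the list above; the bodies are deliberately trivial placeholders, not real agent implementations.

```python
# Toy pipeline: Ingestor -> Profiler -> Analyst -> Critic gate -> Reporter.
def ingestor(data):    return [d for d in data if d is not None]   # clean data
def profiler(data):    return {"n": len(data), "data": data}       # baseline
def analyst(profile):  return {**profile, "score": min(4, profile["n"])}
def critic(result):    return result["score"] >= 2                 # pass/fail
def reporter(result):  return f"Score {result['score']} on {result['n']} records"

def run_pipeline(raw):
    result = analyst(profiler(ingestor(raw)))
    if not critic(result):  # handoff condition: Reporter only after Critic passes
        return "Escalate: critic rejected the analysis"
    return reporter(result)
```

The essential feature is the gate: output never reaches the human-facing Reporter unless the Critic signs off.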


8. Safety, limits, and red lines

Use the Safety, Ethics & Guardrails section.

For this topic, state clearly:

  • What the agent must not do (e.g. “No legal determinations”, “No personal medical advice”, “No circumventing copyright”).
  • High-risk situations that trigger stop or escalate:
    • e.g. self-harm signals, major legal exposure, systemic control failure.
  • Required disclaimers.

This ensures the topical primer can be used safely without re-arguing basics every time.
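Guardrails work best as data plus a single gate, so they are auditable and cheap to extend. The red lines and triggers below mirror the bullets above; the three-way verdict is an assumed design.

```python
# Red lines and escalation triggers, mirroring the bullets above.
RED_LINES = {"legal determination", "personal medical advice",
             "copyright circumvention"}
ESCALATION_TRIGGERS = {"self-harm signal", "major legal exposure",
                       "systemic control failure"}

def gate(task_type: str, observed_signals: set) -> str:
    """Return 'refuse', 'escalate', or 'proceed' for a requested task."""
    if task_type in RED_LINES:
        return "refuse"
    if observed_signals & ESCALATION_TRIGGERS:
        return "escalate"
    return "proceed"
```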


9. Learning & feedback hooks

Use the Learning / Reflection section.

Define:

  • What should be captured after each run:
    • “What surprised us?”
    • “Which indicators mattered most?”
    • “Where did the method struggle?”
  • Rules for adapting the topical primer:
    • What can be auto-tuned (e.g. thresholds, weights)
    • What needs human approval (e.g. adding new scales, using new legal sources).
  • How insights are added back into:
    • This topical primer
    • The meta-primer (if it reveals a reusable pattern).

This is your “take the learning” mechanism.
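The auto-tune / human-approval split can be made mechanical with a whitelist. The parameter names below are hypothetical; the pattern is simply "whitelisted knobs apply automatically, everything else queues for a person".

```python
# Only whitelisted parameters (thresholds, weights) may be auto-tuned.
AUTO_TUNABLE = {"overdue_threshold", "coverage_threshold"}

def propose_update(parameter: str, new_value, log: list) -> bool:
    """Apply whitelisted updates automatically; queue the rest for approval."""
    if parameter in AUTO_TUNABLE:
        log.append({"parameter": parameter, "value": new_value,
                    "status": "auto-applied"})
        return True
    log.append({"parameter": parameter, "value": new_value,
                "status": "needs human approval"})
    return False
```

The log doubles as the audit trail feeding insights back into the topical primer, and into the meta-primer when a pattern proves reusable.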


10. Package, version, and test

Finally:

  1. Assign a version ID: e.g. XPlain-Recursive.GRC.v1.0 or XPlain-A.Trade.v1.0.
  2. Run three to five worked examples covering:
    • Ideal/clean data
    • Messy/low-quality data
    • Edge case / borderline scenario
  3. Capture:
    • Where the agent did the right thing
    • Where it failed or overreached
    • What you changed in the primer as a result.

Once you’re comfortable with the results, publish it as the topical primer and treat it as the single source of truth for that domain.
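The three-fixture test run can be expressed as a tiny acceptance harness. The version string follows the `XPlain-Recursive.GRC.v1.0` convention from step 1 of the list; `demo_score` and the fixtures are placeholders standing in for the real primer.

```python
# Minimal acceptance harness: clean, messy, and edge-case fixtures.
PRIMER_VERSION = "XPlain-Recursive.GRC.v1.0"

def acceptance_run(score_fn, fixtures: dict) -> dict:
    """fixtures maps scenario -> (input, expected); returns pass/fail per scenario."""
    return {name: score_fn(data) == expected
            for name, (data, expected) in fixtures.items()}

def demo_score(xs):
    """Placeholder scorer: mean of non-missing values, rounded; 0 if empty."""
    vals = [x for x in xs if x is not None]
    return round(sum(vals) / len(vals)) if vals else 0

fixtures = {
    "clean": ([3, 3, 4], 3),   # ideal/clean data
    "messy": ([None, 2], 2),   # messy/low-quality data
    "edge":  ([], 0),          # edge case / borderline scenario
}
```

Scenarios that fail tell you where the agent overreached; each fix you make in response gets recorded against the version ID before publication.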