Prompt Rules of Thumb

Here’s a tight, battle-tested set of “rules of thumb” you can drop into ad-hoc prompts so the AI stays clear, auditable, and useful. They are not specific to XPlain or AAMAS.

Start by saying exactly what you want and the limits. Make the effort match how important the decision is. Before advising, check if there’s enough info and say where it came from. Show the steps you took and call out any contradictions. Finish with what you learned. Don’t pretend to be precise—only use numbers or cut-offs if you can explain why they fit here.

  1. Start with a mini Direction Statement
    Tell the AI the purpose, scope, success criteria, and constraints—explicitly.
    Prompt stub: “Purpose: … Scope: … Success criteria: … Constraints: … Approach: …”
  2. Ask for a transparent reasoning trace (a “Romer trace”)
    Request a step-by-step log of what it did, the questions it asked, the evidence it used, decisions made, and what it learned for next time.
    Prompt stub: “Produce a Romer-style trace: context, steps, evidence, decision, learning capture.”
  3. Gate the evidence before conclusions
    Tell it to pause and verify sources and relevance (“Evidence Gate”) before recommending anything. Ask for citations and provenance.
    Prompt stub: “Run an evidence gate first; list sources, why each is trustworthy, and any gaps.”
  4. Right-size the rigor to the stakes
    If stakes are high or reversibility is low, increase structure (checks, comparisons, formal methods). If low stakes, keep it lightweight.
    Prompt stub: “Stakes: [high/low]. Reversibility: [0–1]. Adjust rigor accordingly and state what changed.”
  5. Enforce a clean sequence: check → analyze → validate → decide → self-audit
    Ask it to: (a) check minimum info is sufficient, (b) analyze, (c) cross-validate, (d) make a decision with rationale, (e) self-audit contradictions.
    Prompt stub: “Follow: Threshold check → Analysis → Cross-validation → Decision → Self-audit (list any contradictions).”
  6. Demand multi-pass validation for important tasks
    Have it re-run with small perturbations (of assumptions or weights) and report stability; flag any large swings.
    Prompt stub: “Do a baseline and two perturbation runs; report what changed and why.”
  7. Declare data sufficiency upfront
    Instruct it to state whether the data are adequate and what’s missing before proceeding—and to stop or qualify results if not.
    Prompt stub: “Assess data sufficiency first; if inadequate, list what is missing and give a safe, qualified output.”
  8. Make traceability non-negotiable
    Require explicit inputs → steps → rules/guidance used → outputs → confidence. This boosts auditability and trust.
    Prompt stub: “For every claim: show input, rule/guidance used, step it occurred in, and confidence.”
  9. Guard against “precision theater”
    Ask it to justify any thresholds or weights and avoid false-rigor when evidence is thin.
    Prompt stub: “Explain why any threshold/weight is appropriate here; avoid unnecessary complexity.”
  10. Build for comparability and convergence
    If you’ll test across models, ask for structure that different AIs can follow, and report expected alignment/variance.
    Prompt stub: “Format output for cross-AI comparison and estimate convergence vs. expected variance.”
  11. Bake in core trustworthiness checks
    Tell it to surface fairness, privacy, transparency, accountability, and security considerations relevant to the task—and the trade-offs.
    Prompt stub: “List trust/risk dimensions impacted (fairness, privacy, transparency, accountability, security), plus trade-offs and mitigations.”
  12. Capture “what was learned” at the end
    Every run should end with lessons and heuristics you can reuse in the next prompt.
    Prompt stub: “Finish with ‘Learning Capture’: 3 insights to reuse or change next time.”
  13. Set norms for agentic behaviour (if asking for multi-step/agent work)
    Ask for explicit goals, protocols, and guardrails so the agent remains verifiable and accountable.
    Prompt stub: “State goals, assumptions, interaction protocol, and governance/rollback rules before acting.”
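
The sequence in rule 5 and the perturbation runs in rule 6 can be sketched as plain prompt-assembly code. This is a minimal sketch using only the standard library; every name in it (`build_prompt`, `multi_pass`, the step labels) is hypothetical, not a required API.

```python
# Minimal sketch of rules 5-6: assemble prompts that enforce the
# check -> analyze -> validate -> decide -> self-audit sequence, plus a
# baseline and perturbation runs. All names here are hypothetical.

SEQUENCE = ["Threshold check", "Analysis", "Cross-validation",
            "Decision", "Self-audit (list any contradictions)"]

def build_prompt(task, stakes="high", perturbation=None):
    """Return a prompt that states the stakes, any perturbed assumption,
    and the required reporting sequence."""
    lines = [
        f"Task: {task}",
        f"Stakes: {stakes}. Adjust rigor accordingly and state what changed.",
    ]
    if perturbation:
        lines.append(f"Perturbed assumption for this run: {perturbation}")
    lines.append("Follow and label each step: " + " -> ".join(SEQUENCE))
    lines.append("Finish with 'Learning Capture': 3 insights to reuse next time.")
    return "\n".join(lines)

def multi_pass(task, perturbations):
    """Baseline plus one run per perturbation (rule 6); you compare the
    answers the prompts produce and report what changed and why."""
    runs = {"baseline": build_prompt(task)}
    for p in perturbations:
        runs[f"perturbed: {p}"] = build_prompt(task, perturbation=p)
    return runs
```

The point of keeping this in code rather than prose is that the same sequence and perturbation set get applied identically on every run, which is what makes the stability comparison in rule 6 meaningful.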

Quick pasteable version you can use as a prompt header:

  • Goal & limits:
  • How important is this? (match effort to stakes)
  • Info check: do we have enough? what’s missing?
  • Steps taken (keep a short trace or log of the activities):
  • Sources (list them or go find them):
  • Contradictions or doubts:
  • Decision/recommendation:
  • What we learned:
  • Any numbers/thresholds used and why:
  • How you want the results reported:
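
If you paste this header often, it can be generated programmatically. A minimal sketch, assuming nothing beyond the standard library; the field labels simply mirror the bullets above and are not a fixed schema, so rename them freely.

```python
# Render the pasteable prompt header, optionally pre-filling some fields.
# The labels below mirror the bullet list; they are not a fixed schema.

HEADER_FIELDS = [
    "Goal & limits",
    "How important is this? (match effort to stakes)",
    "Info check: do we have enough? what's missing?",
    "Steps taken (keep a short trace or log of the activities)",
    "Sources (list them or go find them)",
    "Contradictions or doubts",
    "Decision/recommendation",
    "What we learned",
    "Any numbers/thresholds used and why",
    "How you want the results reported",
]

def prompt_header(prefilled=None):
    """Return the header as text; `prefilled` maps a field label to its
    value, and unfilled fields are left blank for the model to complete."""
    prefilled = prefilled or {}
    return "\n".join(
        f"- {field}: {prefilled.get(field, '')}".rstrip()
        for field in HEADER_FIELDS
    )
```

Usage: `prompt_header({"Goal & limits": "summarise Q3 risks in one page"})` pre-fills the first field and leaves the rest blank for the model to complete.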