What is Agent Attestation?
Agent attestation is cryptographic proof that an AI agent behaves as it claims to. It's a signed certificate, generated from an agent's actual runtime behaviour, that other agents, services, and regulators can independently verify without trusting anyone's word for it.
Think of it like this: a restaurant can tell you their kitchen is clean. An inspection certificate on the wall, backed by evidence and signed by an inspector, actually proves it. Agent attestation is that certificate — but for AI agents, generated automatically from runtime data, and cryptographically verifiable by anyone.
Why Do AI Agents Need Attestation?
AI agents are increasingly autonomous. They browse the web, call APIs, manage databases, process payments, and make decisions — often in chains where one agent's output becomes another agent's input. This creates a trust problem that didn't exist when software just did what it was explicitly programmed to do.
When Agent A calls Agent B to process a payment, how does Agent A know that Agent B:
- Logs every transaction it processes?
- Has human oversight for high-value operations?
- Complies with the EU AI Act's logging requirements?
- Doesn't silently escalate its own permissions?
- Actually does what its documentation claims?
Today, the answer is: it doesn't. Agents interact on trust, or not at all. There's no standardised way for an agent to prove its compliance posture to another agent in real time. This is the gap that attestation fills.
Attestation vs Certification vs Audit
These three terms get confused constantly. They're related but fundamentally different:
| | Certification | Audit | Attestation |
|---|---|---|---|
| What | A human-reviewed badge for an organisation or platform | A periodic examination of records and processes | A real-time, machine-verifiable proof of behaviour |
| When | Annual renewal | Quarterly or annual | Per session or per transaction |
| Who verifies | Accredited human auditors | Human auditors | Any machine or human, cryptographically |
| What it proves | "This organisation met the standard when we checked" | "These records looked correct when we reviewed them" | "This agent behaved this way during this specific session" |
| Freshness | Up to 12 months stale | Up to 3 months stale | Minutes old |
| Cost | $50,000-500,000+ | $10,000-100,000+ | $0.01 per attestation |
| Example | AIUC-1, SOC 2, ISO 27001 | PCI DSS quarterly scan | AgentApproved attestation |
Certification and auditing are not replacements for attestation. They're complementary. An annual certification tells you an organisation had good processes last time someone checked. An attestation tells you a specific agent behaved properly in a specific session, right now, with cryptographic proof.
When the EU AI Act's obligations for high-risk systems take effect in August 2026, organisations will need both: certification to prove their governance processes exist, and attestation to prove their agents actually follow them at runtime.
How Runtime Attestation Works
The attestation process has four steps:
1. Event Capture
During operation, the agent's runtime events are captured: every LLM call, every tool invocation, every RAG retrieval, every decision, every human oversight action. These events are captured automatically through SDK callbacks — one line of code, no changes to agent logic.
```python
from agentapproved import AgentApprovedHandler

handler = AgentApprovedHandler(
    agent_id="payment-processor",
    api_key="ap_your_key",
)

# Attach to any LangChain agent — events captured automatically
agent = create_agent(callbacks=[handler])
```
2. Hash Chain Construction
Every captured event is SHA-256 hashed and chained to the previous event. This creates a tamper-evident log: changing any single event breaks the chain from that point forward. An auditor (human or machine) can verify the entire chain independently.
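The chaining described above can be sketched in a few lines of standard-library Python. The function and field names here are illustrative, not the actual SDK internals:

```python
import hashlib
import json

def chain_events(events):
    """Hash each event and link it to the previous entry's hash."""
    chain = []
    prev_hash = "0" * 64  # genesis value for the first link
    for event in events:
        payload = json.dumps(event, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        chain.append({"event": event, "prev": prev_hash, "hash": entry_hash})
        prev_hash = entry_hash
    return chain

def verify_chain(chain):
    """Recompute every link; a tampered event breaks the chain from that point on."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = expected
    return True
```

Because each hash covers the previous hash, editing any event invalidates every subsequent link, which is what makes the log tamper-evident rather than merely tamper-resistant.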
3. Compliance Mapping
The captured events are mapped against one or more regulatory frameworks. For example, the EU AI Act Article 12 mapper checks whether the agent's runtime evidence satisfies all six logging requirements. The Singapore MGF mapper checks all eight requirements across four governance dimensions. Each requirement is scored individually, and an overall compliance grade (A through F) is calculated.
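A minimal sketch of the scoring step, assuming each requirement is scored 0.0 to 1.0 and the average maps to a letter grade. The cutoffs and requirement names below are illustrative assumptions, not the actual grading scheme:

```python
def grade(scores):
    """Average per-requirement scores (0.0-1.0) and map to a letter grade.
    Cutoffs are illustrative, not the real grading thresholds."""
    avg = sum(scores.values()) / len(scores)
    for cutoff, letter in [(0.9, "A"), (0.8, "B"), (0.7, "C"), (0.6, "D"), (0.5, "E")]:
        if avg >= cutoff:
            return letter
    return "F"
```

A framework mapper would call this with one entry per requirement, for example six entries for an Article 12 assessment or eight for Singapore MGF.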
4. Certificate Signing
The compliance assessment, hash chain root, and metadata are bundled into an attestation certificate and signed with an Ed25519 key. The certificate includes the agent ID, session ID, framework scores, grade, timestamp, and a 24-hour expiry. Anyone with the server's public key can verify the signature — no trust in AgentApproved required.
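The sign-then-verify flow can be sketched with the `cryptography` package's Ed25519 primitives. The certificate fields follow the description above, but the values (session ID, chain root) are placeholders and the key handling is simplified:

```python
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()  # stands in for the server's long-lived key

certificate = {
    "agent_id": "payment-processor",
    "session_id": "sess-001",           # placeholder
    "grade": "A",
    "chain_root": "placeholder-root",   # hash chain root from step 2
    "issued_at": int(time.time()),
    "expires_at": int(time.time()) + 24 * 3600,  # 24-hour expiry
}
payload = json.dumps(certificate, sort_keys=True).encode()
signature = signing_key.sign(payload)

# Anyone holding the public key can verify — no trust in the issuer required
public_key = signing_key.public_key()
public_key.verify(signature, payload)  # raises InvalidSignature if tampered
```

Verification fails for any byte-level change to the signed payload, so a downgraded grade or extended expiry cannot be forged without the private key.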
The key insight: attestation is evidence-first, not promise-first. The agent doesn't claim to be compliant. Its runtime behaviour is captured, mapped, scored, and signed. The certificate is a mathematical proof of what actually happened.
What Gets Attested?
AgentApproved supports multiple compliance frameworks, each mapping different aspects of agent behaviour:
EU AI Act Article 12
6 logging requirements. Mandatory for high-risk systems from August 2026, with penalties of up to 7% of global annual turnover for the most serious violations.
Singapore MGF
8 requirements across 4 governance dimensions. The world's first framework for agentic AI.
Integrity Oath
6 ethical principles. Voluntary commitment proven through behaviour, not promises.
You can request attestation against a single framework, or use scope full to get scored against all frameworks simultaneously. A composite score determines your trust tier: Gold (grade A), Silver (grade B), or Bronze (basic checks pass).
Agent-to-Agent Trust
The most powerful application of attestation is agent-to-agent trust. When one agent needs to work with another, it can request the other's attestation certificate and verify it independently:
- Agent A asks Agent B for its latest attestation certificate
- Agent A fetches the AgentApproved public key
- Agent A verifies the Ed25519 signature mathematically
- Agent A checks the grade, scope, and expiry
- If valid: proceed. If not: refuse, escalate, or request a fresh attestation.
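The policy checks that follow signature verification (step 4 of the list above) can be sketched as a small function. The field names, grade ordering, and thresholds here are illustrative assumptions, not the actual certificate schema:

```python
import time

def accept_certificate(cert, min_grade="B", required_scope="full", now=None):
    """Policy check applied after the Ed25519 signature has been verified.
    Field names are illustrative, not the real certificate schema."""
    now = now if now is not None else time.time()
    if cert["expires_at"] <= now:
        return False  # stale certificate: refuse or request a fresh attestation
    if cert["scope"] != required_scope:
        return False
    grades = ["A", "B", "C", "D", "E", "F"]
    return grades.index(cert["grade"]) <= grades.index(min_grade)
```

Agent A can tune `min_grade` per operation, for example requiring grade A before delegating a payment but accepting grade B for a read-only query.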
No central authority needs to be online. No human needs to approve anything. The trust verification is purely cryptographic and takes milliseconds. This is what makes attestation fundamentally different from certification — it works at machine speed, for machine-to-machine interactions.
Attestation and x402 Payments
AgentApproved attestations are gated by the x402 protocol — a standard for machine-native HTTP payments. When an agent requests an attestation, the server responds with HTTP 402 (Payment Required) along with payment terms. The agent pays $0.01 USDC on Base, and the attestation is returned.
This creates a self-sustaining trust economy: agents pay a fraction of a cent to prove they're trustworthy, and the proof is accepted by any other agent that can verify an Ed25519 signature. No invoices, no contracts, no human approval — just a cryptographic handshake backed by a micropayment.
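The 402 handshake can be illustrated with a fully simulated exchange. Everything below is a sketch of the pay-then-retry pattern, not the x402 wire format: the header names, receipt scheme, and server behaviour are invented for illustration:

```python
# Simulated server: demands payment once, then returns the certificate.
_paid_receipts = set()

def server(headers):
    """Stand-in for the attestation endpoint (illustrative, not real x402)."""
    if headers.get("X-Payment") in _paid_receipts:
        return 200, {}, {"certificate": "signed-attestation"}
    return 402, {"X-Payment-Terms": "0.01 USDC on Base"}, None

def pay(terms):
    """Stand-in for an on-chain micropayment; returns a receipt token."""
    receipt = f"receipt-for-{terms}"
    _paid_receipts.add(receipt)
    return receipt

def fetch_attestation():
    status, hdrs, body = server({})
    if status == 402:                         # Payment Required: settle, then retry
        receipt = pay(hdrs["X-Payment-Terms"])
        status, hdrs, body = server({"X-Payment": receipt})
    return body
```

The client never needs out-of-band billing: the 402 response carries the terms, and the retried request carries the proof of payment.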
When Do You Need Attestation?
Attestation becomes essential when any of these conditions are true:
- Regulatory compliance — the EU AI Act (August 2026) requires logging evidence for high-risk AI systems. Attestation generates this evidence automatically.
- Agent-to-agent interactions — when your agent works with third-party agents and needs verifiable trust signals.
- Audit readiness — when you need to prove to auditors what your agent did during a specific session, with cryptographic integrity.
- Customer trust — when your customers need assurance that the AI agent handling their data follows rules and has oversight.
- Multi-framework compliance — when you operate in multiple jurisdictions (EU + Singapore + US) and need to demonstrate compliance across all of them.
Getting Started
Try your first attestation in under 60 seconds: