EU AI Act enforcement: August 2, 2026

Compliance evidence
for your AI agents

One-line SDK integration captures every agent action as a tamper-evident, Ed25519-signed audit trail, automatically mapped to EU AI Act Article 12. When the auditor asks, you have the proof.

pip install agentapproved
agent.py
from agentapproved import AgentApprovedHandler

# One line. That's it.
handler = AgentApprovedHandler(
    agent_id="my-agent",
    api_key="ap_...",
)

# Attach to any LangChain agent
agent = create_agent(callbacks=[handler])
agent.invoke({"input": "..."})

# Record human oversight (Art 12(2)(d))
handler.record_oversight(
    reviewer_id="jane",
    decision="approved",
)
handler.end_session()
EU AI Act Article 12 Chain Verified
100%

Full Compliance

All 6 requirements satisfied

Art 12(1) — Automatic logging capability 47 events
Art 12(2)(a) — Period of each use 2 events
Art 12(2)(b) — Reference database 4 events
Art 12(2)(c) — Input data leading to match 8 events
Art 12(2)(d) — Human oversight verification 3 events
Art 12(3) — Post-market monitoring 12 events

The compliance clock is ticking

The EU AI Act is the world's first comprehensive AI regulation. If your agents handle anything "high-risk," you need provable compliance evidence — not a checklist, not a policy doc. Actual, timestamped, tamper-evident audit data.

5

Months until enforcement

EU AI Act Article 12 requires high-risk AI systems to automatically record events throughout their operation. Enforcement begins August 2, 2026. Penalties under the Act run up to €35 million or 7% of global annual turnover.

74%

Of companies unprepared

Most organisations deploying AI agents have zero compliance infrastructure. When the auditor asks for Article 12 evidence, they scramble through CloudWatch logs for weeks.

1

Line of code to fix it

AgentApproved captures everything automatically from your existing LangChain agent callbacks. No instrumentation. No refactoring. One import, one line, done.

Three steps to compliance evidence

From pip install to your first compliance report in under 10 minutes.

1

Install & attach

Add AgentApprovedHandler to your LangChain agent's callbacks. Every LLM call, tool use, RAG retrieval, and agent decision is captured automatically.

2

Events are hash-chained & signed

Every event is linked into a SHA-256 hash chain and signed with Ed25519. Tamper with one event and the entire chain breaks. This is the cryptographic integrity auditors require.

3

Get your compliance score

Events are automatically mapped to EU AI Act Article 12 requirements. See your score, identify gaps, and export a regulation-mapped evidence packet for your auditor.
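Conceptually, the mapping step works like this (a simplified sketch — the requirement keys and event-type names below are illustrative, not the platform's actual schema):

```python
from collections import Counter

# Illustrative mapping from captured event types to Article 12 requirements
REQUIREMENT_EVENTS = {
    "Art 12(1)":    {"llm_call_end", "tool_call_end", "retrieval_end", "agent_action"},
    "Art 12(2)(a)": {"session_start", "session_end"},
    "Art 12(2)(b)": {"retrieval_end"},
    "Art 12(2)(c)": {"retrieval_end"},
    "Art 12(2)(d)": {"oversight_recorded"},
    "Art 12(3)":    {"session_end", "agent_action"},
}

def compliance_score(events: list[dict]) -> float:
    """Fraction of requirements with at least one supporting event."""
    seen = Counter(e["action_type"] for e in events)
    satisfied = sum(
        1 for types in REQUIREMENT_EVENTS.values()
        if any(seen[t] for t in types)
    )
    return satisfied / len(REQUIREMENT_EVENTS)

events = [
    {"action_type": "session_start"},
    {"action_type": "llm_call_end"},
    {"action_type": "retrieval_end"},
    {"action_type": "oversight_recorded"},
    {"action_type": "agent_action"},
    {"action_type": "session_end"},
]
print(f"{compliance_score(events):.0%}")  # → 100%
```

A requirement with no supporting events is a gap to close before the audit; the evidence packet exports the events behind each satisfied requirement.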

Everything your agent does. Recorded.

The SDK hooks into LangChain's callback system. You don't instrument anything — we capture it all.

LLM Calls

Model, prompts, outputs, tokens, latency

Tool Calls

Tool name, inputs, outputs, success/failure

RAG Retrieval

Queries, matched docs, source URIs

Agent Decisions

Action selection, reasoning, tool routing

Human Oversight

Review, approve, reject, override events

Session Lifecycle

Start/end timestamps, event counts, duration
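One way to picture a captured event: a small common envelope plus a type-specific payload. This is a hypothetical shape for illustration only — the SDK's real schema may differ:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentEvent:
    # Common envelope shared by every captured event
    event_id: str
    action_type: str   # e.g. "llm_call_end", "tool_call_end", "retrieval_end"
    timestamp: str
    payload: dict = field(default_factory=dict)

# An LLM call event might carry model, token, and latency details
event = AgentEvent(
    event_id="evt-001",
    action_type="llm_call_end",
    timestamp=datetime.now(timezone.utc).isoformat(),
    payload={"model": "gpt-4o", "prompt_tokens": 412,
             "completion_tokens": 87, "latency_ms": 930},
)
```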

Cryptographic Integrity

Evidence that can't be forged

Every event is linked to its predecessor via SHA-256 hash chain and signed with Ed25519. Delete an event, modify a field, or reorder the sequence — the chain breaks and the tampering is detectable.
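The mechanics can be sketched with the standard library alone. This is a simplified illustration of hash chaining, not the SDK's implementation — Ed25519 signing is omitted, and all field names are hypothetical:

```python
import hashlib
import json

def event_hash(event: dict, previous_hash: str) -> str:
    """Hash the event body together with its predecessor's hash."""
    payload = json.dumps({**event, "previous_hash": previous_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(events: list[dict]) -> list[dict]:
    chain, prev = [], "0" * 64  # genesis hash
    for e in events:
        h = event_hash(e, prev)
        chain.append({**e, "previous_hash": prev, "event_hash": h})
        prev = h
    return chain

def verify_chain(chain: list[dict]) -> bool:
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items()
                if k not in ("previous_hash", "event_hash")}
        if entry["previous_hash"] != prev or event_hash(body, prev) != entry["event_hash"]:
            return False
        prev = entry["event_hash"]
    return True

chain = build_chain([{"action_type": "llm_call_end"},
                     {"action_type": "tool_call_end"}])
assert verify_chain(chain)          # intact chain verifies
chain[0]["action_type"] = "edited"  # tamper with one field...
assert not verify_chain(chain)      # ...and the chain breaks
```

Because each hash covers the predecessor's hash, editing, deleting, or reordering any event invalidates every link from that point forward.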

This isn't just logging. This is the kind of cryptographic proof that satisfies SOX, EU AI Act, and ISO 42001 requirements for evidence integrity.

SHA-256 hash chain links every event
Ed25519 digital signatures (same as SSH keys)
Independent verification — auditor can check without our platform
Server-side signing — you can't forge your own audit trail
integrity.json
{
  "chain_valid": true,
  "event_count": 47,
  "chain_start": "afab1f...b812f5",
  "chain_end":   "e66854...f9e7dc",
  "public_key":  "7bc6ba...15beec",
  "algorithm":   "Ed25519",

  // Each event links to its predecessor
  "events": [
    {
      "event_id":      "019d0b6c-9ee...",
      "action_type":   "llm_call_end",
      "previous_hash": "48177e...324025",
      "event_hash":    "a3f2c1...8c1d4e",
      "signature":     "9c4b2a...f71e03"
    }
  ]
}
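Independent verification can start from the export itself. A minimal link check might look like this (field meanings assumed from the sample above; a complete audit would also recompute each event_hash and verify every Ed25519 signature against public_key, which needs a signing library such as PyNaCl):

```python
def check_export(export: dict) -> bool:
    """Walk the exported events and confirm each one points at its predecessor."""
    events = export["events"]
    if len(events) != export["event_count"]:
        return False
    return all(
        cur["previous_hash"] == prev["event_hash"]
        for prev, cur in zip(events, events[1:])
    )

export = {
    "event_count": 2,
    "events": [
        {"event_id": "evt-1", "previous_hash": "00" * 32, "event_hash": "aa" * 32},
        {"event_id": "evt-2", "previous_hash": "aa" * 32, "event_hash": "bb" * 32},
    ],
}
print(check_export(export))  # → True
```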

Start free. Scale when ready.

Every tier includes EU AI Act Article 12 compliance mapping, hash chain integrity, and evidence export.

Free

For developers evaluating

$0 /month
Get Started
  • 1 agent
  • 1,000 events/month
  • EU AI Act compliance mapping
  • Hash chain + Ed25519 signing
  • Evidence packet export
  • Dashboard access
Most Popular

Pro

For teams shipping agents

$500 /month
Contact Sales
  • Up to 10 agents
  • 100,000 events/month
  • Everything in Free
  • Email + webhook alerting
  • Custom S3 for data residency
  • Priority support

Enterprise

For regulated industries

Custom
Talk to Us
  • Unlimited agents & events
  • On-premise deployment
  • SOC 2 + HIPAA mappings
  • SSO / SAML
  • SLA + dedicated support
  • Custom regulation templates

Your agents are running.
Can you prove they're compliant?

EU AI Act enforcement begins August 2, 2026. Start capturing compliance evidence now — it takes one line of code.

Open source SDK. No credit card required. Free tier is permanent.