One-line SDK integration captures every agent action as a tamper-proof, Ed25519-signed audit trail. Automatically mapped to EU AI Act Article 12. When the auditor asks, you have the proof.
```
pip install agentapproved
```

```python
from agentapproved import AgentApprovedHandler

# One line. That's it.
handler = AgentApprovedHandler(
    agent_id="my-agent",
    api_key="ap_...",
)

# Attach to any LangChain agent
agent = create_agent(callbacks=[handler])
agent.invoke({"input": "..."})

# Record human oversight (Art 12(2)(d))
handler.record_oversight(
    reviewer_id="jane",
    decision="approved",
)

handler.end_session()
```
Full Compliance
All 6 requirements satisfied
The EU AI Act is the world's first comprehensive AI regulation. If your agents handle anything "high-risk," you need provable compliance evidence — not a checklist, not a policy doc. Actual, timestamped, tamper-proof audit data.
Months until enforcement
EU AI Act Article 12 requires automatic logging of every AI system action. Enforcement begins August 2, 2026. Penalties up to 7% of global revenue.
Of companies unprepared
Most organisations deploying AI agents have zero compliance infrastructure. When the auditor asks for Article 12 evidence, they scramble through CloudWatch logs for weeks.
Lines of code to fix it
AgentApproved captures everything automatically from your existing LangChain agent callbacks. No instrumentation. No refactoring. One import, one line, done.
From pip install to your first compliance report in under 10 minutes.
Add AgentApprovedHandler to your LangChain agent's callbacks. Every LLM call, tool use, RAG retrieval, and agent decision is captured automatically.
Every event is linked into a SHA-256 hash chain and signed with Ed25519. Tamper with one event and the entire chain breaks. This is the cryptographic integrity auditors require.
Events are automatically mapped to EU AI Act Article 12 requirements. See your score, identify gaps, and export a regulation-mapped evidence packet for your auditor.
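To make the mapping step concrete, here is a minimal sketch of how captured event types could be bucketed against Article 12(2) requirements and turned into a coverage score. The requirement keys, event-type names, and `coverage` function below are illustrative assumptions for this sketch, not AgentApproved's actual schema or API.

```python
# Illustrative only: this mapping and the event-type strings are assumptions
# for demonstration, not the SDK's actual Article 12 schema.
ARTICLE_12_MAP = {
    "llm_call_end": "12(2)(a) recording the period of each use",
    "tool_call_end": "12(2)(b) reference data checked by the system",
    "rag_retrieval": "12(2)(c) input data that led to a match",
    "human_oversight": "12(2)(d) identification of reviewing persons",
}

def coverage(event_types: list[str]) -> float:
    """Fraction of mapped requirements with at least one supporting event."""
    hit = {ARTICLE_12_MAP[e] for e in event_types if e in ARTICLE_12_MAP}
    return len(hit) / len(set(ARTICLE_12_MAP.values()))

print(coverage(["llm_call_end", "human_oversight"]))  # 0.5
```

A gap report would then simply list the requirement buckets with no supporting events.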
The SDK hooks into LangChain's callback system. You don't instrument anything — we capture it all.
- LLM Calls: model, prompts, outputs, tokens, latency
- Tool Calls: tool name, inputs, outputs, success/failure
- RAG Retrieval: queries, matched docs, source URIs
- Agent Decisions: action selection, reasoning, tool routing
- Human Oversight: review, approve, reject, override events
- Session Lifecycle: start/end timestamps, event counts, duration
Every event is linked to its predecessor via a SHA-256 hash chain and signed with Ed25519. Delete an event, modify a field, or reorder the sequence, and the chain breaks, making the tampering detectable.
This isn't just logging. This is the kind of cryptographic proof that satisfies SOX, EU AI Act, and ISO 42001 requirements for evidence integrity.
```json
{
  "chain_valid": true,
  "event_count": 47,
  "chain_start": "afab1f...b812f5",
  "chain_end": "e66854...f9e7dc",
  "public_key": "7bc6ba...15beec",
  "algorithm": "Ed25519",
  "events": [
    {
      "event_id": "019d0b6c-9ee...",
      "action_type": "llm_call_end",
      "previous_hash": "48177e...324025",
      "event_hash": "a3f2c1...8c1d4e",
      "signature": "9c4b2a...f71e03"
    }
  ]
}
```
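The chain-linking idea behind that verification output can be sketched in a few lines of standard-library Python. The hashing and canonicalisation scheme below is an assumption for illustration, not AgentApproved's actual wire format, and the Ed25519 signing step is omitted because it needs a third-party cryptography library.

```python
import hashlib
import json

def event_hash(event: dict, previous_hash: str) -> str:
    """SHA-256 over the previous hash plus the canonicalised event body.
    Illustrative scheme only; the SDK's real format may differ."""
    payload = previous_hash + json.dumps(event, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_chain(events: list[dict]) -> bool:
    """Recompute every link; any edit, deletion, or reorder breaks it."""
    prev = "0" * 64  # genesis value
    for e in events:
        body = {k: v for k, v in e.items()
                if k not in ("previous_hash", "event_hash")}
        if e["previous_hash"] != prev or e["event_hash"] != event_hash(body, prev):
            return False
        prev = e["event_hash"]
    return True

# Build a two-event chain, then tamper with the first event.
chain, prev = [], "0" * 64
for body in ({"action_type": "llm_call_end"}, {"action_type": "tool_call_end"}):
    h = event_hash(body, prev)
    chain.append({**body, "previous_hash": prev, "event_hash": h})
    prev = h

print(verify_chain(chain))  # True
chain[0]["action_type"] = "tool_call_start"  # tamper with one field
print(verify_chain(chain))  # False
```

Signing each `event_hash` with an Ed25519 private key adds the second property: a tamperer cannot simply recompute the chain, because they cannot forge valid signatures without the key.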
Every tier includes EU AI Act Article 12 compliance mapping, hash chain integrity, and evidence export.
For developers evaluating
For teams shipping agents
For regulated industries
EU AI Act enforcement begins August 2, 2026. Start capturing compliance evidence now — it takes one line of code.
Open source SDK. No credit card required. Free tier is permanent.