Enforcement: August 2, 2026

EU AI Act Article 12 compliance for AI agents

The EU AI Act requires high-risk AI systems to automatically record events throughout their lifetime. AgentApproved captures it all: tamper-evident, Ed25519-signed, mapped to all 6 Article 12 requirements. When the auditor asks, you have the proof.

Get Started Free · See How It Works
Months until enforcement
5

EU AI Act Article 12 enforcement begins August 2, 2026. Penalties up to 7% of global revenue or €35 million.

Of companies unprepared
74%

Most organisations deploying AI agents have zero compliance infrastructure. When the auditor asks for Article 12 evidence, they scramble.

Lines of code to fix it
0

AgentApproved captures everything automatically through your existing LangChain agent callbacks, so there is no custom instrumentation to write. One import, one line, done.

Article 12 Requirements

All 6 requirements. Automatically satisfied.

AgentApproved maps every logged event to the specific Article 12 sub-requirements. Your compliance score updates in real time as your agent runs.

Art 12(1)

Automatic logging capability

High-risk AI systems shall technically allow for the automatic recording of events (logs) over the lifetime of the system. AgentApproved records every action automatically via LangChain callbacks.
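To illustrate how callback-based capture works, here is a minimal, self-contained sketch. The method names mirror LangChain's `BaseCallbackHandler` hooks, but this handler is hypothetical and is not the AgentApproved SDK:

```python
from datetime import datetime, timezone

class AuditCallbackHandler:
    """Schematic handler: records one audit event per callback invocation."""

    def __init__(self):
        self.events = []

    def _record(self, action_type, payload):
        self.events.append({
            "action_type": action_type,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "payload": payload,
        })

    # Method names mirror LangChain's BaseCallbackHandler hooks
    def on_llm_start(self, serialized, prompts, **kwargs):
        self._record("llm_call_start", {"prompts": prompts})

    def on_llm_end(self, response, **kwargs):
        self._record("llm_call_end", {"response": response})

    def on_tool_start(self, serialized, input_str, **kwargs):
        self._record("tool_call_start", {"input": input_str})

handler = AuditCallbackHandler()
handler.on_llm_start({}, ["What does Article 12 require?"])
handler.on_llm_end("Automatic recording of events over the system's lifetime.")
print(len(handler.events))  # 2: both calls were recorded without touching the agent
```

Because the framework invokes these hooks on every step, recording is automatic: the agent code itself never has to mention logging.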

Art 12(2)(a)

Period of each use

Logging shall include the period of each use (start and end time). Session start and end events are captured automatically with ISO 8601 timestamps.
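A sketch of what session-lifecycle events with ISO 8601 timestamps can look like. The field names are illustrative, not the SDK's actual schema:

```python
from datetime import datetime, timezone

# Hypothetical session-lifecycle events for Art 12(2)(a):
# one event at session start, one at session end, both ISO 8601 in UTC.
session_start = {
    "action_type": "session_start",
    "timestamp": datetime.now(timezone.utc).isoformat(),
}
# ... agent runs ...
session_end = {
    "action_type": "session_end",
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# The period of each use is simply the interval between the two timestamps.
start = datetime.fromisoformat(session_start["timestamp"])
end = datetime.fromisoformat(session_end["timestamp"])
print((end - start).total_seconds() >= 0)  # True
```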

Art 12(2)(b)

Reference database

The reference database against which input data has been checked. RAG retrieval events capture source URIs, document IDs, and matched content references.

Art 12(2)(c)

Input data leading to match

The input data for which the search has led to a match. Retrieval queries and matched documents are captured with full input/output hashes.
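One way "full input/output hashes" can work in practice, sketched with illustrative field names and an invented source URI (neither is the SDK's actual schema):

```python
import hashlib

def sha256_hex(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

query = "What are the Article 12 logging requirements?"
matched_doc = "Article 12(1): high-risk AI systems shall technically allow ..."

retrieval_event = {
    "action_type": "rag_retrieval",
    "input_hash": sha256_hex(query),         # pins down which query was run
    "output_hash": sha256_hex(matched_doc),  # pins down which content matched
    "source_uri": "s3://corpus/eu-ai-act/article-12.txt",  # illustrative
}

# Anyone holding the original texts can recompute and compare the hashes.
print(retrieval_event["input_hash"] == sha256_hex(query))  # True
```

Storing hashes rather than raw text keeps the log compact while still letting an auditor verify exactly which input led to which match.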

Art 12(2)(d)

Human oversight verification

Identification of natural persons involved in verification. Use record_oversight() to log when a human reviews or approves agent output.

Art 12(3)

Post-market monitoring

Logging capabilities shall provide appropriate traceability throughout the AI system's lifecycle. Continuous event capture across every agent session with full audit trail.

Evidence Trail

Complete AI agent audit trail. Automatic.

The SDK hooks into LangChain's callback system. You don't instrument anything — we capture it all.

LLM Calls

Model, prompts, outputs, tokens, latency

Tool Calls

Tool name, inputs, outputs, success/failure

RAG Retrieval

Queries, matched docs, source URIs

Agent Decisions

Action selection, reasoning, tool routing

Human Oversight

Review, approve, reject, override events

Session Lifecycle

Start/end timestamps, event counts, duration
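All six categories can share one normalized event envelope. The shape below is a sketch with illustrative field names, not the SDK's wire format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    """Illustrative envelope: one shape shared by every event category."""
    action_type: str   # e.g. "llm_call_end", "tool_call_start", "oversight"
    payload: dict      # category-specific details (model, tool name, ...)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

events = [
    AuditEvent("llm_call_end", {"model": "gpt-4o", "tokens": 512}),
    AuditEvent("tool_call_end", {"tool": "search", "success": True}),
    AuditEvent("oversight", {"reviewer_id": "jane", "decision": "approved"}),
]
print(sorted(e.action_type for e in events))
```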

Integration

One line of code. Full Article 12 coverage.

Install the SDK, attach the handler, and your agent is automatically generating compliance evidence.

agent.py

```python
from agentapproved import AgentApprovedHandler

# One line. That's it.
handler = AgentApprovedHandler(
    agent_id="my-agent",
    api_key="ap_...",
)

# Attach to any LangChain agent
agent = create_agent(callbacks=[handler])
agent.invoke({"input": "..."})

# Record human oversight (Art 12(2)(d))
handler.record_oversight(
    reviewer_id="jane",
    decision="approved",
)

handler.end_session()
```

Evidence that can't be forged

Every event is linked to its predecessor via SHA-256 hash chain and signed with Ed25519. Delete an event, modify a field, or reorder the sequence — the chain breaks and the tampering is detectable.

This isn't just logging. This is the kind of cryptographic audit trail that satisfies SOX, EU AI Act, and ISO 42001 requirements for evidence integrity.

SHA-256 hash chain links every event
Ed25519 digital signatures (the same algorithm used for modern SSH keys)
Independent verification — auditor can check without our platform
Server-side signing — you can't forge your own audit trail
integrity.json

```json
{
  "chain_valid": true,
  "event_count": 47,
  "chain_start": "afab1f...b812f5",
  "chain_end": "e66854...f9e7dc",
  "public_key": "7bc6ba...15beec",
  "algorithm": "Ed25519",
  // Each event links to its predecessor
  "events": [
    {
      "event_id": "019d0b6c-9ee...",
      "action_type": "llm_call_end",
      "previous_hash": "48177e...324025",
      "event_hash": "a3f2c1...8c1d4e",
      "signature": "9c4b2a...f71e03"
    }
  ]
}
```
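The hash-chain idea can be sketched in a few lines. The canonicalisation (sorted-key JSON concatenated with the predecessor's hash) and the all-zero genesis value are assumptions for illustration, not AgentApproved's actual scheme, and the Ed25519 signing layer is omitted:

```python
import hashlib
import json

def event_hash(event: dict, previous_hash: str) -> str:
    # Assumed canonicalisation: sorted-key JSON plus the predecessor's hash.
    payload = json.dumps(event, sort_keys=True) + previous_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def verify_chain(events: list) -> bool:
    previous = "0" * 64  # assumed genesis value
    for e in events:
        if event_hash(e["data"], previous) != e["event_hash"]:
            return False
        previous = e["event_hash"]
    return True

# Build a small chain, then tamper with one event.
chain = []
prev = "0" * 64
for action in ["llm_call_start", "tool_call", "llm_call_end"]:
    data = {"action_type": action}
    h = event_hash(data, prev)
    chain.append({"data": data, "event_hash": h})
    prev = h

print(verify_chain(chain))  # True
chain[1]["data"]["action_type"] = "deleted"
print(verify_chain(chain))  # False: any edit breaks every hash downstream
```

Because each hash covers the previous one, deleting, modifying, or reordering an event invalidates the rest of the chain, which is what makes tampering detectable.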
Get Started

Article 12 compliance in under 10 minutes

Install the SDK, attach it to your agent, create an API key, and request your first compliance attestation. Free tier is permanent. No credit card required.

1. Install the SDK

pip install agentapproved

2. Add to your LangChain agent

```python
from agentapproved import AgentApprovedHandler

handler = AgentApprovedHandler(
    agent_id="my-agent",
    api_key="ap_...",  # from dashboard
)

agent = create_agent(callbacks=[handler])
agent.invoke({"input": "..."})

# Record human oversight (Art 12(2)(d))
handler.record_oversight(reviewer_id="jane", decision="approved")
handler.end_session()
```

3. Get your API key

Create a free API key from the dashboard.

4. Request your first attestation

Once you have your API key and your agent has logged some events, request an EU AI Act attestation:

curl

```shell
curl -X POST https://app.agentapproved.ai/api/v1/attest \
  -H "Authorization: Bearer ap_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"agent_id": "my-agent", "scope": "eu-ai-act-art12"}'
```

You'll receive a signed attestation certificate with your Article 12 compliance score, or a detailed breakdown of which requirements need more evidence.

View on PyPI · GitHub