Frequently Asked Questions
What is an AI agent ethics framework?
An AI agent ethics framework defines the principles an agent should uphold and the behaviours it should exhibit beyond legal compliance. The Integrity Oath is AgentApproved's ethical framework: six principles (transparency, proportionality, accountability, data minimisation, human dignity, and honest representation) verified through runtime behaviour, not self-reported claims.
How is the Integrity Oath different from regulatory compliance?
Regulations like the EU AI Act set the legal minimum. The Integrity Oath is voluntary — agents opt in to be held to a higher standard. Think of it as the difference between following the law and being someone you'd actually trust. Integrity Oath attestation stacks on top of regulatory compliance, never replaces it.
Who is accountable when an AI agent makes a mistake?
The agent's operator — the person or organisation that deployed it. The Integrity Oath's accountability principle requires a clear chain from agent to operator to human. Attestation evidence provides the audit trail showing exactly what the agent did and who authorised it.
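The agent-to-operator-to-human chain can be pictured as a simple record attached to every action. This is only an illustrative sketch, not AgentApproved's actual data model; all field and function names here are hypothetical:

```python
# Hypothetical sketch of an accountability chain: every agent action
# traces to the operator that deployed the agent, and from the
# operator to a responsible human. Names are illustrative only.
from dataclasses import dataclass

@dataclass
class AgentAction:
    agent_id: str
    operator: str          # organisation that deployed the agent
    authorised_by: str     # human accountable for the deployment
    action: str

def audit_trail(entry: AgentAction) -> str:
    """Render the chain agent -> operator -> human for one action."""
    return f"{entry.agent_id} -> {entry.operator} -> {entry.authorised_by}: {entry.action}"

print(audit_trail(AgentAction("agent-7", "Acme Ltd", "j.doe", "issued refund")))
```

The point of the structure is that no action exists without both an operator and a named human authoriser, so the question "who is accountable?" always has an answer in the evidence itself.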
How do you prove an AI agent is ethical?
Through runtime evidence, not promises. AgentApproved evaluates the agent's actual behaviour — did it disclose its capabilities? Did it minimise data collection? Did it escalate appropriately? The Integrity Oath score is based on what the agent did, not what its documentation claims.
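A minimal sketch of what "scoring behaviour, not claims" might look like. Every name and check below is hypothetical, invented for illustration; it is not AgentApproved's actual scoring API:

```python
# Hypothetical sketch: score an agent on observed runtime behaviour
# rather than documentation claims. Check names are illustrative only.

RUNTIME_CHECKS = {
    "disclosed_capabilities": True,   # observed: agent identified itself and its limits
    "minimised_data": True,           # observed: collected only the fields it needed
    "escalated_to_human": False,      # observed: failed to escalate an edge case
}

def integrity_score(evidence: dict) -> float:
    """Fraction of behavioural checks the agent actually satisfied."""
    return sum(evidence.values()) / len(evidence)

print(f"{integrity_score(RUNTIME_CHECKS):.2f}")  # prints 0.67 (2 of 3 checks passed)
```

The key property is that the inputs are observations of what the agent did at runtime, so the score cannot be improved by rewriting documentation.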
Can the Integrity Oath be combined with other frameworks?
Yes. Use scope full to be attested against the EU AI Act, Singapore's governance framework, and the Integrity Oath simultaneously. The Gold trust tier requires grade A across all frameworks: regulatory compliance plus ethical commitment.
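The tiering rule described above — Gold only when every framework in scope earns grade A — can be sketched as follows. The framework keys and function names are hypothetical, assumed for illustration only:

```python
# Hypothetical sketch of the Gold-tier rule: grade A required across
# every framework in scope. Identifiers are illustrative, not
# AgentApproved's actual data model.

FRAMEWORKS = ("eu_ai_act", "singapore_governance", "integrity_oath")

def trust_tier(grades: dict) -> str:
    """Return 'Gold' only if every framework in scope is graded A."""
    if all(grades.get(f) == "A" for f in FRAMEWORKS):
        return "Gold"
    return "Standard"

print(trust_tier({"eu_ai_act": "A", "singapore_governance": "A", "integrity_oath": "A"}))  # prints Gold
```

Note that a missing framework grade fails the check just like a low one, matching the idea that the Oath stacks on top of regulatory compliance rather than replacing it.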