Research
We work alongside policymakers, regulators, and domain experts in the EU and the Netherlands to define how AI agents should be assessed, certified, and insured.
Our research turns evolving frameworks like the EU AI Act and the EU AI Pact into practical, machine-readable standards that enterprises can actually deploy.
Focus areas
We translate emerging AI governance into controls, assurance methods, and risk models that can be used in production.
Risk frameworks
Translating the EU AI Act, NIST AI RMF, and ISO/IEC 42001 into measurable controls for autonomous agents.
Agent assurance
Red-team protocols, evals, and continuous monitoring methods that hold up under regulatory scrutiny.
Insurability models
Actuarial models for agent risk: how exposure, autonomy, and safeguards translate into premiums and limits.
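A toy version of such a pricing model can illustrate the idea. Everything here is a hypothetical sketch: the factor names, weights, base rate, and discount are illustrative assumptions, not an actual actuarial model.

```python
# Illustrative only: maps an agent's risk factors to an annual premium.
# Weights, base rate, and the safeguard discount are hypothetical.

def premium(exposure: float, autonomy: float, safeguards: float,
            limit: float = 100_000.0, base_rate: float = 0.02) -> float:
    """Annual premium for a given coverage limit.

    exposure, autonomy in [0, 1]: higher means riskier.
    safeguards in [0, 1]: higher means stronger controls.
    """
    risk = exposure * (1.0 + autonomy)   # autonomy amplifies exposure
    discount = 1.0 - 0.5 * safeguards    # safeguards earn up to 50% off
    return limit * base_rate * risk * discount
```

In this sketch, an agent with moderate exposure (0.6), partial autonomy (0.5), and strong safeguards (0.8) on a 100k limit would pay `premium(0.6, 0.5, 0.8)`, i.e. 1080.0; dropping the safeguards to zero raises the premium to 1800.0, which captures the "better posture, lower premium" relationship.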
Updates
We'll publish notes, working papers, and policy responses here as we go.
First publications coming soon.
We're preparing our first set of public notes on AI agent risk taxonomy and certification scoring. Check back shortly.
FAQ
What is AI agent insurance?
AI agent insurance is a new category of coverage that protects businesses against financial losses caused by autonomous AI agents, including data corruption, unauthorized actions, harmful outputs, and compliance failures.
How does the certification process work?
We start with a risk evaluation of your AI agent: what it can access, what actions it can take, and how it behaves under edge cases. We then score the risk, issue a certification, and back certified agents with a financial guarantee.
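The evaluation steps described above can be sketched in code. The dimension names, weights, and tier thresholds below are illustrative assumptions for exposition, not the real scoring rubric.

```python
# Hypothetical sketch: score three evaluation dimensions (access,
# actions, edge-case behavior) and map the total to a certification
# tier. All weights and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AgentProfile:
    access_scope: float        # 0 (read-only sandbox) .. 1 (broad production access)
    action_scope: float        # 0 (suggest only) .. 1 (irreversible actions)
    edge_case_failures: float  # observed failure rate under adversarial evals

def risk_score(p: AgentProfile) -> float:
    # Weighted sum scaled to [0, 100]; weights are illustrative.
    return 100 * (0.35 * p.access_scope
                  + 0.35 * p.action_scope
                  + 0.30 * p.edge_case_failures)

def certify(p: AgentProfile) -> str:
    score = risk_score(p)
    if score < 25:
        return "certified: low risk"
    if score < 60:
        return "certified: standard risk"
    return "not certified: requires added safeguards"
```

For example, a sandboxed, suggestion-only agent with few eval failures (`AgentProfile(0.2, 0.1, 0.05)`) scores 12.0 and lands in the low-risk tier, while a broadly permissioned agent taking irreversible actions would fall outside certification until safeguards are added.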
Why won't my existing cyber or E&O policy cover this?
Standard cyber and E&O policies explicitly exclude agentic AI or handle it ambiguously. If your AI agent causes harm, your existing policy likely won't pay out. Purpose-built coverage is essential.
How long does certification take?
Most agents can be assessed and certified within seconds. Once certified, coverage is activated immediately. No lengthy underwriting process.
How is pricing determined?
Pricing is based on your agent's risk profile. A stronger security posture means lower premiums.