
Why assurance-based underwriting can turn AI safety into enforceable financial incentives
By Phil Dawson and Griffin Wahl
Policymakers are working to define guardrails for AI, but guardrails don't live only in statutes and guidance; they're also embedded in the incentives that shape what organizations can confidently ship, buy, and deploy. That's why it's encouraging to see the Center for Democracy & Technology exploring a parallel lever: AI insurance, and the potential for underwriting based on governance, assurance, and evaluation data to incentivize safe AI development and use. In work submitted to NeurIPS 2025, Miranda Bogen and Tina Park argue that private insurance can function as an alternative governance mechanism, shaping what gets insured, on what terms, and what risk mitigation is required to keep coverage in force.
As Bogen and Park put it, the risk mitigation efforts required to secure and retain coverage will inevitably shape the field of AI. That resonates because insurers don't underwrite intention; they underwrite evidence. The question now is what kind of evidence will matter.
The CDT paper maps the insurance ecosystem around the actors who define insurability: actuaries, lawyers, and underwriters. Actuaries build the models that estimate the likelihood and cost of losses, often forced to rely on assumptions and proxy data when historical loss data is thin. Lawyers negotiate terms and define coverage boundaries, especially while precedent and AI-specific regulation are still evolving. Underwriters operationalize it all, deciding when to decline coverage, how to price, what controls to require, and what to exclude, often through bespoke processes because field-wide data remains scarce.
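To make the actuarial piece concrete, here is a minimal, hypothetical frequency-severity sketch of the kind of expected-loss estimate that has to lean on assumptions and proxy data when AI-specific loss history is thin. The incident rates, loss figures, and loading factor below are illustrative assumptions, not figures from the paper or any real book of business.

```python
# Hypothetical frequency-severity estimate of expected annual loss for an AI system.
# All inputs are illustrative assumptions, standing in for the proxy data an
# actuary might fall back on when AI-specific loss history is thin.

incidents_per_year = 0.4           # assumed frequency: expected AI-related claims per year
mean_loss_per_incident = 250_000   # assumed severity: average cost per claim, in dollars
uncertainty_load = 1.5             # loading factor reflecting reliance on thin / proxy data

expected_annual_loss = incidents_per_year * mean_loss_per_incident
loaded_loss = expected_annual_loss * uncertainty_load

print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
print(f"With uncertainty loading: ${loaded_loss:,.0f}")
```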
This is where AI governance becomes concrete. The paper points to the kinds of controls that will increasingly matter in underwriting and in disputes: governance processes, technical tests and assessments, and integrated safeguards.
At Armilla AI, we've built our underwriting around this reality. When underwriting is grounded in structured measurement of model performance, controls, and governance, AI insurance becomes more than a backstop: it rewards prevention by translating quantified risk into coverage terms and pricing that leadership teams can act on. That's why we integrate third-party assurance, evaluation, and certification evidence directly into underwriting, including alignment with ISO/IEC 42001. The result is that governance investments become legible and portable, enabling insurability and creating clear incentives for measurable AI risk management.
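As a rough illustration of how assurance evidence can translate into pricing, here is a minimal, hypothetical sketch in which documented controls earn premium credits. The field names, weights, and baseline rate are invented for the example; they are not Armilla's actual underwriting model or rate structure.

```python
# Hypothetical sketch: translating assurance evidence into a premium modifier.
# Field names, credit weights, and the baseline rate are illustrative assumptions,
# not an actual underwriting model.

BASE_RATE = 0.02  # assumed baseline premium as a fraction of the coverage limit

# Illustrative credits for controls an applicant can evidence with third-party documentation.
EVIDENCE_CREDITS = {
    "iso_42001_aligned": 0.15,       # alignment with ISO/IEC 42001
    "third_party_evaluation": 0.10,  # independent model evaluation on file
    "incident_response_plan": 0.05,  # documented AI incident response process
}

def premium_rate(evidence: dict) -> float:
    """Apply a credit for each evidenced control, capped at a total discount of 30%."""
    credit = sum(weight for key, weight in EVIDENCE_CREDITS.items() if evidence.get(key))
    return BASE_RATE * (1 - min(credit, 0.30))

applicant = {"iso_42001_aligned": True, "third_party_evaluation": True}
print(f"Premium rate: {premium_rate(applicant):.4f}")  # lower than BASE_RATE when controls are evidenced
```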
The paper also highlights a practical tension that many enterprises are already feeling. Organizations often start by looking to general liability coverage, but it is not yet clear whether AI-related risks will be carved out the way other technology risks have been. In a world of shifting exclusions, that uncertainty becomes a governance problem as much as a coverage problem. If AI activity is excluded, organizations will need additional coverage, much as cyber liability evolved into its own line of insurance.
The paper notes that certain AI developers, such as the frontier labs, may be able to self-insure in the absence of clear AI-specific provisions, though that path may not be viable for smaller organizations. The paper also observes that specialized AI insurers are responding with more rigorous underwriting, including model evaluations to confirm insurability. Put those threads together and you get a simple conclusion: the market is moving toward an evidence economy for AI risk, where compliance and coverage alike depend on demonstrable governance rather than stated intention.
This is where AI insurance and AI governance reinforce each other. When organizations can show credible assurance evidence, underwriting becomes clearer. When coverage reflects documented controls, governance stops being a cost center and becomes a measurable advantage. Policymakers are still defining the regulatory guardrails for AI, but in the meantime, the insurance market is already creating its own accountability infrastructure, and that infrastructure will increasingly shape what AI organizations can confidently build, buy, and deploy.