The future of insurance coverage for AI-related risks remains unclear. Conversations with key figures in the insurance industry shed some light on where the market is heading and on the role of legislators.
Artificial intelligence is transforming industries, presenting both opportunities and challenges. Among these is the question of liability for AI-related risks, a critical issue for insurers, regulators, and businesses alike. AI liability policies address how responsibility is assigned and mitigated when AI systems cause harm or fail to perform as expected. To further this conversation, the WTW Research Network has worked with Anat Lior, Assistant Professor at Drexel University's Thomas R. Kline School of Law, to produce a report that delves into the emerging practices and frameworks shaping this vital intersection of technology and risk management.
The importance of this research lies in its ability to shed light on an underexplored area of the insurance industry that has far-reaching implications. As AI technologies proliferate across sectors, understanding how to assess and mitigate their risks becomes increasingly urgent. Insurers play a pivotal role in fostering trust and enabling innovation by creating safety nets for AI-driven solutions. This work contributes to that effort by identifying gaps in current policies, highlighting emerging practices, and exploring how regulation and industry initiatives can align to address challenges. By examining these issues through empirical research and stakeholder insights, the piece is positioned to elevate the discourse and provide actionable pathways for progress.
Here we provide a brief overview of some of the key discussion points in the report.
Several insurers have ventured into AI-specific coverage, tailoring products to emerging risks. Munich Re’s aiSure™ pioneered AI insurance in 2018, offering performance-guarantee coverage for AI technologies and addressing market demand for demonstrable trust in AI reliability. Armilla AI provides warranties for AI model performance, backed by reinsurers such as Swiss Re; its risk assessment evaluates training data, testing methods, and usage scenarios. Vouch launched AI insurance in 2024, focusing on start-ups, with coverage for AI-related errors, discrimination claims, IP infringement, and regulatory defense. CoverYourAI offers business-interruption coverage for AI-induced operational delays, with underwriting that emphasizes predictive models rather than historical data. Relm Insurance announced three AI-specific policies in January 2025, and start-ups such as AiShelter and Testudo are developing innovative policies and risk assessment tools for generative AI.
Deloitte forecasts that AI insurance premiums could reach $4.7 billion globally by 2032, reflecting a market growing at roughly 80% annually. This growth creates both opportunities and challenges for insurers navigating AI’s risks and regulatory environment.
Insurers grapple with limited historical data for underwriting AI risks. While some build predictive models, others are retaining traditional practices until the data matures. Companies like Armilla AI develop proprietary datasets to quantify AI risks; their underwriting process assesses specific product dimensions and usage contexts. Generative AI poses distinctive challenges, including false information, bias, and IP violations. While not fundamentally new, these risks are amplified in scale. Many businesses lack tailored AI policies, relying instead on enhanced wording in existing Cyber and Tech E&O products. The absence of AI-specific exclusions often results in “silent coverage,” which can create grey areas in claims. Large insurers are cautiously exploring AI-specific policies, while smaller players innovate more quickly. Existing policies, like Cyber and Tech E&O, are often adapted to AI risks rather than replaced by new standalone products.
AI regulation intersects with insurance through three key lenses. First, insurance law governs how insurers operate, setting guidelines that can either facilitate or constrain innovation; excessive regulation may limit insurers' ability to offer comprehensive coverage. Second, insurers influence policyholder behavior through underwriting criteria, claims management, and loss prevention services. Encouraging safe AI practices aligns with insurers’ broader risk management objectives. Third, legislation such as the EU AI Act influences insurers’ risk appetite and the creation of new insurance markets. High statutory fines and enforcement mechanisms shape underwriting decisions, often leading to stricter coverage terms, higher premiums, and in some cases exclusion from traditional policies, prompting the creation of new ones. The EU AI Act exemplifies how regulation can guide and change risk classification: its tiered categorization of AI applications by risk level (unacceptable, high, limited, and minimal) assists insurers in defining their coverage boundaries.
Insurers navigate several challenges in AI risk management. Scarce claims data hinders accurate risk pricing. Market dynamics show smaller firms innovating faster than larger insurers, focusing on niche markets and predictive tools. Some insurers advocate collaboration with regulators and other stakeholders to enhance coverage consistency. Interview participants suggest a need for balanced regulation that avoids stifling innovation while encouraging responsible AI development. Experimentation and collaboration between insurers, policymakers, and AI developers are vital.
AI insurance represents a burgeoning market with significant potential. However, insurers must develop expertise in AI technologies to improve risk assessment and product design. Balancing innovation with caution, they should adapt existing policies while exploring standalone solutions. Engaging with regulators and stakeholders is essential to shaping a proactive, sustainable insurance framework. The evolving AI landscape requires insurers to align with technological advancements, fostering trust and resilience in the face of uncertainty.
Originally published by Willis Towers Watson (WTW)