Armilla Review - Key Developments in AI: Safety, Regulation, Insurance, and Inclusivity

This week's Armilla Review highlights significant developments in AI, including the U.S. Commerce Secretary's vision for AI safety and global collaboration, the transformative impact of AI on the insurance industry, and emerging regulations in the EU and U.S. Additionally, it addresses legislative efforts to combat deepfake misinformation, advancements in AI interpretability, and the need for diversity and gender equality in AI development.
May 29, 2024
5 min read

U.S. Commerce Secretary Unveils AI Safety Vision and Global Collaboration Plan

At the AI Seoul Summit, U.S. Secretary of Commerce Gina Raimondo presented a strategic vision for the U.S. Artificial Intelligence Safety Institute (AISI) under President Biden's leadership. The vision emphasizes the importance of AI safety and outlines steps to advance AI safety science, foster responsible AI innovation, and support national and global coordination. Raimondo announced plans for a global network of AI Safety Institutes, with a future convening in San Francisco, aiming to collaborate on strategic research and ensure AI systems' safety, security, and trustworthiness worldwide.

Source: U.S. Department of Commerce

Testing Times Ahead: Meeting the EU AI Act’s Requirements for Model Testing

🇪🇺 The EU AI Act mandates extensive model testing and risk assessments for high-risk and general-purpose AI systems, emphasizing the need for bias testing, accuracy, robustness, and cybersecurity evaluations. Companies must also establish quality management systems and maintain detailed records of their evaluation activities. Compliance challenges include data access, testing scalability, and a lack of clear guidelines. Significant investments in testing capabilities and the use of third-party evaluators will be crucial for companies to meet these stringent requirements and avoid severe penalties.

For key insights, read our latest blog post by Stephanie Cairns, Data Scientist and Responsible AI Assessment Lead, and Philip Dawson, Head of Global AI Policy at Armilla AI.

Insurance Industry Faces Paradigm Shift with HUD Ruling on Fair Housing Act

A recent court ruling has mandated that the Fair Housing Act's requirements take precedence over actuarial data in the insurance industry. The American Property Casualty Insurance Association (APCIA) had argued for over a decade that insurance rates should be based solely on actuarial data to reflect risk accurately. However, the court sided with the federal Department of Housing and Urban Development (HUD), enabling homeowners to sue insurers for disparate-impact claims. This decision places fairness of access over merit, compelling insurers to balance actuarial data with anti-discrimination laws. The ruling is expected to increase costs for insurers and consumers, and it signals a broader shift towards prioritizing equitable access in insurance practices.

Source: Ethics and Insurance

AI Transforms Insurance Industry: Balancing Benefits and Risks

The insurance industry is undergoing a significant transformation due to the increasing adoption of AI, necessitating rapid adaptation to manage new risks such as algorithmic bias and decision-making errors. The Swiss Re Institute's report highlights that while AI offers opportunities for tailored insurance solutions, it also introduces complex challenges, particularly in sectors like healthcare and transport. The report also cites Armilla as leading the way by offering specific coverage for AI risks, including third-party coverage indemnifying AI model performance and providing verification services. AI’s potential to revolutionize risk assessments and customer service must be balanced with vigilant risk management and ethical considerations.

Source: Swiss Re

EU and US Diverge on Regulating General-Purpose AI Amid Rapid Technological Advances

The European Union and the United States are taking distinct approaches to regulating general-purpose AI, as highlighted by recent legislative developments. The EU's AI Act introduces binding rules for general-purpose AI models and establishes a centralized governance structure through a European AI Office. In contrast, the U.S. Executive Order on AI, signed by President Biden, outlines a comprehensive but less stringent framework, emphasizing guidelines and voluntary commitments. Both regions aim to address the risks and opportunities posed by advanced AI, with a focus on international cooperation, as seen in their support for the G7's voluntary AI code of conduct. These efforts reflect a shared goal of ensuring safe, secure, and trustworthy AI systems while accommodating differing regulatory philosophies.

Source: Brookings

Arizona Lawmaker Uses ChatGPT to Craft Deepfake Legislation for Elections

Arizona state representative Alexander Kolodin utilized ChatGPT to help define "deepfake" in a new law regulating the use of deepfakes in elections. The bipartisan bill, which passed unanimously, allows candidates or residents to seek a court declaration on the authenticity of alleged deepfakes, providing a tool to counter AI-generated misinformation. The bill includes exceptions for satire, artistic expression, and criticism. Kolodin's approach differs from that of other states by focusing on judicial review rather than outright bans or mandatory disclaimers, aiming to balance regulation with free-speech concerns. This law could serve as a model for other states navigating the regulation of AI in elections.

Source: The Guardian

Understanding AI: How Anthropic Taught a Model to Explain Its Thoughts

The Anthropic interpretability team successfully scaled sparse autoencoders to extract high-quality, interpretable features from their medium-sized model, Claude 3 Sonnet. These features span a range of abstract concepts, including famous people, geographical locations, and programming patterns, and are multilingual and multimodal. Importantly, some features relate to safety concerns like security vulnerabilities, bias, deception, and dangerous content, though further research is needed to understand their implications fully. This study demonstrates the potential of sparse autoencoders to interpret complex models and steer their behaviour, providing a foundation for improving AI safety.
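To make the technique concrete: a sparse autoencoder learns an overcomplete dictionary of features from a model's internal activations, trained to reconstruct those activations while an L1 penalty keeps only a few features active at once. The sketch below is a minimal, self-contained illustration of that idea on synthetic data (it is not Anthropic's implementation; the dimensions, penalty weight, and training loop are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for model activations: 200 samples of a 16-dim vector.
X = rng.normal(size=(200, 16))

d_in, d_hidden = 16, 64            # overcomplete: more features than input dims
W_enc = rng.normal(scale=0.1, size=(d_in, d_hidden))
W_dec = rng.normal(scale=0.1, size=(d_hidden, d_in))
b_enc = np.zeros(d_hidden)
b_dec = np.zeros(d_in)
l1_coeff, lr = 1e-3, 1e-2          # sparsity penalty weight, learning rate

def forward(X):
    f = np.maximum(X @ W_enc + b_enc, 0.0)   # ReLU feature activations
    X_hat = f @ W_dec + b_dec                # reconstruction of the input
    return f, X_hat

_, X_hat0 = forward(X)
mse0 = np.mean((X_hat0 - X) ** 2)            # reconstruction error before training

for step in range(500):
    f, X_hat = forward(X)
    err = X_hat - X
    # Loss = reconstruction MSE + L1 sparsity penalty on feature activations.
    grad_Xhat = 2.0 * err / len(X)
    grad_W_dec = f.T @ grad_Xhat
    grad_f = grad_Xhat @ W_dec.T + l1_coeff * np.sign(f)
    grad_f *= (f > 0)                        # gradient through the ReLU
    grad_W_enc = X.T @ grad_f
    W_dec -= lr * grad_W_dec
    W_enc -= lr * grad_W_enc
    b_dec -= lr * grad_Xhat.sum(axis=0)
    b_enc -= lr * grad_f.sum(axis=0)

f, X_hat = forward(X)
mse1 = np.mean((X_hat - X) ** 2)             # error after training (should drop)
sparsity = (f > 0).mean()                    # fraction of features active
```

After training, each hidden unit is a candidate "feature" whose decoder row can be inspected, and the L1 term encourages most units to be silent for any given input, which is what makes the learned features interpretable.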

Source: Anthropic

Meta's AI Council Sparks Diversity Concerns

Meta recently announced an AI advisory council composed solely of white men, sparking criticism about the lack of diversity. This council, unlike Meta's more diverse board of directors, includes only business executives and entrepreneurs, excluding ethicists and researchers. Critics argue that AI development must include diverse perspectives to avoid perpetuating biases and harming marginalized communities. Women and people of color have long faced exclusion in AI, leading to technology that often fails them. The controversy highlights the need for inclusive AI development to ensure equitable and safe technological advancements.

Source: TechCrunch

Addressing Gender Bias in AI: Challenges and Solutions

Artificial Intelligence (AI) reflects societal gender biases, exacerbating gender inequality. Despite increasing internet access, a significant gender digital divide remains, especially in low-income countries. Studies reveal that many AI systems exhibit gender and racial biases, perpetuating stereotypes and leading to inequitable outcomes in areas like healthcare and employment. The development of AI, largely dominated by men, often overlooks women's experiences, resulting in biased technology. To combat this, it's crucial to prioritize gender equality in AI development, diversify AI teams, and ensure data represents diverse perspectives. Global governance frameworks, such as the upcoming Global Digital Compact, offer opportunities to integrate gender perspectives and address AI's broader social impacts.

Source: Reliefweb

Was this forwarded to you? The Armilla Review is a weekly digest of important news from the AI industry, the market, government and academia. It's free to subscribe.