Armilla Review - Key Developments in AI Regulation and Industry Trends

This week, the EU approved the first comprehensive AI regulation, setting a global standard, while Colorado enacted a landmark AI regulation law. The Biden administration also released new AI guidelines for employers, and OpenAI folded its safety team's duties into broader research amid leadership changes, underscoring ongoing efforts to balance innovation with safety and ethics in AI development.
May 22, 2024
5 min read

EU Approves Artificial Intelligence Act

The Council of the European Union has approved the Artificial Intelligence (AI) Act, the first comprehensive AI regulation in the world. This legislation adopts a risk-based approach, imposing stricter rules on higher-risk AI systems to ensure safety and protect fundamental rights. The Act fosters the development and use of trustworthy AI across the EU while prohibiting certain high-risk practices like cognitive behavioural manipulation and predictive policing. It also establishes governance bodies for enforcement and introduces transparency and innovation-friendly measures. The AI Act is a significant milestone in setting global standards for AI regulation.

Source: European Council

OpenAI Dissolves Safety Team Following Leadership Departures

OpenAI has disbanded its superalignment team, originally established to ensure the safety of future ultra-capable AI systems, following the exits of its leaders, Ilya Sutskever and Jan Leike. The responsibilities of this team will now be integrated into broader research efforts across the company. This decision comes amidst internal disagreements and resource challenges, highlighting the ongoing tension between rapid development and safety concerns within OpenAI. John Schulman and Jakub Pachocki have been appointed to lead the alignment work and scientific efforts, respectively, as the company remains committed to developing safe and beneficial AGI.

Source: Bloomberg

Colorado Enacts Landmark AI Regulation Law

Governor Polis signed the Colorado AI Act (CAIA) into law on May 17, establishing comprehensive rights and protections concerning high-risk AI systems, effective February 1, 2026. The CAIA, spearheaded by Colorado Senate Majority Leader Rodriguez and Connecticut Senator Maroney, builds on best practices and extensive stakeholder input to guard against discriminatory AI outcomes. It introduces obligations for AI developers and deployers, such as risk management policies and impact assessments, and grants consumers rights including transparency, appeal, and data correction. The law aims to balance AI functionality with robust consumer protection, marking a significant step in U.S. AI regulation.

Source: Future of Privacy Forum

White House Issues AI Guidelines for Employers to Protect Workers

The Biden administration has released eight principles for employers using AI in the workplace, emphasizing human oversight and limiting data collection to business purposes. The guidelines, announced by the White House and the US Labor Department, aim to ensure AI enhances job quality and protects worker rights. They address concerns such as worker displacement, surveillance, and discrimination, calling for governance systems, training, and transparency in AI use. Major companies like Microsoft and Indeed have committed to integrating these principles, which serve as a guiding framework for responsible AI implementation across industries.

Source: Bloomberg Law

The Growing Influence of AI in the Insurance Industry

The rapid rise of AI across industries has prompted insurers, brokers, and legal experts to assess the new risks and exposures this technology may introduce. While AI-related insurance claims have not yet reached the critical mass needed to trigger major policy adjustments, the industry is beginning to respond with measures like AI-specific endorsements and clearer policy language. Concerns include potential biases in AI training data, which could lead to misdiagnoses or other significant impacts on decision-making processes. Insurers are exploring how to affirmatively cover AI-related exposures while maintaining clarity and addressing the evolving landscape of AI technology.

Source: Business Insurance

International Report Highlights AI Safety Concerns and Collaborative Efforts

The International Scientific Report on the Safety of Advanced AI provides a comprehensive, evidence-based understanding of the risks and capabilities of general-purpose AI systems. Commissioned by the UK government and chaired by AI expert Yoshua Bengio, the report represents a significant international collaboration among 30 countries. Key findings include the rapid advancement of AI capabilities, potential harms from malicious use and biases, and systemic risks such as economic disruption. The report emphasizes the need for improved understanding and technical methods to mitigate risks, and calls for ongoing international cooperation to address the evolving challenges of AI safety.

Source: UK Government

Global Survey Reveals Diverse Public Opinions on AI

The Schwartz Reisman Institute for Technology and Society (SRI), in collaboration with the Policy, Elections and Representation Lab (PEARL) at the University of Toronto, published a comprehensive report on global public opinion about AI. Surveying over 24,000 individuals across 21 countries, the study revealed varying attitudes toward AI, with positive views prevalent in China, India, Indonesia, and Kenya, and more skepticism in countries like France, the U.S., Canada, and the U.K. Key concerns include AI regulation, job loss, and safety, while trust in tech companies for self-regulation remains low. The survey also highlighted widespread awareness of AI applications like ChatGPT, though deepfake awareness is limited.

Source: Schwartz Reisman Institute for Technology and Society (SRI)