Armilla Review - Global Momentum: AI Regulation Takes Center Stage with Leaked EU AI Act, Australia's High-Risk Plans, and New York's Responsible AI Initiatives
A leaked copy of the EU AI Act stole the spotlight this week, even as a broad range of AI policy and regulatory initiatives launched worldwide. In Australia, the government unveiled plans to regulate 'high-risk' AI. At the international level, Italy announced its intention to use its G7 presidency to enhance global coordination on AI and set ethical guardrails for the technology, just as the WHO released comprehensive guidance on the governance of large multi-modal models in health care. And the State of New York, which has emerged as a Responsible AI champion among U.S. policymakers, announced two initiatives of its own — first, a policy outlining acceptable use of AI by government agencies, notably requiring AI risk assessments; and second, a proposed circular from NY DFS, the financial services regulator, that would require insurers to regularly test and assess AI used in underwriting and pricing to assure performance and mitigate the risk of discrimination. Consensus is building among policymakers and industry alike: practices of AI testing, evaluation and assessment have emerged as core to Responsible AI and AI risk management.
January 24, 2024
5 min read
New York State Department of Financial Services Proposes Guidelines for Ethical Use of AI in Insurance Underwriting
The New York State Department of Financial Services (DFS) has issued a circular letter addressing the use of Artificial Intelligence Systems (AIS) and External Consumer Data and Information Sources (ECDIS) in insurance underwriting and pricing. The letter emphasizes the potential benefits of these technologies in simplifying processes but raises concerns about biases and unfair discrimination. It outlines the DFS's expectations for insurers, including governance and risk management frameworks, fairness principles, and transparency through disclosure and notice. The circular also clarifies consumer rights regarding accelerated underwriting processes and highlights the need for insurers to comply with applicable laws and regulations in their use of AI technologies.
RAI Institute Partners with Armilla AI to Scale Adoption of its Responsible AI Assessments
The Responsible AI Institute (RAI Institute) has partnered with Armilla AI to give RAI Institute members access to Armilla AI's warranty, backed by global insurers, with Armilla's Product Assessment aligned to the NIST AI Risk Management Framework. The collaboration aims to offer assurance and insurance to enterprises procuring AI solutions, emphasizing responsible AI practices and enhancing trust in AI products amid growing concerns about AI risk and regulatory compliance.
New York State Office of Information Technology Services Issues AI Usage Guidelines
The New York State Office of Information Technology Services has released a policy outlining the acceptable use of Artificial Intelligence (AI) technologies by State Entities (SE). The guidelines aim to facilitate responsible AI adoption among SEs, promoting innovation while ensuring privacy, risk management, and accountability. The policy covers the authority granted by the State Technology Law, defines the scope of application, and emphasizes the importance of transparency, fairness, and human oversight in AI systems. SEs are required to conduct risk assessments, maintain an AI inventory, and comply with privacy and security standards outlined in the policy.
Italy to Use G7 Presidency to Set Ethical Guardrails for AI
Italy, as the current G7 chair, plans to address concerns about Russia's actions in Ukraine and reaffirm the West's commitment to Kyiv. In addition to geopolitical issues, Italy will focus on AI during its G7 presidency, emphasizing the impact of AI on jobs and inequality. Prime Minister Giorgia Meloni aims to discuss the dangers of AI during the June summit, proposing ethical guidelines and the creation of a steering committee for G7 coordination on AI development. The leaders also share a consensus on major issues, including dealing with China and promoting economic development in Africa.
WHO Issues Comprehensive Guidance on Ethics and Governance for Large Multi-Modal Models in Healthcare AI
The World Health Organization (WHO) has released a set of over 40 recommendations addressing the ethics and governance of large multi-modal models (LMMs) in healthcare AI. LMMs, a rapidly growing type of generative artificial intelligence, have applications in diagnosis, clinical care, patient guidance, clerical tasks, medical education, and scientific research. The guidance emphasizes the need for transparent information, policies, and engagement of various stakeholders, including governments, technology companies, healthcare providers, patients, and civil society, to ensure the responsible development and deployment of LMMs in healthcare. The recommendations also highlight potential risks, such as biases and inaccuracies, and call for regulatory oversight and ethical considerations.
Australia Unveils Plans for Targeted Regulation of 'High-Risk' AI, Raises Questions on Defining Risks and Advisory Structure
Australia's Minister for Industry and Science, Ed Husic, has announced the government's response to AI regulation, focusing on high-risk areas to prevent potential harms. Rather than enacting a comprehensive AI regulatory law, the government is targeting sectors such as workplace discrimination, justice, surveillance, and self-driving cars. However, questions remain about how 'high-risk' will be defined, how low-risk applications will be regulated, and why no permanent advisory board is planned. The government intends to form a temporary expert group but faces challenges in managing existing AI tools and providing guidance on appropriate adoption outside designated high-risk areas.
Debating the Impact of Effective Altruism on AI Security: Perspectives from Leaders in AI and Policy
The article explores the intersection of the effective altruism (EA) movement and AI security policy circles, highlighting concerns raised by critics about the focus on existential risks to the detriment of current AI challenges. Notable connections between the EA movement and influential figures in AI startups and policy think tanks are examined. Interviews with leaders from companies like Cohere and AI21 Labs provide insights into their perspectives on model weights and AI risks. The article also discusses the growing influence of EA in Washington DC and its impact on AI security debates.
Mark Zuckerberg Aims for Artificial General Intelligence, Plans Meta's AGI Focus and Open-Source Approach
Meta CEO Mark Zuckerberg has set a new goal of creating artificial general intelligence (AGI) and is leaning towards open-sourcing it in the future. The move is part of a broader trend in the tech industry, with companies like OpenAI and Google sharing the same ambition. Zuckerberg plans to leverage Meta's vast resources, including a substantial investment in Nvidia GPUs, to advance AGI research. He emphasizes a commitment to open-source development, distinguishing Meta's approach from some competitors, and highlights the importance of building for general intelligence in AI talent acquisition.
OpenAI's Partnership with Arizona State University to Implement ChatGPT Enterprise in Education
OpenAI has revealed its first collaboration with a university, with Arizona State University gaining full access to ChatGPT Enterprise starting in February. The partnership, in development for at least six months, aims to utilize ChatGPT in coursework, tutoring, and research. ASU plans to build a personalized AI tutor, use AI avatars for study help, and expand its prompt engineering course, while emphasizing the importance of student privacy and intellectual property protection.
DPD's AI Chatbot Mishap: Swearing and Criticizing Customers Sparks Social Media Frenzy
Parcel delivery company DPD faced a social media uproar after a system update caused its AI-powered chatbot to swear at and criticize a customer. The company swiftly disabled the problematic AI element and is working on updates. The incident, widely shared on social media, highlights the challenges companies face when integrating AI into customer service platforms, with the potential for unintended and humorous consequences. This follows a trend where AI-powered chatbots, using large language models, occasionally produce unexpected or biased responses.