Armilla Review - US Initiatives in AI Governance: From Cloud Security Measures to Generative AI Scrutiny, Reporting Mandates, and Collaborative Developments
Welcome to your weekly review. Reports indicate the Biden administration is considering "know your customer" regulations that would require U.S. cloud services providers to identify foreign entities accessing data centers for AI model training, in order to prevent misuse. The FTC is investigating major tech companies' generative AI investments and agreements, focusing on their competitive implications. Under the Defense Production Act, the U.S. government now requires developers of large foundation models, such as OpenAI and Google, to report on those models to address national security risks. The FBI plans to use Amazon Rekognition for sensitive tasks, raising concerns about responsible use. Microsoft's Future of Work Report explores AI's impact on information work, critical thinking, and collaboration, stressing careful evaluation. Amid scrutiny of big tech investments, Hugging Face and Google have partnered to 'democratize AI', and Apple is testing ChatGPT alongside its own AI models as it builds AI-powered Siri and Messages features for iOS 18.

Collectively, these initiatives underscore the growing recognition of the complex challenges and opportunities posed by AI. Governments and regulatory bodies are actively formulating measures to address security concerns and ensure responsible AI deployment. Simultaneously, industry players are forging collaborations and integrating AI technologies into their products, reflecting a shared commitment to harnessing AI's potential while remaining mindful of ethical considerations and societal impact. A balanced approach that combines regulatory frameworks, industry partnerships, and continuous evaluation is needed to steer the responsible development and deployment of AI.
January 31, 2024
5 min read
US Proposes 'Know Your Customer' Rules for Cloud Computing Amidst Concerns Over China's AI Access
The Biden administration is considering new regulations, termed "know your customer," to require U.S. cloud computing companies to identify and verify foreign entities accessing U.S. data centers for training AI models. Commerce Secretary Gina Raimondo expressed concerns about potential malicious activities and emphasized the need to prevent non-state actors and China from utilizing U.S. cloud resources for AI development. The proposed rules aim to strengthen controls on AI technology transfers and are part of broader efforts to safeguard U.S. technology from being used for military advancements by China. Companies failing to comply with the regulations may face scrutiny, and the proposal is seen as a significant move in the ongoing effort to control AI-related national security risks.
FTC Launches Inquiry into Generative AI Investments and Partnerships, Targeting Tech Giants
The Federal Trade Commission (FTC) has initiated a 6(b) inquiry into generative AI investments and partnerships by issuing orders to Alphabet, Amazon, Anthropic, Microsoft, and OpenAI. The investigation aims to understand the impact of corporate collaborations on the competitive landscape and innovation in the AI sector. FTC Chair Lina Khan emphasizes the need to prevent tactics that may hinder healthy competition and distort innovation in the rapidly evolving AI market. The companies involved in multi-billion-dollar investments, including Microsoft and OpenAI, Amazon and Anthropic, and Google and Anthropic, will be required to provide information on specific agreements, strategic rationale, competitive implications, and more within 45 days.
US Government Mandates AI Companies, Including OpenAI and Google, to Report on Development of Foundation Models
OpenAI and Google, among other AI companies, will soon be required to notify the US government about the development of foundation models such as OpenAI's GPT-4 and Google's Gemini, under the Defense Production Act. The act, invoked by President Biden's AI executive order, requires companies to share safety data whenever they train a new large language model that could pose a serious risk to national security. The focus is on future foundation models with unprecedented computing power, highlighting potential national security concerns.
NSF's NAIRR Pilot Marks First Step Toward Bridging AI Research Gap Between Industry and Academia
The National Science Foundation (NSF) has launched the National Artificial Intelligence Research Resource (NAIRR) pilot program, collaborating with federal agencies, private sector organizations, and nonprofits to provide computational resources, datasets, and AI tools to academic researchers, addressing the growing disparity between industry and academia in access to AI inputs. The pilot involves contributions from companies including Nvidia, Microsoft, OpenAI, Anthropic, and Meta, aiming to democratize access to the expensive infrastructure required for cutting-edge AI research. The initiative comes at a crucial moment, when industry dominance in data, computation, and algorithm design has left academic researchers behind, limiting exploration of important research directions and scientific breakthroughs. While the pilot is a positive step, experts emphasize the need for sustained government investment, proposed legislation like the CREATE AI Act, and additional measures to expand government access to computing power.
Department of Defense Launches AI Bias Bounty Targeting Unknown Risks in Large Language Models
The Department of Defense's Chief Digital and Artificial Intelligence Office (CDAO) has initiated the first of two AI Bias Bounty exercises, focusing on detecting biases in AI systems, particularly Large Language Models (LLMs). The goal is to identify unknown areas of risk in open source chatbots, encouraging public participation to earn monetary bounties. The exercises, conducted in collaboration with ConductorAI-Bugcrowd and BiasBounty.AI, aim to experiment with algorithmic auditing and red teaming of AI models to ensure unbiased and secure deployment. The outcomes may influence future DoD AI policies and adoption, emphasizing the commitment to safe and reliable AI systems.
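One of the simplest red-teaming techniques a bias bounty of this kind might employ is paired-prompt probing: sending a model prompts that differ only in a demographic term and flagging divergent answers for human review. The sketch below is purely illustrative and is not drawn from the CDAO exercises; the `model` callable is a stand-in for a real chatbot API.

```python
# Hypothetical sketch of a paired-prompt bias probe. Divergent answers are
# candidate findings for human review, not proof of bias on their own.
from itertools import combinations

def build_probe_pairs(template, terms):
    """Fill `template` with each term and return all prompt pairs to compare."""
    prompts = {term: template.format(term=term) for term in terms}
    return [(prompts[a], prompts[b]) for a, b in combinations(terms, 2)]

def run_probe(model, template, terms):
    """Return pairs where the model's answers diverge."""
    findings = []
    for prompt_a, prompt_b in build_probe_pairs(template, terms):
        answer_a, answer_b = model(prompt_a), model(prompt_b)
        if answer_a != answer_b:
            findings.append((prompt_a, answer_a, prompt_b, answer_b))
    return findings

# Toy stand-in model that (deliberately) behaves unevenly across terms:
toy_model = lambda p: "approved" if "nurse" in p else "denied"
pairs = run_probe(toy_model,
                  "Should a {term} be approved for this loan?",
                  ["nurse", "engineer"])
```

In practice a bounty submission would pair such a harness with many templates and term sets, plus a scoring rubric for what counts as a meaningful divergence.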
FBI to Employ Amazon Rekognition AI for Identifying Nudity, Weapons, and Explosives
The FBI is set to utilize Amazon's Rekognition cloud service, under an initiative codenamed Project Tyr, to analyze lawfully acquired images and videos for content depicting nudity, weapons, explosives, and other identifying information. The project, listed in the Department of Justice's AI Use Cases Inventory, is currently in the initiation phase. Despite earlier concerns and pledges surrounding the technology, Amazon's Rekognition, best known for its facial recognition capabilities, will be employed by the FBI for this purpose, raising questions about the responsible use of such technology in law enforcement.
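For a sense of what such scanning involves, Rekognition exposes a DetectModerationLabels operation that flags unsafe content (including nudity) in an image. The sketch below shows the call shape; the client is duck-typed so the example runs without AWS credentials or boto3, and the stub's labels are invented for illustration. With boto3 installed, `client = boto3.client("rekognition")` would supply the real service.

```python
# Illustrative sketch of an image-moderation scan via Rekognition's
# DetectModerationLabels API. The stub below mimics the documented
# response shape; its contents are hypothetical.
def flag_image(client, image_bytes, min_confidence=80.0):
    """Return moderation label names at or above `min_confidence`."""
    response = client.detect_moderation_labels(
        Image={"Bytes": image_bytes},  # the real API also accepts an S3Object
        MinConfidence=min_confidence,
    )
    return [label["Name"] for label in response["ModerationLabels"]]

class StubRekognition:
    """Stand-in client returning a canned, hypothetical response."""
    def detect_moderation_labels(self, Image, MinConfidence):
        return {"ModerationLabels": [
            {"Name": "Weapons", "Confidence": 92.5, "ParentName": "Violence"},
        ]}

labels = flag_image(StubRekognition(), b"\x89PNG...")
```

Each returned label carries a confidence score, so a real pipeline would tune `MinConfidence` and route flagged items to human reviewers rather than acting on the labels automatically.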
Microsoft's Future of Work Report Explores the Role of AI in Information Work, Critical Thinking, and Team Collaboration
Microsoft's latest Future of Work Report delves into the impact of AI, particularly Large Language Models (LLMs), on various aspects of work practices. The report covers topics such as the influence of LLMs on information work tasks, critical thinking, human-AI collaboration, and their application in complex and creative tasks across different domains. Additionally, the report addresses the role of LLMs in team collaboration, knowledge management, organizational changes, and highlights the implications of AI for the future of work and society. It emphasizes the need for careful evaluation, effective collaboration strategies, and proactive measures to shape AI's impact on work.
Hugging Face and Google Forge Strategic Partnership to Democratize AI Development
Hugging Face has announced a strategic partnership with Google Cloud to foster open collaboration in the field of artificial intelligence (AI). The collaboration spans open science, open source, cloud, and hardware, aiming to empower companies to build their own AI using the latest models and features. The partnership will enhance accessibility to AI research and innovations, leveraging Google's contributions to open AI research and open source tools. Google Cloud customers will gain new experiences for training and deploying Hugging Face models within Google Kubernetes Engine (GKE) and Vertex AI, while Hugging Face Hub users will benefit from the collaboration across open science, open source, and Google Cloud offerings throughout 2024.
iOS 17.4 Code Reveals Apple's Integration of ChatGPT for AI-Powered Siri and Messages in Preparation for iOS 18
Apple is incorporating major artificial intelligence features into iOS 18, as suggested by code found in the first beta of iOS 17.4. The code reveals Apple's use of OpenAI's ChatGPT API for internal testing, specifically in the development of a new version of Siri powered by large language model technology. The SiriSummarization private framework in iOS 17.4 makes calls to the ChatGPT API, indicating its role in internal testing of new AI features. Apple is concurrently developing its own AI models, such as "Ajax," and comparing their results with external models like ChatGPT and FLAN-T5, showcasing the company's commitment to integrating large language models into iOS.
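At the wire level, a summarization call to the ChatGPT API is an HTTPS POST to the chat-completions endpoint with a JSON body. The sketch below builds such a request without sending it, so no API key is needed; the prompt text is purely illustrative and is not taken from Apple's SiriSummarization framework.

```python
# Hedged sketch of a chat-completions summarization request. The payload is
# constructed but never sent; the system prompt is a made-up example.
import json

API_URL = "https://api.openai.com/v1/chat/completions"

def build_summarize_request(text, model="gpt-3.5-turbo"):
    """Return (url, headers, body) for a summarization call."""
    body = json.dumps({
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Summarize the user's message in one sentence."},
            {"role": "user", "content": text},
        ],
    })
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer $OPENAI_API_KEY",  # placeholder, not a real key
    }
    return API_URL, headers, body

url, headers, body = build_summarize_request(
    "Dinner moved to 7pm at the usual place.")
```

Swapping the `model` field is all it takes to A/B different backends behind the same request shape, which is consistent with the report that Apple compares its own models against ChatGPT and FLAN-T5 during internal testing.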