Armilla Review - AI Scrutiny: Antitrust Investigations, Employee Warnings, and Privacy Concerns

In this week's issue of the Armilla Review we cover:

🚨 U.S. Regulators to Investigate Antitrust Concerns in AI Industry
🔔 Employees Allege OpenAI and Google DeepMind Conceal AI Dangers for Financial Gain
🔒 AI's Use of Children's Photos Raises Serious Privacy Concerns
⚠️ Banks' Growing Dependence on Big Tech for AI Raises New Risks
📝 AI Data 'Gold Rush' Faces Imminent Shortage of Human-Written Text
📉 MIT Study Reveals GPT-4's Performance on Bar Exam Overstated
🔧 ChatGPT and Other AI Tools Experience Outages
📱 Apple to Integrate ChatGPT in iOS 18
June 12, 2024 • 5 min read

U.S. Regulators to Investigate Antitrust Concerns in AI Industry

Federal regulators have reached an agreement to investigate the dominant roles of Microsoft, OpenAI, and Nvidia in the AI industry: the Justice Department will examine Nvidia's practices, while the Federal Trade Commission (FTC) will scrutinize Microsoft and OpenAI. The move marks a step-up in regulatory scrutiny of AI technology and continues the Biden administration's broader effort to regulate major tech companies and ensure competitive practices in emerging technologies.

Source: New York Times

FEATURED

Testing Times Ahead: Meeting the EU AI Act's requirements for model testing

🇪🇺 The EU AI Act mandates extensive model testing and risk assessments for high-risk and general-purpose AI systems, emphasizing bias testing and evaluations of accuracy, robustness, and cybersecurity. Companies must also establish quality management systems and maintain detailed records of their evaluation activities. Significant investment in testing capabilities, and the use of third-party evaluators, will be crucial for companies to meet these stringent requirements and avoid severe penalties.
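To make the testing duties concrete, the sketch below shows what a minimal evaluation step might look like in practice: it scores a set of binary predictions for accuracy and a demographic-parity gap, then appends the result to a log file in the spirit of the Act's record-keeping requirements. The metrics, the 0.1 parity threshold, and the log format are illustrative assumptions, not prescriptions from the Act.

```python
import json
from datetime import datetime, timezone

def evaluate(predictions, labels, groups, parity_threshold=0.1):
    """Compute accuracy and a demographic-parity gap (illustrative metrics only)."""
    accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

    # Positive-prediction rate per demographic group.
    rates = {}
    for group in set(groups):
        preds = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(preds) / len(preds)
    parity_gap = max(rates.values()) - min(rates.values())

    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "accuracy": accuracy,
        "parity_gap": parity_gap,
        "parity_ok": parity_gap <= parity_threshold,  # hypothetical threshold
    }

# Toy example: binary predictions, ground-truth labels, and group membership.
record = evaluate(
    predictions=[1, 0, 1, 1, 0, 1],
    labels=[1, 0, 1, 0, 0, 1],
    groups=["a", "a", "a", "b", "b", "b"],
)

# Record-keeping duties motivate persisting every evaluation run.
with open("evaluation_log.jsonl", "a") as log:
    log.write(json.dumps(record) + "\n")
```

In a real deployment, checks like these would run against held-out evaluation data for every model release, with the logged records feeding the quality management system the Act requires.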

For key insights, read our latest blog post by Stephanie Cairns, Data Scientist and Responsible AI Assessment Lead, and Philip Dawson, Head of Global AI Policy at Armilla AI.

THE HEADLINES

Employees Allege OpenAI and Google DeepMind Conceal AI Dangers for Financial Gain

A group of current and former employees from OpenAI and Google DeepMind has published a letter warning that advanced AI systems pose serious risks, accusing their companies of prioritizing financial gains over public safety. The letter, signed by thirteen individuals, highlights potential harms such as manipulation, misinformation, and even human extinction. The group claims that AI companies have critical information about the dangers of their technology but are not required to disclose it, leaving only employees to hold them accountable, despite restrictive confidentiality agreements. They demand the cessation of non-disparagement agreements, the establishment of anonymous reporting processes, and a culture of open criticism without retaliation. This letter underscores the urgent need for stronger regulatory measures as governments worldwide work to keep pace with rapid AI advancements.

Source: TIME

AI's Use of Children's Photos Raises Serious Privacy Concerns

Photos of Brazilian children, spanning their entire childhoods, have been used without consent to train AI models, posing significant privacy risks, according to Human Rights Watch (HRW). HRW found 170 photos of children in the LAION-5B dataset, prompting the removal of links to the images from the dataset, though the photos themselves remain accessible on the web. The findings underscore the need for robust policies to protect children's data from AI misuse; HRW is advocating for legislative changes in Brazil to prevent the scraping and non-consensual use of children's personal data in AI systems.

Source: Ars Technica

Banks' Growing Dependence on Big Tech for AI Raises New Risks

European banking executives have expressed concern that the industry's increasing reliance on artificial intelligence (AI) will leave banks dependent on a handful of major U.S. tech firms, creating new risks. Speaking at a fintech conference in Amsterdam, ING's chief analytics officer noted that AI requires computational power on a scale that banks cannot feasibly build themselves, forcing them to turn to Big Tech. That dependency raises the prospect of vendor lock-in, in which banks become tied to specific providers and lose flexibility. Britain has proposed regulations aimed at mitigating the risks of financial firms' heavy reliance on external tech companies, and the European Union's securities watchdog has stressed that banks remain legally responsible for protecting customers when they use AI.

Source: Reuters

AI Data 'Gold Rush' Faces Imminent Shortage of Human-Written Text

A study by Epoch AI warns that AI systems like ChatGPT may soon exhaust the supply of publicly available human-written text, which could significantly slow AI development. The research predicts that tech companies will run out of new high-quality training data between 2026 and 2032, likening the situation to a "gold rush" depleting natural resources. To sustain AI progress, companies are now securing data sources like Reddit and news media, but long-term reliance on synthetic data or sensitive private information poses new challenges. As AI development faces these constraints, experts emphasize the need for sustainable data sources and careful evaluation of training methods.

Source: AP News

MIT Study Reveals GPT-4's Performance on Bar Exam Overstated

A new study by MIT doctoral student Eric Martínez suggests that claims about GPT-4's high performance on the bar exam were exaggerated. While OpenAI had announced that GPT-4 scored in the top 10% of test takers, Martínez's research shows that this was only true when compared to repeat test takers. In reality, GPT-4 placed in the 69th percentile of all test takers and the 48th percentile among first-time test takers. The study highlights significant methodological issues in the original grading process, particularly in the essay-writing section, where GPT-4's performance was notably weaker.
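The percentile dispute comes down to which population a score is ranked against. The toy Python sketch below makes that concrete: the same hypothetical score lands at very different percentiles depending on the reference distribution (all numbers are invented for illustration and are not the study's data).

```python
from bisect import bisect_left

def percentile(score, population):
    """Percent of the population scoring strictly below `score`."""
    ranked = sorted(population)
    return 100 * bisect_left(ranked, score) / len(ranked)

# Hypothetical scaled bar-exam scores -- invented for illustration only.
repeat_takers = [240, 250, 255, 260, 265, 270, 275, 280, 285, 290]
all_takers = [250, 260, 270, 275, 280, 285, 290, 295, 300, 310]

model_score = 297  # hypothetical

print(percentile(model_score, repeat_takers))  # 100.0 -- top-tier vs. repeat takers
print(percentile(model_score, all_takers))     # 80.0  -- noticeably lower vs. all takers
```

The same mechanism drives the gap Martínez identifies: repeat test takers score lower on average than the full test-taking population, so ranking a score against them inflates the resulting percentile.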

Source: Live Science

ChatGPT and Other AI Tools Experience Outages

On Tuesday, several AI tools, including ChatGPT, Gemini, Claude, and Perplexity, faced accessibility issues. OpenAI's ChatGPT experienced two significant outages, with the first occurring early in the morning and the second starting around 10:30 AM ET. By 1:17 PM ET, OpenAI confirmed the issue was resolved, and other affected services like Google Gemini and Perplexity AI were also restored. The recent outages follow previous disruptions, including a DDoS attack in November and a Microsoft-related outage affecting ChatGPT's search features last month.

Source: The Verge

Apple to Integrate ChatGPT in iOS 18

Apple is set to incorporate OpenAI's ChatGPT into iOS 18, which it unveiled at the WWDC developer event. The integration is part of a broader AI initiative that includes enhanced search features, emoji creation, and AI-generated playlists. Apple chose OpenAI over Google's Gemini, reportedly because of better deal terms and the perceived superiority of OpenAI's models; the deal also gives OpenAI access to iPhone users who have not yet tried ChatGPT. These AI features will be available on an opt-in basis.

Learn more about Apple Intelligence here.

Read about Apple's partnership with OpenAI here.

Source: SiliconANGLE

Was this forwarded to you? The Armilla Review is a weekly digest of important news from the AI industry, the market, government and academia. It's free to subscribe.
