Armilla Review - AI Oversight and Risks: FTC's Expanded Authority, UK School Pupils, Three Lines of Defense, and Canadian Wildfires

The Armilla Review is a weekly digest of important news from the AI industry, the market, government, and academia, tailored to the interests of our community: AI evaluation, assurance, and risk.
November 29, 2023
5 min read

Top Story

FTC Empowers Oversight: Expanded Authority and Implications for AI in Diverse Sectors

The Federal Trade Commission (FTC) has expanded its investigative authority over AI products and services, signalling increased scrutiny of the industry. The move allows the FTC to issue civil investigative demands (CIDs) to collect information, emphasizing compliance and enforcement. Tech firms are urged to organize internal records and substantiate AI claims with evidence, and to address fairness, bias, and compliance concerns throughout the development process. The FTC's involvement in initiatives like the Voice Cloning Challenge, and in the U.S. Copyright Office's discussions on AI-generated content, reflects its commitment to balancing innovation with responsible oversight in a dynamic AI landscape.

Source: Venture Beat

Top Articles

Warning Raised as UK School Pupils Utilize AI for Creating Illicit Imagery of Children

Child protection experts in the UK have warned that school pupils are using AI technology to produce realistic but indecent images of other children, which constitute child sexual abuse material. They advocate urgent action to educate children about the risks of creating such images, emphasizing that the images can spread across the internet and be misused by strangers and predators. Experts have also called for improved blocking systems in schools to prevent the circulation of abusive content, as AI-generated imagery has become alarmingly comparable to genuine photographs, posing a serious challenge to identifying and combating these materials.

Source: The Guardian

AI Risk: Implementing the Three Lines of Defense Model in AI Companies

The article examines AI risk management through the three lines of defense (3LoD) model, which aims to address economic, legal, and ethical concerns. It assesses the model's applicability to AI companies, outlines implementation strategies, and weighs its potential benefits, focusing on how risk management roles are assigned, the model's societal impact, and a comprehensive evaluation of its effectiveness in mitigating AI-related risks. Because academic literature on the topic remains limited, the article seeks to bridge that gap by answering key research questions and providing detailed guidance for AI companies implementing the 3LoD model.

Source: Springer

Harnessing AI's Predictive Power: Addressing Homelessness Challenges in Canada

AI is being used in Canada to address homelessness, helping organizations and government agencies predict future trends and understand the population at risk of chronic homelessness. Algorithms analyze various data points to forecast areas likely to experience increased homelessness, providing insights for targeted preventative measures and resource allocation. Despite its potential, experts caution that AI's efficacy is constrained by the difficulty of accurately counting the homeless population and by individuals' reluctance to engage with government programs; AI is a helpful tool, not a standalone solution.

Source: CTV

OpenAI's Q* Model Sparks AGI Speculation: Debunking the Hype

Reports surfaced of internal tensions at OpenAI prior to CEO Sam Altman's temporary removal: staff researchers had cautioned the board about an AI discovery called Q*, expressing concerns about its potential risks to humanity. This previously undisclosed warning, together with the emergence of the Q* algorithm, reportedly influenced Altman's removal and raised apprehensions about its implications for superintelligence. Although Q* solves mathematical problems only modestly compared to human capabilities, internal optimism persists about its potential contribution to artificial general intelligence (AGI). Experts, however, warn against overestimating its significance: while mathematical proficiency is essential for AI reasoning, mastering basic arithmetic does not herald the advent of superintelligent AI, echoing previous cycles of exaggerated expectations in the field.

Source: MIT Tech Review

Resignation Amid AI Turmoil: Ed Newton-Rex's Departure and Copyright Dispute at Stability AI

Ed Newton-Rex's recent resignation as vice president of audio at Stability AI occurred amid the upheaval at OpenAI and attracted comparatively little attention. His departure stemmed from a principled disagreement over the fair use doctrine and AI companies' approach to copyright, particularly the training of models on data scraped from the internet without explicit consent. Newton-Rex's stance reflects broader concerns about the impact of generative AI on creativity and the need for a swift societal conversation, and potentially regulation, to safeguard creators' rights in this evolving landscape.

Source: Fast Company

Meta Disbands Responsible AI Team Amid Shifting Priorities

In a significant shift, Meta has dissolved its Responsible AI (RAI) team, reallocating its resources towards the development of generative artificial intelligence. The move, reported by The Information, indicates that most RAI members will join Meta's generative AI product team or contribute to the company's AI infrastructure. This restructuring follows prior reports of layoffs and challenges within the RAI team, raising questions about Meta's commitment to responsible AI development amidst economic headwinds and global regulatory efforts in the field.

Source: The Verge

AI and Satellites: Innovations in Fighting Canadian Wildfires

Researchers in Moncton, Canada, are using artificial intelligence (AI) to combat wildfires, aiming to detect fires and predict their spread swiftly and accurately. The AI-driven technology, which leverages satellite and drone imagery, identifies fires with over 99% accuracy, surpassing human capabilities, and predicts fire expansion to aid the allocation of firefighting resources. While still in its early stages, the approach has already shown promise for early detection and for modelling fire spread, supplementing traditional firefighting methods. Forthcoming satellite initiatives such as WildFireSat, slated for launch in 2029, are expected to provide more precise real-time data, enabling proactive fire management strategies.

Source: CBC