Armilla Review - Global Shifts in AI and Technology: Ethical Development, Regulatory Actions, and Industry Dynamics

In this week's Armilla Review: from new global frameworks and regulatory probes in the EU to warnings in the US, local policy initiatives, and Big Tech shake-ups. The United Nations has adopted a landmark resolution on AI's ethical development, while the European Union has launched its first probes under the Digital Markets Act against leading tech firms Apple, Alphabet, and Meta, highlighting the push for ethics and fair competition in the digital realm. In the United States, Gary Gensler, Chair of the Securities and Exchange Commission (SEC), has voiced concerns over the financial industry's overreliance on AI. Concurrently, Tennessee is at the forefront of defending artists against AI's potential copyright infringements. Amid these regulatory developments, the corporate sector is also seeing change: Microsoft has strengthened its position in AI leadership, while Stability AI faces a management shake-up and financial difficulties. Together, these stories signal a global trend towards a more regulated, ethically guided, and competitive environment in AI and technology, underscoring the intertwined future of technology policy, creativity, and governance.
March 27, 2024
5 min read

UN Adopts Historic Global Resolution for Ethical AI Development

The UN General Assembly has unanimously passed its first-ever global resolution on artificial intelligence (AI). Co-sponsored by more than 120 countries and adopted by all 193 member states, the landmark resolution sets global standards for the development and use of AI systems that are safe, secure, trustworthy, and in alignment with human rights and fundamental freedoms. It covers critical areas including raising awareness about AI, encouraging research and development, ensuring the transparency of AI systems, and addressing diversity and bias. The resolution also encourages the creation of national policies and international cooperation to manage AI's opportunities and challenges, emphasizing ethical principles, human rights, and equitable access throughout the AI development process. The move reflects a growing global effort to regulate the AI industry amid rising concerns over ethics and security, marking a significant step towards responsible global AI governance.

Source: AI News

EU Targets Tech Giants Under New Digital Markets Act: Apple, Alphabet, and Meta Under Investigation

The European Union has launched its first investigations under the new Digital Markets Act (DMA), probing tech giants Apple, Alphabet, and Meta over potential anti-competitive practices and violations of the legislation. The inquiries focus on issues such as anti-steering rules, self-preferencing in search results, and Meta's "pay or consent" model. The EU's competition chief, Margrethe Vestager, highlighted concerns over these companies' adherence to the DMA, which aims to foster a more competitive and fair digital market. Apple has already been hit with a hefty fine for restricting app developers from informing users about cheaper alternatives, a sign of the EU's commitment to enforcing the new rules. The investigations, which could result in fines of up to 20% of a company's total worldwide turnover for repeated infringements, underscore the EU's efforts to rein in the power of major tech firms and ensure consumer choice and fair market practices.

Source: CNBC

Shaping the Future of Technology: The Vital Role of International Standards in AI and Emerging Technologies

This analysis emphasizes the critical role of Standards Development Organizations (SDOs) and government engagement in shaping international standards for artificial intelligence (AI) and other emerging technologies. Highlighting the complex, technical nature of standards, it underscores their importance in ensuring safety, interoperability, and compliance across national and technological borders. The paper discusses the approaches of the United States, European Union, and China towards standardization, noting their efforts to influence global standards through increased participation and leadership in SDOs. It stresses the need for broader stakeholder participation and transparency in the standards development process to address both the technical and socio-technical aspects of emerging technologies. The analysis concludes with recommendations for SDOs to lead in increasing participation, for governments to support standards development, and for like-minded countries to align on AI standards development. It calls for a balance between government involvement and the research-driven, voluntary consensus model that has historically driven effective technology policy and standardization.

Source: Brookings

Gensler's Alert: The Risk of AI-Induced Financial Meltdown and the Call for Regulatory Reform

SEC Chair Gary Gensler warned of a potential financial meltdown sparked by the unchecked reliance on artificial intelligence (AI) in the financial sector. He highlighted the danger of many financial institutions depending on a small number of AI algorithms for investment decisions, which could lead to a concentration of risk that current regulatory approaches might not adequately address. Gensler emphasized the need for regulators to rethink their strategies and consider the systemic risks posed by AI across the entire financial system. He also pointed out the challenges of ensuring that AI-driven financial advice prioritizes client interests, referencing the SEC's actions against misleading AI claims by investment firms and its efforts to regulate the use of AI and predictive analytics in financial advice.


Navigating the GenAI Revolution: Enterprises Increase Investment and Focus on Open-Source Models

In 2023, generative AI (genAI) significantly impacted the consumer market, and in 2024, it's poised to make even greater strides within the enterprise sector. Enterprises are shifting their genAI investments from innovation budgets to more permanent software budgets, with many planning to increase their spend by 2x to 5x to support more production workloads. There's a trend towards a multi-model future, with enterprises preferring open-source models for their control, customization, and cost benefits. Additionally, enterprises are focusing on building applications in-house, targeting both internal productivity and customer-facing use cases, while remaining cautious of potential issues related to AI hallucinations and public relations. The total spend on model APIs and fine-tuning is expected to reach over $5 billion by the end of 2024, indicating a significant opportunity for AI startups that can navigate enterprise needs and challenges effectively.

Source: a16z

Tennessee Pioneers Artist Protection Against AI with the ELVIS Act

Tennessee has set a precedent as the first U.S. state to enact legislation safeguarding musicians and other artists from the unauthorized use of their work by artificial intelligence. Signed into law by Governor Bill Lee, the Ensuring Likeness, Voice, and Image Security Act, or "ELVIS Act," aims to protect the unique intellectual property of artists by preventing AI from replicating an artist’s voice without consent. Effective from July 1, this law introduces new civil actions against unauthorized publication or performance of an individual's voice, as well as the misuse of technology to produce an artist's name, photos, voice, or likeness without proper authorization. Amidst the digital age's challenges, this bipartisan effort underscores Tennessee's commitment to its rich musical heritage and the rights of artists in the evolving landscape of AI technology. The law is a significant step in addressing the concerns of artists over AI's potential to infringe on their creative expressions and intellectual property.

Source: Financial Post

Microsoft Bolsters AI Leadership: Mustafa Suleyman to Head New Microsoft AI Division

Microsoft CEO Satya Nadella announced significant organizational updates focused on AI innovation, particularly the development of consumer AI products like Copilot. Mustafa Suleyman, co-founder of DeepMind and Inflection, has been appointed EVP and CEO of the newly formed Microsoft AI, joining Microsoft's senior leadership team and reporting directly to Nadella. Alongside Suleyman, Karén Simonyan, co-founder and Chief Scientist of Inflection, joins as Chief Scientist. The move is part of Microsoft's strategy to lead in AI platform shifts, building on its partnership with OpenAI and aiming to accelerate its pace of AI technology development. The changes underline Microsoft's commitment to becoming an AI-first company and to enhancing its consumer AI research and products.

Source: Microsoft

A New Chapter at Stability AI: Leadership Changes Amidst Development and Financial Challenges

Stability AI, the powerhouse behind the celebrated AI text-to-image generator Stable Diffusion, is navigating a period of significant upheaval. Emad Mostaque has resigned as CEO and from the company's Board of Directors to pursue interests in decentralized AI, against a backdrop of key developers departing and financial turbulence. The Board has appointed Shan Shan Wong and Christian Laforte as interim co-CEOs while it searches for a permanent leader to steer the company into its next phase of growth. These changes come as Stability AI grapples with the departure of several of Stable Diffusion's original researchers, cash-flow challenges, and accusations of misrepresenting contributions to the model's development. Despite these obstacles, Stability AI, whose leading models across various modalities have achieved hundreds of millions of downloads, remains committed to its mission of developing and commercializing premier generative AI products while maintaining its vision for open, multi-modal generative AI.

Source: Stability AI