Sign up to get the Armilla Review sent directly to your inbox!
Sign up: https://lnkd.in/gAtQaNUY
TOP STORY
A new report authored by national and international cybersecurity agencies outlines how securely deploying and operating AI systems requires adapting security measures to each system's complexity and infrastructure. It stresses the need for continuous updates to security practices to address emerging threats and maintain system integrity, and provides detailed best practices for securing deployment environments, including managing governance, securing APIs, and monitoring system behaviour. The report recommends a Zero Trust approach alongside robust incident detection and response mechanisms, and urges stakeholders to engage in continuous education and training to promote a security-aware culture. Ultimately, these guidelines aim to protect AI systems from theft and misuse while enhancing their confidentiality, integrity, and availability.
Source: NSA
NEWS
The National Association of Insurance Commissioners (NAIC) has created a new Third-Party Data and Models Task Force dedicated to proposing regulatory frameworks for the use of third-party data, predictive models, and AI within the insurance industry. This initiative is in response to the increasing integration of AI/ML technologies in insurance practices such as rating, underwriting, and claims handling, prompting the need for updated regulatory guidelines. The Task Force is set to research current AI/ML usage in the industry and evaluate the adequacy of existing regulations with an eye toward drafting new model laws or modifying existing ones by 2025. Additionally, eight states have already adopted the NAIC's Model Bulletin on AI usage, with others like California, Colorado, and New York instituting specific regulatory requirements. The NAIC is actively encouraging further adoption and seeking feedback on the proposed regulatory changes to ensure that insurers remain compliant with laws while using third-party data and models.
Source: FENWICK
In a comprehensive field experiment conducted with Boston Consulting Group, researchers analyzed how artificial intelligence (AI), specifically large language models (LLMs) like GPT-4, affects knowledge worker productivity and quality. The study involved 758 consultants, who performed a variety of complex, realistic tasks under three conditions: no AI, AI access, and AI access with a prompt engineering overview. For tasks within AI's capabilities, AI significantly boosted performance, increasing task completion speed by over 25% and quality by 40%. For tasks outside AI's capabilities, however, performance notably declined. The study highlights a "jagged technological frontier" in AI application: while AI can vastly improve outcomes in certain areas, its effectiveness drops sharply on tasks beyond its current reach, necessitating careful selection and integration of AI tools in professional settings.
Source: Harvard
Recent guidance from the U.S. Patent and Trademark Office (USPTO) on the use of artificial intelligence in the patent application process has raised concerns among intellectual property attorneys. The guidance suggests that overpopulating disclosures with irrelevant material due to AI usage could burden examiners, potentially leading applicants to limit the disclosure of prior art. Limiting disclosures in this way may expose patents to future legal challenges if key prior art references are omitted. Additionally, new fees for information disclosure statements containing numerous items may further discourage the inclusion of comprehensive prior art. Attorneys are urged to carefully scrutinize their submissions for relevance and accuracy, reinforcing the need for a careful approach to AI-assisted patent applications amid fears of inequitable conduct accusations if too much or too little prior art is cited.
Source: Bloomberg Law
As AI technology rapidly progresses, shareholders of major corporations like Alphabet, Meta, and Apple are increasingly advocating for greater transparency and ethical handling of AI risks. Investor proposals are pushing these companies to disclose potential harms that AI could inflict on society and their businesses, particularly concerning misinformation and AI's impact on elections. These proposals have gained significant traction, as evidenced by nearly 40% of investors supporting a similar bid at Apple's annual meeting. The AFL-CIO has successfully influenced companies like Walt Disney Co. and Comcast Corp. to reveal more about their AI usage through withdrawn shareholder bids, and continues to promote similar initiatives at Amazon and Netflix. Amidst this backdrop, new regulations like the EU's AI Act are set to impose stringent guidelines to ensure AI systems are used safely and ethically, indicating a growing global push for governance in AI deployment and usage across industries.
Source: Bloomberg Law
A former FTC policy director has called for comprehensive scrutiny of Microsoft by the Federal Trade Commission (FTC), emphasizing the potential anti-competitive risks posed by Microsoft's aggressive AI strategy. Microsoft has moved to corner AI talent and intellectual property through significant investments, such as its $13 billion stake in OpenAI, effectively integrating OpenAI into its Azure ecosystem. Such moves have raised concerns about the creation of an AI "walled garden" that could stifle competition and innovation in the sector. The FTC has initiated an inquiry into AI and cloud service partnerships, recognizing the need to understand, and potentially regulate, these interconnections to prevent monopolistic practices. As Microsoft maneuvers to dominate both the AI and cloud markets by leveraging its existing infrastructure and customer base, the former official argues the FTC should use both enforcement and non-enforcement actions to ensure a competitive and fair market environment.
Source: Bloomberg Law
Meta Llama 3, the latest open-source large language model from Meta, has been introduced with enhancements that elevate its performance significantly beyond its predecessors. Available on major cloud platforms like AWS, Google Cloud, and Microsoft Azure, this model is designed to be more accessible and capable, featuring advanced reasoning, code generation, and instruction-following capabilities with 8B and 70B parameter versions. Meta emphasizes responsible AI development, incorporating tools like Llama Guard 2 and CyberSec Eval 2 to ensure safe usage. The model architecture improvements include a new tokenizer and Grouped Query Attention for better inference efficiency. Looking ahead, Meta plans to expand Llama 3's capabilities with multilingual and multimodal functionalities. The rollout of Llama 3 promises to boost innovation and application across the tech landscape, supporting developers with new tools like the torchtune library for easier customization and deployment.
Source: Meta
FEATURED
Join Armilla AI's Head of AI Policy, Philip Dawson, for an informative discussion on AI regulation, risk mitigation, and responsible AI in the global insurance industry, featuring insights from regulators and industry leaders.
Key themes:
- Impact of new regulations and guidance in the US, EU, and UK.
- Insurers' responsibilities under new AI regulations.
- Strategies for AI risk management and compliance.
Who Should Join?
- Insurance Executives
- Compliance Officers
- Risk Managers
- AI/Data Scientists
May 8th - Don't miss this opportunity to gain expert insights. Register now!
https://lnkd.in/esM35bgD