US Launches Global AI Safety Institute to Tackle Frontier AI Risks

Secretary of Commerce Propels International Collaboration with UK Safety Institute

Attention India

2 November 2023, Mumbai: The United States is launching a significant new effort: Secretary of Commerce Gina Raimondo has announced the establishment of an AI safety institute. The institute will focus on assessing both existing and emerging risks associated with cutting-edge AI models, often referred to as “frontier” AI. Raimondo also expressed her intention to foster international collaboration, particularly with the United Kingdom’s AI Safety Institute.

Collaborative Approach Emphasized in AI Safety

In her address at the AI Safety Summit in Britain, Secretary Raimondo emphasized the necessity of teamwork. She urged the audience of academic and industry experts to join forces and become part of the consortium. The overarching message was clear: addressing the multifaceted challenges of AI safety requires a collective effort that spans borders.

NIST Leads the Charge in AI Safety

The newly established institute falls under the purview of the National Institute of Standards and Technology (NIST), underscoring the U.S. government’s commitment to AI safety. Its core mission is to set industry standards in several critical areas: safety, security, and AI model testing. It will also play a pivotal role in devising standards for verifying AI-generated content and will provide the testing environments researchers need.

Biden’s AI Safety Executive Order

The momentum for AI safety gained further traction when President Joe Biden signed an executive order focused on artificial intelligence. Under this order, developers of AI systems that could pose risks to national security, the economy, public health, or safety must share their safety test results with the U.S. government. This initiative aligns with the Defense Production Act and will pave the way for heightened accountability and transparency in AI development.

Addressing a Wide Spectrum of Risks

The executive order also tasks federal agencies with establishing rigorous standards for AI testing. It goes beyond AI alone, addressing related chemical, biological, radiological, nuclear, and cybersecurity risks. This multifaceted approach aims to ensure that the deployment of AI technology aligns with national interests while safeguarding security and public well-being.

By Yashika Desai
