
US secures voluntary AI safety commitments from 7 industry giants

Photo: Koshiro K / Shutterstock.com

To address the rapid advancement of the AI industry and its potential risks, the Biden administration has secured voluntary commitments, including watermarking, vulnerability reporting, and information sharing, from seven leading AI developers.

The companies involved in this non-binding agreement are OpenAI, Anthropic, Google, Inflection, Microsoft, Meta, and Amazon. These tech giants will meet President Biden at the White House to discuss shared safety and transparency goals, reported The Verge.

The voluntary commitments made by these companies encompass various areas aimed at promoting responsible AI development and usage. Some key aspects include conducting internal and external security tests on AI systems before release, including adversarial “red teaming” by external experts. Additionally, the companies have agreed to share AI risk information with the government, academia, and civil society to enhance understanding and mitigation techniques.

To protect private model data and prevent malicious exploitation, the companies will invest in cybersecurity and implement insider threat safeguards. Furthermore, they will encourage third-party discovery and reporting of vulnerabilities, potentially through bug bounty programs or domain-expert analysis.

These non-binding commitments could pave the way for future government intervention in the AI industry. | Photo: Tada Images / Shutterstock.com

Another crucial aspect of the commitments involves developing robust watermarking or other methods to mark AI-generated content, which the EU has already asked the companies to do. This move aims to ensure transparency and accountability when AI systems generate content.

Additionally, the companies have pledged to report on AI systems’ capabilities, limitations, and appropriate and inappropriate uses, though this might prove challenging to achieve unequivocally.

The commitments emphasise prioritising research on societal risks, such as addressing systemic bias or privacy issues associated with AI deployment.

Though these commitments are voluntary, an Executive Order may be introduced to encourage compliance. The White House is actively working on such an order, and it could direct regulatory agencies to scrutinise AI products for robust security and unbiased development.

The Biden administration has been taking a keen interest in addressing the impact of AI technology on society. The President and Vice President have held meetings with industry leaders to discuss a national AI strategy.

This announcement comes following Senate Majority Leader Chuck Schumer’s own plan to regulate AI technology without stifling innovation. Schumer’s SAFE (Security, Accountability, Foundations, Explainability) framework aims to create rules addressing AI’s potential impact on national security, jobs, and misinformation.

In the News: Adobe releases ColdFusion update to fix critical flaws


Kumar Hemant

Deputy Editor at Candid.Technology. Hemant writes at the intersection of tech and culture and has a keen interest in science, social issues and international relations. You can contact him here: [email protected]
