
Google expands bug bounty to generative AI and AI supply chain

  • by Kumar Hemant
  • 2 min read

Google is taking steps to further enhance the security and safety of generative AI systems by expanding its Vulnerability Rewards Program (VRP) and strengthening the AI supply chain.

Under the expanded VRP, Google will offer rewards for exploits specific to generative AI. Google also aims to incentivise research in AI safety and security.

Along with that, Google is revising its bug categorisation and reporting methodology to counter issues such as “unfair bias, model manipulation and misinterpretations of data.” The goal is to surface potential issues and make AI safer for all users in response to the unique challenges posed by generative AI.

To further engage the security community, Google is sharing guidelines that specify which aspects of AI are in scope for bug hunting. The tech giant intends to encourage researchers to submit more vulnerability and bug reports, which will then be fixed, leading to a safer and more secure generative AI ecosystem.

The recent expansion of a multitude of generative AIs has prompted companies and governments to expand their policies. | Photo: Tada Images / Shutterstock.com

Furthermore, Google is taking action to bolster the AI supply chain’s security by introducing the Secure AI Framework (SAIF), whose first principle is establishing a strong security foundation for the AI ecosystem.

To protect against machine learning-specific supply chain attacks, Google is expanding its open-source security work and collaborating with the Open Source Security Foundation. Google’s Open Source Security Team (GOSST) employs the Software Bill of Materials (SBOM) standard and Sigstore to ensure the AI supply chain is not compromised.
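To illustrate what an SBOM contributes here, the sketch below builds a minimal CycloneDX-style SBOM entry for a single model artifact, recording its name, version, and SHA-256 digest so a consumer can verify the file they downloaded matches what was published. This is an illustrative assumption, not Google's actual tooling: real SBOMs are generated by dedicated tools and carry far more detail (suppliers, licences, dependency graphs), and the function and file names are hypothetical.

```python
import hashlib
import json


def make_minimal_sbom(artifact_name: str, artifact_bytes: bytes, version: str) -> dict:
    """Build a minimal CycloneDX-style SBOM for one ML model artifact.

    Illustrative sketch only; production SBOMs are produced by dedicated
    tooling and include many more fields than shown here.
    """
    # Record a cryptographic digest so downstream users can check integrity.
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "version": 1,
        "components": [
            {
                # CycloneDX 1.5 introduced a component type for ML models.
                "type": "machine-learning-model",
                "name": artifact_name,
                "version": version,
                "hashes": [{"alg": "SHA-256", "content": digest}],
            }
        ],
    }


sbom = make_minimal_sbom("example-model.bin", b"fake model weights", "1.0.0")
print(json.dumps(sbom, indent=2))
```

A verifier would recompute the SHA-256 of the artifact it received and compare it against the SBOM entry; Sigstore then signs the SBOM itself so the metadata, too, can be trusted.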

“Today, we’re expanding our VRP to reward attack scenarios specific to generative AI. We believe this will incentivize AI safety and security research and bring potential issues to light that will ultimately make AI safer for everyone. We’re also expanding our open source security work to make information about AI supply chain security universally discoverable and verifiable,” said Royal Hansen, VP of Privacy, Safety and Security Engineering at Google.

In the News: StripedFly malware has affected one million systems: Research

Kumar Hemant

Deputy Editor at Candid.Technology. Hemant writes at the intersection of tech and culture and has a keen interest in science, social issues and international relations. You can contact him here: kumarhemant@pm.me
