A group of influential industry leaders is set to issue a warning highlighting the potential existential threat posed by artificial intelligence (AI). They argue that AI should be treated as a societal-scale risk on par with pandemics and nuclear war.
The Center for AI Safety, a non-profit organisation, released a one-sentence statement emphasising that mitigating the risks associated with AI should be a global priority.
The statement, which has already been signed by over 350 executives, researchers, and engineers working in the AI field, includes notable figures such as Sam Altman (CEO of OpenAI), Demis Hassabis (CEO of Google DeepMind), and Dario Amodei (CEO of Anthropic). Geoffrey Hinton and Yoshua Bengio, renowned AI researchers and recipients of the Turing Award, have also signed the statement.
The concerns about AI stem from recent advancements in large language models like ChatGPT, which have raised fears about the widespread dissemination of misinformation and the elimination of white-collar jobs. Some believe that AI could cause societal-scale disruptions within a few years if left unchecked.
Surprisingly, industry leaders actively developing AI technologies advocate for stricter regulations due to the risks associated with their creations.
Sam Altman, in his Senate testimony, emphasized the gravity of the risks posed by advanced AI systems and called for government intervention to mitigate potential harm.
The release of the statement by the Center for AI Safety signifies a significant shift in perspective, with industry leaders publicly acknowledging their concerns about the technology they are developing. It challenges the misconception that only a few individuals worry about AI’s potential dangers.
While sceptics argue that AI technology is still too immature to pose an existential threat, others contend that AI is advancing rapidly, already matching or exceeding human performance in certain areas and potentially overtaking it in others. The concept of artificial general intelligence (AGI), in which AI matches or exceeds human-level performance across a wide range of tasks, is gaining attention.
With the statement, the Center for AI Safety aims to unite experts who may differ in their views on specific risks and preventive measures. By emphasising shared concerns about powerful AI systems, the message is not diluted with a long list of interventions.
“We didn’t want to push for a very large menu of 30 potential interventions,” Dan Hendrycks, the executive director of the Center for AI Safety, told the New York Times. “When that happens, it dilutes the message.”
In March, Elon Musk and others called for a pause on the development of AI systems more powerful than GPT-4 in an open letter published by the Future of Life Institute. More recently, Apple and Samsung banned the use of ChatGPT and other AI bots in their offices.
With big names like Elon Musk, Sam Altman, and Yoshua Bengio demanding government oversight of AI, it will be interesting to see how the technology’s future unfolds.