OpenAI is training a new flagship model to succeed its GPT-4 large language model (LLM) and has also formed a Safety and Security Committee led by Sam Altman, Adam D'Angelo, and Bret Taylor.
Other members of the committee include Aleksander Madry, Lilian Weng, John Schulman, Matt Knight, and Jakub Pachocki. OpenAI will also retain former US cybersecurity officials Rob Joyce and John Carlin as advisors.
With this new model, the company aims to take a step towards artificial general intelligence (AGI), a still-theoretical form of AI with human-like reasoning capabilities.
According to The New York Times, AGI could power tools such as digital assistants like Siri, search engines, and image generators.
The Safety and Security Committee is tasked with navigating potential risks associated with the new model and future AI advancements. After 90 days, the committee will submit a detailed report of its recommendations to the full Board for review.
“OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI,” said the company. “While we are proud to build and release industry-leading models on both capabilities and safety, we welcome a robust debate at this important moment.”
After the review, OpenAI will publicly share an update on the recommendations it has adopted.
OpenAI’s proactive stance mirrors broader industry trends, with tech giants like Google, Meta, and Microsoft pushing the AI envelope while grappling with ethical and societal concerns.

In January, it was reported that Meta was buying hundreds of thousands of Nvidia's H100 GPUs as it develops AGI.
The current GPT-4, introduced in March 2023, has demonstrated its versatility in various domains, from data analysis to content generation. A newer model, GPT-4o, announced in May 2024, promises even more capabilities, including image generation and natural, real-time conversational interactions.
As AI usage increases, several security concerns arise. In January, it was reported that several ChatGPT users had experienced unauthorised access to their accounts.
In November last year, cybersecurity researchers discovered a flaw in ChatGPT that allowed attackers to extract several megabytes of its training data.
In April 2024, reports emerged that ads for AI-generated NSFW apps were being shown to users across several Meta platforms.
As tech giants continue to research and develop AGI, a technology that still exists only in theory, new challenges will arise. For now, it is a wait-and-watch situation, and industry and cybersecurity experts will be keeping a close eye on any breakthroughs from Meta, Google, or OpenAI.