Photo: Koshiro K/Shutterstock.com
After years of operating as a non-profit organisation, OpenAI is planning to convert to a for-profit company and grant CEO Sam Altman equity. The move has stirred considerable controversy lately, with a wave of resignations at OpenAI and whistleblowers raising concerns about the ChatGPT maker's approach to safety and security.
What’s going on at OpenAI?
OpenAI started in 2015 as a non-profit research organisation and transitioned to a capped-profit structure in 2019 to attract investment. That structure limits investor returns to a hundred times the original investment, with anything beyond the cap flowing back to OpenAI's main mission: to advance AI for the benefit of all humanity. So while OpenAI has been making profits, the way those profits are structured has always incentivised AI research and development while keeping safeguards in place to protect the privacy and security of users and the AI models themselves.
The move to a fully for-profit structure is still in the works. While the exact structure hasn't been revealed yet, chances are OpenAI will pivot to a public benefit corporation (PBC). Instead of maximising profits for investors like a traditional for-profit business, a PBC is legally required to consider the broader impact of its decisions on stakeholders, the environment, and society as a whole.
So no, OpenAI isn't turning into a profit-hungry, data-devouring AI company with no morals. However, the transition to an uncapped-profit model means the non-profit board will no longer control the company, which could shift its priorities. Thanks to ChatGPT and several other products, the company already holds a massive amount of potentially sensitive data. It's paramount that users remain protected and aren't sold out for more profit.
Uncapped profits putting safety at risk?
If you’ve also integrated ChatGPT as one of the tools you use to go about your day, the change can impact how your interactions with ChatGPT are processed and how safe using ChatGPT itself becomes. However, these changes might not be apparent at once.
Technically, OpenAI's transformation into a PBC shouldn't impact you as a user. If anything, shifting to a PBC structure will likely formalise the company's ethical obligations, force it to disclose more about its decision-making processes, and require it to balance profit-making with the product's societal benefit.
But look at OpenAI's original objective: to build an artificial general intelligence (AGI) system, which it describes as an AI system that's "generally smarter than humans." None of the many companies in the AI industry has developed an AGI yet, and there's much debate about when such a system might become a reality. It is, however, one of the breakthroughs that most worries industry experts.
With the tech sector led by the likes of Google, Microsoft, Meta, Anthropic, and, of course, OpenAI, experts worry that the race to develop an AGI means safety will take a back seat. A world where a smarter-than-human AI takes over might be a distant prospect, but if development continues unchecked at its current pace, it could come true.
William Saunders, a former safety researcher at OpenAI, stated in written testimony to the US Senate that there's "a real risk they will miss important dangerous capabilities in future AI systems," adding that he has "lost faith" in OpenAI's ability to make responsible decisions about AGI. With top-level executives like CTO Mira Murati resigning left and right, Saunders' concerns seem legitimate.
OpenAI, on the other hand, claims that its approach puts safety first, even announcing that its safety and security committee will become independent. The company's non-profit wing will also remain in existence, but it's too soon to say whether it'll have any real influence over the future of the company and its products.