
OpenAI faces lawsuit for alleged data theft in training ChatGPT

  • by Kumar Hemant
  • 2 min read

Photo: Camilo Concha/Shutterstock.com

OpenAI, the company behind the popular ChatGPT tool, has been hit with a proposed class-action lawsuit accusing it of unlawfully acquiring and utilising vast amounts of personal data from the internet to train its AI models.

The lawsuit raises concerns about data privacy and the responsible use of information in developing AI technologies.

The lawsuit, filed in a California federal court, alleges that OpenAI conducted secretive and massive data scraping activities, collecting personal data from various sources online without individuals’ knowledge, consent, or appropriate compensation. The complaint, spanning nearly 160 pages, claims that OpenAI seized an unprecedented volume of personal data, including virtually all internet exchanges, violating privacy rights.

OpenAI and its major investor, Microsoft, have yet to respond to the lawsuit or comment on the allegations. The lawsuit also seeks to hold Microsoft accountable as a defendant in the case.

Timothy K. Giordano, a partner at Clarkson, the law firm representing the plaintiffs, expressed concerns about OpenAI’s actions, stating that the company’s data scraping practices and the development of untested AI technologies without proper data protection and consent put everyone at an unacceptable level of risk.

There has been a growing number of lawsuits against companies involved in AI tool development. | Photo: Tada Images / Shutterstock.com

The lawsuit further asserts that OpenAI’s products utilise stolen private information, including personally identifiable data from hundreds of millions of internet users, without informed consent or knowledge, potentially including minors.

In addition to injunctive relief, which would temporarily halt OpenAI’s commercial use of its products, the lawsuit also seeks “data dividends” as financial compensation for individuals whose information was used to train OpenAI’s AI tools.

This legal challenge adds to the growing number of lawsuits faced by companies involved in AI development and highlights the need for increased transparency and accountability regarding data usage in AI models. New regulations are being discussed to enforce greater transparency, and court cases may compel companies like OpenAI to disclose information about the data they have employed.

In the News: EU launches four labs to test AI before market launch

Kumar Hemant

Deputy Editor at Candid.Technology. Hemant writes at the intersection of tech and culture and has a keen interest in science, social issues and international relations. You can contact him here: kumarhemant@pm.me
