Photo: Tada Images/Shutterstock.com
An Austrian privacy advocacy group, NOYB, has filed a complaint against OpenAI, alleging that the company's chatbot generated an inaccurate date of birth for its client, a public figure.
This situation has raised significant questions about the reliability of AI-generated data and the extent to which such systems comply with the EU’s General Data Protection Regulation (GDPR).
OpenAI says that it can block responses to certain prompts, but doing so would cause GPT to filter out all information about the complainant rather than just the incorrect detail. The company also denied the complainant's access request.
Under the GDPR, personal data must be accurate, and individuals have the right to have inaccuracies rectified or deleted.
The problem with large language models (LLMs) such as GPT is that OpenAI openly admits they can generate factually incorrect information and cannot say where that information comes from. The company is also silent about what data ChatGPT stores on individual people.
ChatGPT, a well-known AI model developed by OpenAI, predicts responses based on user inputs. However, recurring factual errors, often called 'hallucinations' in AI parlance, have prompted scrutiny from privacy advocacy groups like NOYB. These advocates stress the critical importance of data accuracy, particularly when sensitive personal information is handled under GDPR standards.
In this specific case, NOYB's client alleges that ChatGPT returned an incorrect date of birth, which breaches GDPR standards and underscores broader concerns about the reliability and accountability of AI-generated data.
“Making up false information is quite problematic in itself. But, when it comes to false information about individuals, there can be serious consequences. It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals,” says Maartje de Graaf, data protection lawyer at NOYB.
OpenAI has faced regulatory pushback in several countries. Last year, the Italian DPA imposed a temporary restriction on the chatbot, and in April 2023, Canadian authorities opened an investigation into OpenAI's data use and disclosure practices.
In June 2023, a court case was filed in California alleging that OpenAI stole data while training ChatGPT. More recently, Elon Musk filed a lawsuit against OpenAI, alleging a breach of its founding agreements.