
Italian data watchdog knocks on DeepSeek’s door


Italy’s data protection authority, the GPDP, has asked Chinese AI firm DeepSeek to disclose what personal data it collects, for what purposes, from which sources, on what legal basis, and where that data is stored. Contacted companies must respond within 20 days of the request.

Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, the two firms behind DeepSeek’s web and mobile apps, received the notice from the Italian data watchdog. Such notices are becoming standard practice for data protection agencies vetting new and upcoming AI chatbots; in this case, the GPDP says in its announcement that it is acting to protect the data of millions of users in Italy from being used to train the underlying AI model.

The authority also enquired about the training data fuelling DeepSeek’s AI models. If user data is scraped from the web, the GPDP wants to know how DeepSeek informs, or plans to inform, both registered and unregistered users about the processing of their data.

The GPDP is one of Europe’s stricter data protection authorities and has dealt with massively popular AI chatbots before. In April 2023, it temporarily banned ChatGPT in the country after discovering that OpenAI was collecting user data unlawfully and had no age-verification measures to keep out minors, even though ChatGPT is intended for users aged 13 and above. The authority also found that OpenAI had no legal basis to justify the mass collection and processing of users’ personal data to train its AI models.

After meeting the regulator’s demands, OpenAI got ChatGPT working in Italy again. Regulatory demands, however, are just one item on DeepSeek’s plate: after being hit with a large-scale DDoS attack, the company has already been forced to disable new registrations for its DeepSeek V3 chat platform.

Additionally, a misconfigured ClickHouse database associated with the company was found publicly accessible without authentication, exposing highly sensitive information such as internal logs, chat histories, and secret authentication keys, a leak that could compromise both user security and proprietary company data.

In the News: India to develop LLM AI within 10 months

Yadullah Abidi

Yadullah is a Computer Science graduate who writes/edits/shoots/codes all things cybersecurity, gaming, and tech hardware. When he's not, he streams himself racing virtual cars. He's been writing and reporting on tech and cybersecurity with websites like Candid.Technology and MakeUseOf since 2018. You can contact him here: yadullahabidi@pm.me.
