A hacker identified as ‘Gloomer’ has reportedly breached OmniGPT, an AI-powered chatbot and productivity platform, exposing sensitive user data. The breach, disclosed on Breach Forums, reportedly affected 30,000 users and compromised over 34 million chat messages spanning multiple regions, including Brazil, Italy, India, Pakistan, China, and Saudi Arabia.
According to Hackread, the leaked data consists of user email addresses, phone numbers, chat logs, and links to uploaded documents. Some files contain confidential information, including credentials, billing details, and API keys, which could be exploited for malicious activities.
The hacker also claimed in a forum post that the dataset includes valuable information such as authentication credentials and corporate documents.
If verified, this could be one of the largest AI breaches involving conversation data. The exposed data could be used in future attacks, including phishing, identity theft, and financial fraud. The breach could also carry severe legal and regulatory consequences, especially in Europe, where the GDPR mandates strict data protection and transparency in disclosing security breaches.
Failure to comply with these regulations could result in fines and reputational damage for OmniGPT.

Samples from the leaked data indicate that conversations on OmniGPT include technical discussions, office projects, university assignments, market analysis reports, and police verification certificates, with the associated files uploaded to the platform’s servers.
This underscores why users shouldn’t upload sensitive documents to AI platforms.
OmniGPT is a service that integrates multiple AI models, including ChatGPT-4, Claude 3.5, Perplexity, Google Gemini, and Midjourney, into a single interface. The platform offers features like document management, image analysis, team collaboration tools, and encrypted data storage.
OmniGPT has yet to issue an official statement. In the meantime, OmniGPT users should change their passwords, enable two-factor authentication, and monitor their email accounts for suspicious activity. Users should also avoid engaging with unsolicited emails or messages from strangers, as these are common vectors for phishing attacks.
Recently, a data breach at DeepSeek exposed highly sensitive information, including internal logs, chat histories, and secret authentication keys. In addition, a threat actor claimed to have obtained access credentials for over 20 million OpenAI accounts.
In the News: Hacker found selling 20 million OpenAI credentials; AI firm claims no breach