Photo: Camilo Concha/Shutterstock.com
OpenAI’s ChatGPT, a widely used language model, is grappling with security concerns after a user reported unauthorised access to their account originating from Sri Lanka.
As per Ars Technica, Chase Whiteside, the affected user, initially dismissed suspicions that his account had been compromised. However, a subsequent investigation by OpenAI revealed unauthorised logins from Sri Lanka during the same time frame in which the conversations were created within the account.
OpenAI representatives have classified the incident as an account takeover, suggesting a pattern consistent with the account being part of a shared pool of compromised identities.
“Based on our findings, the users’ account login credentials were compromised and a bad actor then used the account. The chat history and files being displayed are conversations from misuse of this account, and was not a case of ChatGPT showing another user’s history,” said OpenAI.
Whiteside, who accessed his account from Brooklyn, New York, claimed to have used a robust nine-character password combining upper- and lower-case letters and special characters. Although he has since changed his password, concerns persist about the lack of standard security features on ChatGPT.
Notably, the absence of two-factor authentication (2FA) and login IP tracking raises questions about the platform’s commitment to user security.
Whiteside’s screenshots revealed confidential details related to a pharmacy’s prescription drug portal, including usernames and passwords connected to the pharmacy’s support system, as well as conversations showing an employee troubleshooting customer issues with the help of ChatGPT. The leaked information also contained the app name, the store number and details of the interactions.
Other leaked information included an unpublished research proposal, a presentation in progress and a script written in PHP. The users in these conversations appear to be unrelated.
To avoid divulging sensitive details, users should not include any confidential information in ChatGPT prompts.
This security incident is not the first for ChatGPT. In November 2023, reports showed that threat actors can use divergence attacks to extract GPT’s training data. In May, it was reported that companies like Meta are trying to counter ChatGPT-themed scams on their social media platforms.
In February, reports claimed hackers were using ChatGPT to steal credentials. These recurring problems have led companies such as Samsung and Apple to restrict their employees’ use of ChatGPT.
In the News: Microsoft Edge is importing Chrome data without your consent