OpenAI’s ChatGPT macOS app stored chats in plaintext on users’ computers, making them easily accessible to malicious actors or apps with access to the machine. The app was announced at OpenAI’s Spring Update event in May 2024 and became available to all users in June.
Cybersecurity expert Pedro José Pereira Vieito discovered the issue and demonstrated it in real time, showing how easily the stored chat files could be located and read by other software running on the same machine. The finding raised significant concerns about the security of sensitive information exchanged through the ChatGPT app.
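To illustrate the class of problem, the sketch below shows how any non-sandboxed process running as the same user could enumerate files in another app’s Application Support folder. The com.openai.chat directory name is an assumption used purely for illustration; OpenAI has not detailed the exact location or format of the stored conversations.

```swift
import Foundation

// Minimal sketch: a non-sandboxed process enumerating another app's
// Application Support directory. The "com.openai.chat" folder name is a
// hypothetical placeholder used for illustration only.
let support = FileManager.default
    .urls(for: .applicationSupportDirectory, in: .userDomainMask)[0]
let chatDir = support.appendingPathComponent("com.openai.chat")

// No special privileges are needed: files owned by the same user are readable
// unless the writing app encrypts them.
if let files = try? FileManager.default.contentsOfDirectory(
    at: chatDir, includingPropertiesForKeys: nil) {
    for file in files {
        print("Readable by any local app:", file.lastPathComponent)
        // If the contents were plaintext, they could simply be printed:
        // print((try? String(contentsOf: file, encoding: .utf8)) ?? "")
    }
}
```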
In response, OpenAI released an update to the app, implementing encryption to protect user chats.
“We are aware of this issue and have shipped a new version of the application which encrypts these conversations,” Taya Christianson, an OpenAI spokesperson, told The Verge.
After the update, Vieito’s app could no longer access the conversations, indicating that the encryption measures were effective. Vieito explained his initial discovery, noting that his curiosity about OpenAI’s choice to bypass app sandbox protections led him to investigate where the app data was stored.

Sandboxing is a security technique that isolates an app so it can access only the resources and user data it actually needs, reducing the risk that malware or a compromised app affects the rest of the system.
Because OpenAI distributes the ChatGPT app solely through its own website, it is exempt from Apple’s sandboxing requirements, which apply only to apps distributed via the Mac App Store.
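For context, Mac App Store apps must opt into the App Sandbox via the com.apple.security.app-sandbox entitlement, and a sandboxed process is given a per-app container in place of the user’s real home directory. The sketch below is a rough heuristic, not an official API, for checking at runtime whether a process is running inside such a container.

```swift
import Foundation

// Rough heuristic: sandboxed Mac apps report a per-app container
// (~/Library/Containers/<bundle-id>/Data) as their home directory,
// while non-sandboxed apps see the real user home (/Users/<name>).
let home = NSHomeDirectory()
let looksSandboxed = home.contains("/Library/Containers/")
print(looksSandboxed
    ? "Running inside an App Sandbox container: \(home)"
    : "Not sandboxed; normal user-level file access: \(home)")
```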
The incident underscores the importance of stringent security measures for applications handling sensitive user data. While OpenAI routinely reviews ChatGPT conversations for safety and model training purposes, this access should not extend to unauthorised third parties.
In June 2024, the Centre for Investigative Reporting sued Microsoft and OpenAI for copyright infringement. Recently, it was also reported that AI tools were trained on photos of 190 Australian children.
Also, a novel jailbreak technique called Skeleton Key was recently discovered that could allow users to generate forbidden answers from AI models. It was the second such technique after ‘many-shot jailbreaking’, discovered in April.
In the News: NCA shuts down 593 Cobalt Strike servers in a global operation