
Malicious AI model in Hugging Face creates backdoor on a user’s device

  • by Kumar Hemant
  • 3 min read

The Hugging Face machine learning platform is vulnerable to model-based attacks, allowing hackers to compromise Hugging Face users’ environments through code execution.

JFrog Security Research has identified a malicious model on the platform, showcasing a potential backdoor that, when activated, could grant attackers full control over compromised machines.

According to cybersecurity researchers, the primary concern is code execution triggered when specific ML models are loaded from untrusted sources. For instance, models saved in the ‘pickle’ format, commonly used for serialising Python objects, can execute arbitrary code the moment they are loaded. While Hugging Face has implemented security measures such as malware, pickle, and secrets scanning, a recent incident shed light on the platform’s vulnerability to real threats.
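To see why this matters, the snippet below is a minimal, benign illustration of the mechanism, not the actual payload JFrog found: a pickled object can run arbitrary code the moment it is deserialised, because pickle will happily call whatever the object’s __reduce__ hook returns.

```python
import pickle
import os

# A minimal, hypothetical illustration of pickle's code-execution risk.
# The __reduce__ hook tells pickle how to rebuild the object -- and pickle
# calls whatever callable it returns when the file is loaded.
class MaliciousPayload:
    def __reduce__(self):
        # A real attacker would launch a reverse shell here; this benign
        # stand-in just proves code runs at load time.
        return (os.system, ("echo 'code executed on load'",))

# "Saving a model" that carries the payload
with open("model.bin", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# Simply loading the file executes the embedded command
with open("model.bin", "rb") as f:
    pickle.load(f)
```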

Malicious models in Hugging Face repositories. | Source: JFrog

JFrog’s Security Research team conducted rigorous scans on Hugging Face repositories, focusing on the model files themselves. PyTorch models pose the highest potential risk for executing malicious code by a significant margin, followed by TensorFlow Keras models.

Researchers identified around 100 models housing harmful payloads, excluding false positives, showing that attackers’ efforts are concentrated on PyTorch and TensorFlow models hosted on Hugging Face.
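JFrog’s scanning tooling is not detailed in the article, but the general idea can be sketched as a static pass over the pickle byte stream that flags references to dangerous modules without ever deserialising the file. The snippet below is a simplified, hypothetical illustration under that assumption; the module blocklist and filename are placeholders, not JFrog’s actual rules.

```python
import pickletools

# Modules whose presence in a pickle stream is a strong red flag.
SUSPICIOUS_MODULES = {"os", "subprocess", "socket", "builtins", "runpy"}

def scan_pickle(path: str) -> list[str]:
    """Statically disassemble a pickle file (without executing it) and
    report GLOBAL/STACK_GLOBAL references to suspicious modules."""
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    recent_strings = []  # string pushes preceding a STACK_GLOBAL opcode
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # Argument is "module name" as a single space-separated string.
            module = str(arg).split(" ")[0]
            if module in SUSPICIOUS_MODULES:
                findings.append(f"GLOBAL reference to {arg!r}")
        elif opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            recent_strings.append(str(arg))
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            module, name = recent_strings[-2], recent_strings[-1]
            if module in SUSPICIOUS_MODULES:
                findings.append(f"STACK_GLOBAL reference to {module}.{name}")
    return findings

print(scan_pickle("model.bin"))  # flags the os.system reference from the earlier payload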

A PyTorch model uploaded by a user named baller423, since deleted, proved to be a crucial discovery in this investigation. The model, hosted in the repository ‘baller423/goober2,’ harboured a payload that opened a reverse shell connection to an IP address within the KREOnet range. Such a connection suggests a genuine attack attempt rather than a mere demonstration of the vulnerability, highlighting the need for stringent security measures when dealing with ML models from untrusted sources.

Source: JFrog
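On the defensive side, one widely recommended mitigation, a general practice rather than something specific to this incident, is to avoid unpickling untrusted checkpoints in the first place: restrict torch.load to plain tensor data, or prefer the pickle-free safetensors format. The filenames below are placeholders.

```python
import torch
from safetensors.torch import load_file  # pip install safetensors

# Refuse to unpickle arbitrary objects: weights_only=True restricts
# torch.load to tensors and primitive containers (available since PyTorch 1.13).
state_dict = torch.load("untrusted_model.pt", weights_only=True)

# Or skip pickle entirely: safetensors stores raw tensor data and
# cannot carry executable payloads.
state_dict = load_file("untrusted_model.safetensors")
```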

To further understand the attackers’ intentions, researchers established a honeypot, luring potential attackers into interacting with it. While a connection to the attacker’s server was established, no commands were received before the connection was abruptly terminated. This highlights the ongoing challenges in identifying and neutralising emerging threats within AI ecosystems.
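As a rough illustration of the honeypot idea described above, one could impersonate an infected machine by opening the same outbound connection the payload would, then logging any commands instead of executing them. The address and port below are documentation placeholders, not the actual KREOnet endpoint, and this is not JFrog’s published setup.

```python
import socket

ATTACKER_HOST = "198.51.100.23"  # placeholder (TEST-NET-2 range), not the real IP
ATTACKER_PORT = 4444             # assumed port for illustration

# Pretend to be a compromised machine: dial out like the reverse shell would,
# but only record what the attacker sends -- never execute it.
with socket.create_connection((ATTACKER_HOST, ATTACKER_PORT), timeout=30) as conn:
    conn.settimeout(60)
    try:
        while True:
            data = conn.recv(4096)
            if not data:
                print("connection closed by the remote side")
                break
            print("command received (not executed):", data.decode(errors="replace"))
    except socket.timeout:
        print("no commands received before timeout")
```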

The incident underscores the susceptibility of AI ecosystems to supply chain attacks, with recently disclosed vulnerabilities in the Transformers library amplifying the risk of transitive attacks.

Initiatives like Huntr, a bug bounty platform for AI CVEs, can play a crucial role in fortifying the security posture of AI models.



Kumar Hemant

Deputy Editor at Candid.Technology. Hemant writes at the intersection of tech and culture and has a keen interest in science, social issues and international relations. You can contact him here: kumarhemant@pm.me
