Fake OpenAI AI Model on Hugging Face Spreads Infostealer Malware to Thousands
A malicious repository on Hugging Face disguised as an OpenAI AI model reportedly delivered infostealer malware to Windows systems and accumulated around 244,000 downloads before it was removed, according to cybersecurity researchers from HiddenLayer.
The fake repository, named “Open-OSS/privacy-filter,” imitated OpenAI’s legitimate Privacy Filter release by copying much of the original project’s description and setup instructions. HiddenLayer said the attackers added a malicious loader.py file designed to covertly download and execute credential-stealing malware on victims’ machines.
Researchers noted that the repository’s unusually high download count and rapid rise in popularity may have been artificially inflated to make the project appear legitimate. It reportedly reached Hugging Face’s trending list and gathered hundreds of likes in under 18 hours.
The attack highlights growing concerns around public AI model registries becoming part of the software supply chain threat landscape. Many developers and companies directly clone AI repositories into corporate environments that may contain sensitive source code, cloud credentials, and internal systems, making compromised AI projects especially dangerous.
According to HiddenLayer, the fake project instructed users to run a Windows batch file or execute python loader.py on Linux and macOS systems. The malicious script initially appeared to function like a normal AI model loader before launching a hidden infection chain.
The malware reportedly disabled SSL verification, decoded a hidden URL, retrieved remote commands, and executed PowerShell instructions on Windows machines. Attackers allegedly used external infrastructure to dynamically change malware payloads without modifying the public repository itself.
Researchers said the malware established persistence on infected systems by creating scheduled tasks disguised as legitimate Microsoft Edge update processes. The final payload was identified as a Rust-based infostealer targeting Chromium and Firefox browsers, Discord data, cryptocurrency wallets, FileZilla configurations, and system information. The malware also attempted to disable certain Windows security protections.
HiddenLayer additionally discovered six other Hugging Face repositories using nearly identical malicious code and infrastructure, suggesting a broader coordinated campaign targeting AI development environments.
Security experts warn that AI repositories are becoming attractive attack vectors because they often contain executable scripts, dependency files, notebooks, and setup instructions—not just model files themselves. Traditional software supply chain scanning tools may struggle to detect these threats because the malicious behavior is often hidden in setup logic rather than standard dependencies.
HiddenLayer advised anyone who downloaded or executed files from the malicious repository on a Windows device to consider the system compromised and recommended fully re-imaging affected machines. Researchers also warned that browser sessions and authentication tokens could have been stolen, potentially allowing attackers to bypass multi-factor authentication in some cases.
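For defenders triaging potentially affected Windows machines, one starting point suggested by the persistence technique described above is reviewing scheduled tasks whose names mimic Microsoft Edge updates but whose actions point outside Microsoft's install directories. The sketch below is a hypothetical triage helper, not an indicator list from the actual campaign: the task names, paths, and CSV sample are invented for illustration, and it assumes you have captured `schtasks /query /v /fo CSV` output from the machine under investigation.

```python
import csv
import io

# Hypothetical triage helper: flag scheduled tasks whose names mention
# "Edge" but whose launch command points outside the usual Microsoft
# Edge/EdgeUpdate install directories. All names and paths below are
# invented examples, not real indicators of compromise.

LEGIT_PREFIXES = (
    r"c:\program files (x86)\microsoft\edgeupdate",
    r"c:\program files (x86)\microsoft\edge",
)

def suspicious_edge_tasks(schtasks_csv: str) -> list[str]:
    """Return task names that mention 'edge' but run from an unexpected path.

    `schtasks_csv` is the text of `schtasks /query /v /fo CSV` captured
    on the Windows machine under investigation.
    """
    flagged = []
    for row in csv.DictReader(io.StringIO(schtasks_csv)):
        name = row.get("TaskName", "")
        action = row.get("Task To Run", "").lower().strip('"')
        # str.startswith accepts a tuple of allowed prefixes.
        if "edge" in name.lower() and not action.startswith(LEGIT_PREFIXES):
            flagged.append(name)
    return flagged

# Simulated two-task capture: a legitimate-looking Edge update task and
# a fake one launching a loader from a user-writable directory.
sample = (
    '"TaskName","Task To Run"\n'
    '"\\MicrosoftEdgeUpdateTaskMachineUA",'
    '"c:\\program files (x86)\\microsoft\\edgeupdate\\update.exe /ua"\n'
    '"\\EdgeUpdateCheck","c:\\users\\public\\loader.exe"\n'
)

print(suspicious_edge_tasks(sample))  # only the second task is flagged
```

A check like this is only a first-pass filter, consistent with HiddenLayer's advice: because the stealer targets credentials and session tokens, a flagged machine should still be treated as fully compromised and re-imaged rather than merely cleaned.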
Hugging Face has since removed the repository from its platform.
