Pickle Panic: Malicious ML Models Sneak Past Hugging Face’s Security with NullifAI Trickery

Cybersecurity researchers have found two malicious machine learning models on Hugging Face that use “broken” pickle files to dodge detection. Dubbed nullifAI, the sneaky approach exploits a gap in Picklescan’s defenses: because the unpickler executes opcodes in order, a payload placed ahead of the corrupted part of the stream still runs during deserialization, even though the same corruption trips up tools that try to decompile the file. Hugging Face has since updated its tooling to address the issue.
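
To see why a “broken” pickle can still bite, here is a minimal, harmless sketch of the idea (our own illustration, not the actual payloads found on Hugging Face): a `__reduce__` hook plants a call that fires during deserialization, and corrupting the tail of the stream makes decompilation fail without stopping that call from running first.

```python
import pickle
import pickletools

class Payload:
    # __reduce__ tells the unpickler to call print(...) at load time;
    # a real attack would invoke something like os.system instead.
    def __reduce__(self):
        return (print, ("payload executed during deserialization",))

# Serialize the payload, then corrupt the end of the stream so that
# full decompilation (and a complete load) fails partway through.
blob = pickle.dumps(Payload())
broken = blob[:-1] + b"\x00"  # clobber the trailing STOP opcode

# A tool that insists on decompiling the whole stream errors out here...
try:
    pickletools.dis(broken)
except Exception as exc:
    print("pickletools.dis failed:", exc)

# ...but loading the same bytes still runs the payload, because the
# malicious opcodes sit before the corrupted byte.
try:
    pickle.loads(broken)
except Exception as exc:
    print("pickle.loads raised only after the payload had run:", exc)
```

Run it and the payload message prints before either error appears, which is exactly the gap nullifAI leans on.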

Hot Take:

Who knew that adding a touch of “pickling” to machine learning models could make them so, well, rotten? Hackers have taken up culinary arts with a side of tech-savvy mischief, and it seems like Hugging Face is in for a sour surprise. Just when you thought your AI was safe, it turns out it might be harboring a secret: a love for pickles. But not the tasty kind.

Key Points:

  • Cybersecurity researchers found two malicious ML models on Hugging Face using “broken” pickle files.
  • The method has been dubbed “nullifAI” for its clever evasion of existing safeguards.
  • The models appear to be proofs of concept rather than part of an active supply-chain attack.
  • Pickle serialization is a known security risk, allowing arbitrary code execution.
  • Hugging Face’s security tool Picklescan failed to detect these sneaky models (a toy scanner sketch after this list shows how that can happen).
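
The toy scanner below is only a sketch in the spirit of opcode-scanning tools, not Picklescan’s actual implementation. It walks the pickle opcode stream looking for imports of dangerous globals, and shows how a corrupted stream that aborts the walk can be mistaken for a clean file unless the error itself is treated as suspicious.

```python
import pickletools

# Illustrative deny-list; real scanners keep a much larger one.
UNSAFE_GLOBALS = {("os", "system"), ("builtins", "eval"), ("builtins", "exec")}

def toy_scan(data: bytes) -> str:
    """Walk the pickle opcodes and flag references to dangerous globals.

    Toy example only: it checks just the GLOBAL opcode and ignores
    STACK_GLOBAL and other ways of reaching a callable.
    """
    try:
        for opcode, arg, _pos in pickletools.genops(data):
            if opcode.name == "GLOBAL":
                module, name = arg.split(" ", 1)
                if (module, name) in UNSAFE_GLOBALS:
                    return "malicious"
    except Exception:
        # The crux of the bypass: a deliberately broken stream aborts the
        # walk here. Reporting "clean" at this point lets a corrupted but
        # still-executable pickle slip through; flagging the error as
        # suspicious closes the gap.
        return "scan error - treat as suspicious"
    return "clean"

print(toy_scan(b"cos\nsystem\n(S'echo pwned'\ntR."))  # -> malicious
print(toy_scan(b"\x00garbage"))                        # -> scan error - treat as suspicious
```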
