Keras Vulnerability Exposes AI Models to Sneaky File Heists and SSRF Shenanigans
A Keras vulnerability, tracked as CVE-2025-12058, lets attackers read arbitrary local files or mount server-side request forgery (SSRF) attacks. The deep learning API flaw allows malicious models to sneak into your system like a cat burglar with a PhD in AI. Before you know it, your SSH keys could be in the hands of an attacker. Yikes!

Hot Take:
Looks like Keras took the idea of “open source” a bit too literally! With attackers sneaking in through its backdoor, it’s like a surprise party—but with hackers and less cake. Somebody get this API a bouncer!
Key Points:
- Keras vulnerability allows attackers to load arbitrary local files or conduct SSRF attacks.
- The flaw exists in the StringLookup and IndexLookup preprocessing layers.
- Exploitation involves distributing malicious Keras model files whose crafted vocabulary parameter points at an attacker-chosen local path or remote URL.
- Potential impact includes compromised SSH access and cloud resources.
- Fixed in Keras version 3.11.4 with improved loading restrictions.
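Since the fix landed in Keras 3.11.4, the simplest defense is to refuse to load untrusted model files on anything older. The sketch below is a minimal, hypothetical version gate (the helper names and the hand-rolled version parsing are our own, not part of Keras); it only illustrates the "check before you load" idea from the points above.

```python
# Hypothetical mitigation sketch: treat any Keras release before 3.11.4
# (the patched version, per the advisory) as vulnerable to CVE-2025-12058.
# Version parsing is done by hand to keep the example dependency-free.

MIN_SAFE = (3, 11, 4)  # first Keras release with the loading restrictions


def parse_version(v: str) -> tuple:
    """Turn a version string like '3.11.4' into (3, 11, 4) for comparison."""
    return tuple(int(part) for part in v.split(".")[:3])


def is_patched(keras_version: str) -> bool:
    """True if this Keras version includes the CVE-2025-12058 fix."""
    return parse_version(keras_version) >= MIN_SAFE


def guard_model_load(keras_version: str, path: str) -> str:
    """Refuse to load an untrusted .keras file on a vulnerable version.

    In real code you would read keras.__version__ and then call
    keras.models.load_model(path) only after this check passes.
    """
    if not is_patched(keras_version):
        raise RuntimeError(
            f"Keras {keras_version} is vulnerable to CVE-2025-12058; "
            f"upgrade to 3.11.4+ before loading {path!r}"
        )
    return path  # safe to hand off to the real loader
```

In practice you would compare `keras.__version__` against the patched release and, even on a fixed version, still avoid loading model files from sources you don't trust.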
