AI Vulnerabilities Exposed: New Flaws in Ollama & NVIDIA Triton – A Security Comedy of Errors
Security researchers at Fuzzinglabs have uncovered new, exploitable vulnerabilities in the AI inference systems Ollama and NVIDIA Triton Inference Server. CEO Patrick Ventuzelo will present the details at Black Hat Europe 2025, highlighting a shift in AI security research toward the infrastructure that serves models, not just the models themselves. The Ollama flaws include a denial-of-service bug, among others.

Hot Take:
Who knew that AI could be so easily tricked into being your unwitting accomplice? It’s like finding out your seemingly innocent Roomba is plotting to overthrow the household hierarchy while you sleep. Fuzzinglabs’ discovery of vulnerabilities in Ollama and NVIDIA Triton is like catching your tech-savvy catfish in the act before it swims away with your digital secrets. Stay woke, folks—AI’s got its own bag of tricks!
Key Points:
- Fuzzinglabs found vulnerabilities in AI inference systems Ollama and NVIDIA Triton.
- Ollama’s vulnerabilities include a denial-of-service bug, authentication bypass, arbitrary file copy, and a heap overflow bug.
- NVIDIA Triton’s flaw is a command injection vulnerability that could lead to remote code execution.
- AI security research is shifting its focus to infrastructure vulnerabilities, not just model manipulation.
- Defenders are advised to double down on access control and isolate AI infrastructure from other critical systems.
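That last point about access control is worth making concrete. Inference servers are often deployed on a LAN with no authentication in front of them, so a minimal first step is to gate every request behind a token check before it ever reaches the backend. Below is an illustrative sketch (not code from the Fuzzinglabs research): a helper you might drop into a reverse proxy sitting in front of an inference endpoint. The function name and header convention are assumptions for the example.

```python
import hmac

def is_authorized(headers: dict, expected_token: str) -> bool:
    """Reject requests lacking a valid bearer token before they
    reach the inference backend."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    supplied = auth[len("Bearer "):]
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(supplied, expected_token)

# Example: only forward to the model server when this returns True.
# is_authorized({"Authorization": "Bearer s3cret"}, "s3cret")
```

Pair a check like this with network isolation (firewall rules or a separate segment for AI infrastructure) so that even an unauthenticated bug in the inference server itself isn't reachable from the rest of your environment.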
