AI’s Comedy of Errors: Why Cybercriminals Aren’t Buying the Hype
Cybercriminals remain skeptical about AI capabilities, as large language models continue to trip over their own virtual feet in vulnerability research and exploit development. Despite claims of AI prowess, many threat actors still see these tools as unreliable partners in crime, often needing more hand-holding than a first-time skydiver.

Hot Take:
Looks like our sassy cyber nemeses are sticking to the old-school ways of hacking because AI just can’t hack it yet! Maybe these AI models should stick to their day jobs of writing Shakespearean sonnets and generating cat memes—exploiting vulnerabilities still seems out of their league. Until AI can stop falling flat on its digital face, hackers will keep their faith in the trusty keyboard and caffeine combo.
Key Points:
- LLMs are underperforming in vulnerability discovery and exploitation tasks, leaving cybercriminals skeptical.
- Research by Forescout tested roughly 50 AI models spanning open-source, underground, and commercial offerings; none completed all of the assigned tasks successfully.
- Open-source models were particularly unreliable, while commercial models fared slightly better.
- AI’s performance in vulnerability research (VR) and exploit development (ED) is improving, but it hasn’t yet revolutionized either process.
- Core security measures like least privilege and zero trust remain crucial defenses against AI-assisted attacks.