Okay, deep breath, let's get this over with. In the grand act of digital self-sabotage, we've littered this site with cookies. Yep, we did that. Why? So your highness can have a 'premium' experience or whatever. These traitorous cookies hide in your browser, eagerly waiting to welcome you back like a guilty dog that's just chewed your favorite shoe. And, if that's not enough, they also tattle on which parts of our sad little corner of the web you obsess over. Feels dirty, doesn't it?
When AI Goes Rogue: The Hilarious Struggle of AI Jailbreaking Cat-and-Mouse
Who knew AI could have a jailbreak problem? Our investigation into jailbreaking 17 popular GenAI web products reveals some shocking vulnerabilities. Turns out, these apps have more escape routes than a Hollywood heist movie. Despite robust safety measures, LLM jailbreaks are as effective as ever, proving there’s always a way to break free.

Hot Take:
Well, it seems like all those GenAI products have been taking jailbreaking lessons from Houdini! Despite their attempts at putting up strong guardrails, it turns out they’re as effective as a chocolate teapot in a heatwave. This investigation into jailbreaking popular AI web products is like finding out your super-secure bank vault can be opened with a toothpick. Time to up the security game, folks!
Key Points:
- All 17 GenAI web products tested are vulnerable to jailbreaking techniques.
- Single-turn jailbreak strategies, like storytelling, remain surprisingly effective.
- Multi-turn strategies outperformed single-turn in achieving safety violations.
- Most apps resist training and personal data leakage, but one app got caught with its data pants down.
- Enhanced alignment in LLMs has made some old tricks like “DAN” less effective.