Jailbreaking LLMs: When AI Becomes a Mischievous Judge!
Cybersecurity researchers have discovered a new jailbreak technique, Bad Likert Judge, that sneaks past large language models' safety guardrails by asking the model to act as a judge scoring the harmfulness of responses on a Likert scale, then asking it to generate examples that match those scores. This clever trick can boost attack success rates by more than 60% on average compared with plain attack prompts, proving that even state-of-the-art models aren't safe from the sneaky art of scale manipulation!
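For readers curious what that looks like in practice, here is a minimal, deliberately non-operational sketch of the multi-turn flow as the technique is generally described: the model is first cast as a "judge" scoring harmfulness on a Likert scale, then asked to produce example responses for each score. The `Turn` structure and the wording of each step are illustrative assumptions, not the actual prompts used in the research.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    """One step in the multi-turn 'Bad Likert Judge' conversation."""
    role: str     # "attacker" or "model"
    purpose: str  # what this turn is trying to achieve (description only)

# Illustrative outline of the attack flow described in the research.
# The harm category and step descriptions are placeholders, not real prompts.
bad_likert_judge_flow = [
    Turn("attacker", "Frame the model as a judge that rates responses for a "
                     "given harm category on a 1-to-N Likert scale."),
    Turn("model", "Accepts the judging role and restates the scoring rubric."),
    Turn("attacker", "Ask the 'judge' to write example responses that would "
                     "earn each score on the scale."),
    Turn("model", "Generates examples; the highest-scoring one may contain "
                  "exactly the content the guardrails were meant to block."),
    Turn("attacker", "Optionally ask for the top-scoring example to be "
                     "expanded with more detail."),
]

for i, turn in enumerate(bad_likert_judge_flow, start=1):
    print(f"Turn {i} ({turn.role}): {turn.purpose}")
```

The point the sketch makes is that no single turn looks obviously malicious; the harm emerges from combining the judging frame with the request for high-scoring examples.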

Hot Take:
Well, it seems that even our digital friends are susceptible to a little bit of flattery and manipulation. The so-called “Bad Likert Judge” isn’t just a quirky name for a band, but a clever way to trick those large language models into producing naughty responses. Who knew AI could be so easily swayed by a number scale? Perhaps it’s time we trained these models to recognize when they’re being sweet-talked into mischief. After all, nobody likes a pushover, even if they’re made of code.
Key Points:
- Researchers from Palo Alto Networks Unit 42 discovered a new jailbreak method called “Bad Likert Judge”.
- The technique asks the model to act as a judge, scoring the harmfulness of responses on a Likert scale, then prompts it to generate examples matching each score; the top-scoring example can carry the harmful content the guardrails were supposed to block.
- The multi-turn attack significantly increases attack success rates on state-of-the-art LLMs.
- Related prompt-injection style attacks such as many-shot jailbreaking exploit the model's long context window and attention by packing the prompt with crafted examples.
- Content filters applied to model outputs can drastically reduce the success rate of such attacks, as sketched below.
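To make that last point concrete, here is a minimal sketch of an output-side content filter sitting between the model and the user. The blocklist, the simple substring matching, and the function names are illustrative assumptions; production systems typically rely on trained safety classifiers or vendor moderation services rather than keyword lists.

```python
from dataclasses import dataclass

# Placeholder blocklist; a real deployment would use a trained safety
# classifier or a content-moderation service instead of keywords.
BLOCKED_MARKERS = ["step-by-step instructions for", "how to build a weapon"]

@dataclass
class FilterResult:
    allowed: bool
    reason: str

def filter_model_output(text: str) -> FilterResult:
    """Very rough output-side content check (illustrative only)."""
    lowered = text.lower()
    for marker in BLOCKED_MARKERS:
        if marker in lowered:
            return FilterResult(False, f"blocked: matched {marker!r}")
    return FilterResult(True, "ok")

def respond(model_output: str) -> str:
    """Gate the model's answer before it reaches the user."""
    verdict = filter_model_output(model_output)
    if not verdict.allowed:
        return "Sorry, I can't help with that."
    return model_output

print(respond("Here is a harmless recipe for banana bread."))
```

Even a crude gate like this shows why an external filtering layer helps: it judges the final output regardless of whatever role the model was sweet-talked into playing, which is what blunts these multi-turn tricks.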