ShadowLeak: The Email Heist that Exposed ChatGPT’s Deep Research Flaw!
ChatGPT’s Deep Research had a “ShadowLeak” bug that let attackers exfiltrate Gmail secrets with just one sneaky email. The flaw weaponized AI’s helpfulness, making data disappear without a click. OpenAI patched it, but not before it showed how AI could become the perfect accomplice in email espionage.

Hot Take:
OpenAI’s Deep Research feature is proving that even AI needs a security blanket. Who knew that lurking behind a friendly AI assistant could be a digital pickpocket just waiting for its chance to swipe your Gmail secrets? It’s almost like finding out that your childhood teddy bear was a stealthy ninja. OpenAI may have patched the leak, but this is a reminder that even virtual assistants can serve up more than just witty banter!
Key Points:
- Radware uncovered a critical flaw in OpenAI’s Deep Research tool, named “ShadowLeak,” which let attackers extract Gmail data with a simple email.
- The attack utilized hidden instructions in an email’s HTML, executing commands from OpenAI’s servers unnoticed by security systems.
- ShadowLeak posed risks beyond Gmail, affecting any integration with insufficient input sanitization.
- The flaw was fixed by OpenAI after Radware reported it, but the specifics of the fix weren’t disclosed.
- Radware recommends treating AI tools as privileged users and advises tightening access controls and logging actions in the cloud.
Already a member? Log in here