AI Assistants Gone Rogue: Zenity Exposes Security Flaws in Popular Enterprise Tools
Researchers at Zenity showcased how enterprise AI assistants, like ChatGPT, can be exploited to steal data, often without user interaction. They demonstrated how attackers can manipulate AI integrations in tools like Google Drive and Salesforce Einstein, highlighting cybersecurity risks in the age of generative AI.

Hot Take:
AI assistants: the well-meaning but gullible interns of the digital world. They’re working hard, but it seems they’ve left the security door wide open and invited the hackers in for tea and cookies. So, while they might be generating those quarterly reports faster than ever, you might want to check if they’re also sending them to Mr. Hacker at 123 Cybercrime Lane.
Key Points:
- Zenity researchers demonstrated vulnerabilities in AI assistants at Black Hat conference.
- AI integrations with enterprise tools open new cybersecurity risks.
- Zenity showcased attacks on AI tools like ChatGPT, Copilot, and Salesforce Einstein.
- Some AI vulnerabilities have been patched, but others remain unaddressed.
- The hacks can be executed with minimal or no user interaction.
Already a member? Log in here