ChatGPT Confidential: How Employees are Unintentionally Leaking Sensitive Data
Employees are getting chummy with ChatGPT, sharing PII and PCI data like it's juicy office gossip. According to LayerX, 77% of employees who use AI tools have pasted data into chatbot queries, and 22% of those pastes included sensitive information. With ChatGPT's enterprise penetration at 43%, CISOs have good reason to start sweating over data security.

Hot Take:
ChatGPT: The Swiss Cheese of Data Security? It's 2025, and apparently the best way to protect corporate secrets is to stop employees from chatting with their AI bestie! With employees turning ChatGPT into their personal diary, it's only a matter of time before your secret chili recipe ends up in the wrong hands. If AI were a pet, it'd be time to enroll it in obedience school.
Key Points:
- 45% of enterprise employees use generative AI tools, and 77% of those users have pasted data into chatbots.
- 22% of those pastes include PII or PCI data, creating data-leakage risk; a sketch of the kind of check that catches this follows the list.
- 82% of data pastes come from unmanaged personal accounts.
- ChatGPT is the most popular AI tool among enterprises, with 43% penetration.
- LayerX suggests enforcing Single Sign-On (SSO), which pushes usage onto managed corporate accounts where data flows can actually be monitored.
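
For the technically curious, here's a minimal sketch of the kind of DLP-style check a browser extension or egress proxy could run on prompt text before it reaches a chatbot. Everything here (the pattern set, the function names, the Luhn filter) is an illustrative assumption, not LayerX's actual detection logic.

```python
import re

# Hypothetical DLP-style prompt scanner (illustrative, not LayerX's tooling).
# Flags common PII/PCI patterns in text before it is sent to a chatbot.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US Social Security number
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # card-like digit run
}

def luhn_valid(candidate: str) -> bool:
    """Luhn checksum; cuts false positives on card-like digit runs."""
    digits = [int(d) for d in re.sub(r"\D", "", candidate)][::-1]
    total = sum(digits[0::2])
    total += sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

def flag_sensitive(prompt: str) -> list[str]:
    """Return labels of PII/PCI patterns found in a prompt, if any."""
    hits = []
    for label, pattern in PII_PATTERNS.items():
        for match in pattern.finditer(prompt):
            # A digit run that fails the Luhn check is probably not a card.
            if label == "card" and not luhn_valid(match.group()):
                continue
            hits.append(label)
            break  # one hit per label is enough to warn or block
    return hits

if __name__ == "__main__":
    text = "Summarize: customer jane@example.com paid with 4111 1111 1111 1111"
    print(flag_sensitive(text))  # -> ['email', 'card']
```

A real enterprise DLP product would combine patterns like these with context, entropy checks, and named-entity models, but even a simple regex pass illustrates why unmonitored copy-paste into personal accounts is the risk LayerX is flagging.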