AI’s Dirty Laundry: Sensitive Data Spills & Supply Chain Thrills in 2025
Sensitive information disclosure takes center stage as the second biggest risk to large language models according to OWASP’s updated Top 10 list. As AI adoption skyrockets, developers discover their faith in the inherent privacy of LLMs might be misplaced. Turns out, AI has a knack for spilling the beans—unintentionally, of course.

Hot Take:
Move over, celebrity gossip! The real drama is unfolding in the world of AI as sensitive information disclosure leaps from sixth to second place on OWASP’s Top 10 List for LLMs. It’s like watching a reality TV show where the contestants are algorithms, and the prize is your personal data. Who knew artificial intelligence could be so scandalous?
Key Points:
- Sensitive information disclosure is now the second biggest risk for LLMs and Generative AI.
- Supply chain vulnerabilities have risen to the third spot, showcasing real-world impacts.
- Prompt injection remains the top risk, allowing manipulation of LLM outputs.
- New risks like vector and embedding weaknesses, and system prompt leakage make their debut.
- The AI/LLM security landscape is rapidly evolving, with more tools now available.
Already a member? Log in here