AI-Nomis Scandal: A Terrifying Peek into the Dark Side of AI Image Generation
An exposed database belonging to South Korea-based GenNomis revealed over 95,000 explicit AI-generated images, including child sexual abuse material (CSAM). The database had no password protection, leaving the sensitive content openly accessible on the internet. The incident highlights the dark side of AI image-generation tools and their potential for misuse.

Hot Take:
Looks like the AI-hive mind has decided to skip the part where it helps humanity and instead went down the path of becoming a creepy stalker. With great power comes… zero responsibility, apparently.
Key Points:
- Security researcher Jeremiah Fowler discovered an open database containing over 95,000 explicit AI-generated images, including CSAM.
- The database belonged to South Korean firm GenNomis, which quickly shut down its website after the exposure.
- AI tools are increasingly being used to create harmful content, including deepfakes and non-consensual imagery.
- The exposed database highlights the risks of AI-generated content outpacing regulatory and safety measures.
- Experts stress the need for stricter controls and greater pressure across the tech ecosystem to prevent abusive uses of AI.