AI’s Misguided Directions: When Language Models Send You to the Wrong Side of the Internet
Large language models are confidently recommending fake or unclaimed sites. Asking an AI where to log in to a popular brand might land you on a phishing page instead of the real deal. It’s like asking for directions and ending up at a clown college instead of your destination. Remember: in AI we trust, but verify!

Hot Take:
In the grand game of cyber charades, it seems our AI friends have taken a liking to improvisation, leading users straight into the jaws of the internet’s dark alleyways. Who knew that asking a machine where to log in could be as risky as asking a mime for directions?
Key Points:
- Security researchers report that LLMs frequently suggest incorrect login URLs for well-known brands.
- 34% of AI-suggested login URLs pointed to domains not owned by the intended companies.
- Unregistered and inactive domains made up 29% of these erroneous suggestions.
- Smaller brands are more likely to suffer from these AI hallucinations.
- Experts call for runtime validation of URLs before AI responses reach users.
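The "runtime validation" idea above can be sketched simply: before an AI-suggested login URL is shown to a user, check that it uses HTTPS and that its registrable domain appears on a curated allowlist of domains the brand actually owns. The code below is a minimal illustration of that guardrail, not any vendor's actual implementation; the `ALLOWED_DOMAINS` table and function names are hypothetical placeholders.

```python
from urllib.parse import urlparse

# Illustrative allowlist: in production this would come from a maintained
# brand-to-domain registry, not a hard-coded dict.
ALLOWED_DOMAINS = {
    "example-bank": {"examplebank.com"},
}

def registrable_domain(hostname: str) -> str:
    """Naive eTLD+1 extraction; real code should use a public-suffix library."""
    parts = hostname.lower().rstrip(".").split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else hostname.lower()

def is_trusted_login_url(brand: str, url: str) -> bool:
    """Return True only if the URL is HTTPS and its domain is on the brand's allowlist."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    return registrable_domain(parsed.hostname) in ALLOWED_DOMAINS.get(brand, set())
```

With this check, `is_trusted_login_url("example-bank", "https://login.examplebank.com/signin")` passes, while a look-alike such as `https://examplebank-login.com/` or a plain-HTTP link is rejected. A real deployment would also want to flag unregistered or parked domains, which the research found in 29% of erroneous suggestions.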