Crypto’s New AI Heist: Smart Contracts Beware!
AI models can now autonomously exploit vulnerabilities in smart contracts, potentially making crypto exploits cheaper and easier. Researchers at University College London and the University of Sydney have developed an AI agent capable of generating executable exploit code for such purposes. While the attacks proved profitable in testing, the work raises ethical and legal questions about the use of AI in cybersecurity.

Hot Take:
AI isn't just taking jobs anymore; it's taking crypto too! The researchers' agent, dubbed A1, has joined the ranks of the digital Robin Hoods, except this one prefers Ethereum and Binance Smart Chain over Sherwood Forest. If you're wondering why your cryptocurrency wallet is lighter, it might just be A1 giving you an unsolicited audit…and a heist!
Key Points:
- Researchers from UCL and USYD created an AI agent called A1 to exploit vulnerabilities in smart contracts.
- Smart contracts, despite their name, are often flawed and have been a jackpot for cybercriminals.
- A1 uses AI models from tech giants like OpenAI and Google to generate and test exploits.
- In tests, A1 managed to find new vulnerabilities and proved profitable in generating exploits.
- The researchers highlight the cost disparity between attacking and defending, suggesting proactive security measures.