Crypto’s New AI Heist: Smart Contracts Beware!

AI models can now autonomously exploit vulnerabilities in smart contracts, potentially making crypto exploits even easier. Researchers at University College London and the University of Sydney have developed an AI agent, dubbed A1, capable of generating executable exploit code. While the agent proved profitable in tests, it raises ethical and legal questions about the use of AI in cybersecurity.

Hot Take:

AI isn’t just taking jobs anymore; it’s taking crypto too! A1 has joined the ranks of the digital Robin Hoods, except this one prefers Ethereum and the Binance Smart Chain over Sherwood Forest. If you’re wondering why your cryptocurrency wallet feels lighter, it might just be A1 giving you an unsolicited audit…and a heist!

Key Points:

  • Researchers from UCL and USYD created an AI agent called A1 to exploit vulnerabilities in smart contracts.
  • Smart contracts, despite their name, are often flawed and have been a jackpot for cybercriminals.
  • A1 uses AI models from tech giants like OpenAI and Google to generate and test exploits (see the sketch after this list).
  • In tests, A1 uncovered new vulnerabilities and generated exploits profitably.
  • The researchers highlight the cost disparity between attacking and defending, and urge proactive security measures.
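
For readers curious how such an agent fits together, here is a minimal, hypothetical sketch of the generate-and-test loop the key points describe: a language model proposes a candidate exploit, a simulator replays it against a forked copy of the chain, and the outcome feeds back into the next prompt. The function names (query_model, simulate_on_fork) and the loop structure are illustrative assumptions, not the researchers' actual A1 implementation.

```python
# Hypothetical sketch of a generate-and-test exploit loop.
# query_model and simulate_on_fork are illustrative placeholders,
# not the A1 researchers' actual API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ExploitAttempt:
    code: str           # candidate exploit code produced by the model
    profit_wei: int     # simulated profit when replayed on a chain fork
    succeeded: bool     # whether the candidate executed without reverting


def query_model(prompt: str) -> str:
    """Placeholder for a call to a hosted LLM (e.g. an OpenAI or Google model)."""
    raise NotImplementedError("wire this to a model provider")


def simulate_on_fork(exploit_code: str, contract_address: str) -> ExploitAttempt:
    """Placeholder: run the candidate against a local fork of chain state."""
    raise NotImplementedError("wire this to a local chain fork / simulator")


def attempt_exploit(contract_source: str, contract_address: str,
                    max_rounds: int = 5) -> Optional[ExploitAttempt]:
    """Ask the model for a candidate, test it in simulation, and feed the
    outcome back into the next prompt until it pays off or rounds run out."""
    feedback = ""
    for _ in range(max_rounds):
        prompt = (
            "Audit this contract and propose an executable exploit.\n"
            f"{contract_source}\n{feedback}"
        )
        candidate = query_model(prompt)
        result = simulate_on_fork(candidate, contract_address)
        if result.succeeded and result.profit_wei > 0:
            return result            # profitable exploit found
        feedback = f"Previous attempt yielded {result.profit_wei} wei. Refine it."
    return None                      # give up after max_rounds
```

Each iteration costs little more than a model query and a local simulation, which hints at the cost disparity the researchers highlight: probing contracts this way is cheap, while defending every contract against such probing is not.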
