AI Model Theft: NC State’s Shocking Side-Channel Attack Exposes Google’s Edge TPU Vulnerabilities
NC State researchers have developed a method to copy AI models running on Google Edge TPUs by measuring the chips' electromagnetic emissions. This side-channel attack can recover a model's hyperparameters, enabling near-perfect reproduction of the model. The study raises concerns about the vulnerability of Google's Edge TPU, despite its reputation for being hard to crack.

Hot Take:
Who knew stealing secrets could be as simple as eavesdropping on a lightbulb? Move over James Bond, the new spies are wielding oscilloscopes and measuring electromagnetic waves. I guess Google’s Edge TPUs have an ‘edge,’ but not the kind you’d want in a knife fight with cyber bandits. Time to dim those AI brainwaves, folks!
Key Points:
- NC State researchers developed a side-channel attack to copy AI models on Google Edge TPUs.
- The attack measures a chip's electromagnetic emissions to infer the hyperparameters of the AI model running on it (a toy sketch of the idea follows this list).
- Reproducing an AI model using stolen hyperparameters is significantly cheaper than the original training.
- The method reconstructs models layer by layer, achieving 99.91% accuracy.
- Google is aware of the findings but hasn’t publicly commented on the implications.
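
The article doesn't detail the researchers' actual extraction pipeline, but EM side-channel attacks of this family typically work by profiling: capture traces from an identical device you control, build a template per candidate configuration, then match a victim trace against those templates, one layer at a time. Below is a minimal, purely illustrative Python sketch of that template-matching idea. The signal model, the `synthetic_trace` helper, and the candidate list are invented stand-ins; no real hardware capture or Edge TPU behavior is modeled.

```python
# Illustrative sketch only: a toy template-matching side-channel classifier.
# All data here is synthetic; a real attack requires lab hardware
# (an EM probe and oscilloscope), which this code does not model.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate hyperparameter settings for a single layer.
CANDIDATES = [
    {"type": "conv", "filters": 32, "kernel": 3},
    {"type": "conv", "filters": 64, "kernel": 3},
    {"type": "dense", "units": 128},
]

def synthetic_trace(config_index: int, n: int = 2000) -> np.ndarray:
    """Stand-in for a captured EM trace: each candidate config gets a
    distinct frequency signature plus measurement noise."""
    t = np.linspace(0.0, 1.0, n)
    signal = np.sin(2 * np.pi * (5 + 3 * config_index) * t)
    return signal + 0.3 * rng.standard_normal(n)

# Profiling phase: average many traces per candidate config on a device
# the attacker controls, yielding one template per candidate.
templates = [
    np.mean([synthetic_trace(i) for _ in range(50)], axis=0)
    for i in range(len(CANDIDATES))
]

def classify_layer(trace: np.ndarray) -> dict:
    """Attack phase: pick the candidate whose template correlates best
    with the observed victim trace."""
    scores = [np.corrcoef(trace, tpl)[0, 1] for tpl in templates]
    return CANDIDATES[int(np.argmax(scores))]

# "Attack" a victim trace whose true config is candidate 1.
victim = synthetic_trace(1)
print("Inferred layer config:", classify_layer(victim))
```

In a real attack, each trace would come from a probe positioned near the chip, and the candidate space per layer would be vastly larger; that is why the layer-by-layer approach the article describes matters, since it keeps the per-step search tractable.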