AI’s Attention: Unpacking the Quirks and Quarks of Tech’s Most Mysterious Mechanism
AI is like a magician with secret tricks, and even its creators don't fully understand how it works. Professor Neil Johnson, armed with first-principles physics theory, is trying to decipher AI's Attention mechanism. Think Harry Potter meets Einstein, but with less magic and more math, tackling AI's hallucinations and biases.

Hot Take:
Who would’ve thought that the solution to understanding AI’s mysterious brain farts would come from physics class? If only someone could also explain my morning coffee’s fleeting concentration powers using quantum mechanics!
Key Points:
- AI's "Attention mechanism" is being analyzed through the lens of first-principles physics theory.
- Neil Johnson and Frank Yingjie Huo’s research links AI hallucinations and biases to physics concepts.
- The Attention mechanism is compared to physics’ “spin baths” and 2-body Hamiltonians.
- Current AI models are like narrow-gauge railways—they work, but there might be better options.
- Johnson proposes a risk management approach to predict when AI might go haywire.
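For readers unfamiliar with the mechanism in question: the Attention mechanism Johnson is studying is, in standard transformer models, scaled dot-product attention, where every token's score against every other token forms the pairwise interactions that the research compares to 2-body Hamiltonians. Below is a minimal NumPy sketch of that standard formulation (the physics mapping itself is Johnson and Huo's and is not reproduced here):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # pairwise (2-body) token-token interactions
    weights = softmax(scores)       # each row is a probability distribution
    return weights @ V              # weighted mix of value vectors

# Toy example: 4 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = attention(Q, K, V)
print(out.shape)  # (4, 8)
```

The `scores` matrix is the part the article's "spin bath" analogy targets: it couples every token to every other token pairwise, which is exactly the structure of a 2-body interaction term in physics.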