Watch how technical AI terms lose precision as they travel through citations
The original paper defines the term carefully, with metrics, benchmarks, and caveats: "We define 'understanding' as achieving >80% on reading comprehension benchmark X under conditions Y." (What that operational definition actually computes is sketched below.)
Secondary sources cite the paper but drop the qualifiers: "Recent research shows models can understand text." The benchmark and the conditions vanish.
Popular media and public discourse then interpret the term psychologically: "AI now understands language like humans do." A technical achievement becomes a philosophical claim.
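To make the gap concrete, here is a minimal sketch of what an operational definition like the paper's actually measures. The benchmark items, answers, and the 0.80 threshold are hypothetical placeholders standing in for "benchmark X under conditions Y":

```python
# Minimal sketch of an operational definition of "understanding".
# The toy items and the 0.80 threshold are hypothetical placeholders.

def benchmark_accuracy(predictions: list[str], gold_answers: list[str]) -> float:
    """Fraction of benchmark items answered exactly correctly."""
    correct = sum(p == g for p, g in zip(predictions, gold_answers))
    return correct / len(gold_answers)

predictions = ["Paris", "4", "blue"]    # model outputs on three toy items
gold_answers = ["Paris", "4", "green"]  # reference answers

score = benchmark_accuracy(predictions, gold_answers)
# "Understands" here means only: the score cleared a threshold on this dataset.
print(f"accuracy = {score:.2f}, 'understands' = {score > 0.80}")
```

Nothing in that computation refers to comprehension in the human sense; it is a pass rate on a fixed dataset.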
The same drift plays out with other core terms.

"Attention"
Technical: a weighted sum of value vectors, with the weights given by query-key similarity scores.
Popular: "AI focuses on what's important, just like human attention."
"Learning"
Technical: gradient descent adjusting model parameters to reduce a loss function.
Popular: "AI learns and grows from experience like a child."
"Memory"
Technical: key-value storage in attention layers; a fixed context window.
Popular: "AI remembers conversations and builds relationships."
"Reasoning"
Technical: chain-of-thought prompting improves accuracy on benchmarks.
Popular: "AI can think through problems logically."
The rigorous definition in the original paper provides "cover" for the sweeping claims made later by people who only read the abstract. Each round of citation is another turn in the game of telephone, in which technical precision is replaced by an intuitive (but misleading) psychological interpretation.
The solution isn't to avoid metaphors—they're necessary for communication. It's to remember they are metaphors, and to check what the original paper actually measured before making claims about machine minds.
Based on "The Accidental Semiotics Shuffle" from Adventure Capital