AI has been superhuman at narrow things for decades. Chinook became the first program to win a human world championship when it took the checkers title in 1994. Chess fell in 1997. Go in 2016. Poker in 2017. Each time, we moved the goalposts and said “but that’s not real intelligence.”
The thing is, capabilities don’t arrive uniformly. They poke through human-level performance one domain at a time. If you imagine a radar chart with hundreds of axes - one for each cognitive task - AI has always been this spiky shape. Superhuman in some directions, useless in others.
What’s happening now is that software engineering is pushing through. Claude Opus 4.5 can do work that would have genuinely shocked people a few years ago. Not perfect, not at the level of a senior engineer, but solidly useful in ways that matter. Another spike poking past the human baseline.
This is why I don’t think there’s going to be a clean “AGI moment.” No press conference where someone announces we’ve crossed the threshold. It’s just going to be this spiky ball, expanding outwards, with different capabilities crossing human-level at different times. Some domains will stay stubborn for years. Others will fall fast.
The interesting bit - and this is something Dario Amodei has talked about - is that coding might be the spike that matters most. If you can automate software engineering, you can accelerate everything else. You can build better training infrastructure, better data pipelines, better evaluation tools. The system starts improving itself.
So we’re not waiting for some unified “general” intelligence to appear fully formed. We’re watching a spiky ball expand in an N-dimensional capability space. Some spikes are already way past human. Others haven’t started growing yet. The ball just keeps getting bigger and pointier.
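The expanding-ball picture can be made concrete with a toy simulation. Here's a minimal sketch, with domain names, starting levels, and growth rates all invented purely for illustration: each domain improves at its own rate and crosses the human baseline at a different time, so there is never a single moment when "AGI arrives."

```python
import random

# Toy model of the "spiky ball": each domain's capability grows at its own
# rate and crosses the human baseline (1.0) at a different time.
# All numbers below are invented for illustration, not real measurements.
random.seed(0)

domains = ["chess", "go", "poker", "coding", "planning", "robotics"]
capability = {d: random.uniform(0.1, 0.9) for d in domains}  # current level
growth = {d: random.uniform(1.05, 1.4) for d in domains}     # per-step multiplier
HUMAN = 1.0

crossed = []
for step in range(1, 50):
    for d in domains:
        capability[d] *= growth[d]
        if capability[d] >= HUMAN and d not in crossed:
            crossed.append(d)
            print(f"step {step:2d}: {d} crosses the human baseline")

# The crossing order depends entirely on each domain's (invented) starting
# point and growth rate -- there is no single "AGI moment" in this model.
```

Run it a few times with different seeds and the crossing order shuffles, but the qualitative shape never changes: spikes punch through one at a time, on their own schedules.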
The question isn’t “when will we have AGI?” It’s “which spike crosses next, and what does that unlock?”