
A Most Likely Outdated Perspective on AGI

AI · AGI · Strategy · Ethics · Talk

A consultant friend asked me to break down the state of AGI for his colleagues, which forced me to articulate how I can be both a skeptic and an optimist at the same time. After a decade of building with AI—from the first deep-learning wave to today’s generative surge—I’ve made peace with being an AI-scoptimist. We simply don’t understand enough to pick one camp, so I decided to document how I currently think about the race we’re all pretending to understand.

Goalposts that won’t sit still

Alan Turing gave us a beautifully simple benchmark: if you can’t tell whether you’re chatting with a human or a machine, you’ve met AGI. We basically crossed that threshold with the first release of ChatGPT, which means the metric immediately became “not enough.” Every new capability invites a new definition, so the debate keeps spiraling away from practical reality. I find that acknowledging the moving goalposts up front frees me to focus on what’s actually changing on the ground.

The disagreement is the story

Spend five minutes in this field and you’ll meet people convinced we go extinct by 2030 sitting next to people who write off LLMs as overhyped autocomplete. Even the so-called godfathers of deep learning can’t agree. Geoffrey Hinton left Google so he could warn the world about runaway intelligence, while Yann LeCun is convinced scaling text models won’t get us there. If the people who built the field live on opposite ends of the spectrum, the rest of us should stop pretending there’s a neat consensus and start listening for the assumptions underneath each claim.

When Turing Award winners can’t agree, of course the rest of us are confused

We’re summoning ghosts, not making animals

Large language models learn in three deliberate stages: pre-training to predict the next token, supervised fine-tuning to give them a helpful persona, and reinforcement learning so they mirror human preferences. That pipeline is pure pattern learning. It’s astonishingly good at mimicking expertise, but it isn’t imbued with curiosity, long-term memory consolidation, or self-directed exploration. As Karpathy says, we’re summoning ghosts from the haze of internet documents—not building animals that poke the world to see what happens. Knowing how the sausage is made keeps me grounded when a demo looks magical.

SFT is the moment we tell the ghost what role to play
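To make that pipeline less abstract, here's a toy sketch of the three stages in Python. Everything in it is a stand-in of my own invention: the TinyLM bigram model, the miniature corpora, and the human_preference reward are illustrative placeholders rather than anyone's actual training code, but the shape of pre-training, supervised fine-tuning, and preference-based reinforcement is the part that matters.

```python
import random

class TinyLM:
    """A stand-in for a language model: a table of weighted next-token counts."""
    def __init__(self):
        self.counts = {}

    def update(self, tokens, weight=1.0):
        # Record weighted bigram transitions (a crude stand-in for gradient updates).
        for prev, nxt in zip(tokens, tokens[1:]):
            self.counts.setdefault(prev, {})
            self.counts[prev][nxt] = self.counts[prev].get(nxt, 0.0) + weight

    def generate(self, prompt, length=6):
        out = list(prompt)
        for _ in range(length):
            options = self.counts.get(out[-1])
            if not options:
                break
            tokens, weights = zip(*options.items())
            out.append(random.choices(tokens, weights=weights)[0])
        return out

model = TinyLM()

# Stage 1 (pre-training): learn to predict the next token from raw text.
pretraining_corpus = [
    ["the", "screwdriver", "turns", "the", "screw"],
    ["the", "screw", "holds", "the", "shelf"],
]
for doc in pretraining_corpus:
    model.update(doc)

# Stage 2 (supervised fine-tuning): imitate curated (prompt, answer) pairs
# so the raw predictor picks up a helpful persona.
sft_pairs = [
    (["how", "do", "i", "fix", "a", "shelf"],
     ["use", "the", "screwdriver", "to", "drive", "the", "screw"]),
]
for sft_prompt, sft_answer in sft_pairs:
    model.update(sft_prompt + sft_answer)

# Stage 3 (reinforcement from human preferences): sample completions,
# score them, and reinforce the ones people would prefer.
def human_preference(completion):
    # Hypothetical reward: people prefer answers that mention the screw.
    return 1.0 if "screw" in completion else 0.1

prompt = ["how", "do", "i", "fix", "a", "shelf"]
for _ in range(20):
    completion = model.generate(prompt)[len(prompt):]
    model.update(prompt + completion, weight=human_preference(completion))

print(" ".join(model.generate(prompt)))
```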

What’s still missing

The lack of a world model shows up in every hallucination. These systems describe a screwdriver perfectly yet fail to reason about how it interacts with a screw. They don’t sleep, consolidate, or remember yesterday’s conversation unless we bolt on vector databases. They don’t generate their own experiments. Until we add those puzzle pieces, claims of inevitable superintelligence feel premature. I expect a third wave—maybe late this decade—that introduces one of those missing components and gets us meaningfully closer.

We still don’t have the missing puzzle pieces that make intuition possible
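When I say "bolt on vector databases," I mean something like the toy sketch below: embed past exchanges, store them, retrieve the most similar ones, and paste them into the next prompt, because the model itself remembers nothing between sessions. The bag-of-words embed function and the in-memory MemoryStore are deliberately crude stand-ins for a real embedding model and a real vector store.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words frequency vector (token -> count)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class MemoryStore:
    """In-memory stand-in for a vector database."""
    def __init__(self):
        self.items = []  # list of (embedding, text)

    def add(self, text):
        self.items.append((embed(text), text))

    def search(self, query, k=2):
        query_vec = embed(query)
        ranked = sorted(self.items, key=lambda item: cosine(query_vec, item[0]), reverse=True)
        return [text for _, text in ranked[:k]]

# Yesterday's conversation goes into the store.
memory = MemoryStore()
memory.add("User said the shelf in the garage is sagging on the left side.")
memory.add("User prefers answers in metric units.")

# Today's prompt is augmented with the most relevant memories,
# since the model retains nothing between sessions on its own.
question = "Which side of the shelf was sagging?"
context = "\n".join(memory.search(question))
prompt = f"Relevant notes:\n{context}\n\nQuestion: {question}"
print(prompt)
```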

Follow the incentives

OpenAI, Anthropic, and the megaclouds burn staggering amounts of cash to keep these models online, which means the monetization pressure bleeds into the product. Expect more price hikes, throttling, or "subtle" ads in whatever chat window becomes the default interface to your workday. By some estimates, half of new internet content is already synthetic, which poisons the well the models depend on. Adam Conover's line keeps ringing in my ears: our economy has turned into one enormous bet on AI. When everyone has skin in that bet, you have to discount confident claims, especially the bullish ones coming from whoever just raised a fresh war chest.

If the chat window becomes the new feed, ads will follow

How I’m preparing for wave three

Scale is already peaking. GPT-5 was an incremental step, not the fireworks its budget implied. The frontier has shifted toward tool-using agents, smaller specialized models, and local stacks you actually own. I’m leaning heavily into that “decade of agents” framing: treat copilots as co-creators, invest in observability and governance, and prioritize workflows where reinforcement learning has a well-defined target (software development is the obvious early beneficiary). I’m also betting on decentralization—open weights, commodity hardware, and sovereign deployments—so we’re not all stuck in the arms dealers’ echo chambers when the next puzzle piece arrives.

Scale isn’t enough; we need better world models and weird new puzzle pieces
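Concretely, the "tool-using agents" I'm betting on boil down to a loop like the sketch below: the model proposes a tool call, the harness runs it, and the result goes back into the transcript until the model can answer. The fake_model function and both tools are scripted placeholders I made up for illustration; the registry-and-loop structure is the piece that carries over to real systems.

```python
import json

def run_tests(args):
    # Hypothetical tool: pretend to run a project's test suite.
    return {"passed": 11, "failed": 1, "failing_test": "test_discount_rounding"}

def read_file(args):
    # Hypothetical tool: pretend to read a source file.
    return {"path": args["path"], "content": "def discount(p): return round(p * 0.9)"}

TOOLS = {"run_tests": run_tests, "read_file": read_file}

def fake_model(transcript):
    """Scripted stand-in for an LLM deciding the next step from the transcript."""
    if not any(m["role"] == "tool" for m in transcript):
        return {"tool": "run_tests", "args": {}}
    if len(transcript) < 3:
        return {"tool": "read_file", "args": {"path": "pricing.py"}}
    return {"answer": "test_discount_rounding fails; discount() rounds too early in pricing.py."}

def agent_loop(task, max_steps=5):
    transcript = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = fake_model(transcript)
        if "answer" in step:                 # model is done
            return step["answer"]
        tool = TOOLS[step["tool"]]           # dispatch the requested tool
        result = tool(step["args"])
        transcript.append({"role": "tool", "content": json.dumps(result)})
    return "Gave up after max_steps."

print(agent_loop("Figure out why the pricing tests fail."))
```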

Being an AI-scoptimist simply means staying curious while reserving the right to say “I don’t know.” We might get a true intelligence explosion in the 2040s or 2050s, but we don’t have to wait for that to build responsibly. Understand how today’s ghosts work, pay attention to the incentives behind every confident take, and keep a healthy skepticism handy when someone claims they’ve solved intelligence with yet another GPU cluster.
