Super Intelligence… A Prediction That Will Never Come to Pass!

Look, I’ve spent my life crossing the borders between computer science, mathematics, and biology, and if there is one thing those fields teach you, it’s that nature doesn’t give up her secrets easily.

When I hear people—even brilliant guys like Elon Musk—claiming that artificial superintelligence is “around the corner,” I have to call it what it is: corny.

It’s a hype-driven narrative that ignores the sheer, staggering complexity of what “intelligence” actually is. We’re mistaking a very fast parrot for a sentient god, and that’s a dangerous leap to make if you actually understand the underlying architecture.


My skepticism isn’t born out of a dislike for tech—I love this stuff—but out of a deep respect for the “wetware” in our heads. In my graduate work in biology, I saw firsthand that a single neuron is more complex than most of the sub-routines we’ve written for AI.

We haven’t even fully reverse-engineered the brain of a common fruit fly, yet we’re out here acting like we’re six months away from building a digital entity that can outthink the entire human race. It’s a lack of humility toward biological evolution, which has spent hundreds of millions of years optimizing nervous systems for survival and reasoning.


History is a graveyard of these kinds of “any day now” predictions. If you go back to the 1950s, the founders of AI thought they’d solve general intelligence in a single summer at Dartmouth. They saw early computers solve a few logic puzzles and assumed the rest was just a matter of more memory. That overconfidence led to the “AI Winters,” where the entire field collapsed because the reality couldn’t cash the checks the hype had written. I see the same patterns today.

We’re in a massive “summer” right now, but we are ignoring the hard mathematical walls we’re about to hit.

The problem is that we’ve confused processing speed with actual wisdom or agency. Yes, these models can process the entire internet in a weekend, but speed isn’t the same as depth. Think about the history of chess engines. Deep Blue beat Kasparov in ’97, and everyone thought, “That’s it, the machines are taking over.” But Deep Blue was just a very fast calculator; it didn’t know it was in a room, it didn’t know it was playing a game, and it couldn’t decide to stop playing chess and go for a walk. It was a tool, not a “who.”

I prefer to focus on what’s actually happening: the rise of powerful AI agents.

Instead of some godlike ASI appearing in the sky, I see your phone becoming a hyper-capable extension of yourself.

This is how technology actually evolves—it starts as a localized, functional tool. When electricity first arrived, we didn’t get a “global energy consciousness”; we got a light bulb that worked for four hours. My focus is on these near-term, practical shifts where AI handles your scheduling, your research, and your logistics. That’s a revolution in its own right, and it doesn’t require a sci-fi singularity.


Look at the “last mile” problem in self-driving cars. In 2016, we were told that by 2020, steering wheels would be obsolete.

The industry solved the easy part—highway driving—relatively quickly. But that last 1%—the ability to understand a construction worker’s nuanced hand gesture or a toddler’s unpredictable movement—has proven to be a decade-long nightmare. ASI faces the same issue. Generating a nice-looking image is “highway driving.” Navigating the infinite, messy, non-linear variables of human reality is the “blizzard” we aren’t even close to solving.


From a mathematical perspective, we’re likely entering a phase of diminishing returns. People assume the growth is exponential and will stay that way forever, but in every biological and physical system I’ve studied, growth eventually hits an S-curve.

You get a huge vertical spike of progress, and then you start needing ten times the data and a hundred times the power just to get a 1% improvement. We’re building taller and taller ladders and convincing ourselves we’re going to reach the stars, when in reality we’re just getting a slightly better view of the roof.
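To make the S-curve point concrete, here’s a minimal sketch of the logistic function, the classic S-curve from biology. The parameter values (capacity, rate, midpoint) are illustrative, not a model of AI progress; the point is simply that a curve which looks exponential on its steep rise delivers tiny marginal gains once it nears its ceiling.

```python
import math

def logistic(t, cap=100.0, rate=1.0, midpoint=10.0):
    """Logistic (S-curve) growth: indistinguishable from exponential
    early on, but it saturates at `cap` instead of climbing forever."""
    return cap / (1.0 + math.exp(-rate * (t - midpoint)))

# Marginal gain from one more unit of effort, early vs. late on the curve.
early_gain = logistic(6) - logistic(5)    # still on the steep rise
late_gain = logistic(20) - logistic(19)   # near the plateau

print(f"early gain: {early_gain:.3f}")
print(f"late gain:  {late_gain:.5f}")
# The same step of effort buys orders of magnitude less improvement
# near the ceiling than it did during the "vertical spike" phase.
```

Run it and the late-curve gain comes out more than a hundred times smaller than the early-curve gain, which is exactly the “ten times the data for a 1% improvement” dynamic described above.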


I understand why the short-timeline crowd is so loud. Silicon is objectively faster than biological neurons, and the idea of “recursive self-improvement”—where an AI writes better code for itself—sounds terrifyingly fast on paper. But software doesn’t exist in a vacuum; it requires massive physical infrastructure, energy, and a grounding in the real world that code simply doesn’t have yet. You can’t “think” your way past the laws of physics or the hardware bottlenecks that currently exist.


My view is one of a “long-timeline” realist. I think we are going to see incredible, life-changing AI tools that will make us more productive than ever, but they will remain tools.

The future AI won’t be a “God in a Box” that solves the mystery of existence for us by next Tuesday.

We are moving toward a world of seamless agency, where the line between your intent and the machine’s execution disappears. That’s the real story, and it’s plenty exciting without the “corny” hype of an imminent ASI.
