
AI Talk: Are we there yet? The AGI Debate.

  • Writer: Juggy Jagannathan
  • Mar 29
  • 5 min read

There is a lot of talk about Artificial General Intelligence as more and more AI models are released, each with more capabilities than the last. Is this a harbinger of superintelligent robots? Or something else? Let's explore what the marketplace of ideas says about what is happening.

ChatGPT rendering of the concept of Singularity by Ray Kurzweil

Here is a pretty good definition of AGI, thanks to Claude 3.7:

AGI (Artificial General Intelligence) refers to a hypothetical type of artificial intelligence that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at a level equal to or surpassing human intelligence. Unlike narrow AI systems designed for specific tasks, AGI would have general problem-solving abilities, adaptability to new situations, and the capacity to transfer knowledge between different domains - essentially matching or exceeding human cognitive abilities across virtually all intellectually demanding tasks.

Now, the concept of AGI has its origins in AI itself, which goes back to Turing and others almost a century ago. But one figure comes to the fore: Ray Kurzweil, who foretold the impact of AGI. His pioneering work, "The Singularity Is Near," was published in 2005, and last year he followed it up with the sequel, "The Singularity Is Nearer." In essence, his books predict that humans will merge with AI and make ourselves superhuman - via ingestion of intelligent nano robots! (Yes, you read that right. Tiny robots. In your bloodstream. Let that sink in... or swim around?) He forecasts that this will start happening by 2029 - a scant four years away (plenty of time to prepare... right?). He also foretells that this will be a huge disruption of society as we know it today, and that Universal Basic Income will be needed and embraced by all. While these predictions may give us pause, AGI, the MIT Technology Review proclaims, has become a dinner table topic. But let's revisit where we are at the current moment - tumultuous though it is for the field of AI.


Hallucinations Are Not a Bug, They’re a Feature

Despite their impressive capabilities, Large Language Models (LLMs) like GPT-4, Claude, and Gemini still make things up. Why?

At the heart of it lies how they work: LLMs generate text by predicting the next most probable token based on prior context — essentially, they’re sophisticated autocomplete systems. This probabilistic sampling, even when tempered by techniques like temperature tuning or reinforcement learning, inherently opens the door to what researchers call “hallucinations.”
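
To make that concrete, here is a minimal Python sketch of temperature-scaled next-token sampling. It assumes a toy three-word vocabulary with made-up scores; a real model scores tens of thousands of tokens with a neural network, but the sampling step at the end works the same way.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick the next token from raw model scores (logits).

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more varied, and more likely to
    wander into a plausible-sounding but wrong continuation).
    """
    tokens = list(logits.keys())
    scaled = [score / temperature for score in logits.values()]
    # Softmax over the temperature-scaled scores.
    top = max(scaled)
    exps = [math.exp(s - top) for s in scaled]
    probs = [e / sum(exps) for e in exps]
    return random.choices(tokens, weights=probs, k=1)[0]

# Hypothetical scores for completing "The capital of Australia is ..."
toy_logits = {"Canberra": 4.0, "Sydney": 3.2, "Melbourne": 1.5}
print(sample_next_token(toy_logits, temperature=0.7))
```

Even when the "right" answer carries the highest score, some probability mass always sits on the wrong ones; scale that up to every token of every answer and hallucinations stop looking surprising.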

As argued in “Reliable AI is Harder Than You Think,” even efforts like Retrieval-Augmented Generation (RAG), chain-of-thought prompting, and fine-tuning struggle to resolve hallucinations at scale.
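
For readers unfamiliar with RAG, here is a purely illustrative sketch of the idea: look up relevant text first, then ask the model to answer only from that text. The retrieval step below is faked with simple word overlap (real systems use embeddings and a vector index), and every document and function name is invented for the example.

```python
# Toy illustration of Retrieval-Augmented Generation (RAG).
documents = [
    "The Singularity Is Near was published by Ray Kurzweil in 2005.",
    "Large language models predict the next token from prior context.",
    "Category theory studies relationships between mathematical structures.",
]

def retrieve(question, docs):
    """Return the document sharing the most words with the question
    (a crude stand-in for real embedding-based search)."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question):
    """Assemble a prompt that grounds the model in retrieved context."""
    context = retrieve(question, documents)
    return (
        "Answer using only the context below. If the answer is not there, say so.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

# The assembled prompt would then be sent to whichever LLM you use.
print(build_prompt("When was The Singularity Is Near published?"))
```

Grounding the model in retrieved text helps, but it can still paraphrase the context incorrectly or ignore it, which is why retrieval alone does not make hallucinations go away.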

It’s not just a technical flaw — it’s a philosophical limitation. LLMs do not “know” facts; they generate based on patterns in training data. As such, they lack grounding in a world model or symbolic reasoning — a requirement for reliable, general intelligence.


Beyond the Token: Other Roads to AGI

So if LLMs hallucinate by design, are they the wrong path to AGI? Maybe not entirely — but many thinkers believe something deeper is needed. After all, if we're chasing general intelligence, perhaps teaching a machine to autocomplete everything isn't the most solid foundation. (Imagine trying to build a rocket ship using only predictive text. “To launch… insert love emoji?”)


Some researchers are exploring hybrid models that combine symbolic logic, memory, world models, and perceptual grounding — basically, giving AI systems something closer to a mental “body” and a sense of what’s actually true or false.

Others are venturing into more abstract, mind-bending terrain — like Category Theory, a branch of mathematics so high-level that even most mathematicians have to lie down after thinking about it for too long. With its emphasis on relationships between structures, Category Theory may offer a blueprint for systems that can manipulate concepts, not just words.

As laid out in “The Insurmountable Problem of Formal Reasoning in AI”, there's a growing view that deep learning, while powerful, lacks the kind of compositional reasoning and internal consistency that AGI would require.

Even respected researchers like Sebastian Raschka raise the concern that today's LLMs aren’t really learning to reason; they're learning to simulate reasoning through training shortcuts and correlations. His “Understanding Reasoning in LLMs” unpacks this in depth, highlighting the risks of over-interpreting their capabilities.

While still highly theoretical (and possibly requiring caffeine levels that violate OSHA guidelines), these approaches suggest AGI might not just be about scaling up LLMs, but rethinking intelligence from the ground up — blending machine learning with the kind of formal, philosophical, and even biological insights that gave rise to human cognition in the first place.


The Marketplace of AGI Ideas: A Spectrum of Belief

The societal tension is palpable. As highlighted in conversations like the Ezra Klein podcast featuring Ben Buchanan, there is a deep struggle among government, the military, and Silicon Valley over who should shape the future of AI.

Meanwhile, initiatives like the Lila AI Science Lab represent a different bet: trying to build “science engines” that go beyond LLMs and actually discover new knowledge. It’s a wager that AGI isn’t just about language, but about cognition, discovery, and embodiment.

Finally, not everyone buys the hype.

As “Is AGI a Hoax of Silicon Valley?” provocatively argues, AGI may be more marketing than science — a kind of techno-futurist illusion built on Silicon Valley’s desire to own the future.

My own approach to LLMs and Generative AI is to follow a slightly modified Reagan refrain: "Don't trust, and verify." But how does one verify, especially outside controlled environments? While fields like coding (with test routines) and science (via experimentation) offer established verification pathways, deploying AI autonomously in the 'wild' still lacks robust, universal guardrails.
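
In the coding case, "don't trust, and verify" can be as simple as refusing to accept model-written code until it passes tests you wrote yourself. The sketch below is hypothetical: the expected function name, the tests, and the sample model output are all invented for illustration, and in practice you would run untrusted code in a sandbox rather than with a bare exec.

```python
def verify_generated_function(source_code: str) -> bool:
    """Accept a model-written is_even() only if it passes our own tests.

    Illustration only: real pipelines sandbox untrusted code instead of
    exec-ing it directly in the current process.
    """
    namespace = {}
    try:
        exec(source_code, namespace)      # load the generated code
        is_even = namespace["is_even"]    # the function we asked the model for
        assert is_even(4) is True
        assert is_even(7) is False
        assert is_even(0) is True
        return True
    except Exception:
        return False

# A plausible model response to "write is_even(n) in Python":
candidate = "def is_even(n):\n    return n % 2 == 0\n"
print(verify_generated_function(candidate))  # True, so we can accept it
```

Science has its analogue in experiments; for most open-ended uses of generative AI, though, there is no such test harness yet, and that is exactly the gap.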


Conclusion: So… Are We There Yet?

If AGI means machines that can reason, learn, and generalize like (or beyond) humans — we’re not there yet. What we do have are powerful narrow systems that mimic generality, but still fall short of reliability and true understanding. Yet the rapid pace of progress, the variety of approaches, and the sheer scale of investment mean something big is coming. Whether it will be AGI, or just ever-smarter autocomplete machines, is still up for debate. And that debate, increasingly, is not just for researchers — it’s at our dinner tables, in our parliaments, and maybe, someday soon, in our minds.


Acknowledgement: This blog was written with the help of human intelligence (perhaps I can claim some) along with my three able-minded AI assistants: ChatGPT, Gemini and Claude.



