In today’s column, I examine the bone-chilling story of AI that has been summarily dumped into the AGI graveyard but might manage to claw its way back out. Yes, there is AI of days gone by that has a chance at being resurrected and brought back into the mainstream of AI considerations.
Here are the spooky details. There is old-time AI that tech insiders believe failed to get us to the revered attainment of artificial general intelligence, and thus, such AI should be put out of its misery and buried deeply in the graveyard of second-rate AI. Let bygones be bygones, some might insist. There is even a modicum of heads-down shame associated with those now disavowed AI approaches.
Whoa, some true believers retort, there might be AI that was unfairly deemed unsuitable and cast wrongfully into the AGI burial grounds. Perhaps in the light of contemporary hardware and other technological advances, the old ways of AI could be reborn. Go ahead and dig up those AI approaches that are deserving of a resolute second chance. Being leery of ghosts from the past is usually a prudent stance, but there could be a whopping amount of untapped potential lurking right under our very noses.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
What Is AGI
Before we get into the depths of this haunting matter, I’d like to set the record straight about what is meant by referring to AGI.
There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many, if not all, feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here and the link here.
We have not yet attained AGI.
In fact, it is unknown whether we will ever reach AGI; it might be achievable decades or perhaps centuries from now. The AGI attainment dates floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.
Wanting AI That Leads Into AGI
The general premise that nearly all purist AI insiders are driven by is that whatever AI we can devise today ought to be on the path to AGI.
If someone is developing AI that won’t get us to AGI, they are essentially wasting precious time and funds. Sure, the AI might be valuable for the here and now, which is nice, but the pot of gold at the end of the rainbow is found only once we reach AGI. Anyone who gets us to AGI is going to be showered with immense and unfathomable fame and fortune. AI that gets something done today is pennies in comparison. Keep your eye on the big prize.
The best of both worlds would be to devise AI that provides solid benefits right away, plus it is assuredly on the path to AGI. That’s the dream. An AI developer with grand ambitions would like to find the golden ticket that puts them into a positive situation currently and that guarantees them a place in the line leading to AGI.
Unfortunately, no one can exactly say what is on the path to AGI.
A brief look at the history of the AI field would tell you that nearly every era of AI advances was undertaken with the bold proclamation that the AI of the day would either emerge or evolve into AGI. I suppose that is one of the most consistent hallmarks of the AI field. Each era has its proponents who are absolutely sure the AI of that era is the right choice and is indisputably heading to the utmost pinnacle of AI.
A Current Crack Is Appearing
For the last several years, generative AI and large language models (LLMs) have been fervently touted as being on the path to AGI.
You might know that Sam Altman of OpenAI fame has previously asserted that “we” already know how to attain AGI and that the year 2025 would seemingly showcase AGI having arisen. When GPT-5 was launched a few months ago, expectations for AGI took a crushing blow (see my assessment of GPT-5 at the link here). GPT-5 is not only not AGI, but it also isn’t anywhere near that ballpark. Imagine yourself flying thousands of miles away from a ballpark, and that’s how far away we seem to be (or maybe take a rocket ship, since a plane might not be able to cover the full distance required).
Various AI luminaries are now starting to adjust their predicted timelines about AGI and embarrassingly or sheepishly recalibrating their wild proclamations. For a close look at numerous timelines that have been previously posted or pronounced, see my coverage at the link here. We’ve had dates in 2025, 2026, and 2027. Others more cautiously offered 2035 or maybe 2040. It seems that the “any day now” camp is shifting to the decade-away camp.
Are We Off The AGI Pathway
The problem is that the existing architectural and design principles underlying generative AI and LLMs are increasingly suggested to be incapable of scaling to the reaches of AGI.
Those are fighting words amidst the AI community. You see, some ardently believe that the underpinnings of LLMs will, in fact, get us to AGI. All we need to do is keep shoveling more coal into the steam engine. Add more computer processors, boost the GPUs, include lots of digital memory, and voila, AGI will emerge from generative AI.
Not everyone believes that this stay-the-course path is the right strategy; they contend that we are myopically and foolishly putting all our eggs in one basket. The argument is that generative AI is going to ultimately hit a brick wall. All the king’s horses and all the king’s men are not going to get beyond that wall. No matter how many massive server farms and data centers you toss at LLMs, they are still going to simply be LLMs.
This boils down to one tough question that nobody can concretely answer, namely, will scaling up be enough?
If you believe that throwing the kitchen sink at generative AI is going to be sufficient to reach AGI, you are probably saying there is little or no need to look elsewhere. You might go further and insist that any dilution of the resources, time, and AI development efforts that go toward anything other than LLMs is a huge mistake. Such diversion will delay the inevitability of AGI, pushing the benefits of AGI much further out than they needed to be.
But, if you have doubts about the staying power of generative AI, especially that scale alone won’t cut the mustard, you are assuredly looking around to discern what else might be viable on the shelves and worthy of rapt attention.
Are the shelves barren, or is there something sitting out there that we could reconsider?
AGI Graveyard Contains Expert Systems
Before the popularity of generative AI and LLMs went wide and far, the prior era of AI was predominantly focused on expert systems, also known as rules-based systems or knowledge-based systems. Those are sitting on rather dusty shelves or perhaps are planted six feet under in the AGI cemetery.
Let’s take a moment to explore the differences between that era versus the present-day era of AI.
By and large, the underlying data structure of generative AI and LLMs makes use of artificial neural networks (ANNs). This is a computational technique that is somewhat based on how we believe the brain works, but it is a far cry from the real thing. It is not the same as true wetware (i.e., the brain and mind). In any case, this form of AI is referred to as sub-symbolic and entails finding patterns in data. In contrast, the prior era of AI consisted of explicitly writing out the rules for what the AI was to do. Those rules-based systems worked based on symbols and symbolic logic.
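To make the symbolic side concrete, here is a minimal sketch of how a rules-based expert system operates, namely forward chaining over explicitly written rules. The medical-style rules and fact names are purely illustrative inventions for this sketch, not drawn from any real expert system:

```python
# Minimal forward-chaining rules engine (symbolic AI sketch).
# The facts and rules below are made up purely for illustration.

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all satisfied,
    adding its conclusion as a new fact, until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Each rule: (set of required facts, fact to conclude).
rules = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_rest"),
]

print(forward_chain({"fever", "cough"}, rules))
```

The key contrast with the sub-symbolic approach is that every conclusion here traces back to a human-authored rule, whereas an artificial neural network derives its behavior from statistical patterns in data with no such explicit rule to point at.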
The two paths were seen as utterly divergent and mutually exclusive from each other. You either aligned yourself with the sub-symbolics or you aligned yourself with the symbolics. It was like lining up to side with the McCoys versus the Hatfields.
When there was a crossover point from the prior era of AI to the contemporary era of AI, a huge debate took place. Should AI be shaped around the sub-symbolic approach or should it be devised based on the symbolic approach? Dogmatic camps formed on the two sides. Finger-pointing and high-pitched screaming ensued.
Eventually, expert systems were considered unable to scale up and would never achieve AGI. Meanwhile, the sub-symbolic approach of ANNs became the next big thing. Some refer to rules-based systems as GOFAI (good old-fashioned AI) and believe that those days are long gone. Indeed, there is a somewhat common attitude that expert systems were so off-putting that they need to be declared dead and buried in the AGI boneyard.
Up Comes The Resurrection
To recap, we nowadays have generative AI and LLMs that some believe won’t scale to AGI, and we have a prior era of AI consisting of expert systems that were also thought to not be scalable to attain AGI. Perhaps both belong in the AGI graveyard. It’s a dumping ground with a lengthy history.
Are those two strikes and you’re out?
Nope.
One of the latest ways of thinking about achieving AGI is that we could combine the best of both those eras. Keep sub-symbolic going. Push it as far forward as we can. At the same time, bring back the symbolic approach. Integrate rules-based systems with generative AI. A grand synergy and tremendous opportunity are perhaps staring us in the face. It is time to resurrect rules-based systems.
The next era of AI could then be the advent of neuro-symbolic AI, also known as hybrid-AI.
The Rise Of Hybrid AI
Neuro-symbolic AI is a combination of sorts, construed as a two-for-one special. You take the artificial neural networks (ANNs) that currently sit at the core of generative AI and LLMs, and mix that brew with rules-based or expert systems (this is also known as sub-symbolic AI getting combined with symbolic AI).
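One simple way the two-for-one combination can be arranged is a pipeline: a learned sub-symbolic scorer produces a judgment, and an explicit symbolic rule layer turns that judgment into an auditable decision. The sketch below is a hypothetical illustration under that assumption; the weights, thresholds, and decision labels are invented for this example, and the “neural” part is just a one-neuron stand-in for a full ANN:

```python
import math

def neural_score(features, weights):
    """Stand-in for an ANN: a weighted sum squashed to (0, 1) via a sigmoid."""
    z = sum(f * w for f, w in zip(features, weights))
    return 1 / (1 + math.exp(-z))

def symbolic_layer(score):
    """Explicit, human-readable rules applied on top of the learned score."""
    if score > 0.8:
        return "approve"
    if score > 0.5:
        return "review"
    return "reject"

# Sub-symbolic judgment feeds a symbolic, inspectable decision.
score = neural_score([1.0, 0.5], [2.0, 1.0])
print(symbolic_layer(score))
```

The appeal of this hybrid arrangement is that the final decision step can be inspected, edited, and reasoned about symbolically, while the messy pattern-finding is delegated to the learned component.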
Many such efforts are already underway; see my discussion at the link here.
Upset critics warn that we ought not to slip back to old and now-dismissed ways of doing things. The stigma associated with rules-based systems is going to be hard to shake off. The gap in time between the end of the expert systems era and the start of the contemporary LLM era is often referred to as an AI winter. No one wants another AI winter. It was cold, AI was seriously doubted, and the money spigots had been turned off.
Is hybrid AI a good idea or a foolish one?
It’s time to place your bets. No one knows for sure whether neuro-symbolic AI is the path to AGI. You can at least say that it is a different path than the one that we’ve been on to date. Combining the two paths might get us down a new third path that will lead to AGI.
That’s the hope.
Conjuring Paths To AGI
There are additional postulated paths that might get us to AGI; see my discussion at the link here and the link here. I mention this point to emphasize that there are more rabbits in the hat. We don’t know which, if any, will be the right pick, but the good news is that we have various possibilities in hand.
Another perspective is that we have not yet discovered a suitable advance in AI that will lead us to AGI. There is a missing Eureka that isn’t in the past. It is only somewhere out there in the future. Discard the past and look ahead.
George Santayana famously said that those who cannot remember the past are condemned to repeat it. Should we remember that expert systems didn’t get us to AGI, and thus denounce their resurrection? Or can we look at expert systems in a new light, and argue that when combined with generative AI, we might find ourselves truly on the way to AGI?
According to William Shakespeare, we might need the eye of a newt, the toe of a frog, the wool of a bat, an owlet’s wing, and a lizard’s leg, which will get us a bubbling cauldron, but we must be mindful that it could be a charm of powerful trouble. Achieving AGI might be the ultimate Halloween story.
