In today’s column, I analyze a hotly debated trending topic in the AI community about whether attaining artificial superintelligence (ASI) will require first achieving artificial general intelligence (AGI), or whether we might instead take a straight shot directly to ASI without using AGI as a stepping-stone.
Let’s talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
The Pursuit Of AGI And ASI
There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence or maybe even the loftier possibility of achieving artificial superintelligence.
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of AI, AGI, and ASI, see my analysis at the link here.
AI insiders are pretty much divided into two major camps right now about the impacts of reaching AGI or ASI. One camp consists of the AI doomers. They are predicting that AGI or ASI will seek to wipe out humanity. Some refer to this as “P(doom),” meaning the probability of doom, namely that AI zonks us entirely, also known as the existential risk of AI.
The other camp entails the so-called AI accelerationists.
They tend to contend that advanced AI, namely AGI or ASI, is going to solve humanity’s problems. Cure cancer, yes indeed. Overcome world hunger, absolutely. We will see immense economic gains, liberating people from the drudgery of daily toils. AI will work hand-in-hand with humans. This benevolent AI is not going to usurp humanity. AI of this kind will be the last invention humans will ever need to make, but that’s good in the sense that AI will invent things we never could have envisioned.
No one can say for sure which camp is right and which one is wrong. This is yet another polarizing aspect of our contemporary times.
For my in-depth analysis of the two camps, see the link here.
Attaining ASI Via Straight Shot Or Via AGI
A heated debate is taking place over how we’ll get to ASI, centering on these two mainstay postures:
- (1) Two-Step Process: Stepping-stone via AGI to ASI. Conventional AI will first advance to AGI, and then, after some settling time, the AGI will advance to ASI, all occurring as a two-step process.
- (2) One-Step Process: Straight shot to ASI. Conventional AI will advance straight to achieving ASI, and there won’t be an intermediary landing zone on that pathway (it’s a one-step process).
There is currently insufficient evidence to anoint one path over the other as the more likely to occur. A compelling case can be made for each respective posture. Opinions abound. Of course, nobody can even say for sure that we’ll achieve AGI or that we’ll achieve ASI.
It is all speculation at this juncture.
In any case, it is worthwhile to give serious thought to these two postures since we can potentially drive AI advances toward either of the two pathways. Important questions arise. Which pathway is best? What do we need to do to proceed in a preferred direction? Etc.
One loud retort is that we might not have a say in how the process plays out. It could be that conventional AI accelerates without human hands to become ASI, and we are merely bystanders (this is often referred to as the singularity, see my analysis at the link here). Likewise, it could be that conventional AI advances into AGI without our direct influence, settles there, and stays there or subsequently opts to shift further to become ASI.
Thus, these corollaries apply to the two-step and one-step conundrum:
- (a) Humans decide the direction. This belief says that advancing AI to AGI or ASI will be undertaken by humans and guided largely by human effort.
- (b) Direction outside of human hands. This belief says that advancing AI to AGI or ASI will be driven by AI itself and that humans will essentially be onlookers.
The crux of that vocal retort is that the path and transitions might happen without our ability to decide which way things go. A cogent response to that retort is that even if that is the case, this does not negate the need to consider the two pathways.
You see, we would be wise to be prepared for the two pathways, however they come about. You could stridently contend that even if we aren’t able to call the shots, having sufficient awareness beforehand would be crucial to avoid being caught totally off guard.
Traditionalist View Is AI-AGI-ASI
It is reasonably fair to suggest that the traditional viewpoint has been that we will proceed with a two-step process of conventional AI leading to AGI, which in turn leads to ASI. This can be shown formulaically as AI-AGI-ASI, a shorthand form of expression.
Here’s how this is usually envisioned.
We keep our endeavors primarily aimed at AGI. This would seem the closest window of opportunity. ASI might seem to be a bridge too far. Let’s get our heads wrapped around achieving AGI and not dreamily pursue ASI.
Don’t bite off more than you can chew.
The argument often goes further to insist that wanton attempts toward ASI are bound to rob our pathway to AGI. The time, energy, and cost of seeking ASI will sap the resources needed to arrive at AGI. Therefore, ASI is a fool’s errand. It could be that you’ll delay AGI, or possibly not arrive at AGI at all, nor at ASI either. The shiny object of ASI has led you down a primrose path.
Aiming strictly at ASI could lead us to being entirely empty-handed, namely not garnering AGI and failing to arrive at ASI.
The softer version of this logic is that you can certainly keep ASI in the back of your mind, but don’t be preoccupied with ASI. Consider ASI as a faintly far-off shining light. Meanwhile, your head needs to be in the game of AGI achievement.
Safety Of The Traditionalist View
Something that usually goes along with the AI-AGI-ASI perspective is that this seems the safest way to proceed.
The deal is this.
If we leap into ASI and don’t first spend time with AGI, we are asking for deep trouble. The ASI will be so unfamiliar to us that we will be entirely out of our league. If we first arrive at AGI and figure out the best ways to cope with it, we will be better armed for dealing with ASI.
A twist: suppose we arrive at AGI and realize things aren’t going well for humankind. We could potentially stop the further progression to ASI. By seeing what happens in the presence of AGI, we might thankfully discover that humanity and a potential ASI would be a really ugly combination.
A twist on the twist is that AGI might opt to help us prevent ASI from occurring. Yes, that’s right, the logic is that AGI foresees that humanity and ASI are a lethal combination, possibly generating the dreaded existential risk that AI wipes us out or enslaves us.
AGI befriends us and helps to ensure that ASI doesn’t arise.
Wrongheadedness Of The Traditionalist View
Whoa, some exhort, you are giving AGI way too much credit for compassion.
First, AGI might decide to proceed to ASI, doing so regardless of what humankind wants to have happen. The moment we reach AGI, all bets are off. You might be thinking that AGI will comply with what we want. Think again. AGI is presumably going to be able to think for itself. There is no particular reason to assume that AGI will wait around for humans to decide whether to get to ASI.
Second, AGI might prevent humans from preventing ASI. This is a variation of the prior point, namely that AGI wants to get to ASI. It is one thing for AGI to proceed on its own. Another angle is that it proceeds, humans try to intercede, and AGI countermands human prevention.
Third, it could be that ASI is a farfetched and unattainable goal, even with AGI helping us out. The logic is this: suppose AGI agrees with us that ASI is worth having. The AGI tries and tries. However, the combined efforts of humans and AGI cannot make it to ASI. Why? Because ASI is a pipe dream.
The Upstart AI-ASI Pathway
Now that we’ve covered the traditionalist viewpoint of AI-AGI-ASI, let’s ponder the alternative of the upstart AI-ASI pathway.
In the past, a contention was that ASI is so far beyond our realm of possibilities that it made more sense to concentrate on attaining AGI. Aha, given the progress in AI, such as the advent of modern-era generative AI and large language models (LLMs), maybe we can start to reasonably believe that ASI is within the realm of being achieved.
Perhaps we don’t need AGI to get us to ASI. That two-step path was a prevailing assumption that no longer seems to fully hold water. Which would you rather have: AGI or ASI? If that’s the question, the answer seems to be that ASI is the right choice.
Why prefer ASI over AGI?
Because AGI will simply be more of the same, namely, human-level intelligence. Sure, that might be handy in many vital ways. The thing is, ASI will provide superintelligence. We can’t even guess how far that would take humankind since we don’t embody superintelligence and can’t think that far outside the box.
Do not waste time, energy, and cost on AGI. Skip it. If on the way to ASI the AGI pops up, fine, but that wasn’t where we had our eyes aimed. Aim for the big prize. The big prize is ASI.
Pursuing AGI could be a slow roll toward ASI. There we are, mired deep in the throes of AGI, which needlessly delays ASI. It could have been that we made a mighty leap and landed at ASI. Our risk-averse stance means that the untold benefits of ASI are either pushed into the future or waylaid entirely if we get stuck being content with AGI.
Craziness Of Leaping Into ASI
Hey, AI-ASI proponents, wake up and smell the coffee. So say those who are wringing their hands about a one-step process.
We have no idea whether we can control ASI. We need to crawl before we walk and walk before we run. The smart choice is to get to AGI and see how things look.
Aligning AGI with human values and societal norms will be a make-or-break endeavor for humanity (see my coverage on this at the link here). Can we ensure that AGI aligns with humankind? Can we keep AGI from going berserk? Are we able to devise, test, and perfect controls to rein in AGI?
There is a sensibly lower risk in establishing those facets with AGI than with ASI. ASI could completely trick us into believing we have it under control and that it is aligned with human values, all while scheming in a mastermind way. Boom, the next thing that happens is ASI has taken us over. We are caught flat-footed.
Another consideration is the potential economic and societal changes that AGI would bring forth. Will humans no longer need to work for a living? How will AGI and humankind balance the use of always-available human-level intelligence? We’ve got a lot of adjusting to figure out.
With ASI, those issues are exponentially heightened.
Benefits Of ASI Are Alluring
Will AGI be sufficient to find cures for cancer, solve world hunger, and otherwise be of immense benefit to society (see my analysis of how AI might aid the United Nations SDGs at the link here)?
Maybe, but maybe not.
Remember that AGI is human-level intelligence. AGI might get mired in the same roadblocks and myopic considerations that humans have. You are pinning too much hope on what AGI will achieve. It won’t be the knock-it-out-of-the-park result that you think it will be.
ASI would be.
You must admit that the chances of ASI finding cures for cancer, solving world hunger, and the like, are a bunch higher than with AGI. ASI is a moonshot. AGI is just circling the Earth.
Will ASI align with human values? We don’t know and likely don’t have any means to control this. It is absurd to believe that just because we get AGI aligned, we can do the same with ASI. Note that ASI will outthink humans and will outthink AGI. Whatever constraints you put into AGI, well, ASI can just toss those to the wind.
Go big or go home.
Welcome To The Heated Debate
You are now an honorary member of the AI-AGI-ASI versus AI-ASI heated debate. Welcome to the club.
One underlying precept that makes the debate especially thorny is the dual-use properties of AI (see my coverage at the link here). AI of any type, whether conventional AI, AGI, or ASI, has a dual-use facet. AI can be the best thing since sliced bread and be the most momentous AI-for-good that we can wildly imagine. Unfortunately, AI can be turned to evildoing and be the worst-of-the-worst when it comes to AI-for-bad.
Here is a final thought or two for you to contemplate over a glass of fine wine when you have a moment of quiet reflection.
John Steinbeck famously said that humanity is the only kind of varmint that sets its own trap, baits it, and then steps into it. Is that what we are doing with our compulsive race toward AGI and ASI? Mull that over.
Perhaps another preferred analogy is that we are on a train that isn’t going to stop. And, wherever the train is going, we are all onboard, whether we like it or not. Maybe the end station is nirvana.
It seems up to us to decide (hopefully).