In today’s column, I explore the growing supposition that there is an AI aperture of certainty associated with attaining artificial general intelligence (AGI). Here’s what that means. Some assert that as we get closer to reaching AGI, our sense of certainty that we will do so rises and the uncertainty lessens. This implies that our ability to predict or forecast the arrival of AGI gets progressively stronger during the arduous journey to AGI.
Let’s talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Heading Toward AGI And ASI
First, some fundamentals are required to set the stage for this weighty discussion.
There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet attained AGI.
In fact, it is unknown whether we will ever reach AGI; perhaps AGI will be achievable decades from now, or perhaps centuries. The AGI attainment dates floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even further out of reach, given where we are currently with conventional AI.
Aperture Of Certainty
Shifting gears, consider a handy rule of thumb about journeys and destinations. Our intuition tells us that the closer you get to some goal or destination, the better you are at predicting when you will arrive there. It’s a nearly universal principle.
Imagine you are on a trek through the wilderness. At the beginning of the hike, you probably have a general notion of when you’ll reach the dreamy campsite adjacent to that peaceful lake. It might take five hours or up to ten hours to make your way through the hills. It all depends on how your legs feel, whether the terrain is reasonable, and a myriad of other factors.
After trekking non-stop for two hours in the hot sun, you take stock of where you are and when you might get to that ice-cold lake. Are you able to better assess the timing of when you will potentially get to the campsite? Probably so. You’ve got some of the distance now under your belt and can better judge how things are likely to proceed.
It could be said that the aperture of certainty is giving you a much clearer sense of the timing to reach your destination. Another way to think of it is that the uncertainty is being reduced. This same idea of gauging the attainment of something applies to all kinds of matters in life. The closer you get, the better you seem able to predict when the arrival will occur.
AGI Aperture Of Certainty
Does the aperture of certainty apply to the attainment of AGI?
Many in the AI community assume this must be the case. It seems to stand to reason. We are making gradual headway by advancing conventional AI. Each step appears to be moving us closer to the vaunted AGI. Throughout this stepwise progress, we should be able to get a better glimpse of when AGI will be reached.
For example, suppose that the predictions of reaching AGI by the year 2040 are approximately on target (see my analysis of the 2040 and other AGI date predictions at the link here). A common belief is that within five years of AGI, such as by 2035, we would abundantly know whether AGI is going to happen. Meanwhile, at the ten-year distance of 2030, the odds of accurately gauging whether AGI will occur by 2040 are presumed to be a lot lower.
You can recast that sense of understanding by referring to uncertainty rather than certainty. The uncertainty of aptly predicting 2040 is higher (we are more uncertain) in 2030 than in 2035. Uncertainty shrinks as the anticipated target gets nearer.
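To make the intuition concrete, here is a toy sketch of my own devising (not drawn from any AGI forecasting model). It assumes, purely for illustration, that the standard deviation of an arrival-date forecast shrinks in proportion to the years remaining before the predicted date; the target year, the error constant, and the proportionality assumption are all hypothetical.

```python
# Toy model: forecast uncertainty shrinks as the target date nears.
# Assumption (hypothetical): forecast std dev = 0.5 years per year remaining.

TARGET_YEAR = 2040
ERROR_PER_YEAR_REMAINING = 0.5

def forecast_interval(current_year: int) -> tuple[float, float]:
    """Return a rough mean +/- 2-sigma interval for the arrival year,
    as seen from the given current year."""
    years_remaining = TARGET_YEAR - current_year
    sigma = ERROR_PER_YEAR_REMAINING * years_remaining
    return (TARGET_YEAR - 2 * sigma, TARGET_YEAR + 2 * sigma)

for year in (2030, 2035, 2039):
    low, high = forecast_interval(year)
    print(f"Forecast made in {year}: arrival between {low:.0f} and {high:.0f}")
```

Under these made-up numbers, a forecast made in 2030 spans roughly 2030 to 2050, while the same forecast made in 2035 narrows to roughly 2035 to 2045: the aperture of certainty tightening as the target approaches.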
Gotchas On The AGI Pathway
An idealized world would nicely ensure that the aperture of certainty works all the time. But realistically, we do not live in such a world. Sad face.
Consider again the hike to that dreamy campsite. Just because you have already made a two-hour trek doesn’t necessarily tell you anything about what the rest of the pathway might consist of. Unbeknownst to you, perhaps an angry bear is waiting along the trail ahead. The bear is indubitably going to slow down your progress, and you might need to wait hours for the beast to move away.
Advancing AI is prone to that same false belief, namely that progress already made will somehow equate to likely future progress.
The AI advances could end up hitting a roadblock. Maybe this delays AI movement toward AGI by several years, perhaps a decade or more. Envision that on the path to a 2040 date, a severe roadblock arises in the year 2036. Whereas in the year 2035, everything looked rosy, the blockage in 2036 spells deep trouble for the anticipated 2040 attainment.
Another derailment could be that the efforts to advance AI are so tightly secretive that it is nearly impossible to gauge how things are coming along. In the year 2035, imagine that all the AI makers are tightlipped and not revealing the status of their AI. It might be difficult to ascertain the status of AI to then project forward to what might arise by 2040.
Sooner Than You Think
I’ve so far highlighted aspects that would delay or stretch out the attainment of AGI. The other side of this coin is that AGI might arise sooner than assumed.
Here’s such a scenario. Some ardently believe that we are going to experience an intelligence explosion, consisting of AI that feeds on other AI and rapidly accelerates toward AGI, see my discussion at the link here. Suppose that we are gradually moving forward with advances in AI and then, wham, out of the blue, we touch upon an intelligence explosion. Minutes or hours later, voila, AGI has been reached.
Nobody can say for sure whether we will somehow bring forth an intelligence explosion. Nor can anyone say for sure whether it might happen on its own, occurring without the human hand at play. Even guessing when an intelligence explosion might happen is just as widely and wildly debated.
Going back to the AGI aperture of certainty, envision that the year is 2035 and all predictions are aligned that by 2040 we will reach AGI. Then, in 2036, an intelligence explosion happens, catching us all by utter surprise. Despite the carefully aligned predictions, AGI is suddenly achieved in 2036.
Doing The Best That We Can
The upshot is that though it is worthwhile to make predictions about AGI, including doing so with an aperture of certainty mindset, there are lots of ways that the journey can get shaken up. A big dose of salt and a scrutinizing attitude ought to be sincerely applied to the forecasting of when AGI is going to be attained.
Peter Drucker, the legendary management guru, said it best about the challenges of making predictions: “Trying to predict the future is like trying to drive down a country road at night with no lights while looking out the back window.”
That’s roughly the predicament that faces predicting the attainment of AGI.
