In today’s column, I examine a newly posted blog piece by Sam Altman that has generated quite a bit of hubbub and controversy within the AI community. As the CEO of OpenAI, Sam Altman is considered an AI luminary, whose viewpoint on the future of AI carries an enormous amount of weight. His latest online commentary contains some eyebrow-raising indications about the current and upcoming status of AI, including aspects couched in AI-speak and other insider terminology that require mindful interpretation and translation.
Let’s talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Heading Toward AGI And ASI
First, some fundamentals are required to set the stage for this discussion.
There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the loftier possibility of achieving artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet attained AGI.
In fact, it is unknown whether we will ever reach AGI, or whether AGI might be achievable decades or perhaps centuries from now. The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.
Sam Altman Stirs The Pot
In a posting entitled “The Gentle Singularity,” published June 10, 2025, on his personal blog, Sam Altman, the famed AI prognosticator, made these remarks (excerpts):
- “We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.”
- “AI will contribute to the world in many ways, but the gains to quality of life from AI driving faster scientific progress and increased productivity will be enormous; the future can be vastly better than the present.”
- “Generally speaking, the ability for one person to get much more done in 2030 than they could in 2020 will be a striking change, and one many people will figure out how to benefit from.”
- “This is how the singularity goes: wonders become routine, and then table stakes.”
- “It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year; or from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year.”
- “We do need to solve the safety issues, technically and societally, but then it’s critically important to widely distribute access to superintelligence given the economic implications.”
There’s a whole lot in there to unpack.
His upbeat opinion piece touches on many undecided considerations, such as the ill-defined and indeterminate AI event horizon, the impacts of artificial superintelligence, various touted dates suggesting when we can expect things to really take off, hazy thoughts about the nature of the AI singularity, and much more.
Let’s briefly explore the mainstay elements.
The Claimed Event Horizon
A big question facing those who are deeply into AI is whether we are on the right track to attain AGI and ASI. Maybe we are, maybe we aren’t. Sam Altman’s reference to the AI event horizon alludes to the existing pathway that we are on, and he states unequivocally that, in his opinion, we have not only reached the event horizon but are already avidly past it. As espoused, the takeoff has started.
Just to note, that is an immensely bold and brash claim, and not everyone in AI concurs with that viewpoint.
Consider these vital facets.
First, in favor of that perspective, some insist that the advent of generative AI and large language models (LLMs) vividly demonstrates that we are now absolutely on the path toward AGI/ASI. The remarkable natural language fluency exhibited by contemporary LLMs seems to be a sure sign that the road ahead must lead to AGI/ASI.
However, not everyone is convinced that LLMs constitute the appropriate route. There are qualms that we are already witnessing headwinds on how much generative AI can be further extended; see my coverage at the link here. Perhaps we are nearing a severe roadblock, and continued efforts will not get us any further bang for the buck.
Worse still, we might be off-target and going in the wrong direction altogether.
Nobody can say for sure whether we are on the right path. It is a guess. Well, Sam Altman has planted a flag declaring that we are incontrovertibly on the right path and that we’ve already zipped quite a distance down the roadway. Cynics might find this a self-serving perspective since it reinforces and reaffirms the direction that OpenAI is currently taking.
Time will tell, as they say.
The Postulated AI Singularity
Another consideration in the AI field is that perhaps there will be a kind of singularity, a key point at which AGI or ASI will readily begin to emerge, keenly showcasing that we have struck gold in terms of being on the right pathway. For my detailed explanation of the postulated AI singularity, see the link here.
Some believe that the AI singularity will be a nearly instantaneous split-second affair, happening faster than the human eye can observe. One moment we will be working strenuously on pushing AI forward, and then, bam, the singularity occurs. It is envisioned as a type of intelligence explosion, whereby intelligence rapidly begets more intelligence. After the singularity happens, AI will be leaps and bounds better than it just was. In fact, it could be that we will have a fully fledged AGI or ASI due to the singularity. One second earlier, we had plain AI, while an instant later we amazingly have AGI or ASI in our midst, like a rabbit out of a hat.
Perhaps, though, the singularity will be a long and drawn-out affair.
There are those who speculate the singularity might get started and then take minutes, hours, or days to run its course. The time factor is unknown. Maybe the AI singularity will take months, years, decades, centuries, or longer to gradually unfold. Additionally, there might not be anything resembling a singularity at all, and we’ve simply concocted some zany theory that has no basis in reality.
Sam Altman’s posting seems to suggest that the AI singularity is already underway (or maybe happening in 2030 or 2035) and that it will be a gradually emerging phenomenon, rather than an instantaneous one.
Interesting conjecture.
The Dates Of AGI And ASI
Right now, efforts to forecast when AGI and ASI will be attained are generally based on putting a finger up into the prevailing AI winds and wildly gauging potential dates. Please be aware that the hypothesized dates have very little evidentiary basis.
There are many highly vocal AI luminaries making brazen AGI/ASI date predictions. Those prophecies seem to be coalescing toward the year 2030 as a targeted date for AGI. See my analysis of those dates at the link here.
A somewhat quieter approach to the game of date guessing is the use of surveys or polls of AI experts. This wisdom-of-the-crowd approach is a form of scientific consensus. As I discuss at the link here, the latest polls seem to suggest that AI experts generally believe we will reach AGI by the year 2040.
Depending on how you interpret Sam Altman’s latest blog post, it isn’t clear whether AGI is happening by 2030 or 2035, or whether he means ASI rather than AGI, since he refers to superintelligence, which might be his way of expressing ASI or maybe AGI. The post muddies the differentiation of AGI from ASI. Indeed, I’ve previously covered his shifting definitions of AGI and ASI, i.e., moving the cheese, at the link here.
Presumably, we’ll know how things turned out in a mere 5 to 10 years. Mark your calendars accordingly.
All Roses And No Real Thorns
An element of the posting that has especially raised the ire of AI ethicists is that the era of AGI and ASI seems to be portrayed as solely uplifting and joyous. We are in a gentle singularity. That’s certainly happy news for the world at large. Utopia awaits.
There is a decidedly other side to that coin.
AI insiders are pretty much divided into two major camps right now about the impacts of reaching AGI or ASI. One camp consists of the AI doomers. They predict that AGI or ASI will seek to wipe out humanity. Some refer to this as “P(doom),” meaning the probability of doom, i.e., that AI zonks us entirely, also known as the existential risk of AI or x-risk.
The other camp entails the so-called AI accelerationists.
They tend to contend that advanced AI, namely AGI or ASI, is going to solve humanity’s problems. Cure cancer, yes indeed. Overcome world hunger, absolutely. We will see immense economic gains, liberating people from the drudgery of daily toil. AI will work hand-in-hand with humans. This benevolent AI is not going to usurp humanity. AI of this kind will be the last invention humans ever make, but that’s good in the sense that AI will invent things we never could have envisioned.
No one can say for sure which camp is right and which one is wrong. This is yet another polarizing aspect of our contemporary times. For my in-depth analysis of the two camps, see the link here.
You can readily discern which camp the posting sides with, namely roses and fine wine.
A Bundle Of Opinion And Conjecture
It is important to carefully assess the myriad pronouncements and proclamations being made about the future of AI. Oftentimes, the wording appears to brazenly assert that the future is utterly known and predictable. Declared with a sense of flair and confidence, many of these prognostications can easily be misread as a bushel of facts and knowns, rather than a bundle of opinions and conjecture.
Franklin D. Roosevelt wisely stated: “There are as many opinions as there are experts.” Keep your eyes and ears open and be prudently mindful of all prophecies concerning the future of AI.
You’ll be immeasurably glad you were cautious and alert.