There are four key players in the global superintelligence arms race, according to artificial general intelligence developer Dr. Ben Goertzel. Who wins could well set the stage for the future of technological development and, essentially, global domination.
“You have the beginnings of an AGI arms race, both between U.S. companies, and between the U.S. and China,” the CEO of SingularityNET said in a recent TechFirst podcast.
More than 50,000 people have signed an open letter published just last week calling for a ban on the development of superintelligence. They include AI pioneer Geoffrey Hinton, Apple co-founder Steve Wozniak, billionaire businessman Richard Branson, and multiple world leaders.
But while many want to slam on the AI brakes, Goertzel says we may be only two years away from AGI. In his view, AGI, a powerful precursor to superintelligence, is essentially inevitable.
Goertzel defined artificial general intelligence in a book released in 2005 as a smart AI system that can generalize beyond its training.
Humans do this extremely well: once you’ve learned to ride a bike, you can often transfer those balance and steering skills to a scooter or even skiing, new skills you haven’t been explicitly taught. A human-level AGI would be a system that can generalize beyond its history at least as well as people can, Goertzel says, and an AI-powered superintelligence would do this far better than people.
This is pretty much inevitable, Goertzel says, given where we are right now.
“If modern technological society continues without any huge calamities occurring, yeah, it’s basically inevitable,” he says. “AI delivers great value to people: economic value, human value and entertainment value, and making it smarter will allow it to deliver more and more value to people so people will keep on building it.”
Concerningly, the risks of superintelligence, according to the open letter, are many: human economic obsolescence and disempowerment; losses of freedom, civil liberties, dignity, and control; national security risks; even potential human extinction.
Those all sound scary. But most nations and companies seem to have decided that the risk of someone else developing superintelligence first is greater still.
There are essentially four key players competing to win the AGI and ultimately superintelligence race, says Goertzel:
- The United States
- Big tech companies in the U.S.
- China and its assorted big tech companies
- Open source organizations
(Not coincidentally, the U.S. tops the global list of AI superpowers by chip capacity. China, meanwhile, ranks seventh, reflecting its choice of efficiency over brute-force GPU scale.)
Who wins this race essentially earns the right to set the tone for the next decades of technological and business power, which likely translates almost directly into geopolitical power.
And who wins matters.
“How the AGI is rolled out will also make a big difference,” says Goertzel. “If it’s rolled out in a Chinese company server farm or in Google’s server farm, which is half a step away from controlled by US government, or if it’s rolled out on a network of a thousand server farms in a hundred different countries … this doesn’t necessarily make a difference to the thinking process of the AGI: it makes a difference to what parties can control that thinking process.”
For example, an AGI built under the Chinese government or a Chinese company could become the ultimate instrument of state power. Citizens might experience seamless AI-driven services, infrastructure optimization, and expanded prosperity, but they might also live under tighter surveillance, censorship, and social control. Imagine AGI monitoring dissent in real time, or enforcing digital social contracts with near-perfect efficiency.
A U.S. government-controlled AGI or AI superintelligence might deliver similar power under the banner of national security.
Alternatively, an AGI born in Silicon Valley might be optimized for commerce. Superintelligence in that scenario might be woven into every app, service, and platform, hyper-personalizing advertising and aligning our social media around potential economic and financial outcomes. Instead of solving global challenges, AGI might focus first on maximizing engagement, subscriptions, or ad revenue.
Goertzel’s hope, and what his foundation is working towards, is an open-source AGI that everyone can use. A decentralized AGI could be the most democratic option, with benefits spread across researchers, entrepreneurs, and nations. It might become the “Linux of intelligence”: freely available, adaptable, and globally accessible. That could supercharge innovation, education, and problem-solving worldwide.
Of course, there’s a downside, as we see even today with our limited AI tools. Think deepfake voice calls purporting to be from your child or relative, who supposedly needs help, or your CEO, who oddly enough apparently needs you to buy a lot of expensive Apple Store gift cards and send them over.
With no centralized guardrails, anyone, from startups to rogue states, could access and weaponize AGI. In this scenario, humanity gains the widest access, but also faces the widest risks.
Goertzel is not an AI doomer, however, even though the future offers no certainties.
“I think we’ve got to acknowledge it’s an unprecedented situation and we certainly can’t predict with certainty, but there’s no reason to make dystopic assumptions,” he says. “I think there’s every reason to believe we can create artificial general intelligence systems that will be beneficially disposed toward us and respect us and, you know, work with us together to make the world better and better for human and digital life.”
If nation-states get there first, however, they’re likely going to want to harness superintelligence for their own goals.
And those might be as bad and as harmful as any rogue AI that we can imagine.
