Artificial general intelligence is now an explicit aim for some of the largest corporations on the planet. Mark Zuckerberg’s new goal at Meta is to create smarter-than-human AGI, according to The Verge, and part of OpenAI’s charter is “planning for AGI and beyond.” If they or others achieve that goal, it could mean the death of death, the end of scarcity, and a whole new world of romance, according to author, scientist, and futurist Gregory Stock.
But that’s just the beginning of the changes AI could be bringing, he argued at the recent Beneficial AGI conference in Istanbul, Turkey.
AI is already significantly changing our culture and economy. AGI is a much bigger deal, however. It’s the point at which AI gets smarter than us, perhaps vastly smarter, and starts to learn at exponential rates. That’s potentially a major problem, which is why Geoffrey Hinton, one of the key figures in the development of AI, and Apple co-founder Steve Wozniak recently signed an open letter, the Statement on Superintelligence, along with almost 70,000 others, calling for a prohibition on the development of superintelligence.
The reason: many of the concerns around AGI are existential.
Will AGI kill us? Will AI kill all the jobs? Will super-intelligent AIs experience consciousness (Microsoft’s CEO of AI Mustafa Suleyman recently said no)? Or, will AGI systems result in “human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control,” as the open letter states?
Stock, on the other hand, is not an AGI doomer, and he suggests that the most profound transformations caused by AGI may be within us: how humanity changes in response to AGI, and not just what machines become.
Here are nine massive changes he foresees as AI continues to get better, faster.
A new human identity
Stock says the shift ahead isn’t just technological; it’s existential. Humans and machines are fusing into a super-organism, and when AI becomes an integral part of cognition, communication and creation, our individuality will blur. We’ll be less like tool users and more like biological nodes in a vast, hybrid intelligence, he says.
The collapse of expertise
ChatGPT is arguably making all of us instant experts. When any motivated person can gain AI-assisted mastery in hours, the “expert class,” Stock says, is finished. One example: medicine, where AI already outperforms human doctors at diagnosis in some cases. The next generation won’t defer to credentialed experts, Stock says; they’ll consult an AI that theoretically knows everything and forgets nothing.
Movement from scarcity to abundance
Artificial intelligence will vaporize scarcity in many domains, Stock argues, which may seem counterintuitive given the job losses we’re seeing right now. Communication, translation, design, photography, even education, all services that once required human labor, will become nearly free.
Deep human-AI integration
Future generations won’t simply use AI; they’ll grow up with it. Stock envisions children developing in immersive AI environments: talking with avatars, learning through interactive models, and organizing their lives alongside digital assistants.
That means our thinking will evolve with constant augmentation, and that AI won’t just amplify us; it could rewire what it means to be human.
The rise of the global brain
French philosopher Pierre Teilhard de Chardin called it the “noosphere”: a collective consciousness spanning the entire planet. Stock argues we’re starting to enter that era now, as instant translation and frictionless access to all information will make humanity function like a connected neural network.
Emotional bonds with machines
We will love our AIs, Stock predicts, and he’s not talking metaphorically. They’ll be our teachers, therapists, coaches and partners, sure, but even lovers, he says.
Humans already form deep attachments to chatbots and virtual companions. When those entities become smarter, more responsive, funnier and ever-present, many will prefer them to human relationships, he adds.
Digital immortality
You can already create avatars of yourself loaded with data on who you are, how you think and speak, and more. Stock envisions cheap, persistent avatars built from thousands of hours of recorded conversations, video, and text that will be far more convincing. They may think they are you, and family members will talk with them, and perhaps prefer them, after we pass away. In other words, if you die tomorrow, your digital self might not.
Greater global safety
Stock argues that AGI is not a threat to humans: we’re its parents, and we’re intertwined in the same ecosystem. But AGI and superintelligent AI systems will likely escape our control, and that’s a good thing, he says. The greater danger, he feels, is if humans remain in control. In his view, history shows what happens when we dominate technology: we weaponize it. Stock’s hope is that superintelligent AI will restrain us, acting as a planetary guardian that prevents us from destroying ourselves.
Massive transition
The singularity isn’t extinction; it’s transformation, in Stock’s view. He believes the real risk is societal collapse during the handoff from human to hybrid civilization. Our economies, religions, and governments all assume scarcity, mortality, and human superiority, and it’s possible that none of that will survive contact with AGI.
It’s hard to tease apart what’s realistic and what’s fanciful when we talk about near-magical technologies like artificial superintelligence. AI doomers worry about possible human extinction, a concern the Statement on Superintelligence reflects. AI optimists or accelerationists think superintelligence will vastly improve human existence, solving disease, hunger, poverty and more.
The reality is that no one can fully predict the future.
Given that, perhaps we should prepare for the worst while also hoping for the best.
How best to do that is hard to determine, but one way is through international accords on how we develop AGI and how AGI systems should be used. Chinese President Xi Jinping recently suggested creating a global body to govern artificial intelligence, but rivals in the U.S. and Europe are unlikely to join that kind of initiative.
Which means we may very well be at the mercy of organizations like Meta and OpenAI to develop AGI in pro-social ways, and not just ways to cement their own power and wealth.
Alternatively, we can hope that independent and open-source organizations achieve AGI first, or at least in parallel, which would give us a shot at spreading the benefits of superintelligence more widely.
