Today is the one-year anniversary of the launch of ChatGPT. After the launch, the hype quickly took over. "AI" became the nuclear power of the 21st century, a potentially humanity-ending technology that required the instant attention of regulators around the world … and lots of funding from FOMOed venture capitalists.
In contrast to the hype, experts aren't at all worried that AI will grow an evil soul and destroy humanity, based on discussions today at Hopkins Bloomberg Center, hosted by the Johns Hopkins School of Advanced International Studies. Large language models don't yet work very well, and they're not designed to grow evil souls. That fear looks more like the stories of Dracula, demons, or Frankenstein that people have been telling for centuries, probably as an exercise of self-identification against the unknown.
Experts are, however, worried about a handful of more prosaic issues, most having to do not with the nature of the technology but with the nature of humans:
• Humans may be overconfident in the accuracy of artificial intelligence systems, not recognizing that the models are built to accept uncertainty. "AI is a smooth talker," said Rama Chellappa of the Institute for Assured Autonomy.
• Humans are vulnerable to the media produced by artificial intelligence systems: deepfakes and disinformation. What makes this generation of disinformation more dangerous is that it can be produced at scale and distributed at scale by social media companies, Chellappa said.
• In implementing the promised (possibly illusory) efficiencies of artificial intelligence systems, humans may end up making existing power dynamics worse. Anjalie Field of the JHU Whiting School of Engineering cited the example of a well-meaning project that might give AI technology to case workers in the foster care system. But unless the same technology were given to families in the system, the distribution of the AI would make an existing power disparity worse.
• Though there's a danger the technology could fall into the wrong hands, we face that same danger with many technologies, everything from guns to nuclear weapons, and we have a variety of adaptable tools to handle those situations.
Accident Waiting To Happen
Partway through the conversations, Jason Matheny of RAND wondered aloud about the possibility of an AI Three Mile Island. Before it meant "too much information," TMI was a nuclear near-disaster that turned into a shot across the bow, convincing the American public it did not want more nuclear power plants. (Other countries made different calls. Two, the former Soviet Union and Japan, suffered nuclear disasters.)
The Three Mile Island accident happened because of a combination of machine and human error. If there is a similar accident involving AI, it seems likely it will happen because people turn over too many of the operations of a critical system, too fast, to an AI system. In a capitalist system, the incentives to save money will be there, and the hype and fear may produce a human error of judgment about how much AI can handle.
"AI is like a 17-year-old boy," venture capitalist Seth Levine (my co-author) told me today. "He sounds like he knows everything."
Synthetic Biology Poses Great Risks
Listening to an earlier panel on emerging technology in general, I was more worried about the technologies the hype around artificial intelligence is distracting us from, including Russia's semi-autonomous nuclear weapons and synthetic biology. In contrast to AI, the evidence is already in on how destructive those technologies can be.
Laurie Locascio, the director of the National Institute of Standards & Technology, pointed to events in the late 1980s around the development of CRISPR technology, the first sequencing of DNA and the foundation of synthetic biology. "The conversation became whether we're going to solve the most compelling problems, or we're going to create designer babies," she said. "What happened is that scientists came together to hit head-on the ethical issues." That work and those discussions continue, even as the threat of the technology falling into the hands of bad actors remains. So far, nobody's worst fears, like the idea of a terrorist replicating the smallpox virus, have been realized.
RAND's Matheny put out a quiet plea for regulators caught up in AI fever to look at other areas of emerging technology. The supply chains for the kinds of synthesizers that can create synthetic viruses are unregulated, he pointed out. He then recounted that, as a test, researchers have produced a pox virus de novo for less than $100,000 for the entire project. Smallpox, let loose again after being eradicated, could kill more than 100 million people worldwide.