As dusk settles on Davos, Switzerland, this wintry Sunday night, the eve of the annual World Economic Forum, there is a quiet, reserved anticipation that something notable will be done, or at least said, about how we, around the globe, will deal with artificial intelligence.
We have to agree somewhere.
Certainly, we are nowhere near an agreed-upon position, nor do we even seem to be near a common way of looking at AI. But a start is better than nothing at all, and the World Economic Forum is perhaps the most appropriate place for it. Of all the voices and forces in and around AI, this congress is the one body that, in the opinion of this writer, has the most to say (at least for now) with nothing to gain, materially or otherwise.
It's not as if AI is being discussed for the first time, but the gigantic leaps in the field in the last year alone, not to mention the much greater, almost unimaginable expectations for this year and beyond, bring AI to a commanding position, permanently.
Why permanently? As I've written on multiple occasions in this venue, artificial intelligence will undoubtedly be the biggest advancement ever made by humans, given its potential to wrest superiority from us, frightful and fanciful as it may now seem. But it's been discussed, so it shouldn't be a shock. If any of us hasn't shuddered at the scene in 2001: A Space Odyssey where HAL 9000 says, "I'm sorry, Dave, I'm afraid I can't do that," then my point about superiority is moot.
The Two Conflicts Over AI
As I see it, two major points of contention have emerged and taken front-and-center positions.
1. Technology advances vs. ethics
With every technological advance ever made, from stone tools to AI, we humans have never failed to figure out not only the beneficial uses but also the destructive ones. The reason, stated simplistically, is that we are more committed to what we could do than to what we should do. Case in point: the New York Times' copyright infringement lawsuit against Microsoft and OpenAI for "scraping" the intellectual property of the New York Times to use in teaching AI machines to learn. Neither a lawyer nor a publisher, I'll not offer an opinion, but as an observer and writer, I submit there's something wrong in there somewhere, and it appears awfully big.
2. Long- vs. short-term management
Technology, by nature, forges ahead at its own speed. AI, being the most formidable technology ever, wants no reins, but that's a recipe for nefarious actors (governments, corporations, and as yet undefined cyber-organizations) to use it for whatever they want. Follow that thinking to its foreseeable conclusion, and end-of-world scenarios are not out of the question, especially when aided by quantum computing, fusion energy, genetic engineering, biometrics, and facial recognition.
But is this the best way to manage such a ubiquitous and so-far unruly creature? Enter the World Economic Forum.
The World Economic Forum's "Primer on AI"
According to the WEF's website, the "Primer on AI" session will address the rapid growth of artificial intelligence, emphasizing "the critical need for inclusive, ethical, and well-aligned solutions" while exploring how to ensure AI benefits everyone.
Why? A quick history lesson might help.
When Johannes Gutenberg gave us the movable-type printing press in Mainz, Germany, around 1450, his invention spread rapidly. By 1500, some 1,000 presses across Europe had produced 9,000,000 copies of more than 40,000 works. Just as nature abhors a vacuum, humanity abhors stagnation, and when ChatGPT launched in late 2022, it took only five days to cross the 1,000,000-user mark.
That's the thing. Each technology makes it easier to adopt the next one. With that, there's growing concern that we had better get it right on AI, now, or live with the consequences forever.
Thatās how transformational AI is.