How Will AI Change the World?: A Captivating Animation Explores the Promise & Perils of Artificial Intelligence
Many of us can remember a time when artificial intelligence was widely dismissed as a science-fictional pipe dream unworthy of serious research and investment. That time, safe to say, has gone. “Within a decade,” writes blogger Samuel Hammond, the development of artificial intelligence could bring about a world in which “ordinary people will have more capabilities than a CIA agent does today. You’ll be able to listen in on a conversation in an apartment across the street using the sound vibrations off a chip bag” (as previously featured here on Open Culture). “You’ll be able to replace your face and voice with those of someone else in real time, allowing anyone to socially engineer their way into anything.”
And that’s the benign part. “Death-by-kamikaze drone will surpass mass shootings as the best way to enact a lurid revenge. The courts, meanwhile, will be flooded with lawsuits because who needs to pay attorney fees when your phone can file an airtight motion for you?” All this “will be enough to make the stablest genius feel schizophrenic.” But “it doesn’t have to be this way. We can fight AI fire with AI fire and adapt our practices along the way.” You can hear a considered take on how we might manage that in the animated TED-Ed video above, adapted from an interview with computer scientist Stuart Russell, author of the popular textbook Artificial Intelligence: A Modern Approach as well as Human Compatible: Artificial Intelligence and the Problem of Control.
“The problem with the way we build AI systems now is we give them a fixed objective,” Russell says. “The algorithms require us to specify everything in the objective.” Thus an AI charged with de-acidifying the oceans could quite plausibly come to the solution of setting off “a catalytic reaction that does that extremely efficiently, but consumes a quarter of the oxygen in the atmosphere, which would apparently cause us to die fairly slowly and unpleasantly over the course of several hours.” The way out of this problem, Russell argues, is to program in a certain lack of confidence about the objective itself: “It’s when you build machines that believe with certainty that they have the objective, that’s when you get sort of psychopathic behavior, and I think we see the same thing in humans.”
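Russell’s contrast can be made concrete with a toy sketch. Everything in the snippet below is invented for illustration rather than taken from the video or his books, and the hand-tuned penalty on side effects is only a crude stand-in for his actual proposal, which involves uncertainty over human preferences in what he calls an “assistance game.” Still, it shows the basic asymmetry: an agent with a fixed objective happily picks the most “efficient” action no matter what the objective leaves out, while an agent built to treat its objective as incomplete discounts drastic or irreversible actions.

```python
# Toy sketch (invented for illustration; not from the TED-Ed video or Russell's books).
# A fixed-objective agent ignores everything its objective omits; an agent that
# treats the objective as incomplete penalizes large or irreversible side effects.

# Each candidate action: (name, acidity_reduction, fraction_of_oxygen_consumed, reversible)
ACTIONS = [
    ("gentle alkalinity dosing", 0.3, 0.01, True),
    ("massive catalytic reaction", 1.0, 0.25, False),  # de-acidifies fast, burns 25% of the oxygen
]

def fixed_objective_agent(actions):
    """Maximizes acidity reduction only; side effects are invisible to it."""
    return max(actions, key=lambda a: a[1])

def uncertain_objective_agent(actions, side_effect_weight=10.0):
    """Assumes the stated objective is incomplete, so it discounts actions with
    large or irreversible side effects and prefers to stay correctable."""
    def score(action):
        _name, benefit, oxygen_cost, reversible = action
        penalty = side_effect_weight * oxygen_cost + (0.0 if reversible else 1.0)
        return benefit - penalty
    return max(actions, key=score)

print("Fixed objective picks:    ", fixed_objective_agent(ACTIONS)[0])   # the catalytic reaction
print("Uncertain objective picks:", uncertain_objective_agent(ACTIONS)[0])  # the gentler option
```

The design point is the one Russell makes: the trouble lies less in any particular objective than in the machine’s certainty that the objective is the whole story.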
A less existential but more common worry has to do with unemployment. Full AI automation of the warehouse tasks still performed by humans, for example, “would, at a stroke, eliminate three or four million jobs.” Russell here turns to E. M. Forster, who in the 1909 story “The Machine Stops” envisions a future in which “everyone is entirely machine-dependent,” with lives not unlike the e-mail- and Zoom-meeting-filled ones we lead today. The narrative plays out as a warning that “if you hand over the management of your civilization to machines, you then lose the incentive to understand it yourself or to teach the next generation how to understand it.” The mind, as the saying goes, is a wonderful servant but a terrible master. The same is true of machines — and even truer, we may well find, of mechanical minds.
Based in Seoul, Colin Marshall writes and broadcasts on cities, language, and culture. His projects include the Substack newsletter Books on Cities, the book The Stateless City: a Walk through 21st-Century Los Angeles and the video series The City in Cinema. Follow him on Twitter at @colinmarshall or on Facebook.