
Elon Musk’s artificial intelligence comments at Tesla’s Investor Day event, explained.


Musk’s closing response made for the most sober, halting portion of the entire four-hour event. “I don’t see A.I. helping us make cars anytime soon,” he began, before expanding his purview beyond Tesla. “I’m a little worried about the A.I. stuff. I think it’s something, I don’t know, we should be concerned about.” After a lengthy pause: “I think we need some regulatory authority or something overseeing A.I. development and just making sure it’s operating within the public interest.” Then, a moment of self-reflection: “And, you know, it’s quite a dangerous technology. I fear I may have done some things to accelerate it, which is, I don’t know.” Musk brought his remarks back to “useful” applications like Tesla’s self-driving tech (uh-huh) before concluding Investor Day with some hesitation: “Tesla’s doing good things in A.I., and”—he sighed—“I don’t know, this one stresses me out, so not sure what more to say about it.”

It was an unusual break in bravado from the multibillionaire, a showman generally more inclined toward cowboy hats, big-ass trucks, and unbridled techno-optimism than somber addresses. But it also makes sense when you think about Musk’s ambivalent relationship with artificial intelligence technology, his longtime interest in its implications, and the way he perceives his own place in the now-rapidly-shifting A.I. landscape.

Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.

— Elon Musk (@elonmusk)

Musk prominently kept up this worry-and-oversight approach. In 2015, he both signed an open letter opposing autonomous weapons and co-founded a little nonprofit startup you may have heard of: OpenAI, which was initially established as a research center for “building technologies that augment rather than replace humans,” as the New York Times wrote. In 2017, he launched his Neuralink venture to craft devices that could interact with the human brain—very Matrix-y, yes—and pleaded with American governors to be “proactive with regulation” for A.I. tech, which he deemed “the greatest threat we face as a civilization,” citing his own “access to the very most cutting-edge A.I.” (A representative tweet of that time: “Competition for AI superiority at national level most likely cause of WW3 imo.”) The following year, he left the board of OpenAI over a “disagreement” regarding its mission. That was also the point at which this obsession spilled over into his personal life; Musk only started dating the art-pop musician Grimes after he found out she had once made a pun similar to one he’d tweeted riffing on the Singularity-esque “Roko’s Basilisk” thought experiment.

China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.

— Elon Musk (@elonmusk)

It’s always been hard to know what to make of the Musk A.I. crusade. On the one hand, his botpocalypse fears were shared by luminaries like Stephen Hawking; on another, plenty of A.I. researchers and Silicon Valley peers thought of Musk as an alarmist; on yet another, the futuristically minded Tesla CEO helped advance terrifying-sounding A.I. tech like humanoid robots and thought of himself as uniquely positioned to act as humanity’s guardian. Hence his requests for the government to get on his ass while his teams constructed Tesla robots and Autopilots.

Musk was both an A.I. voice of caution and an accelerationist, which tracked with the philosophy underpinning his stance: “longtermism,” the idea that we have a moral duty to maximize human potential and capability in the future, whatever form that takes. The term gained wider recognition beyond the tech sphere—where it had found a receptive audience—after the fall of FTX CEO Sam Bankman-Fried, whose association with the related “effective altruism” movement guided his own earn-to-give ethos. (Or at least that’s how SBF, now accused of using FTX like a personal piggy bank, once described his philosophy.) Influential thinkers in the space often cite runaway A.I. as a barrier to the goal of maximizing human potential, on par with threats like nuclear war and pandemics. Still, while longtermists and effective altruists may be scared of A.I., many also see its development for humanity’s benefit as a necessity: The Future Fund, a significant EA investment firm, has stated that “with the help of advanced AI, we could make enormous progress toward ending global poverty, animal suffering, early death and debilitating disease.” Yet, such actors say, this should be done in a way that does not reduce human influence over the shape of civilization for years to come.

It seems that’s the kind of path Elon Musk has taken over the years: warn against A.I.’s worst potential, strive to ensure it’s only a boon for humanity. What Musk may have been thinking about on Tesla Investor Day, then, was whether that’s actually been the effect of his A.I. experiments and rhetoric. The businessman is discomfited by OpenAI (now a for-profit firm) and its awe-inspiring advances in generative-text tools like ChatGPT; he apparently also hates that Microsoft is now heavily invested in the company, decrying what he views as corporate control of open-source research. To Musk, the chatbot arms race is especially jarring—last month, he compared Microsoft’s sour Bing tool to a vintage video game antagonist “that goes haywire and kills everyone.”

But what does Musk actually plan to do about the new A.I. threats he helped bring into the world, if they are in fact as dangerous as he views them to be? Well, as of this week, he’s reportedly building a rival to ChatGPT because—wait for it—the interface is trained not to spout racial slurs and otherwise has content-related guardrails. Back in December, he tweeted that “the danger of training A.I. to be woke—in other words, lie—is deadly.” About as deadly as World War III, I suppose.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
