WHEN, NOT IF, ROBOTS DESTROY humanity in the next few decades, they won’t resemble the Terminator, says Eliezer Yudkowsky, a leading pioneer of artificial intelligence who now believes we’re doomed.

Yudkowsky is 43, balding, with a dark beard and glasses. Over Zoom from Seattle, he wears a loose-fitting, long-sleeve gray polo shirt, and a cloak of despair. For the past 20 years, he’s been a lead researcher at the Berkeley-based Machine Intelligence Research Institute, a nonprofit dedicated to creating what he calls “Friendly AI,” artificial intelligence that aligns with the ethics and morals of human beings. He’s considered one of the field’s most influential theorists. Sam Altman, CEO of OpenAI, the leading research lab, recently tweeted that Yudkowsky “got many of us interested in AGI [artificial general intelligence]” and was “critical in the decision to start OpenAI.”

This makes his predictions sound like the stuff of a madman or a prophet. The killer machines of the future, he tells me, will likely be nanoscopic bots tiny enough to infiltrate all matter, including us. Or, as Yudkowsky puts it, “the AI can employ superbiology against you.” Picture intelligent bots spewing out clouds of “tiny bacteria made of diamond,” Yudkowsky continues, “made of just carbon, hydrogen, oxygen, and nitrogen so that they can reproduce in the atmosphere using sunlight for energy. They spread across the world. They hide in your bloodstream. And, at some point the same second, everybody on Earth just immediately falls over dead. That’s what I would say should be like the prototypical visualization rather than fighting Terminator armies.”

Yes, this is some Black Mirror shit, and it should be known that Yudkowsky writes Harry Potter fan fiction for fun. But while Yudkowsky leads the fringe of AI Doomers, here’s the thing: As crazy as it all sounds, it’s not so crazy anymore. Though AI has been deeply entrenched in our lives for years — from voice assistants such as Siri and Alexa to the algorithms that tailor our feeds online — everything changed in November 2022 with OpenAI’s release of ChatGPT, the free chatbot that lets you have humanlike convos with a supersmart AI. Since then, every day brings new possibilities for AI’s disruption of our jobs, our bodies, and our minds. While writing an AI-assisted wedding toast seems banal enough, the bigger question remains: Where, if anywhere, is this going?

Much is promising: programmers using AI to write code that would otherwise take them months, hospitals employing AI to successfully screen for breast cancer, amateurs generating mind-blowing art, music, and poetry with AI. Our new powers and improved efficiency are set to create nearly 100 million jobs, by one estimate from the World Economic Forum’s Future of Jobs Report. But the potential is also bringing untold upheaval: 300 million jobs exposed to automation, according to Goldman Sachs, an AI arms race with China, disinformation indistinguishable from reality, the exacerbation of societal biases, and the exploitation of the low-wage workers who support these systems.

As ordinary humans navigate the extraordinary hype, two OG icons of AI have come to personify its promise and perils. While Yudkowsky thinks AI will wipe us out, futurist Ray Kurzweil believes it will make us immortal. He predicts that computer intelligence will surpass our own by the year 2045, in what has been dubbed the Singularity, when we’ll merge with machines and live forever. “I think this will be very good for humans,” Kurzweil tells me. “It’s not an alien invasion of intelligent machines from Mars come to compete with us.” Which of these competing visions is correct?

Ray Kurzweil in 2018 (Travis P Ball/Getty Images)

FIFTEEN YEARS AGO, ON A CRISP FALL DAY, I went to Cambridge, Massachusetts, to profile Kurzweil for Rolling Stone. The balding, bookish 60-year-old was already one of the world’s most influential and prolific inventors, forging text-to-speech software, music synthesizers, and scanning machines. Three presidents had bestowed honors on him, including the National Medal of Technology. Bill Gates called him “the best person I know at predicting the future of artificial intelligence.” Kurzweil’s 2005 book, The Singularity Is Near, had established him as the chief evangelist of the coming Techno Rapture, when man and machine essentially become one.

His dark office teemed with dusty cat statues (he collects them) and memorabilia of his father. As autumn leaves drifted outside his window, Kurzweil assured me that within his lifetime we would achieve immortality and resurrect the dead — including his father. By the 2030s, he said, he could use nanobots to harvest his dad Frederic’s DNA from his grave, learn all there is to know about him, and re-create him in cybernetic form. “Once we can build and create intelligence that doesn’t have the limitations of our brain,” he said, peering over his reading glasses at me, “there’s nothing it can’t do.”

Kurzweil is 75 now, and still popping 150 supplements a day to help him make it to immortality. But, he reveals, he won’t have to wait until then to have a conversation with his dead father — or, at least, a copy of him. In 2012, Google hired Kurzweil to head up its AI research as director of engineering, a nod to the fact that the world’s most powerful tech company takes his predictions seriously. During that time, Kurzweil’s Google team fed more than 100,000 books into a large-language-model project called Talk to Books.

Once the bot had digested the books, a human could ask it questions and the AI would answer. After long imagining talking with his late father, Kurzweil finally had his chance. His father was prolific, leaving behind correspondence, essays, and musical compositions. One by one, Kurzweil fed reams of Frederic’s papers into the machine — a love letter to his wife, an unfinished book he was writing about music — to create what he calls a “replicant” (a nod to the sci-fi film Blade Runner), and then addressed his late father.

He asked his “Dad Bot,” as he puts it, what he loves most about music (“The connection to human feelings”), his gardening (“It’s the kind of work that never ends”), and his anxieties (“Often nightmarish”). “What’s the meaning of life?” he finally typed to his Dad Bot. “Love,” his Dad Bot replied. “I actually had a conversation with him,” Kurzweil says, “which felt a lot like talking to him.” He chronicles the experience in his upcoming book, The Singularity Is Nearer. Some iteration of the Kurzweil Dad Bot will soon go wide, he says, when any of us can create our own AI fam. “We’ll be able to actually create something like a large language model,” he says, “that really represents somebody else by having enough information.”

The Techno Rapture will accelerate in the next decade, he says, when we combine AI with our neocortex and start communicating directly with the Cloud. Kurzweil isn’t signing up for Elon Musk’s Neuralink brain implants, though, which recently received permission from the Food and Drug Administration to begin trials in humans. “Neuralink isn’t really what I’m talking about, it’s pretty slow,” he says. While it would be good for people with strokes, he adds, “it’s not something that we would want to attach to our brains if we don’t have to.”

By 2045, he maintains, computer intelligence will finally eclipse our own and we will become cyborgs. “We’ll have expanded our intelligence millionfold compared to an unenhanced human,” Kurzweil says. And for the unenhanced who aren’t fortunate enough to make it until then, there’s hope for them, too, he says — at least for the ones who can afford a deep freeze. “I think we actually will be able to ultimately re-create people who are under cryonics,” he says, “but it may be in a few decades from now.”

It’s not surprising that Kurzweil’s evangelism strikes his detractors as messianic. “The Singularity is a new religion — and a particularly kooky one at that,” as computer scientist and author Jaron Lanier once told me. “The Singularity is the coming of the Messiah, heaven on Earth, the Armageddon, the end of times. And fanatics always think that the end of time comes in their own lifetime.”

Kurzweil bristles when I suggest he seems more interested in the promise than in the perils. “I do recognize risks,” he says, though I have to press him to reveal what they are. “Well,” he offers, “if you have an enemy that’s more intelligent than you and it doesn’t like you and doesn’t want you around, that’s not a good situation to be in.”

Eliezer Yudkowsky (Courtesy of Eliezer Yudkowsky)

THERE ARE TWO MOMENTS THAT LED ELIEZER YUDKOWSKY to believe that AI will destroy us. The first happened in 2001. Yudkowsky was a 21-year-old Kurzweilian true believer in the potential of AI. After reading the works of Vernor Vinge as an Orthodox Jewish whiz kid in Chicago, he’d left his religion and his high school to bring artificial intelligence to life. “It was obviously the most important thing going on,” he says, “and the things that would, like, determine the fate of the galaxy and all that.”

The previous year, he’d launched the Singularity Institute, a nonprofit research group, to accelerate AI’s arrival along with Kurzweil and the Immortals. “My first allegiance is to the Singularity, not humanity,” he wrote at the time. “I don’t know what the Singularity will do with us. I don’t know whether Singularities upgrade mortal races, or disassemble us for spare atoms.… If it comes down to Us or Them, I’m with Them.”

Then, by his own account, Yudkowsky grew up. He quickly realized there was “a tiny crack” in his logic around AI. For years, he believed that “if you make a mind smart enough, it’s smart enough to know what the right thing to do is,” he says. “It wouldn’t kill off all of humanity.” But when he ran calculations on the exponential growth of AI, he hit a point of no return — and no guarantees. “I’d tried a straightforward extrapolation of technology,” he says, “and found myself precipitated over an abyss.”

By 2003, after a couple of years of digging into that crack, his adolescent faith in AI had shattered, too. “I was wrong all along,” he says, realizing, “there’s nothing making AI automatically moral. The whole bad metaphysics house of cards collapsed.”

Since the 1960s, computer scientists have warned that the goals of our machines may not align with our own. A nightmare of misalignment posed by philosopher Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, describes an AI tasked with maximizing paper-clip production, throwing our bodies into its mill to soullessly fulfill its goal. “If AI wants something for whatever reason,” Yudkowsky says, “it’s not necessarily going to stop on the way there and think about ‘Maybe we shouldn’t destroy this beautiful thing.’”

But, he continues, “I looked around, and nobody was working on a technical level on the question of: How do you point AI in the direction of niceness?” Instead, it just felt like the Church of Forever espoused by Kurzweil was beginning to grow. “Kurzweil was talking about all this wonderful variety of things,” he says. “The future would be like supermedicine, superlongevity, supervirtual reality, better robots.” Yudkowsky dismisses this now as “incredible techno juju stuff.”

“I wanted to get out ahead of the predictable giant planetary emergency,” he says, before we become a planet ruled by computers. But, he says, it hasn’t been enough. And the reason is the usual one: money. The “tiny amount of money humanity has allocated to this problem” can’t compete with the exponentially booming AI industry — estimated to grow from $100 billion today to nearly $2 trillion by 2030. On March 22, however, came an attempted reprieve from the inexorable progression. Musk, Apple co-founder Steve Wozniak, and more than 1,000 other leading technologists signed an open letter calling for an immediate pause of at least six months on training AI systems more powerful than GPT-4.

“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control,” the open letter reads. “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” it continues. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”

“Everybody called me a crackpot,” Yudkowsky says. “Now, it’s harder to say that because the godfather of deep learning is saying some of the same things.” Yudkowsky, however, didn’t sign. Given the AI arms race, he thought a six-month pause was ludicrously inadequate. He responded seven days later with his own open letter, published in Time, calling for an immediate, worldwide, indefinite shutdown of all AI development — or risk certain extinction.

“We are not prepared,” he wrote. “We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.” With a real shutdown, he believes, steps can be taken to ensure that Friendly AI becomes real, such as limiting computer power for anyone training AI, and forging multinational agreements. Oh, and, if necessary, he added, “be willing to destroy a rogue datacenter by airstrike.”

BOMBING DATA CENTERS AND RESURRECTING frozen nerds is nowhere near reality, and may never be. But events that would once have seemed like science fiction have become everyday occurrences.

On May 22, a photo spread across Twitter showing an apparent explosion at the Pentagon and a plume of black smoke billowing into the air. RT, the Russian state media outlet with 3.1 million Twitter followers, posted the image, as did a blue-checked account identified as Bloomberg Feed — causing the stock market to briefly dip.

But the photo was fake, generated by someone unknown with a widely available program such as Midjourney or DALL-E. The Bloomberg account was also bogus, prompting the Arlington, Virginia, fire department to tweet (and tag the Department of Defense): “There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public.”

This historic mindfuck lent unintended urgency to a U.S. Department of Education report, “Artificial Intelligence and the Future of Teaching and Learning,” released, coincidentally, the next day. “It is imperative to address AI in education now to realize key opportunities, prevent and mitigate emergent risks, and tackle unintended consequences,” the report reads — consequences like artificial reality attacking the real thing.

This is not a fire drill. AI-generated media is already warping reality online, and it’s only going to get harder to distinguish from the real thing. At the same time, literacy rates among the next generation of Americans are plummeting — a full two-thirds of fourth-graders cannot read proficiently. Together, these trends suggest a Don DeLillo-level disinfo dystopia that could wreck lives and crush markets long before we become Kurzweil’s immortal robots or meet Yudkowsky’s Armageddon.

Though Yudkowsky and Kurzweil see two radically different futures, they both agree on the need for humanity to assert itself while AI is young. “It does have to do with education,” Kurzweil says, “making sure that people value human life and so on.” And even Yudkowsky sees a chance. “If there’s something to rally behind,” he says, “it would be the hope that humanity wakes up one morning and decides to survive.”

On June 9, amid all this talk of eternal life and extinction, hundreds of people filled the pews during a service at St. Paul’s Church in Fürth, Germany, a 1,000-year-old town in northern Bavaria. “Dear friends,” said the pastor, a Black, bearded man in a white long-sleeve shirt, “it is an honor for me to stand here and preach to you.”

The pastor wasn’t real. He was an AI-generated avatar projected on a screen, reading a sermon written by ChatGPT. The project was created by Jonas Simmerlein, a theologian and philosopher at the University of Vienna who wanted to explore how AI could be used by clergy. Despite mixed reviews from the congregation, Simmerlein considered the result “a pretty solid church service.” “Artificial intelligence will increasingly take over our lives, in all its facets,” he said. “And that’s why it’s useful to learn to deal with it.”