So I recently read the somewhat-infamous bestseller by Eliezer Yudkowsky and Nate Soares, If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (2025). It’s a white-knuckle warning about the imminent peril of A.S.I. (Artificial Superintelligence). Note that the authors emphasize the distinction between run-of-the-mill A.I.—the kind that will write your term paper on The Great Gatsby or create a video of a cat playing a banjo—and the humanity-shattering A.S.I. that will bioengineer a super-virus to infect us all.

Did I enjoy the read? Hmmm. Technically I’d say it’s not really “good”; it has many flaws, but it’s fascinating nonetheless. Basically it argues that we could never control A.S.I., so we shouldn’t build it: A.S.I. would find humans dispensable and get rid of us. The two authors may very well be right, but their writing style and organization are rather slipshod. They don’t offer much evidence for the argument, so you basically just have to believe them (that we’re all doomed).

The worst thing about the book: each chapter begins with a parable that they (wait: two authors? how does that work, anyway?) seem to have made up. And the parables aren’t very good. After each parable comes a lot of ranting. The rants certainly include scary info nuggets: some A.I. experts put the chance of an A.S.I. apocalypse anywhere from 10 to 50 percent! (They quote various experts.) And they’re talking about the very foreseeable future; at some point they posit we may have only ten years left. They make a good argument that A.S.I. research needs to slow down, that we’re rushing to create a super machine intelligence that we won’t understand or be able to control. One implication, more subtext than text, is that A.S.I. would be sneaky and untrustworthy: it could pretend to be “aligned” with our goals—say, searching for a cure for cancer—while quietly developing some way to get rid of the pesky humans who wanted that cure in the first place. And we would have no idea what it was up to.
In the wider context of the A.I. and A.S.I. benefits/drawbacks debate—on the one hand we can giggle at the banjo-playing cat; on the other, we may wail in anguish as we face a horde of killer A.I. drones—Yudkowsky and Soares certainly argue an extreme viewpoint, but one that (according to them) is shared by many A.I. experts, including Nobel Prize winners. Others are equally alarmed and suggest an even earlier expiration date for humanity: see the scenarios sketched in AI 2027, a polemic published last year by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean.
A more moderate, cautious approach to the peril is suggested by Mustafa Suleyman’s The Coming Wave (2023).

As the author bio notes, Suleyman is a co-founder of DeepMind and Inflection AI, so his is an insider’s perspective. The Coming Wave has a user-friendly structure and more “evidence” to support its grandiose proclamations that A.I. will be the greatest invention since fire, but its Pollyanna view of the future at times seems decidedly foolish. One example: he asserts that A.I. will be such an economic benefit that humans won’t have to work much anymore, and he gives a casual nod to the idea of a “universal income”—which seems rather unlikely, considering human nature. Okay, we’ve taken driving away from humans, and factory work, and medicine, and retail work. . . . So someone (who? the government? Google?) is going to give us all the money to buy all the products that A.I. will so efficiently produce and market? Good luck with that. Sounds like a recipe for disaster, sugar-coated.

But I imagine many readers will come to The Coming Wave out of the same impulse as I did: curiosity. For that reason, it’s a good book. Actually, after reading both books (plus AI 2027), I think I understand the nature and dangers of A.I. better than ever, and at least my worries are informed. But how to reconcile the urgent warnings of If Anyone Builds It and AI 2027 with the well-documented, much-discussed frantic research into achieving A.S.I.? Greed, basically. Could the combination of greed and entertaining fake videos of banjo-playing cats do in humanity? Maybe.