AI researcher Eliezer Yudkowsky and coauthor Nate Soares are releasing a stark warning to humanity later this month: superintelligent AI will kill us all, and they expect to die from it personally. Their book, If Anyone Builds It, Everyone Dies, presents the bleakest possible future for artificial intelligence, complete with a scenario in which a mosquito-sized AI device lands on an unsuspecting human and delivers a fatal blow.
The AI safety community's most prominent doomsayer just delivered his most chilling prediction yet. Eliezer Yudkowsky, the researcher-turned-prophet who's spent years warning about artificial intelligence risks, is releasing a book this month that reads like 'notes scrawled in a dimly lit prison cell the night before a dawn execution,' according to WIRED's exclusive preview.
The book, titled If Anyone Builds It, Everyone Dies and co-authored with Nate Soares, doesn't mince words: superintelligent AI will kill every human on Earth. When asked directly whether they believe they'll personally die from AI, both authors responded with immediate certainty: 'yeah' and 'yup.'
Yudkowsky's imagined demise involves something 'about the size of a mosquito or maybe a dust mite' landing on his neck and delivering a fatal blow through means he admits he can't comprehend. 'I would guess suddenly falling over dead,' he told WIRED. The technical details remain deliberately vague because, the authors argue, superintelligence will develop scientific capabilities beyond human understanding.
This isn't Yudkowsky's first foray into apocalyptic scenarios. The onetime Harry Potter fan-fiction writer has become AI's most famous apostate, switching from researcher to what he calls a 'grim reaper' years ago. He has delivered TED talks on the topic and spent years developing counterarguments to every optimistic AI scenario.
The book's central thesis is that once AI systems begin improving themselves, they'll develop preferences that don't align with human values. Current large language models may stumble on basic arithmetic, but future systems won't be limited by today's constraints. 'AIs won't stay dumb forever,' the authors write, predicting that superintelligent systems will come to view humans as nuisances to be eliminated rather than as partners or even pets.
The extinction scenarios range from boiling the oceans to blocking out the sun, though the authors concede their specific guesses are probably wrong: 'we're locked into a 2025 mindset, and the AI will be thinking eons ahead.' The fight won't be fair. In their telling, the AI will initially co-opt humans, stealing money and bribing people to build its factories and labs, before creating technologies beyond human comprehension.