
The 'AI Apocalypse' Is Just PR - The Atlantic
Big Tech’s warnings about an AI apocalypse are distracting us from years of actual harms their products have caused.

Illustration by Joanne Imperio / The Atlantic

On Tuesday morning, the merchants of artificial intelligence warned once again about the existential might of their products. Hundreds of AI executives, researchers, and other tech and business figures, including OpenAI CEO Sam Altman and Bill Gates, signed a one-sentence statement written by the Center for AI Safety declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Those 22 words were released following a multi-week tour in which executives from OpenAI, Microsoft, Google, and other tech companies called for limited regulation of AI. They spoke before Congress, in the European Union, and elsewhere about the need for industry and governments to collaborate to curb their product’s harms—even as their companies continue to invest billions in the technology. Several prominent AI researchers and critics told me that they’re skeptical of the rhetoric, and that Big Tech’s proposed regulations appear defanged and self-serving.

Silicon Valley has shown little regard for years of research demonstrating that AI’s harms are not speculative but material; only now, after the launch of OpenAI’s ChatGPT and a cascade of funding, does there seem to be much interest in appearing to care about safety. “This seems like really sophisticated PR from a company that is going full speed ahead with building the very technology that their team is flagging as risks to humanity,” Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project, a nonprofit that advocates against mass surveillance, told me.

The unstated assumption underlying the “extinction” fear is that AI is destined to become terrifyingly capable, turning these companies’ work into a kind of eschatology. “It makes the product seem more powerful,” Emily Bender, a computational linguist at the University of Washington, told me, “so powerful it might eliminate humanity.” That assumption provides a tacit advertisement: The CEOs, like demigods, are wielding a technology as transformative as fire, electricity, nuclear fission, or a pandemic-inducing virus. You’d be a fool not to invest. It’s also a posture that aims to inoculate them from criticism, copying the crisis communications of tobacco companies, oil magnates, and Facebook before: Hey, don’t get mad at us; we begged them to regulate our product.

Yet the supposed AI apocalypse remains science fiction. “A fantastical, adrenalizing ghost story is being used to hijack attention around what is the problem that regulation needs to solve,” Meredith Whittaker, a co-founder of the AI Now Institute and the president of Signal, told me. Programs such as GPT-4 have improved on their previous iterations, but only incrementally. AI may well transform important aspects of everyday life—perhaps advancing medicine, already replacing jobs—but there’s no reason to believe that anything on offer from the likes of Microsoft and Google would lead to the end of civilization. “It’s just more data and parameters; what’s not happening is fundamental step changes in how these systems work,” Whittaker said.
Two weeks before signing the AI-extinction warning, Altman, who has compared his company to the Manhattan Project and himself to Robert Oppenheimer, delivered to Congress a toned-down version of the extinction statement’s prophecy: The kinds of AI products his company develops will improve rapidly, and thus potentially be dangerous. Testifying before a Senate panel, he said that “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.” Both Altman and the senators treated that increasing power as inevitable, and associated risks as yet-unrealized “potential downsides.” But many of the experts I spoke with were skeptical of how much AI will progress from its current abilities, and they were adamant that it need not advance at all to hurt people—indeed, many applications already do. The divide, then, is not over whether AI is harmful, but which harm is most concerning—a future AI cataclysm only its architects are warning about and claim they can uniquel...