Artificial intelligence (AI) experts, including the heads of OpenAI (makers of ChatGPT), Google DeepMind and Anthropic (Claude), warn that future AI systems could lead to the extinction of humanity.
An open statement published by the Center for AI Safety and signed by some 350 executives and researchers, including CEOs Sam Altman (OpenAI), Demis Hassabis (DeepMind) and Dario Amodei (Anthropic), reads:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Possible disaster scenarios include AI being used to create chemical weapons, generate large-scale misinformation, and entrench extremist regimes.
However, detractors say the fears are overblown. Meta’s chief AI scientist, Professor Yann LeCun, recently tweeted that “the most common reaction by AI researchers to these prophecies of doom is face palming”.