AI executives warn its threat to humanity rivals ‘pandemics and nuclear war’

Statement with more than 350 signatories says mitigating the risk of extinction should be a ‘global priority’

A group of chief executives and scientists from companies including OpenAI and Google DeepMind has warned that the threat to humanity from fast-developing AI rivals that of nuclear conflict and disease.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” said a statement published by the Center for AI Safety, a San Francisco-based non-profit organisation.

More than 350 AI executives, researchers and engineers, including Sam Altman of OpenAI, Demis Hassabis of Google DeepMind and Dario Amodei of Anthropic, were signatories of the one-sentence statement.
