Global Courant 2023-05-31 05:39:50
As artificial intelligence races toward everyday adoption, experts have come together – yet again – to express concern about the potential power of technology to harm — or even end — human life.
Months after Elon Musk and more than a thousand others working in the field signed a letter in March demanding a pause in AI development, another group of hundreds of AI-focused business leaders and academics has signed a new statement from the Center for AI Safety that aims to “raise concerns about some of the most serious risks of advanced AI.”
The new statement, just one sentence long, is intended to “start discussion” and highlight growing concern among those who understand the technology best, according to the nonprofit’s website. The full statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Notable signatories to the document include Demis Hassabis, CEO of Google DeepMind, and Sam Altman, CEO of OpenAI.
While proclamations of impending doom from artificial intelligence are not new, recent advances in generative AI, such as ChatGPT, the public tool developed by OpenAI, have pushed the technology into the public consciousness.
The Center for AI Safety classifies the risks of AI into eight categories. Among the dangers it foresees are AI-designed chemical weapons, personalized disinformation campaigns, humans becoming completely dependent on machines, and synthetic minds evolving beyond the point where humans can control them.
Geoffrey Hinton, an AI pioneer who signed the new statement, left Google earlier this year, saying he wanted to speak out about his concerns about potential harm from systems like the ones he helped design.
“It’s hard to see how you can prevent the bad actors from using it for bad things,” he told the New York Times.
The March letter drew no support from executives at the major AI players and went significantly further than the newer statement, calling for a voluntary six-month pause in development. After it was published, Musk went on to tout his own ChatGPT competitor, “TruthGPT.”
Tech writer Alex Kantrowitz noted on Twitter that the Center for AI Safety’s funding was opaque, speculating that the media campaign around AI’s dangers may be connected to AI executives’ calls for more regulation. Social media companies such as Facebook have run a similar playbook in the past: ask for regulation, then secure a seat at the table when the laws are written.
The Center for AI Safety did not immediately respond to a request for comment on the sources of its funding.
Whether the technology actually poses a significant risk is up for debate, Los Angeles Times tech columnist Brian Merchant wrote in March. He argued that, for someone in Altman’s position, “apocalyptic ominous statements about the terrifying power of AI serve your marketing strategy.”