Doomsaying is an old pastime. Artificial intelligence (AI) is a complex subject. It’s easy to be afraid of what you don’t understand. These three truths go some way to explaining the oversimplification and dramatization that plague discussions about AI.
Yesterday, outlets around the world were plastered with news of yet another open letter claiming AI poses an existential threat to humanity. The letter, published through the non-profit Center for AI Safety, has been signed by industry heavyweights including Geoffrey Hinton and the CEOs of Google DeepMind, OpenAI and Anthropic.
However, I would argue that a healthy dose of skepticism is warranted when considering the AI doomsayer narrative. On closer inspection, we see there are commercial incentives to manufacture fear in the AI space.
And as a researcher of artificial general intelligence (AGI), it seems to me that framing AI as an existential threat has more in common with 17th century philosophy than with computer science.
Was ChatGPT a ‘breakthrough’?
When ChatGPT was released late last year, people were delighted, entertained and shocked.
But ChatGPT is not so much a research breakthrough as a product. The technology on which it is based is several years old. An early version of the underlying model, GPT-3, was released in 2020 with many of the same capabilities. It just wasn’t easily accessible online for anyone to play with.
Back in 2020 and 2021, I and many others wrote papers discussing the capabilities and shortcomings of GPT-3 and similar models – and the world went on as usual. Fast forward to today, and ChatGPT has had an incredible impact on society. What changed?
In March, Microsoft researchers published a paper claiming that GPT-4 showed “sparks of artificial general intelligence”. AGI is the subject of several competing definitions, but for simplicity’s sake it can be understood as AI with human-level intelligence.
Some immediately interpreted the Microsoft research as saying that GPT-4 is an AGI. By the AGI definitions I know, this is certainly not true. Nevertheless, it added to the hype and furore, and it was hard not to get caught up in the panic. Scientists are no more immune to groupthink than anyone else.
The same day the paper was submitted, The Future of Life Institute published an open letter calling for a six-month pause on training AI models more powerful than GPT-4, so everyone could take stock and plan ahead. Some of the AI luminaries who signed it expressed concern that AGI poses an existential threat to humans, and that ChatGPT is too close to AGI for comfort.
Soon after, prominent AI safety researcher Eliezer Yudkowsky – who has been commenting on the dangers of superintelligent AI since well before 2020 – went a step further. He claimed we are on a path to building a “superhumanly smart AI”, in which case “the obvious thing that would happen” is “literally everyone on Earth will die”.
He even suggested that countries should be willing to risk nuclear war to enforce compliance with AI regulations across borders.
No existential threat
One aspect of AI safety research is addressing the potential threats AGI might pose. It’s a difficult subject to study because there is little agreement on what intelligence is and how it functions, let alone what a superintelligence might entail. As such, researchers must rely as much on speculation and philosophical argument as on evidence and mathematical proof.
There are two reasons why I’m not worried about ChatGPT and its by-products.
First, it’s not even close to the kind of artificial superintelligence that could conceivably pose a threat to humanity. The models underpinning it are slow learners that require immense amounts of data to construct anything like the multi-faceted concepts humans can conjure from only a few examples. In this sense, it is not “intelligent”.
Second, many of the more catastrophic AGI scenarios depend on premises that I find implausible. For example, there seems to be a prevailing (but unspoken) assumption that sufficient intelligence equates to limitless power in the real world. If this were true, more scientists would be billionaires.
Cognition, as we understand it in humans, occurs as part of a physical environment (including our bodies) – and this environment imposes limitations. The concept of AI as a “software mind” unconstrained by hardware has more in common with 17th century dualism (the idea that the mind and body are separate) than with contemporary theories of the mind existing as part of the physical world.
Why the sudden concern?
Still, doomsaying is hardly new, and the events of recent years probably haven’t helped calm anyone’s fears. But there may be more to this story than meets the eye.
Among the high-profile advocates for AI regulation, many work for or are associated with established AI companies. This technology is useful and money and power are at stake – so scaremongering presents an opportunity.
Almost everything involved in building ChatGPT has been published in research anyone can access. OpenAI’s competitors can replicate the process (and have), and it won’t be long before free and open-source alternatives flood the market.
This point was made clear in a memo reportedly leaked from Google titled “We don’t have a moat, and neither does OpenAI.” A moat is slang for a way to secure your business from competitors.
Yann LeCun, who leads AI research at Meta, argues these models should be open, since they are becoming public infrastructure. He and many others are unconvinced by the AGI doom narrative.
Remarkably, Meta wasn’t invited when US President Joe Biden recently met with the leadership of Google DeepMind and OpenAI. That’s despite the fact that Meta is almost certainly a leader in AI research; it produced PyTorch, the machine-learning framework OpenAI used to create GPT-3.
At White House meetings, OpenAI CEO Sam Altman suggested the US government should issue licenses to those trusted to responsibly train AI models. Licenses, as Stability AI CEO Emad Mostaque put it, “are a kind of moat”.
Companies like Google, OpenAI and Microsoft have everything to lose from allowing small, independent competitors to flourish. Introducing licensing and regulation would help cement their position as market leaders and stifle competition before it can emerge.
While regulation is appropriate in some circumstances, regulation that is rushed through will favour incumbents and smother small, free and open-source competition.
Michael Timothy Bennett is a PhD student at the School of Computing, Australian National University
This article is republished from The Conversation under a Creative Commons license. Read the original article.