Time to Stop Treating AIs Like Humans – Asia

Omar Adan

Global Courant 2023-05-17 18:05:12

The artificial intelligence (AI) pioneer Geoffrey Hinton recently resigned from Google, warning of the dangers of the technology becoming “more intelligent than us”. His fear is that AI will one day succeed in “manipulating humans to do what it wants”.

There are good reasons to be concerned about AI. But we too often treat and talk about AIs as if they were human. Stopping this, and recognizing what they really are, could help us maintain a fruitful relationship with the technology.

In a recent essay, the American psychologist Gary Marcus advised us to do just that: stop treating AI models like humans. By AI models he means large language models (LLMs) such as ChatGPT and Bard, which are now used by millions of people every day.


He cites blatant examples of people attributing exaggerated human cognitive abilities to AI, which has had a range of consequences.

The most comical was the US senator who claimed that ChatGPT had “taught itself chemistry”. The most poignant was the report of a young Belgian man who took his own life after lengthy conversations with an AI chatbot.

Marcus is right to say we should stop treating AI like humans – that is, like sentient moral agents with interests, hopes and desires. However, many will find this difficult, if not almost impossible. This is because LLMs are designed – by humans – to interact with us as if they were human, and we are designed – by biological evolution – to interact with them in the same way.

Good imitations

The reason LLMs can imitate human conversation so convincingly stems from a profound insight of the computing pioneer Alan Turing, who realized that a computer does not need to understand an algorithm in order to execute it. This means that while ChatGPT can produce paragraphs filled with emotive language, it doesn’t understand a single word of any sentence it generates.

The designers of LLMs successfully converted the problem of semantics – the arrangement of words to create meaning – into a problem of statistics, matching words based on the frequency of their prior use. Turing’s insight parallels Darwin’s theory of evolution, which explains how species adapt to their environments and become ever more complex without needing to understand anything about their environment or themselves.
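To make the statistical idea concrete, here is a minimal sketch in Python of predicting each next word purely from how often words have followed one another before. The toy corpus is invented for illustration; real LLMs use neural networks trained on vast datasets rather than raw counts, but the underlying principle – competence at producing plausible text without any comprehension of it – is the same.

```python
from collections import Counter, defaultdict
import random

# Toy training text (illustrative only).
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word):
    """Sample a successor in proportion to its observed frequency."""
    counts = follows[word]
    if not counts:
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one statistically likely word at a time.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

The program produces grammatical-looking fragments without representing the meaning of a single word – a miniature of “competence without understanding”.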


AI still needs humans to survive and thrive. Image: Shutterstock / The Conversation

The cognitive scientist and philosopher Daniel Dennett coined the expression “competence without understanding”, which perfectly captures the insights of Darwin and Turing. Another important contribution from Dennett is his “intentional stance”.

This holds that, to fully explain the behavior of an object (human or non-human), it is often most effective to treat it as a rational agent. It usually manifests itself in our tendency to anthropomorphize non-human species and inanimate entities.


But it’s useful. For example, if we want to beat a computer at chess, the best strategy is to treat it as a rational agent that “wants” to beat us. We can say, without any contradiction, that the reason the computer castled was that “it wanted to protect its king from our attack”.

We can likewise speak of a tree in a forest as “wanting to grow” towards the light. But neither the tree nor the chess computer represents those “wants” or reasons to itself; it is simply that the best way to explain their behavior is to treat them as if they did.

Intentions and agency

Our evolutionary history has provided us with mechanisms that predispose us to find intention and agency everywhere. In prehistoric times, these mechanisms helped our ancestors avoid predators and develop altruism towards their closest relatives.

These same mechanisms cause us to see faces in clouds and to anthropomorphize inanimate objects. Mistaking a tree for a bear does us no harm, but mistaking a bear for a tree could be fatal.

Evolutionary psychology shows us how we are always trying to interpret any object that might plausibly be human as human. We unconsciously adopt the intentional stance and attribute all of our cognitive capacities and emotions to such objects.

Given the potential disruption that LLMs can cause, we have to recognize that they are simply probabilistic machines with no intentions and no concern for human beings. We must be extra vigilant in the language we use when describing the human-like feats of LLMs, and of AI more generally. Here are two examples.

The first was a recent study which found that ChatGPT’s answers to patients’ questions were more empathetic and of “higher quality” than those of doctors. Using emotive words like “empathy” for an AI primes us to grant it the capacities for thinking, reflecting and genuine concern for others – capacities it does not have.

The second came when GPT-4 (the latest version of the technology behind ChatGPT) was launched last month and was credited with greater powers of creativity and reasoning. Yet all we are really seeing is a scaling up of “competence” – still with no “understanding” (in Dennett’s sense) and certainly no intentions, just pattern recognition.

Actually, AIs are not very empathetic. Photo: iStock

Staying safe

In his recent remarks, Hinton also raised a nearer-term threat: “bad actors” using AI for subversion. We can easily imagine an unscrupulous regime or multinational deploying an AI, trained on fake news and falsehoods, to flood public discourse with misinformation and deepfakes. Fraudsters could also use AI to prey on vulnerable people in financial scams.

Last month, Gary Marcus and others, including Elon Musk, signed an open letter calling for an immediate pause on the further development of LLMs. Marcus has also called for an international agency “to promote safe, secure and peaceful AI technologies” – a “CERN for AI”, as he puts it.

In addition, many have suggested that anything generated by an AI should carry a watermark so that there can be no doubt about whether we are interacting with a human or a chatbot.
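How might a watermark work for text, where nothing visible can be stamped on the output? One published idea (a “green list” scheme in the spirit of work by Kirchenbauer and colleagues in 2023) has the generator quietly favor a secretly chosen half of the vocabulary at each step, so a detector can later test for an improbable excess of those words. The sketch below is illustrative only: the tiny vocabulary, function names and threshold are invented for the example and not taken from any real system.

```python
import hashlib
import random

# Toy vocabulary for illustration; a real system would use the
# model's full token vocabulary.
VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "ran", "fast", "home", "slept"]

def green_list(prev_word):
    """Deterministically select half the vocabulary, seeded by the previous word."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

def looks_watermarked(text, threshold=0.75):
    """Unwatermarked text lands near 50% green words by chance; a generator
    that prefers the green list pushes the fraction far higher."""
    words = text.lower().split()
    if len(words) < 2:
        return False
    hits = sum(1 for prev, word in zip(words, words[1:]) if word in green_list(prev))
    return hits / (len(words) - 1) >= threshold
```

Because the green/red split is keyed to the preceding word, the bias is invisible to a human reader but statistically detectable by anyone who knows the scheme.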

As so often in other areas of life, regulation of AI lags behind innovation. There are more problems than solutions, and the gap is likely to widen before it narrows. But in the meantime, repeating Dennett’s phrase “competence without understanding” may be the best antidote to our innate compulsion to treat AI like humans.

Neil Saunders is a senior lecturer in mathematics at the University of Greenwich.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
