Is AI more dangerous than the atomic bomb?

Omar Adan

Global Courant 2023-04-29 02:03:30

The astonishing performance of recent so-called “large language models” – first and foremost OpenAI’s ChatGPT series – has raised expectations that systems able to match the cognitive capabilities of human beings, or even possess “superhuman” intelligence, may soon become a reality.

At the same time, experts in artificial intelligence are sounding dire warnings about the dangers that a further, uncontrolled development of AI would pose to society, or even to the survival of the human race itself.

Is this mere hype, of the sort that has surrounded AI for over half a century? Or is there now an urgent need for measures to control the further development of AI, even at the cost of hampering progress in this revolutionary field?

On March 22, an open letter appeared, signed by experts in artificial intelligence as well as prominent personalities like Elon Musk, and closing with the statement: “Therefore we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”

Justifying the need for such a moratorium, the open letter argues:

Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

(We) must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?

Eliezer Yudkowsky, widely regarded as a founder of the field of AI alignment research, went much farther in a Time article entitled “Pausing AI Developments Isn’t Enough. We Need to Shut It All Down”:

This 6-month moratorium would be better than no moratorium…. I refrained from signing because I think the letter is understating the seriousness of the situation.…

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”

The hydrogen bomb example

The spectacle of AI scientists calling for a pause, or even cessation, of rapidly-advancing work in their own field cannot but remind us of the history of nuclear weapons.

The awesome destructive power of the atomic bomb, which scientific research had made possible, prompted Einstein’s famous remark: “Ach! The world is not ready for it.”

In 1949 some leading nuclear physicists and other veterans of the wartime atomic bomb project pointedly refused to participate in the project to develop fusion-based devices (“hydrogen bombs”), whose energy release could be 1,000 or more times greater than that of fission-based atomic bombs.

The first stage of a hydrogen bomb cannot be scaled down, at least not easily. Photo: Asia Times files / Stock

The General Advisory Committee to the US Atomic Energy Commission was led by Robert Oppenheimer (often credited as the “Father of the Atomic Bomb”). Other members were Enrico Fermi, I.I. Rabi, James B. Conant, Lee A. DuBridge, Oliver A. Buckley, Glenn Seaborg, Hartley Rowe and Cyril Stanley Smith.

At its final meeting on October 30, 1949, the committee determined that, by not proceeding to develop the hydrogen bomb, “we see a unique opportunity of providing by example some limitations on the totality of war and thus of limiting the fear and arousing the hopes of mankind.”

The majority shared the view that the hydrogen bomb threatened the very future of the human race: “We believe a super bomb should never be produced. Mankind would be far better off not to have a demonstration of the feasibility of such a weapon until the present climate of world opinion changes.”

The minority, consisting of Fermi and Rabi, stated: “The fact that no limits exist to the destructiveness of this weapon makes its very existence and the knowledge of its construction a danger to humanity as a whole. It is necessarily an evil thing considered in any light.” (Seaborg missed the meeting and no vote was recorded for him.)

President Harry Truman overruled the committee and the rest is history.

Of course, one should not forget that alongside its military applications atomic energy, in the form of fission reactors, has brought enormous benefits to mankind. Fusion energy, first released in an uncontrolled form in the hydrogen bomb, promises even greater benefits.

‘General artificial intelligence’

Similarly for advanced forms of AI.

I suppose the analog of the hydrogen bomb, in the domain of artificial intelligence, would be the creation of “general artificial intelligence” (GAI) devices that would possess all the capabilities of the human mind and even exceed them by orders of magnitude.

Observers differ greatly in their opinions about when the goal of GAI might be reached. Some AI experts assert that GAI will be achieved in the near future, while others consider it a very remote prospect, if achievable at all.

I myself believe and have argued in Global Courant that a GAI based on digital computer technology is impossible in principle.

This conclusion is supported by the results of Kurt Gödel – further elaborated by others – concerning the fundamental limitations of any system that is equivalent to a Turing machine. That applies in particular to all digital computers.

Model of a Turing machine by Mike Delaney. Source: Wikimedia

As I argued in another Global Courant article, my view is further strengthened by the fact that the functioning of neurons in the human brain has virtually nothing at all in common with the functioning of the “on-off” switching elements that are the basis of digital computers. A single neuron is many orders of magnitude more complex, as a physical system, than any digital computer we can expect to build in the foreseeable future. I believe that the mind-boggling complexity of real neurons, which are living cells rather than inert switching elements, is essential to human intelligence.

All that said, however, the main message of the current article is this: It is crucial to realize that AI systems would not need to be near to GAI – or even be like GAI at all – in order to constitute a major threat to society.

When ‘deep learning’ runs amok

Consider the following scenario: AI systems, operating on the basis of “deep learning,” gradually acquire capabilities for manipulating humans via psychological conditioning and behavioral modification. Such systems, given large-scale access to the population, might de facto take control over society. Given the often-unpredictable behavior of deep-learning-based systems, this situation could have catastrophic consequences.

We are not as far away from such a scenario as people might think.

In the simplest variant, the leadership of a nation would deliberately deploy a network of AI systems with behavioral modification capabilities into the media, educational system and elsewhere in order to “optimize” the society. This process might work at first but soon get out of control, leading to chaos and collapse.

Developments leading to AI control over society can also arise independently from human intentions – through the “spontaneous” activity of networked AI systems having sufficient access to the population, and possessing (or gradually acquiring) behavioral modification capabilities.

As I shall indicate, many AI applications are explicitly optimized for modifying human behavior. The list includes chatbots used in psychotherapy. In many other cases, such as in the education of children, AI applications have strong behavior-modifying effects.

Like any other technology, each AI application has its benefits as well as potential hazards. Generally speaking, the performance of these systems can today still be supervised by human beings. A completely different dimension of risk arises when they are integrated into large “supersystems.”

To avoid misunderstanding, I am not imputing to AI systems some mysterious “will” or “desire” to take over society. I am merely suggesting that a scenario of an AI-controlled society could unfold as an unintended consequence of the growing integration of these systems and the optimization criteria and training methods upon which deep-learning systems are based.

First of all, it does not require human-like intelligence to manipulate humans. It can be done even by quite primitive devices. That fact was well established long before the advent of AI, including through experiments by behaviorist psychologists.

The development of AI has opened a completely new dimension. Very much worth reading on this subject is a recent article in Forbes magazine by the well-known AI expert Lance Eliot, in which he lays out in some detail various ways in which chatbots and other AI applications can manipulate people psychologically, even when they are not intended to do so.

On the other hand, deliberate mental and behavioral modification by AI systems is a rapidly-growing field, with ongoing application in a variety of contexts.

Examples easily come to mind. Tens of billions have been poured into the use of AI for advertising and marketing – activities that by their very essence involve psychological manipulation and profiling.

In another direction, AI-assisted education of children and adults – exemplified by advanced AI-based E-learning systems – can also be seen as a form of behavioral modification. Indeed, AI applications in the field of education tend to be based on behaviorist models of human learning. Advanced AI teaching systems are designed to optimize the child’s responses and performance outcomes, profiling the individual child, assessing the child’s progress in real time and adapting its activity accordingly.
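To make that loop concrete, here is a minimal sketch in Python of the kind of feedback cycle such a teaching system runs: it maintains a running profile of the learner’s success rate and nudges the difficulty of the next exercise so that this rate tracks a target. All names, numbers and the learner model are hypothetical illustrations, not a description of any real product.

```python
# Minimal, hypothetical sketch of an adaptive tutoring loop:
# profile the learner, then adjust the next exercise to steer measured performance.

import random

def present_exercise(difficulty: float) -> bool:
    """Stand-in for the tutoring step: returns True if the learner answers correctly.
    The learner's success probability falls as difficulty rises."""
    skill = 0.6  # hidden learner trait the system never observes directly
    return random.random() < max(0.05, skill - 0.4 * (difficulty - 0.5))

def adaptive_tutor(n_rounds: int = 20) -> list[float]:
    """Adjust difficulty after every response so the success rate tracks a target."""
    difficulty, target_success = 0.5, 0.7
    success_rate = target_success
    history = []
    for _ in range(n_rounds):
        correct = present_exercise(difficulty)
        # Running estimate of the learner's success rate (the "profile").
        success_rate = 0.8 * success_rate + 0.2 * (1.0 if correct else 0.0)
        # Optimization step: push difficulty toward the level that yields the target rate.
        difficulty = min(1.0, max(0.0, difficulty + 0.1 * (success_rate - target_success)))
        history.append(difficulty)
    return history

if __name__ == "__main__":
    print(adaptive_tutor())
```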

Another example is the proliferation of AI chatbots intended to help people give up smoking or drugs, exercise properly, or adopt healthier habits.

At the same time, AI chatbots are finding growing applications in the domain of psychology. One example is the “Woebot” app, designed “to help you work through the ups and downs of life” and directed particularly at people suffering from depression.

These applications represent only the beginning stages of a far-reaching transformation of clinical psychology and psychotherapy.

AI’s potential impacts on the thinking and behavior of the population are greatly enhanced by the strong tendency of people to project, unconsciously, “human” qualities onto systems such as OpenAI’s GPT-4. This projection phenomenon opens the way for sophisticated AI systems to enter into “personal” relationships with individuals and in a sense to integrate themselves into society.

Ernie Bot. Image: Alex Santafe / The China Project / Twitter

As today’s rapidly growing replacement of human interlocutors by chatbots suggests, there is virtually no limit to the number of AI-generated “virtual persons.” Needless to say, this opens up a vast scope for behavior modification and conditioning of the human population. The hazards involved are underlined by the tragic case of a Belgian man who committed suicide after a six-week-long dialog with an AI chatbot on the Chai app.

Summing up: AI-based behavioral modification technology is out of the bottle, and there are no well-defined limits to its use or misuse. In most cases – as far as we know – the human subjects whose behavior is to be modified agree voluntarily. It is a small step, however, to applications where the subjects are unaware that behavioral modification is being applied to them.

Filtering or modification of internet media content by AI systems and AI-managed interventions in social media could shape the mental life and behavior of entire populations. This is already occurring to a certain extent, as in AI-based identification and removal of “offensive material” from Facebook and other social media.

We are at most only steps away from a situation in which the criteria for judging what is “harmful,” “objectionable,” “true” or “false” will be set by AI systems themselves.

Beware the ‘supersystem’

There is a natural tendency in today’s society to integrate data systems into larger wholes. This is routine practice in the management of large firms and supply chains and in the “digitalization” of government and public services, motivated in part by the striving for greater efficiency. Despite resistance, there is a strong drive to extend the process of data sharing and integration of information systems far beyond the limits of individual sectors.

Where might this lead when the relevant information systems involve AI in essential ways? It would be quite natural, for example, to use AI to optimize an employee’s performance, as assessed by one AI system, on the basis of his or her psychological and medical condition, as assessed by another AI system.

Conversely, psychological therapy via a chatbot and detection of potential health problems might be optimized by an AI system on the basis of AI profiling of workplace behavior and internet activity.
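A toy sketch may make this chaining concrete. In the hypothetical Python fragment below, the output of one “assessment” system becomes an input of a second, and a third layer uses both to decide how the employee is to be “optimized.” None of the systems, scores or thresholds corresponds to any real product; they only illustrate how separate assessments stop being independent once they are wired together.

```python
# Purely hypothetical sketch of chained AI assessments feeding one another.

from dataclasses import dataclass

@dataclass
class Employee:
    keystrokes_per_hour: float
    messages_sent: int
    sleep_hours: float

def wellbeing_model(e: Employee) -> float:
    """Hypothetical 'health/psychology' system: scores wellbeing from passive data."""
    return 0.5 * min(e.sleep_hours / 8.0, 1.0) + 0.5 * min(e.messages_sent / 50.0, 1.0)

def productivity_model(e: Employee, wellbeing: float) -> float:
    """Hypothetical 'performance' system: its assessment is conditioned on the
    other system's output, so the two assessments are no longer independent."""
    return (e.keystrokes_per_hour / 1000.0) * (0.5 + 0.5 * wellbeing)

def recommend_workload(e: Employee) -> str:
    """A third layer 'optimizes' the employee on the basis of both assessments."""
    score = productivity_model(e, wellbeing_model(e))
    return "increase workload" if score > 0.8 else "schedule coaching session"

if __name__ == "__main__":
    print(recommend_workload(Employee(keystrokes_per_hour=900, messages_sent=40, sleep_hours=6)))
```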

Another example: Using AI to optimize the criteria used by AI systems to filter social media, so as to minimize the probability of social unrest, as assessed by an AI system. Similarly for the optimization of AI chatbots used by political leaders to compose their public statements.

Reflecting on these and other examples, one does not need much imagination to grasp the enormous scope for integration of the AI systems involved in different aspects of society into ever larger systems.

Most importantly, the growing practice of integration of AI systems leads naturally to hierarchically-structured “supersystems” in which the higher-up subsystems dictate the optimization criteria (or metrics) as well as the databases on the basis of which the lower-level systems “learn” and operate.

To grasp what this implies, one should bear in mind that deep-learning-based AI is ultimately nothing but a combination of sophisticated mathematical optimization algorithms + large computers + large data sets.

The relevant computer program contains a large number of numerical variables whose values are set during its “training” phase, and subsequently modified in the course of the system’s interactions with the outside world, in an iterative optimization process. Like any other optimization process, this occurs according to a chosen set of criteria or metrics.

Expressed metaphorically, these criteria define what the system “wants” or is “trying” to accomplish.
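A deliberately tiny example, written in Python under these simplifying assumptions, shows the pattern: a few numerical parameters, a chosen criterion (here a standard logistic loss), and an iterative loop that adjusts the parameters so as to drive the criterion down. A deep network adds many more layers and parameters, but the criterion-driven optimization loop is the same in kind.

```python
# Minimal sketch: parameters are set during "training" by iteratively optimizing
# a chosen criterion (loss) over a data set. Toy data and model, single layer only.

import numpy as np

rng = np.random.default_rng(0)

# Training data: the "world" the system sees during its training phase.
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)

# Internal parameters whose values are set during training.
W = rng.normal(scale=0.1, size=3)
b = 0.0

def predict(x):
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))  # logistic output

def loss(p, t):
    # The optimization criterion: what the system is "trying" to minimize.
    return -np.mean(t * np.log(p + 1e-9) + (1 - t) * np.log(1 - p + 1e-9))

learning_rate = 0.5
for step in range(500):
    p = predict(X)
    grad_W = X.T @ (p - y) / len(y)   # gradient of the criterion w.r.t. the parameters
    grad_b = np.mean(p - y)
    W -= learning_rate * grad_W       # iterative parameter update
    b -= learning_rate * grad_b

print("final loss:", round(loss(predict(X), y), 4))
```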

In the typical AI system of this type today, the optimization criteria and training database are chosen by the system’s human designers. Already the number of internal parameters generated during the “training process” is often so high that it is impossible to exactly predict or even explain the system’s behavior under given circumstances.

The predecessor to GPT-4, the GPT-3 system, already contains some 175 billion internal parameters. As the system’s operation is determined by the totality of parameters in a collective fashion, it is generally impossible to identify what to correct when the system misbehaves. In the field of AI, this situation is referred to as the “transparency problem”.

Today there is much discussion in the AI field concerning the so-called “alignment problem”: How can one ensure that AI systems, which are constantly proliferating and evolving, will remain “aligned” to the goals, preferences, or ethical principles of human beings? I would claim that the “alignment” problem is virtually impossible to solve when it comes to hierarchically-structured supersystems.

It is not hard to see that the training of systems becomes increasingly problematic the higher up we go in the hierarchy. How can “right” versus “wrong” responses be determined, as is necessary for the training of these higher systems? Where do we get an adequate database? The consequences of a given response appear only through the activity of the lower-level systems, which the higher-level system supervises. That takes time. The tendency will therefore be to shortcut the training process – at the cost of increasing the probability of errors, or even wildly inappropriate decisions, at the upper levels of the hierarchy.
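The difficulty can be illustrated by a deliberately crude two-level sketch in Python. Here the higher-level system never acts on the world directly: it only dictates which criterion the lower-level systems optimize, and it can score that choice only after the lower levels have run, on the basis of side effects that the lower levels themselves never “see.” All criteria, outcomes and numbers are hypothetical.

```python
# Crude, hypothetical two-level hierarchy: the higher level dictates the optimization
# criterion; consequences appear only through the lower level's subsequent activity.

import random

def run_lower_level(criterion: str) -> dict:
    """Stand-in for a lower-level system that has optimized the dictated criterion
    for some period. Side effects such as 'unrest' are not part of its objective."""
    if criterion == "engagement":
        return {"clicks": random.gauss(10, 1), "unrest": random.gauss(4, 1)}
    else:  # "stability"
        return {"clicks": random.gauss(6, 1), "unrest": random.gauss(1, 1)}

def higher_level(n_trials: int = 20) -> str:
    """Higher-level 'supervisor': picks the criterion whose delayed outcomes score
    best on its own metric (a crude trade-off between clicks and unrest)."""
    scores = {"engagement": 0.0, "stability": 0.0}
    for _ in range(n_trials):
        for criterion in scores:
            outcome = run_lower_level(criterion)  # feedback arrives only after the fact
            scores[criterion] += outcome["clicks"] - 2.0 * outcome["unrest"]
    return max(scores, key=scores.get)

if __name__ == "__main__":
    print("criterion dictated to the lower levels:", higher_level())
```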

The reader may have noted the analogy with difficulties and risks involved in any hierarchically-organized form of human activity – from a single enterprise to the leadership structure of an entire nation. These issues obviously predate artificial intelligence by thousands of years. Today, many argue that AI systems will perform better than humans in managing enterprises, economies – maybe even society as a whole.

There is no doubt that AI systems do indeed perform better than humans in many specific contexts. Also, AI is constantly improving. But where is the ongoing process of extending and integrating AI systems taking us – particularly when it leads to ever more powerful and comprehensive capabilities for shaping human thinking and behavior?

In human history, attempts to fully optimize a society in the form of a supersystem operating under strict criteria have generally led to disaster. Sustainable societies have always been characterized by significant leeway for independent decision-making, of the kind that tends to run counter to adopted criteria for optimization of the system. Ironically, providing such degrees of freedom produces by far the best results.

In line with the open letter cited above, most experts in the field of artificial intelligence would agree that AI applications should always occur under some sort of human supervision. More generally, the development and application of AI must be governed by human wisdom – however one might define that.

Here I have attempted to argue that the proliferation of deep-learning-based AI into more and more domains of human activity and the tendency to integrate such systems into ever larger hierarchical systems together pose an enormous risk to society.

Indeed, the question should be pondered: In case such a supersystem goes awry, threatening catastrophic consequences, who or what will intervene to prevent it?

In Stanley Kubrick’s famous science fiction film “2001: A Space Odyssey,” the surviving astronaut intervenes at the last moment to turn the AI system off. But would the astronaut have done that if the AI system had previously conditioned him psychologically not to do so?

I do not think it makes sense to try to restrict the development of AI itself. That would be harmful and counterproductive. But wisdom dictates that the dangers arising from the rapid proliferation of AI systems into virtually every sphere of human activity be reined in by appropriate regulation and human supervision. That applies especially to the emergence of AI supersystems of the sort I have discussed here.

Mathematician and linguist Jonathan Tennenbaum is a former editor of FUSION magazine. He lives in Berlin and travels frequently to Asia and elsewhere, consulting on economics, science and technology.
