Artificial Intelligence — From Historical Stories to Today's Reality

Posted on Mar 19, 2019

Terms like "technological singularity", "superintelligence", and "transhumanism" stand for a number of concepts that dominate the current discourse about artificial intelligence. So let us take a closer look at the old and new narratives that shape today's view of artificial intelligence.

Once upon a time … the story goes that on March 17, 1580, Rabbi Judah Loew of Prague created a human figure from a lump of clay and breathed life into it in a kabbalistic ritual. This golem was to serve the Jewish community as a guardian, protecting it from threats coming from the Christian majority of Prague's population. As tradition has it, the golem used to sit in a corner of the rabbi's study, only coming to life when a piece of paper inscribed with the name of God was placed under his tongue. Removing this piece of paper deprived him of his vitality. But one day the rabbi forgot to take the piece of paper out of his mouth, and the golem raged through the streets of the Prague ghetto, smashing everything in his way. According to legend, "the rabbi threw himself in front of him, removed the piece of paper and destroyed it, whereupon the golem fell to pieces."

Machines in Control

Figures escaping human control like the golem have always fired people's imagination and evoked dystopias, as in Mary Shelley's Frankenstein from 1818 and countless other stories of nineteenth-century fantastic literature. With the rise of modern science, the technological element of these stories was given growing emphasis; more recently, they have also been increasingly linked to political ideas. Just think of books like Aldous Huxley's Brave New World (1932), George Orwell's 1984 (1949), and Ray Bradbury's Fahrenheit 451 (1953), or of the autocratic world of machines in Fritz Lang's film drama Metropolis (1927).

For a long time, the preoccupation with man-made forces gradually defying control had naturally remained a purely theoretical venture. This began to change, however, with the advent of cybernetics after World War II. In May 1949, for instance, in a seminar room at the Massachusetts Institute of Technology (MIT), Norbert Wiener presented a robot named "Palomilla" that was mounted on three wheels and automatically moved in the direction of a light source. Thus a precedent had been established: technology could indeed borrow cognitive capabilities from humans in practice so as to furnish machines with a form of "intelligence".
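
To make the feedback principle behind such a light-seeking machine concrete, here is a minimal Python sketch of a phototaxis rule. It is not Wiener's original design, whose mechanical details are not described here; the function name, the gain parameter, and the simulated angles are illustrative assumptions.

```python
import math

def steer_toward_light(heading: float, light_bearing: float, gain: float = 0.5) -> float:
    """One feedback step: turn the heading a fraction of the way toward the light source.

    Angles are in radians; 'gain' (a hypothetical tuning constant) sets how strongly
    the bearing error is corrected on each step.
    """
    # Shortest signed angular difference between the light direction and the current heading.
    error = math.atan2(math.sin(light_bearing - heading), math.cos(light_bearing - heading))
    return heading + gain * error

# Toy simulation: the light sits 90 degrees to the left; the heading converges on it.
heading, light = 0.0, math.pi / 2
for step in range(6):
    heading = steer_toward_light(heading, light)
    print(f"step {step}: heading = {math.degrees(heading):5.1f} deg")
```

The point is simply that a closed loop of sensing and correcting, rather than any explicit reasoning, is enough to produce apparently purposeful behavior.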

The old idea was becoming a reality, which meant that the fears of what it would bring were likewise becoming more and more real. A variant of such a plot is told in Stanley Kubrick's film 2001: A Space Odyssey from 1968: aboard the spaceship "Discovery One", the supercomputer HAL 9000, endowed with artificial intelligence and capable of autonomously steering the spaceship, starts developing a life of its own, identifying the human crew as a hazard and turning against it.

Computer’s Enlightenment

The behavior HAL 9000 exhibits in the film resembles what several scientists refer to as "superintelligence". Wikipedia defines superintelligence as an "agent that possesses intelligence far surpassing that of the brightest and most gifted human mind [...] in most or virtually all domains of interest", both in terms of creative, problem-solving intelligence and social competence. The expression first appeared in 1965 and became popular above all through Nick Bostrom (University of Oxford) in the late 1990s.


A distinction is made between "weak" superintelligence, which does not go beyond the quality of human thought processes but works many times faster in quantitative terms, and "strong" superintelligence, which also operates on a superior level in terms of quality. Interestingly enough, the proponents of this idea leave open whether superintelligence also involves a capacity for remembering or a conscious mind; nor do they specify how it is to be brought about: biologically, technologically, or as a hybrid of the two.

"Technological singularity" is closely related to the idea of superintelligence. Observing that the capacity of computer systems seemed to be increasing exponentially over time, scientists of the late 1950s wondered when machines would be able to improve themselves through artificial intelligence. The idea was that this would accelerate technological progress to such an extent that the future of humanity would no longer be foreseeable once this event had occurred. The term singularity is meant to express the belief that technological change is so rapid and so profound that it represents a rupture in the fabric of human history.

This formulation comes from the most popular proponent of the thesis, the futurist (and Google's senior technology developer) Raymond Kurzweil. In 1999 he prophesied in his book The Age of Spiritual Machines that by around 2030 the intelligence of computers would outperform that of humans. In the meantime, though, the predicted date for the singularity has been pushed back by decades more than once. Nevertheless the idea prevails that one day this leap will take place.

Another central concept underlying superintelligence and singularity is that of the "intelligence explosion", which was described by the British statistician Irving John Good: "Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any human being however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind", he pointed out, adding: "Thus the first ultra-intelligent machine is the last invention that humanity need ever make." (Irving John Good, "Speculations Concerning the First Ultraintelligent Machine", 1965)

In 2001, Ray Kurzweil, in his article The Law of Accelerating Returns, proposed the theory that Moore's Law (according to which the processing power of computers doubles roughly every two years) was only a special case of a more universal law underlying the entire technological evolution: exponential growth would therefore also continue in the technologies about to replace today's microprocessors.
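
To give a feel for the arithmetic behind such exponential claims, the following short Python sketch projects capacity under a constant doubling period. The starting value, the two-year doubling period, and the time horizons are illustrative assumptions, not figures taken from Kurzweil's article.

```python
def projected_capacity(start: float, years: float, doubling_period: float) -> float:
    """Capacity after 'years' of constant exponential growth: start * 2^(years / doubling_period)."""
    return start * 2 ** (years / doubling_period)

# Example: normalize today's capacity to 1.0 and assume a (hypothetical) two-year doubling period.
for horizon in (10, 20, 30):
    factor = projected_capacity(1.0, horizon, 2.0)
    print(f"after {horizon:2d} years: roughly {factor:,.0f}x today's capacity")
```

Even under these modest assumptions the factor reaches about 32,000 times today's capacity after thirty years, which is the kind of growth curve that singularity arguments extrapolate from.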

As mentioned above, the advocates of the theory of superintelligence fail to explain whether it can be realized biologically, technologically, or as a hybrid form. The idea of "transhumanism" is based on the latter: through the application of certain technological methods, i.e., the interlocking of biology and technology, the limits of human capacities are to be extended intellectually, physically, and psychologically. The term was coined in the late 1960s and proliferated thanks to such figures as the futurist FM-2030 (born F.M. Esfandiary): "The transhuman is representative of the earliest manifestation of new evolutionary beings. They are like hominids who many millions of years ago came down from the trees and began to look around", he wrote in his book Are You a Transhuman? in 1989.

There are striking examples, such as Neil Harbisson, the first person in the world to have an antenna implanted in his skull and to be legally recognized as a cyborg by a government. But on closer inspection it turns out that countless cyborgs already exist today, such as wearers of intelligent prosthetic limbs or pacemakers. Some scientists also regard smartphones and cloud-stored databases as extensions of human capacities through technology.

Still, transhumanist ideas continue to trigger a feeling of profound discomfort in many people, for instance when faced with the current plans of the company Neuralink, which seeks to link human brains via a "brain cloud" enhanced with artificial intelligence.

Caught Between Chairs

These concepts, all of which are related to the old myths of the golem and the like, linger in the background of many current debates about artificial intelligence. They are, for instance, reflected in the dread of autonomous weapon systems capable of killing people without a "responsible" soldier involved.

Incidents like the Cambridge Analytica data scandal, in which the data of at least 87 million users of the social network Facebook was apparently abused for the 2016 US presidential election campaign, once again fueled fears of data espionage, surveillance, and intelligent manipulation.

These examples illustrate that the discussion in society about how and under what circumstances artificial intelligence can, may, or should be applied will continue for a long time to come — and has only just begun.

Nevertheless, the question arises why the subject evokes horrifying dystopias rather than optimistic utopias in so many people. We are not talking about non-specialists unfamiliar with the technological background and its limits, but about numerous insiders, including Microsoft founder Bill Gates and the recently deceased astrophysicist Stephen Hawking, who stated that "AI is likely to be either the best or the worst thing to happen to humanity".

Humanity's Intellectual Injury?

As a possible explanation for this, it has repeatedly been suggested that artificial intelligence might represent a further "existential injury" inflicted upon humankind.

The term alludes to the three injuries to man postulated by Sigmund Freud in 1917. Each of them was the result of modern science and called man's self-conception into question: first, the "cosmological injury" resulting from Nicolaus Copernicus's discovery that the Earth is not at the center of the universe; second, the "biological injury" caused by Charles Darwin's discovery that humans descended from animal ancestors; and third, the "psychological injury", since man's inner life in part refuses to be controlled by the conscious will.

Now the intellect, too, has been cast into doubt as man’s jealously guarded unique selling proposition. The power of intelligence stems from our vast diversity, not from any single, perfect principle. Is that so?