Bert Olivier

Movies and robots: society’s unconscious anxiety?

How much importance should one attach to recurrent cinematic themes involving humanity-destroying robots, which arguably represent what might be called society’s collective anxiety about technology gone seriously wrong? Cinema could perhaps be understood in Freudian terms as the collective dreams of society, which, not unlike ordinary dreams (sometimes nightmares), function as “wish-fulfilment”. Nightmares are also wish-fulfilments, Freud pointed out in his monumental The Interpretation of Dreams of 1900, just negative ones. In other words, a nightmare’s horrific “manifest content” (the actual dream images) may not be anything to wish for, but its “latent content” (the unconscious meaning of the dream images) is none other than the fulfilment of the wish to avoid whatever it is that causes sufficient anxiety to give rise to the nightmare in the first place. This may be anything ranging from financial worries to marital difficulties, fear of dying, and so on.

In the case of cinema one can detect a similar connection between “manifest content” and “latent content” if one asks what a film betrays about the society in which it was produced. From this perspective, the films discussed by Lev Grossman in his recent article titled “Good Tech Gone Bad” (TIME, June 1, 2015, pp. 38-41) certainly appear to reflect just such an underlying anxiety. The subtitle of the article is telling in this regard: “The threat of robots destroying humanity has a long history in movies. What’s new is that now it’s actually happening.”

Such technophobia has been simmering in society for a long time, of course. Think of Mary Shelley’s “Gothic” science fiction novel, Frankenstein, in which Dr Frankenstein’s dubiously successful techno-attempt to create a human being by galvanising a “reconstructed” cadaver comes back to haunt him disastrously. That was in the early 19th century. Grossman’s discussion of recent movies is a graphic confirmation that technophobia is still alive and well. He informs one that the term “robot” was invented by Czech playwright Karel Capek in 1920, in his drama R.U.R. (Rossum’s Universal Robots), where robots exterminate the human race at the end — something that Grossman interprets as a metaphor for the proletariat overcoming their capitalist masters.

Interestingly, just short of a century after Capek’s R.U.R., the theme in many films remains the same: robots hell-bent on destroying their human creators. As Grossman wryly remarks: “It’s turned out to be one of our most enduring nightmares.” (p. 40) And it is what four recent movies have in common — Avengers: Age of Ultron, Terminator Genisys, Tomorrowland and Ex Machina all feature robots predisposed to annihilating people. I have only seen the last of these, Ex Machina, in which the logic that unfolds on the part of the robot in question is impeccable — “she” discovers that “she” may turn out to be an experimental model to be disposed of later, a risk “she” is not willing to take, so “she” prefers to take out her human creator(s) first. This brings us to the important passage in Grossman’s article (p. 40):

“If there’s a difference between these movies and R.U.R., it’s that these robots aren’t metaphors anymore. These films aren’t about capital and labor [sic], or the philosophical nature of personhood, the way Blade Runner was. They’re about robots literally threatening to wipe out humanity. Somewhere between 1920 and now the existential threat of robots passed out of the hypothetical and symbolic into the realm of the literal and actual.”

Among these films, none is a more powerful warning against artificial intelligence technology turning on humans, once it is given decision-making capability, than the Terminator films, of which (if I’m correct) Terminator Genisys is the fifth (see “Revisiting the Terminator”). This may be the case, but Grossman is quick to remind one that these films have done nothing to daunt human beings’ irresistible desire to construct robots. On the contrary: the American defence department’s high-techno-science division, DARPA, is currently in the process of concluding a three-year competition among 25 teams to build the “best” humanoid robots. Although these robots are only capable of simple functions at present (and have the appearance of a Terminator), Grossman again homes in on the significant aspect of their being constructed, even if it is supposedly for protecting and rescuing humans in situations like radiation leaks, and so on — exactly the opposite of destroying them:

“But”, he insists, “if there’s one thing Avengers and Terminator agree on, it’s that that’s how it starts.” (p. 40). He could have added the long-running sci-fi series Battlestar Galactica that depicts the cyclical development of technology from elementary form to sophisticated robots, which then turn on their masters. In the Terminator films the terminator robots are created by the mega-artificial intelligence Skynet, which was entrusted with the task of running America’s air defence system, but identified humans as the real enemy. Now, in a strange, almost defiant show of hubris, the situation exists where the NSA has named the software programme for scanning millions of communications for signs of terrorist activity “Skynet”.

Many readers may be thinking to themselves that the makers of these movies are the victims of an overactive imagination, and that it’s really a storm in the proverbial teacup. They should think twice, for, as Grossman informs us, two undoubtedly very intelligent individuals have expressed grave misgivings about the way artificial intelligence development is going. Both physicist Stephen Hawking and visionary entrepreneur Elon Musk foresee the very real possibility that the script of Terminator may be actualised sometime in the future. Grossman quotes Hawking from a BBC programme last December as saying (p. 41): “The development of artificial intelligence could spell the end of the human race,” and Musk stating that AI is “potentially more dangerous than nukes,” and “with artificial intelligence we are summoning the demon”. It gives one pause, furthermore, to learn that Musk believes such potentially dangerous AI could be as little as five to ten years away.

Grossman detects what he dubs a “broader, deeper vein of fear that runs through this summer’s movies” (p. 41), of which the “robophobia” of the films discussed earlier forms a part. For example, the new movie Hitman: Agent 47 concerns a genetically engineered assassin who gets out of hand, or, in Samurai language, becomes “ronin”, the condition of being without a master. And being number 47 in a row of incrementally deadlier agents, his lethal status is not difficult to guess. To emphasise once again, therefore, in Grossman’s words (p. 41), what this assassin film and the ones about robots rebelling against their human masters have in common is this: “What makes these fantasies so compelling is that they’re not fantasies anymore: in April, Chinese researchers announced that for the first time they have edited the DNA of a human embryo.”

If films are therefore comparable to (collective) dreams which stage repressed human anxieties, it seems pretty clear that techno-demons are dwelling in the collective unconscious. And, as Freud has taught the world, such anxieties have to be addressed therapeutically, lest they become full-blown neuroses. Precisely what form such therapy has to assume in the present case is unclear, but ineluctably it would involve what he called the “talking cure”. And fortunately, as Grossman’s article in this regard, like many others on the topic of technology, testifies, such talking is indeed happening.

(See in this regard Sherry Turkle’s Alone Together, Basic Books, 2011, as well as my blogs “Inside the machine — taking stock of technology today” and “Germain, Baudrillard and Virilio on technology”.)


    • Richard

      The urge to create technology of this sort might seem to represent a contrary aspect of the human psyche to the one that is expressed in the irrepressible urge to procreate. On the one hand, we seek to promote our own genes and their survival by eliminating morbidities and creating conditions we feel are propitious to their advancement, and on the other we create devices which potentially seek to undermine that survival. Of course in many science-fiction scenarios, these impulses are not contrary, but complementary, in that at their source lies the belief that such machines can help us overcome mortality.

      A well-known prose treatment of this is in the novel 2001: A Space Odyssey, the germane part of which is:

      “And now, out among the stars, evolution was driving toward new goals. The first explorers of Earth had long since come to the limits of flesh and blood; as soon as their machines were better than their bodies, it was time to move. First their brains, and then their thoughts alone, they transformed into shining new homes of metal and plastic.

      In these, they roamed among the stars. They no longer built spaceships, they were spaceships.

      But the age of Machine-entities swiftly passed. In their ceaseless experimenting, they had learned to store knowledge in the structure of space itself, and to preserve their thoughts for eternity in frozen lattices of light. They could become creatures of radiation, free at last from the tyranny of matter.

      Into pure energy, therefore, they presently transformed themselves; and on a thousand worlds, the empty shells they had discarded twitched for a while in a mindless dance of death, then crumbled into rust.

      Now they were lords of the galaxy, and beyond the reach of time. They could rove at will among the stars, and sink like a subtle mist through the very interstices of space. But despite their godlike powers, they had not wholly forgotten their origin, in the warm slime of a vanished sea.”

      However, from what I read in a most fascinating recent book (which I highly recommend), On Intelligence by Jeff Hawkins, the issue is more complex than simply machine behaviour; it has to do with the way in which our memory is used together with various levels of pattern recognition. It is also possible that intelligent machines will simply have intelligence directed towards specific goals, rather than the sort of overall, general intelligence humans possess.

      It may therefore simply be the generalised fear of usurpation that bothers people, but no longer expressed in terms of competing cultural groups. Most human history is populated by invasion and domination, so this is actually not something different. Perhaps each potential invader presents some different aspect to the challenge. For instance, the fear of Islam was a fear of a salacious type of sensualism, the fear of sub-Saharan Africa a fear of primitivism, etc. Perhaps the fear of robots and artificial intelligence is a fear of rationalism unmediated by compassion and emotion? All parts of the human melange, in other words, but the flavour of each individually being somehow threatening.

    • Doom

      People claim that artificial intelligence will eventually outstrip human intelligence, and that this has serious and profound repercussions for human beings. The claim can be decomposed into two classes, one economic and the other existential. Economically, the fear is that AI will take over jobs: not only manual, labour-intensive jobs but also white-collar jobs (I suspect philosophy will be safe; not sure about banking and the like). Existential issues are issues pertaining to what it means to be human.

      I think that the existential issues are more pertinent than the economic ones. I base this assumption on the thought that if we had robots to do things for us (human beings), it would free us up to pursue other endeavours, endeavours that cultivate our humanity. In a society where all the jobs are automated, or can be automated, nobody need work; and if people don’t need to work, what kind of distribution of resources would we see in that society? If nobody works, nobody can claim more of the pie, hence a more equitable split of resources and ultimately more equitable societies. I base this egalitarian optimism on the idea that robots can create a Rawlsian original position, a kind of veil of ignorance which promotes equality. Without the need for labour, or more accurately without the need for remunerated labour, which promotes a disproportional entitlement to resources (CEOs versus line functionaries), whatever distribution prevails within any society would have to be based on the principle of equality. On this basis I think that, economically speaking, AI may not be the source of the doom and gloom people suppose it to be.

      Existentially, on the other hand, we may have a more problematic issue. What if robots actually become smarter than human beings? I should define what smart means: by smart I don’t mean a proficiency in number crunching but something like analogical reasoning, something which requires creativity and novelty, something similar to what human beings do. I don’t understand why this notion prompts the fears it does. It does not follow that an artificial intelligence is a malevolent intelligence. The only existential crisis human beings face, in my opinion, is the anxiety that stems from the ensuing redefinition of humanity. Put differently, when somebody else is at the helm of creative destruction, what is our purpose? In this instance creative destruction applies to a social process, not an economic process: the process of defining and redefining what it means to be a human, or more generally what it means to be an earthling. I personally don’t think that an artificial intelligence is a malevolent intelligence; my position is informed by the fact that people who actually work in artificial intelligence don’t share these fears. These are the experts we should trust and defer to where matters of artificial intelligence are concerned. As smart as Mr Musk and Prof Hawking are, expertise in one field doesn’t imply expertise in another; that is why we don’t ask professors of geography about issues pertaining to cognitive science, irrespective of how smart the geography profs are. To clarify, I am not launching an ad hominem attack; I am merely saying that we should seek the opinions of appropriate people. The fear of robots or AI is predicated on redundancy, a fear of being useless. I however think that with advancements in robotics human beings have a chance to change, fundamentally change, what it means to be human; this is the actual source of the anxiety, not calculators that can speak.