How much importance should one attach to the recurrent cinematic theme of robots bent on destroying humanity, which arguably represents society’s collective anxiety about technology gone seriously wrong? Cinema could perhaps be understood in Freudian terms as the collective dreams of society, which, not unlike ordinary dreams (sometimes nightmares), function as “wish-fulfilment”. Nightmares, too, are wish-fulfilments, Freud pointed out in his monumental The Interpretation of Dreams of 1900, just negative ones. In other words, a nightmare’s horrific “manifest content” (the actual dream images) may not be anything to wish for, but its “latent content” (the unconscious meaning of those images) is none other than the fulfilment of the wish to avoid whatever causes sufficient anxiety to give rise to the nightmare in the first place. This may be anything from financial worries to marital difficulties, fear of dying, and so on.
In the case of cinema one can detect a similar connection between “manifest content” and “latent content” if one asks what a film betrays about the society in which it was produced. From this perspective, the films discussed by Lev Grossman in his recent article titled “Good Tech Gone Bad” (TIME, June 1, 2015, pp. 38-41) certainly appear to reflect just such an underlying anxiety. The subtitle of the article is telling in this regard: “The threat of robots destroying humanity has a long history in movies. What’s new is that now it’s actually happening.”
Such technophobia has been simmering in society for a long time, of course. Think of Mary Shelley’s “Gothic” science fiction novel, Frankenstein, in which Dr Frankenstein’s dubiously successful techno-attempt to create a human being by galvanising a “reconstructed” cadaver comes back to haunt him disastrously. That was in the early 19th century. Grossman’s discussion of recent movies is a graphic confirmation that technophobia is still alive and well. He informs one that the term “robot” was invented by Czech playwright Karel Capek in 1920, in his drama R.U.R. (Rossum’s Universal Robots), where robots exterminate the human race at the end — something that Grossman interprets as a metaphor for the proletariat overcoming their capitalist masters.
Interestingly, just short of a century after Capek’s R.U.R., the theme in many films remains the same: robots hell-bent on destroying their human creators. As Grossman wryly remarks: “It’s turned out to be one of our most enduring nightmares.” (p. 40) And it is what four recent movies have in common: Avengers: Age of Ultron, Terminator Genisys, Tomorrowland and Ex Machina all feature robots predisposed to annihilating people. I have only seen the last of these, Ex Machina, in which the logic that unfolds on the part of the robot in question is impeccable: “she” discovers that “she” may turn out to be an experimental model to be disposed of later, a risk “she” is not willing to take, and so “she” prefers to take out her human creator(s) first. This brings one to the important part of Grossman’s article (p. 40):
“If there’s a difference between these movies and R.U.R., it’s that these robots aren’t metaphors anymore. These films aren’t about capital and labor [sic], or the philosophical nature of personhood, the way Blade Runner was. They’re about robots literally threatening to wipe out humanity. Somewhere between 1920 and now the existential threat of robots passed out of the hypothetical and symbolic into the realm of the literal and actual.”
Among these films, none delivers a more powerful warning against artificial intelligence turning on humans, once it is given decision-making capability, than the Terminator films, of which (if I’m correct) Terminator Genisys is the fifth (see “Revisiting the Terminator”). That may be so, but Grossman is quick to remind one that these films have done nothing to daunt human beings’ irresistible desire to construct robots. On the contrary: the American defence department’s high-techno-science division, DARPA, is currently concluding a three-year competition between 25 teams to build the “best” humanoid robots. Although these robots are only capable of simple functions at present (and have the appearance of a Terminator), Grossman again homes in on the significant aspect of their being constructed, even if it is supposedly for protecting and rescuing humans in situations like radiation leaks, and so on, exactly the opposite of destroying them:
“But”, he insists, “if there’s one thing Avengers and Terminator agree on, it’s that that’s how it starts.” (p. 40). He could have added the long-running sci-fi series Battlestar Galactica, which depicts the cyclical development of technology from elementary forms to sophisticated robots that then turn on their masters. In the Terminator films the Terminator robots are created by the mega-artificial intelligence Skynet, which was entrusted with running America’s air defence system but identified humans as the real enemy. Now, in a strange, almost defiant show of hubris, the NSA has named its software programme for scanning millions of communications for signs of terrorist activity “Skynet”.
Many readers may be thinking to themselves that the makers of these movies are victims of an overactive imagination, and that it is really a storm in the proverbial teacup. They should think twice, for, as Grossman informs us, two undoubtedly very intelligent individuals have expressed grave misgivings about the direction artificial intelligence development is taking. Both physicist Stephen Hawking and visionary entrepreneur Elon Musk foresee the very real possibility that the script of Terminator may be actualised sometime in the future. Grossman quotes Hawking from a BBC programme last December as saying (p. 41): “The development of artificial intelligence could spell the end of the human race,” and Musk stating that AI is “potentially more dangerous than nukes,” and that “with artificial intelligence we are summoning the demon”. It gives one pause, furthermore, to learn that Musk regards such potentially dangerous AI as being as little as five to ten years away.
Grossman detects what he dubs a “broader, deeper vein of fear that runs through this summer’s movies” (p. 41), of which the “robophobia” of the films discussed earlier forms a part. For example, the new movie Hitman: Agent 47 concerns a genetically engineered assassin who gets out of hand, or, in samurai terms, becomes a “ronin”, one who is without a master. And being number 47 in a line of incrementally deadlier agents, his lethal status is not difficult to guess. What this assassin film and the ones about robots rebelling against their human masters have in common is, to emphasise once again in Grossman’s words (p. 41), this: “What makes these fantasies so compelling is that they’re not fantasies anymore: in April, Chinese researchers announced that for the first time they have edited the DNA of a human embryo.”
If films are therefore comparable to (collective) dreams which stage repressed human anxieties, it seems pretty clear that techno-demons are dwelling in the collective unconscious. And, as Freud has taught the world, such anxieties have to be addressed therapeutically, lest they become full-blown neuroses. Precisely what form such therapy has to assume in the present case is unclear, but ineluctably it would involve what he called the “talking cure”. And fortunately, as Grossman’s article, along with many others on the topic of technology, testifies, such talking is indeed happening.
(See in this regard Sherry Turkle’s Alone Together, Basic Books, 2011, as well as my blogs “Inside the machine — taking stock of technology today” and “Germain, Baudrillard and Virilio on technology”.)