What does the demonstrable pessimism about robots and their projected “attitude” towards humans in recent science-fiction films tell us about our understanding (or perhaps imagining) of artificial intelligence? To be sure, let me state at the outset that there are exceptions, even in some of the most pessimistic of such films – for example the protector Terminator played by Arnold Schwarzenegger in James Cameron’s Terminator 2: Judgment Day (Cameron 1991), who “learns” from John and Sarah Connor in the course of protecting them against the T-1000’s destructive intent, to the point where it “sacrifices” itself at the end of the film to secure a threat-free future for them.

There is also the quasi-ethical behaviour, if not action, on the part of the advanced NS-5 robot, Sonny (Alan Tudyk), in Alex Proyas’s neo-noir film, I, Robot (2004), who arguably achieves the status of an authentic human simulacrum when it/he displays the capacity to feel guilt at having killed his “creator”, Dr Lanning (James Cromwell), albeit at the latter’s request. Add to this that the narrative is set in a context of pessimism about the ability of human beings to take responsibility for their own collective well-being (particularly as far as ecological responsibility for the earth is concerned) – as evinced in the “decision” of the fictional artificial-intelligence computer, VIKI, to subject humans to the rule of a new generation of NS-5 robots for their own sake – and it should be apparent that there is a flip-side to the impression of preponderant techno-pessimism. If this narrative thread is considered, what I, Robot tells us is that robotics might just deliver us from our self-destructive tendencies (particularly regarding ecosystems) and teach us the meaning of ethical conduct into the bargain, as evinced mainly in the figure of Sonny, but also in VIKI’s (ultimately unsuccessful) quasi-ethical intervention in human affairs.

One could add Caradog W James’s recent (2013) film, The Machine, to this group of films, given its thematisation of artificial intelligence (AI) – albeit in the guise of AI and extreme body-reinforcement implanted in a (deceased) human body – which is capable of, and willing to engage in, resisting arguably destructive forces and protecting apparently benign people. But by and large the fictionally projected behaviour of robots in recent (western) cinema reflects strikingly techno-pessimistic attitudes on the part of the producers of these films and/or the writers on whose work the film scripts are based. I have written on some of these here before.

But, to take the matter further, consider Alex Garland’s 2014 science-fiction thriller, Ex Machina (tellingly subtitled “What happens to me if I fail your test?” and in some versions: “There is nothing more human than the will to survive”), where the pessimism in question is given a new twist, imputing a strangely “human” mode of thinking and acting to the robot. In fact, given what it, or “she”, is willing to do to secure her own survival, one might say that her creator, Nathan Bateman (Oscar Isaac), has succeeded only too well in recreating human “nature” in the robot, named Ava (Alicia Vikander), with its echo of the mythically “first” woman, Eve. The lesson from this, as I shall argue below, is that artificial-intelligence researchers should perhaps not aim too squarely for a human simulacrum in their work, lest they succeed. For Ex Machina shows that, if they do succeed, one has reason to be very pessimistic.

The plot of the film revolves around Nathan’s desire to demonstrate beyond doubt that Ava is a human simulacrum in all respects, including the capacity to evoke the personal, or perhaps emotional interest of another human being. To this end Nathan invites a promising employee at his software company, Blue Book, programmer Caleb Smith (Domhnall Gleeson), to his secluded luxury retreat where he wants Caleb to meet with Ava regularly to assess his own capacity for establishing a relationship of sorts with “her”, regardless of her artificial status.

Caleb finds that he enjoys his meetings with Ava, who converses like an intelligent human being, and despite his knowledge that she is a robot he grows to like and trust her – which should hardly be difficult, given her beautiful feminine face and shape, even though her body reveals its artificial android structure. For her part, Ava goes as far as confessing that she is strongly attracted to Caleb. Furthermore, Caleb learns from her that she longs to escape from her confinement into the outside world, and, witnessing increasing manifestations of narcissistic behaviour on Nathan’s part, he becomes receptive to her wish.

However, because Caleb is aware that Nathan is observing their meetings, they cannot converse freely – until Ava reveals that she can cause power failures that interrupt his surveillance system. During one of these, which simultaneously triggers the locking of all doors by the security system, Ava urges Caleb not to trust Nathan. To his alarm, Nathan shares with Caleb his intention to reprogramme Ava – and she would not be the first robot to which this has happened; Caleb has seen the models that preceded Ava and failed Nathan’s stringent tests. He realises that this would, in effect, “terminate” the Ava he knows – what comes after her would be a new AI.

To cut a long story short, Caleb – with the intention of rescuing Ava – tricks Nathan into passing out from too much drinking, gains access to the latter’s computer with his security card and alters the security system by changing some code(s). Having seen alarming video material of Nathan’s actions towards decommissioned robots, Caleb starts doubting his own humanity too, leading him to verify his own flesh-and-blood status by cutting himself. He hatches a plot with Ava to repeat his neutralising of Nathan and his re-coding of the system – this time to open the doors instead of closing them during a power failure – so that they can escape together. But their plans are thwarted by Nathan, who has listened to all their supposedly secret conversations by means of an independently powered video camera.

The most surprising development in the narrative comes when Nathan informs Caleb that, in his view, Ava has been manipulating Caleb with the purpose of finding a way to escape – her “romantic” interest in him has all been a pretence, according to Nathan. But (and this is the significant part) precisely this has been the true test of her success as a human simulacrum: the fact that she has the intelligence to manipulate a human being for her own hidden purposes.

Needless to stress, this is an extremely pessimistic, if not downright cynical, assessment of what makes us human beings. In terms of Jürgen Habermas’s theory of discourse ethics, instead of practising “communicative action” (where one is as open and sincere as possible in one’s communication with others) in her conversations with Caleb, Ava has been indulging in “strategic action” (where one has a hidden agenda, concealing one’s true intent for the sake of manipulating others). This, then, is the central message of Garland’s film: artificial-intelligence researchers will have succeeded in constructing an android – a robot that perfectly simulates human behaviour – when they can come up with something (or “someone”) as devious as Ava, who does not hesitate to use others – in this case Caleb – to reach her own goals.

In this case the android’s plan turns out to be escape at all costs, and the costs entail, firstly, killing Nathan (with the help of another android, Kyoko, whose job it was to see to the satisfaction of Nathan’s needs, but who is destroyed in the process), and covering the transparent parts of her body with “skin”-parts from earlier robots, so as to resemble a human being fully. But the cherry on top of her cynicism and disingenuousness comes when she abandons Caleb, who, having been rendered unconscious by Nathan, wakes up just in time to see her escaping to the outside while he remains trapped inside the securely locked house. The film ends with Ava completing her escape by boarding the helicopter intended for Caleb, presumably heading for human society.

Author

  • As an undergraduate student, Bert Olivier discovered Philosophy more or less by accident, but has never regretted it. Because Bert knew very little, Philosophy turned out to be right up his alley, as it were, because of Socrates's teaching, that the only thing we know with certainty, is how little we know. Armed with this 'docta ignorantia', Bert set out to teach students the value of questioning, and even found out that one could write cogently about it, which he did during the 1980s and '90s on a variety of subjects, including an opposition to apartheid. In addition to Philosophy, he has been teaching and writing on his other great loves, namely, nature, culture, the arts, architecture and literature. In the face of the many irrational actions on the part of people, and wanting to understand these, later on he branched out into Psychoanalysis and Social Theory as well, and because Philosophy cultivates in one a strong sense of justice, he has more recently been harnessing what little knowledge he has in intellectual opposition to the injustices brought about by the dominant economic system today, to wit, neoliberal capitalism. His motto is taken from Immanuel Kant's work: 'Sapere aude!' ('Dare to think for yourself!') In 2012 Nelson Mandela Metropolitan University conferred a Distinguished Professorship on him. Bert is attached to the University of the Free State as Honorary Professor of Philosophy.
