Through my son, who is as much of a science fiction aficionado as I am, I recently discovered the Terminator movie spin-off television series, The Sarah Connor Chronicles, which apparently had to be “terminated” temporarily, after nine episodes, because of the film and television writers’ strike in Hollywood. Watching the series develop the theme of a small band of humans trying to prevent a machine-initiated, nuclear Armageddon (ironically, with the help of a “protector” terminator in feminine guise, played by the appropriately named Summer Glau), my interest in the three movies was rekindled to such an extent that I viewed them all over again.

Make no mistake — these are films depicting [and some would say glorifying] extreme violence. The argument can be made that, if their combined narratives carry a message of vigilance against technology usurping the decision-making capability of humans, this message is subverted at the level of their image-sequences, which revel in unmitigated destruction.

What I find more interesting, however, is their authentic science-fictional character, which [thinking back to a remark by science fiction fundi James Sey] consists in this: while emphasising the tremendous capacity of science and technology to open up new worlds, as it were, genuine science fiction simultaneously shows that science and technology [today, “techno-science”] also have the ability to destroy the world as we know it.

In other words, together they comprise a pharmakon — a poison and cure at the same time. This has always been the hallmark of science fiction, as opposed to techno-science fantasy such as the Star Wars films (which are interesting in their own right, not least for their elaboration on Jungian archetypes in the various characters, and on a novel, if ambivalent conception of a pervasive deity in the notion of The Force).

One of the earliest science-fictional narratives — if not the earliest — the young Mary Shelley’s Frankenstein, of 1818, is medical science fiction in just this sense: it elaborates imaginatively on the Promethean power imparted to human beings by science and technology, which may be used wisely, or unwisely, as in the case of the tragic monstrosity created by Victor Frankenstein. Jules Verne’s stories, too, are genuine science fiction, and his brainchild, the fictional Captain Nemo’s famous submarine, the Nautilus, epitomises what is indispensable in the genre in so far as this wondrous machine discloses hitherto unknown worlds, but also harbours the force that could bring about the demise of the planet. In the work of Asimov, Clarke, Heinlein and Dick, too, one encounters wonderful instances of science fiction, many of which have been filmed (including 2001: A Space Odyssey, Blade Runner and Minority Report).

Humanity is rapidly approaching the historical point where one of the sub-themes of the Terminator movies and spin-off series, namely the question whether a machine can become self-aware, may pass beyond the domain of fiction. Computers are undoubtedly a form of artificial, even if not human, intelligence, and the speculative insight that the qualitatively new moment of self-consciousness may be attained when the connectivity in a “brain” (human or artificial) reaches a critical quantitative threshold seems to me a fair hypothesis. In Heinlein’s classic SF novel, The Moon is a Harsh Mistress, the central character, Mannie, talking about “Mike”, the ‘…fair dinkum thinkum, sharpest computer you’ll ever meet’, puts it like this:

‘…Human brain has around ten-to-the-tenth neurons. By third year [of augmenting Mike’s neural nets and memory banks] Mike had better than one and a half times that number of neuristors.

And woke up.

Am not going to argue whether a machine can “really” be alive, “really” be self-aware. Is a virus self-aware? Nyet. How about oyster? I doubt it. A cat? Almost certainly. A human? Don’t know about you, Tovarisch, but I am. Somewhere along evolutionary chain from macromolecule to human brain self-awareness crept in. Psychologists assert it happens automatically whenever a brain acquires certain very high numbers of associational paths. Can’t see it matters whether paths are protein or platinum.’

In other words, beyond a certain quantitative barrier a qualitative novelty is likely to manifest itself, captured nicely in the now outmoded description of our species, Homo sapiens sapiens — the reflectively wise (or aware) human. Not merely consciousness, but awareness of one’s consciousness. In the Terminator films, too — James Cameron’s first two, and the third, directed by Jonathan Mostow, as well as the television series based on these — the same assumption operates. Skynet — the computer network that launches the attack against humanity — turns out to be, not a giant mainframe computer, but a newly self-aware, globally interconnected network of computers which, together, comprise something like a gigantic brain, with an almost unimaginable number of associational neural pathways. And, as the narrative has it, given this novel state of self-consciousness on the part of Skynet, it wrests decision-making power away from General Brewster when the latter authorises the computer network to take over (among other things) the air defence system from human control, that is, from human decision-making.

Here lies the rub of Terminator’s lesson: for as long as people retain the capacity to decide what to do next, instead of relinquishing this volitional capacity to artificial intelligence which lacks the crucial quality of compassion, the machines are unlikely to rule over humanity. And it is a moot question whether such an affective capacity could ever become part of artificial intelligence. Skynet and its offspring terminators lack all feeling, while Mike, the computer in The Moon is a Harsh Mistress, is endowed with feelings, as is HAL in Clarke’s 2001: A Space Odyssey. In the series, The Sarah Connor Chronicles, there are interesting sequences which suggest that the incongruously gendered “female” protector terminator (Cameron) is interested in humans’ capacity to feel, and even that “she” desires to have this capacity, too.

Things are even more complicated than this, of course. In Proyas’s film, I, Robot, the computer (named VIKI) that orchestrates the robots’ rebellion against the humans justifies her actions by appealing to the very logic that was supposed to prevent such a rebellion, namely the robot-programming rule that robots will never do anything that is not in humans’ interest. The takeover was justified, she claims, because humans had shown themselves incapable of looking after their own interests, notably as far as their abuse of the natural environment is concerned, and it was therefore up to artificial intelligence to take charge of humans. Here one witnesses the critical dimension so conspicuous in most science fiction.

Science fiction has been a literary and cinematic genre where writers play around with various possibilities and hypotheses concerning (among many other themes) the character of artificial intelligence, and what the constitutive difference between humans and computers or robots would be. Spielberg’s A.I. Artificial Intelligence is one of the most provocative films in this respect, in so far as it offers a challenge to the developers of artificial intelligence to come up with a machine that would truly be a simulation of humanness, something which would require far more — according to the film’s guiding hypothesis — than merely “intelligence”. That elusive quality, the film asserts (in a surprisingly Lacanian vein), is ‘the desire to be loved’, exemplified in the tragic ‘child robot’ character, David.

Are there conceptual horizons that do not coincide with artificial intelligence research or with SF-speculation, and within which one finds the means to reflect on these interesting (and increasingly significant) matters? To be sure. Offhand I can think of two very fruitful and suggestive philosophical perspectives which enable one to reflect on what is done in these two domains (AI-research and SF), namely Heidegger’s thought on technology and Lyotard’s on techno-science. But that will have to wait for another time.

Author

  • As an undergraduate student, Bert Olivier discovered Philosophy more or less by accident, but has never regretted it. Because Bert knew very little, Philosophy turned out to be right up his alley, as it were, because of Socrates's teaching, that the only thing we know with certainty, is how little we know. Armed with this 'docta ignorantia', Bert set out to teach students the value of questioning, and even found out that one could write cogently about it, which he did during the 1980s and '90s on a variety of subjects, including an opposition to apartheid. In addition to Philosophy, he has been teaching and writing on his other great loves, namely, nature, culture, the arts, architecture and literature. In the face of the many irrational actions on the part of people, and wanting to understand these, later on he branched out into Psychoanalysis and Social Theory as well, and because Philosophy cultivates in one a strong sense of justice, he has more recently been harnessing what little knowledge he has in intellectual opposition to the injustices brought about by the dominant economic system today, to wit, neoliberal capitalism. His motto is taken from Immanuel Kant's work: 'Sapere aude!' ('Dare to think for yourself!') In 2012 Nelson Mandela Metropolitan University conferred a Distinguished Professorship on him. Bert is attached to the University of the Free State as Honorary Professor of Philosophy.
