In a recent issue of TIME magazine (December 29, 2014, p. 20), there’s a short article by futurologist and outspoken techno-optimist Ray Kurzweil titled “Don’t fear Artificial Intelligence” (AI). He cites two highly creative individuals – Stephen Hawking and the South African-born Elon Musk – as representatives of “the pessimistic view” before putting forward his own optimistic argument regarding humanity’s future relations with AI. He even goes so far as to state that humans have a “moral imperative” to actualise the promise of AI.

Here’s what Hawking said: “Once humans develop artificial intelligence, it would take off on its own and redesign itself … the development of full artificial intelligence could spell the end of the human race.” This bears a strong resemblance to what happens in the sci-fi series Battlestar Galactica, where the Cylons redesign themselves to reach the level of perfect human simulacra, who then set out to destroy their erstwhile creators.

Musk, in turn, had this to say: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that … we are summoning the demon.” Despite the medieval metaphor of “the demon”, it is clear that Musk attributes the potential capacity to destroy humanity to its own creation, namely AI.

For his part, Kurzweil is quick to remind readers that humanity has faced, and still faces, many existential threats, including that of bioterrorism. “Technology”, he adds wisely, “has always been a double-edged sword, since fire kept us warm but also burned down our villages”. Kurzweil continues: “The typical dystopian futurist movie has one or two individuals fighting for control of ‘the AI’. Or we see the AI battling the humans for world domination. But this is not how AI is being integrated into the world today. AI is not in one or two hands; it’s in 1 billion or 2 billion hands. A kid in Africa with a smartphone has more intelligent access to knowledge than the president of the United States had 20 years ago. As AI continues to get smarter, its use will only grow. Virtually everyone’s mental capabilities will be enhanced by it within a decade.”

Kurzweil goes on to say that conflicts between certain groups are, and will in future also be, “enhanced” by AI, but argues that research has shown that violence has decreased tremendously in the course of the last six centuries; if one has the opposite impression, it is probably because we have far better information about violence everywhere today than in previous times. That he is not quite as sanguine about the “safety” of AI as he would like one to believe is evident from his insistence that there are ways of ensuring that emerging AI technologies remain safe.

Drawing a comparison between AI development and biotechnology – where strategies of control were devised 39 years ago and have been regularly revised since – Kurzweil claims that the same could be done in the case of AI. He reminds one that attempts to devise safety guidelines are already underway in academia as well as the private sector, and urges readers to accept that AI can be kept harmless through constant attention to “human governance and social institutions”. In other words, he believes that what one might call human decision-making should remain primary, and should not be relinquished to machines, as these sentences indicate: “We have the opportunity in the decades ahead to make major strides in addressing the grand challenges of humanity. AI will be the pivotal technology in achieving this progress.”

Kurzweil, then (somewhat naively for such a learned person), believes that technology is just another “tool” that may be used for “progress”. Not only did the Enlightenment dream of progress, going back to the 18th century, collapse in the 20th century in the course of two devastating world wars, but several thinkers, pre-eminently Martin Heidegger, have argued that technology has long since ceased being a mere tool. Instead, Heidegger argued that, quite apart from technological gadgets ostensibly offering themselves as glitzy “tools”, technology as “enframing” has installed itself as the hegemonic frame of reference for all questions, issues and problems that people are able to identify today.

Just as, during the medieval period, all questions were approached with reference to the fundamental assumption that there is an omnipotent God who created everything in existence, and were hence answered within the horizon of belief anchored in this assumption, so today technology as “enframing” has become such a fundamental assumption. When problems crop up, no one asks whether God would condone such-and-such an approach to solving them; instead, one turns to the latest technology to resolve them.

But Kurzweil is making an even more fundamental mistake in believing that practically “everyone’s mental capabilities will be enhanced by it within a decade”. As a university teacher, I (and, I’m sure, my colleagues too) have frequently noticed the difference between students who can handle a smartphone, tablet or laptop in conjunction with the internet to access relevant information, on the one hand, and, on the other, those who not only have this tech-savvy ability but something far more valuable, namely the ability to grasp, by conceptualising, what the requisite information actually means, and to articulate it verbally. I often say to my students that I’m not interested in what they can find on the internet, but in whether they can show me that they understand it.

Moreover, it appears to me that Kurzweil is much too assured about the ability of humanity to rein in AI development through “governance strategies”. As Hawking suggests, AI might, beyond a certain point, “take off on its own and redesign itself”. This is not only the stuff of sci-fi. In a book called The Inhuman (Polity Press, 1991), one of Europe’s foremost thinkers on technology, Jean-François Lyotard, suggested that beyond a certain point of technological development the process acquires a momentum that is no longer sustained by humans – which seems to me to be compatible with Hawking’s observation.

More importantly, though, Lyotard – drawing on the work of Hubert Dreyfus and Maurice Merleau-Ponty – points out that there is a fundamental difference between AI and human intelligence. While human intelligence employs all kinds of logic, including binary logic, it mostly works with “intuitive, hypothetical configurations” (p. 15), as well as with ambiguous or imprecise information and analogical thinking, switching between focused and lateral thinking all the time – a mode of operation modelled on the way that embodied human perception works in a field of vision.

By contrast, AI works with binary logic of the mathematical or digital kind (“if … then …”, or “p is not non-p”) – which is just one of the modes of thinking open to humans. Given such a fundamental, irreducible difference, would it be at all surprising to find that, when AI reaches the point of development called “the singularity” by Kurzweil, where it is expected to surpass all human intelligence combined, we are confronted by a truly “inhuman” intelligence? Such an intelligence can hardly be expected to be sympathetic to humans, because its “intelligence” would be binary and digital, whereas human intelligence is embodied and reflective.
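To make the contrast concrete, here is a minimal sketch in Python – my own purely illustrative example, since neither Lyotard nor Kurzweil offers code – of how digital logic admits exactly two truth values, so that “p is not non-p” holds by construction and every “if … then …” rule demands a definite True or False before the machine can act:

    # Illustrative sketch: digital logic admits exactly two truth values.
    p = True

    # The law of non-contradiction ("p is not non-p") holds by construction:
    assert p != (not p)

    # An "if ... then ..." rule fires only on a definite truth value:
    def decide(proposition: bool) -> str:
        if proposition:
            return "act"
        return "refrain"

    # Ambiguous or imprecise input ("perhaps", "more or less") has no place
    # here; it must first be reduced to True or False before the machine
    # can proceed.
    print(decide(p))  # prints "act"

The point is not that such logic is useless, but that it captures only one of the modes of thinking that Lyotard attributes to embodied human intelligence.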

Can one even talk about “surpassing” human intelligence in this case? I think not – the two “kinds” of intelligence are incomparable; small wonder science fiction has routinely depicted the confrontation between advanced AI and humanity as one of violent conflict. Kurzweil realises this intuitively, too. Why else would he insist that human beings should take care to remain in the position of “governance” in relation to AI? But if Hawking and Lyotard are right about AI possibly escaping human control, such “human governance” would be meaningless.

Anyone interested in a more sustained treatment of this theme can read my paper: “Body, thought, being-human and artificial intelligence: Merleau-Ponty and Lyotard”, South African Journal of Philosophy 21(1), 2002, pp. 44-62.

Author

  • As an undergraduate student, Bert Olivier discovered Philosophy more or less by accident, but has never regretted it. Because Bert knew very little, Philosophy turned out to be right up his alley, as it were, because of Socrates’s teaching that the only thing we know with certainty is how little we know. Armed with this ‘docta ignorantia’, Bert set out to teach students the value of questioning, and even found out that one could write cogently about it, which he did during the 1980s and ’90s on a variety of subjects, including an opposition to apartheid. In addition to Philosophy, he has been teaching and writing on his other great loves, namely nature, culture, the arts, architecture and literature. In the face of the many irrational actions on the part of people, and wanting to understand these, he later branched out into Psychoanalysis and Social Theory as well, and because Philosophy cultivates in one a strong sense of justice, he has more recently been harnessing what little knowledge he has in intellectual opposition to the injustices brought about by the dominant economic system today, to wit, neoliberal capitalism. His motto is taken from Immanuel Kant’s work: ‘Sapere aude!’ (‘Dare to think for yourself!’) In 2012 Nelson Mandela Metropolitan University conferred a Distinguished Professorship on him. Bert is attached to the University of the Free State as Honorary Professor of Philosophy.
