Bert Olivier

Gelernter: A dissenting voice in the field of artificial intelligence

The relationship between the human mind and body has occupied philosophers at least since the father of modern philosophy, René Descartes, bequeathed his notorious “dualism” to his successors. For Descartes the mind was a different “substance” from the body – the former a “thinking substance” and the latter an “extended substance” – and he addressed the problem of how these mutually exclusive substances “interacted” by postulating the so-called “animal spirits” – a hybrid concept, denoting something between mind and body – as mediating between them in the pineal gland of the human brain.

Increasingly, from the late 19th century onwards, thinkers began questioning the validity of such dualistic thinking; in various ways philosophers such as Edmund Husserl, Martin Heidegger, Maurice Merleau-Ponty and Jean-François Lyotard argued that humans cannot be broken down into mutually exclusive parts, but comprise beings characterised by a unity-in-totality. Through numerous phenomenological analyses Merleau-Ponty, for example, demonstrated that, although – in the event of an injury to one’s leg, say – one is able to distance oneself from one’s body, as if it were something alien to oneself, referring to “the pain in my leg”, and so on, it is undeniable that, at a different level of awareness, “you” are in pain, and not just your leg. In short: we don’t just have bodies; we “ARE our bodies”.

This line of thinking, which has far-reaching implications for current thinking about the differences – or the presumed similarities – between humans and artificial intelligence (AI), has been resurrected, perhaps surprisingly, by one of the most brilliant computer scientists in the world, namely David Gelernter of Yale University in the United States – the subject of a recent article by David Von Drehle (“Encounters with the Archgenius”; TIME, March 7, 2016, pp. 35-39). Needless to say, this goes against the grain of orthodox thinking in artificial-intelligence research and development circles. Google’s Ray Kurzweil and Sam Altman (president of the startup incubator Y Combinator), for instance, believe that the future development of AI along lines that do not admit the differences between body-related human intelligence and artificial intelligence can only benefit humankind – although there are prominent figures at the other end of the spectrum, such as physicist Stephen Hawking and engineer-entrepreneur Elon Musk, who believe that AI poses the “biggest existential threat” to humans (see the spectrum-of-opinion schema by Matt Peckham at the bottom of pp. 36-37).

Gelernter – a stubbornly independent thinker, like a true philosopher (he has published on computer science, popular culture, religion, psychology and history, AND he is a productive artist) – fits into neither of these categories. In a manner reminiscent of Merleau-Ponty, he insists that one cannot avoid the problem of accounting for the human body when conceiving of artificial intelligence, as computer scientists have tended to do since 1950, when Alan Turing deliberately “pushed it to one side” (p. 36) because it was just too “daunting”. Regarding Gelernter’s position, Von Drehle states (p. 36):

“In his latest book, “The Tides of Mind: Uncovering the Spectrum of Consciousness”, Gelernter argues that the entire field of AI is off track, and dangerously so. A key question in the pursuit of intelligence has never been answered – indeed, it has never been asked: Does it matter that your brain is part of your body?

“Or put another way: What is the human mind without the human being?”

Such circumspect perspicacity does not sit well with the majority of other researchers in the field, who generally do not merely set the question aside, as Turing did (because he realised its intractability), but simply ignore it, in the naïve belief that one can legitimately equate the mind with software and the brain with hardware, in computerspeak. This seems to imply, for unreflective AI developers, that, like software, human minds will in future be “downloadable” to computers, and moreover that human brains will – like computer hardware – become “almost infinitely upgradable”. Anyone with a knowledge of the phenomenology of human experience, specifically of the body, will know that this is a hopelessly naïve, uninformed view. I don’t know how much Gelernter knows about phenomenology, but his understanding of, and admiration for, the human mind puts him in a position to feel exactly as I do about the questionable anthropology underpinning this short-sighted, misguided misconception on the part of mainstream AI researchers (Von Drehle 2016, p. 36):

“David Gelernter isn’t buying it. The question of the body must be faced, and understood, he maintains. ‘As it now exists, the field of AI doesn’t have anything that speaks to emotions and the physical body, so they just refuse to talk about it’, he says. ‘But the question is so obvious, a child can understand it. I can run an app on any device, but can I run someone else’s mind on your brain? Obviously not.’

“In Gelernter’s opinion, we already have a most singular form of intelligence available for study – the one that produced Bach and Shakespeare, Jane Austen and Gandhi – and we scarcely understand its workings. We’re blundering ahead in ignorance when we talk about replacing it.”

Lest anyone should doubt the qualifications of this man who openly expresses his misgivings about the direction in which AI research is headed, let me quote another paragraph from Von Drehle’s article (p. 37):

“Sun Microsystems co-founder Bill Joy has called Gelernter, who pioneered breakthroughs in parallel processing, ‘one of the most brilliant and visionary computer scientists of our time.’ Gelernter’s 1991 book, Mirror Worlds, foretold with uncanny accuracy the ways the Internet would reshape modern life, and his innovative software to arrange computer files by timeline, rather than folder, foreshadowed similar efforts by several major Silicon Valley firms.”

Unlike most of his colleagues, however, Gelernter is not in thrall to the power of computers; as is already apparent from the above, he is far more impressed – and appropriately so – by the complexity and the multi-faceted nature of the human mind (Von Drehle, p. 37):

“The human mind, Gelernter asserts, is not just a creation of thoughts and data; it is also a product of feelings. The mind emerges from a particular person’s experience of sensations, images and ideas. The memories of these sensations are worked and reworked over a lifetime – through conscious thinking and also in dreams. ‘The mind,’ he says, ‘is in a particular body, and consciousness is the work of the whole body.’

“Engineers may build sophisticated robots, but they can’t build human bodies. And because the body – not just the brain – is part of consciousness, the mind alters with the body’s changes.”

David Gelernter should know. In 1993 he innocently opened a parcel that had arrived in his office at Yale. It exploded and shattered his right hand. It turned out to be a pipe bomb sent to him by the “Unabomber”, Ted Kaczynski, who had erroneously identified Gelernter as one of the “technophile enemies” of humankind, not knowing that, of all computer scientists, Gelernter (a poet and artist as well) is probably the least enamoured of technology, valuing human capacities in their totality far more highly. But as a result of the (still continuing) experience of trauma and (daily) pain, Gelernter’s mind is not the same as it was before the attack: his body has been negatively altered, he has come close to death, and, because body and mind are aspects of the same person, his consciousness – his thinking – has changed.

It is astonishing, and gives one hope, that someone who comes from the world of AI research – admittedly a very unusual inhabitant of that sphere – is speaking out on the blindness that afflicts most of his starry-eyed colleagues, and that, although he does not speak exactly the same language as phenomenologists of the body like Merleau-Ponty, he has arrived at exactly the same insights. I, for one, hope that the warning he has sounded in his latest book does not go unnoticed.

