We are in the beautiful and ancient city of Wroclaw, Poland (formerly known by its German name, Breslau), at the 10th International ‘Beyond Humanism’ conference, where theorists of post- and transhumanism come together in a different country every year. It is the third of these inter- and multidisciplinary conferences that we have attended, and as before, the variety of papers across disciplines has been a feast for the intellect.

One of the great delights here has been the opportunity to listen to Bernard Stiegler, who is, in my estimation, one of the most important philosophers (if not THE most important philosopher) in the world today. He is known for his trilogy of books on ‘Technics’ (and for many more books besides), in which he sets out his philosophical anthropology of human beings as fundamentally ‘Homo and Gyna technologicus’, so that technology cannot be regarded as an incidental addition to our being, but as its defining attribute.

In his keynote address, Stiegler sounded a stern warning about the current state of affairs in the world, however. Technology being a pharmakon (poison and cure), most people in the world are not able to use their technical devices for critical ends, but allow capitalist agencies to manipulate their way of living through these devices, such as smartphones. At present, therefore, we are witnessing a process of entropy at many levels in the world: socially, ecologically and economically, and it is up to the people of the world to change this to what he calls ‘negentropy’, through the means available to us, such as technology, political action and economic interventions.

The other keynote speaker, Steve Fuller – also a much-published author and sociologist – promoted the ‘transhumanist’ project in his talk. Transhumanism, unlike posthumanism, is predicated on the assumption that humans should take every opportunity, even if it entails risk, to ‘enhance’ their own being technologically (merging with machines), because – in his words – we are capable of ‘greater things’ than anything hitherto accomplished, even if that means we have to leave Earth and populate deep space.

When, during question time, I asked him to unpack what he meant by ‘greater things’ – which seemed to be intended in exclusively technological terms – he merely said (to affirm his appreciation of biodiversity, I assume) that if humans are forced to leave Earth because of some planetary catastrophe, we should take specimens of animals and plants with us in a self-sustaining environment (a hyper-modern ark, I suppose). He did not address the axiological gist of my question, which I had explained by referring to the literary, musical, architectural, philosophical and other ‘great’ achievements of human beings. After all, it is not ONLY technological progress that is of value!

My own presentation – titled ‘AI and being human: What is the difference?’ – evoked (if not provoked) a heated debate among those present, who included AI researchers, sociologists, psychologists, philosophers, musicologists, architects and engineers. This was predictable: either one argues (as I do) in favour of an irreducible difference between AI and human beings (as imaginatively depicted in the film ‘Her’), or one defends the thesis that an AI can be constructed which models the human mind perfectly in terms of intelligence and will eventually, in the shape of robots, be able to perform all the tasks that humans carry out.

My argument, briefly, was to take computer scientist David Gelernter’s position, set out in his book The Tides of Mind (2016), as a point of departure, move from there to Hubert Dreyfus’s Heideggerian critique of AI research, and then show, in an expanded Heideggerian fashion, that neither of these thinkers goes far enough in his criticism of mainstream AI research’s notion of ‘intelligence’.

Gelernter takes the AI research community to task for the overly narrow conception of ‘mind’ on which its computationalist programme is based, and proposes instead that AI researchers should recognise that the mind is best understood as embodying a ‘spectrum’ of consciousness – hence the title of the book, The Tides of Mind. AI research, Gelernter claims, concentrates reductively on only one ‘level’ of consciousness, namely high-focus consciousness, where attention is sustained and thinking proceeds logically, step by step. This, he points out, is only one level of mental activity; beyond it lie two more, marked by medium and low focus respectively, where free association, daydreaming, dreaming and the creative functions are located. AI research in its current guise therefore falls far short of doing justice to the multi-dimensionality of the mind.

Hubert Dreyfus criticises AI research on different grounds: it fails to recognise that human intelligence is embedded and embodied, relying instead on an erroneous model of the mind that prevents AI from recognising things in the world in terms of their ‘what for?’. He argues this on the Heideggerian grounds of humans’ ‘concernful’ relationship with things as ‘ready-to-hand’, which presupposes a kind of bodily intelligence.

My own move beyond this was to argue that such ‘concernful’ being-in-the-world cannot be understood without Heidegger’s insight that human beings are fundamentally structured by ‘care’ – everything we do is an expression of some or other mode of care, whether we care for our garden, our pets, our children or other loved ones. Even actions such as discrimination against others are manifestations of care, albeit negative ones. This is not something that can be programmed into an AI, and even less so the fact that every human being has a unique personal narrative, or – that which individualises us most, for Heidegger – a singular attitude towards our own mortality. Robots cannot die in this human sense, even if they can be destroyed. And I am not talking of AI characters like HAL in Kubrick’s 2001: A Space Odyssey, or Ava in Alex Garland’s Ex Machina, who represent a challenge to AI research, not an actualised reality.

Hence, although AI undeniably represents a kind of intelligence, it is irreducibly different from being-human; after all, what we are cannot be subsumed under ‘intelligence’ alone. Apart from what was mentioned above, humans make moral choices and have an aesthetic sense, which no AI has, or can have, unless it is programmed as a series of algorithmically determined reactions to certain markers.

During the ensuing discussion someone played a piece of music on his smartphone and asked me what I thought of it. Realising that it had been ‘composed’ by an AI, and having listened to similar music before, I said so, while denying that this proved my argument wrong. After all, I pointed out, an AI can be programmed to ‘compose’ music according to pre-determined rules, but it is doubtful whether it is able to render an independent critical appraisal of a musical composition – a point with which my critic agreed.

Two of the other presenters in the same session – both from Korea – argued, on functionalist grounds, that the time had come to develop a framework for the moral judgment of AI performance: because, they claimed, ‘up to a certain level’ AI performs in a ‘rule-based’ way very similar to that of humans, who are held responsible for their behaviour, the same should hold for AI.

One of them also explicitly denied that humans have free will, given that most people are swayed by social opinion in their moral choices. What both presenters seemed not to understand is that humans do not make ethical choices because they ‘follow rules’; they can act according to a set of rules because they are ethical beings to begin with, and they demonstrably have free will – think of people fasting for weeks to make a political or moral point, which goes against all survival instincts.

Needless to say, many attendees disagreed with this, mainly on the grounds that morality is not a function of abstract intelligence, but of a gradual appropriation of cultural and ethical values, which cannot be programmed. My own view was that – as psychoanalysis has taught us – the source of a moral sense is the repression of prohibitions; lodged in the unconscious, these make it possible to experience guilt when we transgress them. No AI has a sense of prohibition or, for that matter, of guilt. Despite the divergent views on these matters, though, everyone chatted together amicably during coffee breaks.
