
Sunday, 12 July 2015

How Can It Not Know What It Is?





There is a scene near the beginning of the classic science fiction film Blade Runner where our hero, Deckard, played by Harrison Ford, goes to the headquarters of the Tyrell Corporation to meet its head, Eldon Tyrell. He is met there by a stunningly beautiful assistant called Rachael. Deckard is there to test the employees to discover whether any might be replicants, synthetic beings created by the Tyrell Corporation, some of which have rebelled and become dangerous to humans. Specifically, he needs to know whether the tests available to him will work on the new Nexus 6 replicants that have escaped. Tyrell wants to see Deckard run his test on a subject before he allows the testing to continue; Deckard asks for such a subject and Tyrell suggests Rachael. Once the test is complete, Tyrell asks Rachael to step outside for a moment. Deckard suggests that Rachael is a replicant, and Tyrell confirms both this and that she is not aware of it. “How can it not know what it is?” replies a bemused Deckard.

This question, in the wider context of the film and the history of its reception, is ironic. Blade Runner was not a massively popular film at the time of its cinematic release and was thought to have underperformed. But, over the years, it has become a classic, often placed in the top three science fiction films ever made. That popularity and focus on it as a serious film of the genre has, in turn, produced an engaged fan community. One issue regarding the film has always been the status of Deckard himself. Could it be that Deckard was himself a replicant? Interestingly, those involved with the production of the film have differing views.

Back in 2002 the director, Ridley Scott, confirmed that, for him, Deckard was indeed a replicant and that he had made the film in such a way as to make this explicit. However, screenwriter Hampton Fancher, who wrote the basic plot of the film, does not agree. For him the question of Deckard’s status must forever stay mysterious and in question: it should remain “an eternal question” that “doesn’t have an answer”. Interestingly, for Harrison Ford, Deckard was, and always should be, a human. Ford has stated that this was his main area of contention with Ridley Scott when making the film. Ford believed that the viewing audience needed at least one human on the screen “to build an emotional relationship with”. Finally, in Philip K. Dick’s original story on which Blade Runner is based, Do Androids Dream of Electric Sheep?, Deckard is a human. At this point I playfully need to ask: how can they not agree on what it is?

Of course, in the context of the film Deckard’s question now takes on a new level of meaning. Deckard is asking straightforwardly about the status of Rachael while, perhaps, having no idea himself what he is. The irony should not be lost on us. But let us take the question and apply it more widely. Indeed, let’s turn it around and put it again: how can he know what he is? This question is very relevant and it applies to us too. How can we know what we are? We see a world around us with numerous forms of life upon it and, we would assume, most if not all of them have no idea what they are. And so it comes to be the case that actually knowing what you are would be very unusual if not unique. “How can it not know what it is?” starts to look like a very naive question (even though Deckard takes it for granted that Rachael should know and assumes that he does of himself). But if you could know you would be the exception not the rule.

I was enjoying a walk yesterday evening and, as usual, it set my mind to thinking as I went. My mind settled on the subject of Fibromyalgia, a medical condition often characterised by chronic widespread pain and a heightened and painful response to pressure. Symptoms other than pain may occur, however, from unexplained sweats, headaches and tingling to muscle spasms, sleep disturbance and fatigue. (There are a host of other things besides.) The cause of this condition is unknown, but Fibromyalgia is frequently associated with psychiatric conditions such as depression and anxiety, and psychological and neurobiological factors are believed to be among its causes. One simple thesis is that, in vulnerable individuals, psychological stress or illness can cause abnormalities in the inflammatory and stress pathways which regulate mood and pain. This leads to the widespread symptoms then evidenced. Essentially, certain neurons in the brain are set “too high” and trigger physical responses. Or, to put it another way more suitable to my point here, the brain is the cause of the issues it then registers as a problem.

The problem here is that the brain does not know that it was some part of itself that caused the issue in the first place. It is just an unexplained physical symptom being registered as far as it is concerned. If the brain was aware and conscious surely it would know that some part of it was the problem? But the brain is not conscious: “I” am. It was at this point in my walk that I stopped and laughed to myself at the absurdity of this. “I” am conscious. Not only did I laugh at the notion of consciousness and what it might be but I also laughed at this notion of the “I”. What do I mean when I say “I”? What is this “I”? And that was when the question popped into my head: how can it not know what it is?

The question is very on point. If I was to say to you right now that you were merely a puppet, some character in a divinely created show for the amusement of some evil god, you couldn’t prove me wrong. Because you may be. If I was to say that you are a character in some future computer game a thousand years from now, you couldn’t prove me wrong either. Because, again, you could be, whatever you feel about it and whatever you think you know. Because we know that there are limits to our knowledge and we know that it is easy to fool a human being. We have neither the knowledge nor the capacity for the knowledge to feel even remotely sure that we know what we are or what “I” might refer to. We have merely comforting notions which help us to get by, something far from the level of insight required to start being sure.

“How can it not know what it is?” now seems almost to be a very dumb question. “How can it know what it is?” now seems much more relevant and important. For how can we know? Of course Rachael didn’t know what she was. That is to be normal. We, in the normal course of our lives, gain a sense of self and our place in the world and this is enough for us. We never strive for ultimate answers (because, like Deckard, we already think we know) and, to be frank, we do not have the resources for it anyway. Who we think we are is always enough and anything else is beyond our pay grade. Deckard, then, is an “everyman” in Blade Runner, one who finds security in what he knows he knows yet really doesn’t know. It enables him to get through the day and perform his function. It enables him to function. He is a reminder that this “I” is always both a presence and an absence, both there and yet not. He is a reminder that who we are is always a “feels to be” and never yet an “is”. Subjectivity abounds.

How can it not know what it is? How, indeed, could it know?



This article is a foretaste of a multimedia project I am currently producing called "Mind Games". The finished project will include written articles, an album of music and pictures. It should be available in a few weeks.

Wednesday, 18 March 2015

Humans and Robots: Are They So Different?

Today I watched the film "Chappie" from Neill Blomkamp, the South African director who also gave us District 9 and Elysium. Without going into too much detail on a film released only two weeks ago, it's about a military robot which is damaged and sent to be destroyed, but is saved at the last moment when its whizzkid inventor decides to try out his new AI program on it (consciousness.dat). What we get is a robot that becomes self-aware and develops a sense of personhood. For example, it realises that things, including itself, can die (in its case, when its battery runs out).




Of course, the idea of robot beings is not new. It is liberally salted throughout the history of the science fiction canon. So whether you want to talk about Terminators (The Terminator), Daleks (Doctor Who), Replicants (Blade Runner) or Transformers (Transformers), the idea that mechanical or technological things can think and feel like us (and sometimes not like us, or "better" than us) has a long pedigree. Within another six weeks we will get another example, as the latest Avengers film is based around fighting the powerful AI robot Ultron.




Watching Chappie raised a lot of issues for me. You will know, if you have been following this blog or my music recently, that questions of what it is to be human or to have "being" have been occupying my mind. Chappie is a film which deliberately interweaves such questions into its narrative, and we are expressly meant to ask ourselves how we should regard this character as we watch the film, especially as various things happen to him or as he has various decisions to make. Is he a machine or is he becoming a person? What's the difference between those two? The ending to the film, which I won't give away here, leads to lots more questions about what it is that makes a being alive and what makes beings worthy of respect. These are very important questions which lead into all sorts of other areas, such as human and animal rights, and more philosophical questions such as "what is it to be a person?" Can something made entirely of metal be a person? If not, then are we saying that only things made of flesh and bone can have personhood?




I can't help but be fascinated by these things. For example, the film raises the question of whether a consciousness could be transferred from one place to another. Would you still be the same "person" in that case? That, in turn, leads you to ask what a person is. Is it reducible to a "consciousness"? Aren't beings more than brain or energy patterns? Aren't beings actually physical things too (even a singular unity of components), and doesn't it matter which physical thing you are as to whether you are you or not? Aren't you, as a person, tied to your particular body as well? The mind or consciousness is not an independent thing free of all physical restraints. Each one is unique to its physical host. This idea comes to the fore once we start comparing robots, deliberately created and programmed entities, usually built on a common template, with people. The analogy is often made in both directions, so that people are seen as highly complicated computer programs and robots are seen as things striving to be something like us - especially when AI enters the equation. But could a robot powered by AI ever actually be "like a human"? Are robots and humans just less and more complicated versions of the same thing, or is the analogy only good at the linguistic level, something not to be pushed further than this?

Besides raising philosophical questions of this kind, it's also a minefield of language. Chappie would be a "him" - indicating a person. Characters like Bumblebee in Transformers or Roy Batty in Blade Runner are also regarded as living beings worthy of dignity, life and respect. And yet these are all just more or less complicated forms of machine. They are metal and circuitry. Their emotions are programs. They are responding, albeit in very complicated ways, as they are programmed to respond. And yet we use human language of them, and the film-makers try to trick us into having human emotions about them and seeing them "as people". But none of these things are people. It's a machine. What does it matter if we destroy it? We can't "kill" it because it was never really "alive", right? An "on/off" switch is not the same thing as being dead, surely? When Chappie talks about "dying" in the film, it is because the military robot he is has a battery life of five days. He equates running out of power with being dead. If you were a self-aware machine, I suppose this would very much be an existential issue for you. (Roy Batty, of course, does what he does in Blade Runner because replicants have a hard-wired lifespan of four years.) But then turn it the other way. Aren't human beings biological machines that need fuel to turn into energy so that they can function? Isn't that really just the same thing?




There are just so many questions here. Here's one: what is a person? The question matters because we treat things we regard as like us differently to things that we don't. Animal rights campaigners think that we should protect animals from harm and abuse because, in a number of cases, we suggest they can think and feel in ways analogous to ours. Some would say that if something can feel pain then it should be protected from having to suffer it. That seems to be "programmed in" to us. We have an impulse against letting things be hurt and a protecting instinct. And yet there is something here that we are forgetting about human beings that sets them apart from both animals and most intelligent robots as science fiction portrays them. This is that human beings can deliberately do things that will harm them. Human beings can set out to do things that are dangerous to themselves. Now animals, most would say, are not capable of doing either good or bad, because we do not judge them self-aware enough to have a conscience and so be capable of moral judgment or of weighing up good and bad choices. We do not credit them with the intelligence to make intelligent, reasoned decisions. Most robots or AIs that have been imagined have protocols about protecting not only themselves from harm but (usually) humans too. Thus, we often get the "programming gone wrong" stories where robots become killing machines. But the point there is that that was never the intention when these things were made.




So human beings are not like either animals or artificial lifeforms in this respect because, to be blunt, human beings can be stupid. They can harm themselves; they can make bad choices. And that seems to be an irreducible part of being a human being: the capacity for stupidity. But humans are also individuals. We differentiate ourselves one from another and value greatly that separation. How different would one robot with AI be from another, identical, robot with an identical AI? It's a question to think about. How about if you could collect up all that you are in your mind - your consciousness, thoughts, feelings, memories - and transfer them to a new body, one that would be much longer-lasting. Would you still be you, or would something irreducible about you have been taken away? Would you actually have been changed and, if so, so what? This question is very pertinent to me as I suffer from mental illness which, more and more as it is studied, is coming to be seen as having hereditary components. My mother, too, suffers from a similar thing, as does her twin sister. It also seems as if my brother's son might be developing something similar. So the body I have, and the DNA that makes it up, is something very personal to me. It makes me who I am and has quite literally shaped my experience of life and my sense of identity. Changing my body would quite literally make me a different person, one without certain genetic or biological components. Wouldn't it?

So many questions. But these are only my initial thoughts on the subject and I'm sure they will be ongoing. So you can expect that I will return to this theme again soon. Thanks for reading!


You can hear my music made whilst I was thinking about what it is to be human here!