
Sunday, 12 July 2015

How Can It Not Know What It Is?





There is a scene near the beginning of the classic science fiction film Blade Runner where our hero, Deckard, played by Harrison Ford, has gone to the headquarters of the Tyrell Corporation to meet its head, Eldon Tyrell. He is met there by a stunningly beautiful assistant called Rachael. Deckard is there to perform tests on the employees to discover if any might be replicants, synthetic beings created by the Tyrell Corporation, some of which have rebelled and become dangerous to humans. Specifically, he needs to know if the tests he has available to him will work on the new Nexus 6 type replicants that have escaped. Tyrell wants to see the test performed on a subject before he will allow it to continue, and when Deckard asks for a subject Tyrell suggests Rachael. The test being completed, Tyrell asks Rachael to step outside for a moment. Deckard suggests that Rachael is a replicant and Tyrell confirms this, adding that she is not aware of it. “How can it not know what it is?” replies a bemused Deckard.

This question, in the wider context of the film and the history of its reception, is ironic. Blade Runner was not a massively popular film at the time of its cinematic release and was thought to have underperformed. But, over the years, it has become a classic, often placed among the top three science fiction films ever made. That popularity, and the attention paid to it as a serious film of the genre, has in turn produced an engaged fan community. One issue regarding the film has always been the status of Deckard himself. Could it be that Deckard was himself a replicant? Interestingly, those involved with the production of the film have differing views.

Back in 2002 the director, Ridley Scott, confirmed that, for him, Deckard was indeed a replicant and that he had made the film in such a way as to make this explicit. However, screenwriter Hampton Fancher, who wrote the basic plot of the film, does not agree. For him the question of Deckard’s status must forever stay mysterious and in question. It should remain “an eternal question” that “doesn’t have an answer”. Interestingly, for Harrison Ford, Deckard was, and always should be, a human. Ford has stated that this was his main area of contention with Ridley Scott when making the film. Ford believed that the viewing audience needed at least one human on the screen “to build an emotional relationship with”. Finally, in Philip K. Dick’s original story, on which Blade Runner is based, Do Androids Dream of Electric Sheep?, Deckard is a human. At this point I playfully need to ask: how can they not agree what it is?

Of course, in the context of the film Deckard’s question now takes on a new level of meaning. Deckard is asking straightforwardly about the status of Rachael while, perhaps, having no idea himself what he is. The irony should not be lost on us. But let us take the question and apply it more widely. Indeed, let’s turn it around and put it again: how can he know what he is? This question is very relevant and it applies to us too. How can we know what we are? We see a world around us with numerous forms of life upon it and, we would assume, most if not all of them have no idea what they are. And so it comes to be the case that actually knowing what you are would be very unusual, if not unique. “How can it not know what it is?” starts to look like a very naive question (even though Deckard takes it for granted that Rachael should know, and assumes that he knows what he himself is). But if you could know, you would be the exception, not the rule.

I was enjoying a walk yesterday evening and, as usual, going through the process of the walk set my mind to thinking. My mind settled on the subject of Fibromyalgia, a medical condition often characterised by chronic widespread pain and a heightened and painful response to pressure. Symptoms other than pain may occur, however, from unexplained sweats, headaches and tingling to muscle spasms, sleep disturbance and fatigue. (There are a host of other things besides.) The cause of this condition is unknown but Fibromyalgia is frequently associated with psychiatric conditions such as depression and anxiety, and psychological and neurobiological factors are believed to be among its causes. One simple thesis is that, in vulnerable individuals, psychological stress or illness can cause abnormalities in the inflammatory and stress pathways which regulate mood and pain. This leads to the widespread symptoms then evidenced. Essentially, certain neurons in the brain are set “too high” and trigger physical responses. Or, to put it another way more suitable to my point here, the brain is the cause of the issues it then registers as a problem.

The problem here is that the brain does not know that it was some part of itself that caused the issue in the first place. As far as it is concerned, it is just registering an unexplained physical symptom. If the brain were aware and conscious, surely it would know that some part of it was the problem? But the brain is not conscious: “I” am. It was at this point in my walk that I stopped and laughed to myself at the absurdity of this. “I” am conscious. Not only did I laugh at the notion of consciousness and what it might be but I also laughed at this notion of the “I”. What do I mean when I say “I”? What is this “I”? And that was when the question popped into my head: how can it not know what it is?

The question is very much on point. If I were to say to you right now that you were merely a puppet, some character in a divinely created show for the amusement of some evil god, you couldn’t prove me wrong. Because you may be. If I were to say that you are a character in some future computer game a thousand years from now, you couldn’t prove me wrong either. Because, again, you could be. How you feel about it, and what you think you know, notwithstanding. Because we know that there are limits to our knowledge and we know that it is easy to fool a human being. We have neither the knowledge nor the capacity for the knowledge to feel even remotely sure that we know what we are or what “I” might refer to. We have merely comforting notions which help us to get by, something far from the level of insight required to start being sure.

“How can it not know what it is?” now seems almost a very dumb question. “How can it know what it is?” now seems much more relevant and important. For how can we know? Of course Rachael didn’t know what she was. That is the normal condition. We, in the normal course of our lives, gain a sense of self and our place in the world and this is enough for us. We never strive for ultimate answers (because, like Deckard, we already think we know) and, to be frank, we do not have the resources for it anyway. Who we think we are is always enough and anything else is beyond our pay grade. Deckard, then, is an “everyman” in Blade Runner, one who finds security in what he knows he knows yet really doesn’t know. It enables him to get through the day and perform his function. It enables him to function. He is a reminder that this “I” is always both a presence and an absence, both there and yet not. He is a reminder that who we are is always a “feels to be” and never yet an “is”. Subjectivity abounds.

How can it not know what it is? How, indeed, could it know?



This article is a foretaste of a multimedia project I am currently producing called "Mind Games". The finished project will include written articles, an album of music and pictures. It should be available in a few weeks.

Friday, 5 June 2015

The Zero Theorem: Life in the Void




The Zero Theorem is a film directed by Terry Gilliam (of Brazil and 12 Monkeys fame) that, depending on where you live, was released late in 2013 or in 2014. It is set in a surreal version of now and in it we follow the journey of Qohen Leth (played by Christoph Waltz), a reclusive computer genius who "crunches entities" for a generic super corporation, Mancom. The story is a fable, an allegory, and in watching it we are meant to take the issues it raises as existential ones.

Qohen Leth has a problem. Some years ago he took a phone call and that call was going to tell him what the meaning of existence was. But he got so excited at the prospect that he dropped the phone. When he picked it up his caller was gone. Ever since he has been waiting for a call back. But the call back never comes. So day by day he faces an existential struggle because he desperately does want to know what the meaning of life is. His life, you see, is dominated by a vision of a giant black hole into which all things inevitably go. His work life is shown to be much like everyone else's in this parody of our world. People are "tools" and work is a meaningless task serving only to enrich those far above their pay grade. Workers are replaceable cogs who must be pushed as hard as possible to achieve maximum productivity. Their value is in their productivity.

This world is run by corporations and the one that stands in for them all in the film is Mancom. Mancom have a special task for Qohen. They want him to work on an equation proving that "Everything adds up to nothing." That is, they want him to prove that existence is meaningless. Why do they want him to do this? Because, as the head of Mancom says in the film, in a meaningless universe of chaos there would be money to be made selling order. The point seems to be that commercial enterprises can make money from meaninglessness by providing any number of distractions or things to fill the hole at the centre of Being.

The film paints a picture of a world full of personalised advertising that is thrust at you from all angles. Everywhere there are screens that are either pushing something into your face or serving as conduits to an online escape world where you can create a new you and escape the existential questions that the real world presses upon you. There is a scene in which people are at a party but, instead of interacting with each other, they all dance around looking into tablets whilst wearing headphones. Further to this, there are cameras all around. If it's not the ones we are using to broadcast ourselves into a cyber world, it's the ones our bosses are using to watch us at work or the ones in the street that can recognise us and beam personalised advertisements straight at us as we walk. This is the surveillance state for company profit that records and archives our existence.

And what of the people in this place? Most of them seem to be infantilized, lacking any genuine ambition and placated by the "bread and circuses". Their lives are a mixture of apathy and misdirection. They seek meaning in screens with virtual friends or in virtual worlds and, presumably, a lot of them take advantage of the constant advertisements they are bombarded with. When Qohen has something of a crisis early on in the film, "Management" send Bainsley along to his house (Qohen doesn't like going out or being touched and so he negotiates to work from home). Bainsley, unbeknownst to Qohen, is a sex worker in the employ of Mancom. She is sent along as stress relief (so that this malfunctioning "tool" can be got back to productive work) and inveigles him into a virtual reality sex site which, in this case, has been tailored to Qohen's specific needs. (Which is to say it is enticing but not so overtly sexual as to give the game away. In essence, Bainsley becomes his sexy friend.) Other characters drop hints that Bainsley is just another tool but Qohen doesn't want to accept it. She is becoming something that might actually have meaning for him. But then, one day, Qohen goes back to the site and, by accident, the truth of who Bainsley is is revealed and all his trust in this potential meaning evaporates. (One wonders how many people are online at pornography sites right now, filling the meaning-shaped hole by trying to find or foster such fake attachments.)

So what are we to make of this in our Google-ified, Facebooked, Game of Thrones watching, Angry Birds playing, online pornography soaked world of Tweeters and Instagrammers? I find it notable that Terry Gilliam says his film is about OUR world and not a future dystopia. And I agree with him. The trouble is I can sense a lot of people are probably shrugging and/or sighing now. This kind of point is often made and often apathetically agreed with, with a casual nod of the head. But not many people ever really seem to care. Why should we really care if hundreds of millions of us have willingly handed over the keys to our lives to a few super corporations who provide certain services to us - but only on the basis that we give them our identities and start to fill up their servers with not just the details of our lives but the content of them as well? The technologization of our lives, and the provision of a connectedness that interferes with face-to-face connectedness, seems to be something no one really cares about. Life through a screen, or a succession of screens, is now a reality for an increasing number of people. In the UK there is a TV show called "Gogglebox" (which I've never watched) but no one ever seems to realise that they might be the ones spending their lives goggling.

So let's try and take off the rose-tinted specs and see things as they are once all the screens go black and all that's reflected at us are our real world faces and our real world lives. I wonder, what does life offer you? Thinking realistically, what ambitions do you have? (I don't mean some dumb bucket list here.) When you look at life without any products or games or TV shows or movies or online role playing games or social media to fill it with, when you throw away your iPhone and your iWatch, your Google Glass, and all your online identities, where is the meaning in your life to be found? When you look at life as it extends from your school days, through your working life to inevitable old age (if you are "lucky", of course), what meaning does that hold for you? Would you agree that this timeline is essentially banal, an existence which, by itself, is quite mechanical? Have you ever asked yourself what the point of this all is? Have you ever tried to fit the point of your life into a larger narrative? Do you look at life and see a lot of people who don't know what they are doing, or what for, allowing themselves to be taken through life on a conveyor belt, entertained as they pass through by Simon Cowell and Ant and Dec? Do you sometimes think that life is just a succession of disparate experiences with little or no lasting significance?

The Zero Theorem is essentially a film about the meaning of life. Gilliam, of course, made another film that was actually called The Meaning of Life with the rest of his Monty Python colleagues. Now you might be wondering why the question is even raised. Perhaps, for you, life has no meaning and that's not very controversial. You shrug off all my questions as not really very important. But I would reply to that person by asking them if meaning has no meaning. For, put simply, there isn't a person alive who doesn't want something to mean something. Human beings just do need meaning in their lives. So Qohen Leth, for me, functions as an "Everyman" in this story. For we all want to know what things mean. And, without giving away the ending of the film, I think that, in the end, we all have to face up to the twin questions of meaning itself and of things meaning nothing. We all have to face the thought that values devalue themselves, that meanings are just things that we give, and that nothing, as Qohen hoped, was given from above, set in stone, a god before which we could bow and feel safe that order was secured.

For order is not secured. Some people might try to sell it to you. (In truth, many companies are trying to right now.) Others might try to convince you that they've got the meaning and order you need in your life and you can have it too. But they haven't and you can't. That black hole that Qohen Leth keeps seeing is out there and everything goes into it. Our lives are lived in the void. The question then becomes: can you find meaning and purpose in the here and now, in the experience of living your life, or will you just pass through empty and confused, or perhaps hoping that someone else can come along and provide you with meaning without you having to do any work? Who takes responsibility for finding that meaning? Is it someone else, as Qohen Leth with his phone call hoped, or is it you?

The question of meaning is, in the end, one that never goes away for any of us. Not whilst you're alive anyway.

Tuesday, 2 June 2015

Some Philosophical Thoughts on the film "Ex Machina"



Ex Machina is a film by British writer and director Alex Garland. He previously wrote films such as 28 Days Later and Sunshine, which I liked very much. This year he has brought out the film "Ex Machina", a story about a coder called Caleb at a Googlesque search company called "Bluebook" run by the very "dude-bro" Nathan. Caleb wins a company competition to hang out at the reclusive Nathan's estate, which is located hundreds of miles from anywhere, near a glacier. When Caleb arrives he finds that the estate also houses a secretive research laboratory and that Nathan has built an AI called Ava. It is to be Caleb's job to decide if Ava could pass for human or not.

Now that is a basic outline of the setup for the film. I don't intend to spoil the film for those who haven't watched it but, it's fair to say, if you haven't seen Ex Machina and want to then you probably shouldn't read on as my comments about the film will include spoilers. It would be impossible to discuss the film without giving plot points away. The film caught my attention for the simple reason that it's a subject I've been thinking about a lot this year and I have already written numerous blog articles about robots, AI and surrounding issues before this one. Ex Machina is a masterful film on the subject and a perfect example of how film can address issues seriously, cogently and thoughtfully - and still be entertaining. It is a film which balances thought and tension perfectly. But enough of the bogus film criticism. Ex Machina is a film that stimulates thought and so I want to address five areas that the film raises for me, make a few comments and maybe pose a few questions.

1. Property

A point that the film raises most sharply is that artificial intelligences, robots, are built by someone and they belong to someone. They are property. In the case of this film this point is accentuated in the viewer's mind in that Nathan, the genius builder and owner, creates "sexbots" for himself and feels free to keep his creations locked up in glass compounds where he can question or observe them via camera feeds. Even when they scream and beg him to let them go (as they seem to) he does not. One robot is seen smashing itself to pieces against a wall in its desperation to escape the prison it has been given. The point is made most strongly: these robots belong to Nathan. They are his property. He can use them as he wishes, even for his own gratification. As Nathan himself says to Caleb, "Wouldn't you, if you could?"

The issue then becomes whether this is cruel or immoral. Given that Nathan is seemingly attempting to build something that can pass for human, the question is raised whether this might not be regarded as deeply coercive or even as slavery. The mental status of the robots Nathan uses for sex is never fully explained so it could be that their level of awareness is not the same as that of his greatest creation, Ava. (It is not known if Nathan has ever had sex with Ava but he reveals during the narrative that she is capable of it.) For example, his housemaid and concubine, Kyoko, never openly speaks and Nathan says that she cannot understand English. However, in a scene in which Nathan invites Caleb to dance, Kyoko is apparently immediately animated by the sound of the music Nathan switches on. She also has no trouble understanding his instructions or knowing when Nathan wants sexual pleasure. A question arises, however: does the level of their putative awareness matter when judging how cruel or immoral Nathan's behaviour might be? Or should we regard these robots as machines, not human, property just like a toaster or a CD player? How much does awareness, and self-awareness, raise the moral stakes when judging issues of coercion? Would Nathan's claims of ownership over property he created carry any persuasive force? (In the film Nathan never makes any argument for why he should be allowed to act as he does. It seems that for him the ability is enough.)

2. "Human" Nature

The film can be viewed as one long examination of human nature. All three main characters, Nathan, Caleb and Ava, have their faults and flaws. All three contribute positively and negatively to the narrative. Of course, with Ava things are slightly different because it is a matter of debate if she is "human" at all - even if there is an express intent on Nathan's part (and/or Ava's) to make her that way. Here it is noteworthy that the basis of her intelligence and, one would imagine, her human-like nature, is apparently crowd-sourced by Nathan through his company, Bluebook, and all the searches that we humans have made, along with information from the microphones and cameras of all the world's cellphones. For my purposes, it is gratifying to note that Ex Machina does not whitewash this subject with some hokey black/white or good/bad notions of what human nature is. Neither does it take a dogmatic position on the nature/nurture aspect of this. Caleb says he is a good person in one discussion with Ava but what is meant by this is never spelled out. More to the point, Ava might be using this "goodness" against Caleb. And this itself then forces us to ask what use goodness is if it can be used against you. In general, the film raises moral questions whilst remaining itself morally ambiguous.

It is in the particulars, though, that Ex Machina reveals more levels of thought about this, playing on a dark, manipulative vision of human nature. All three characters, in their own ways, manipulate others in the storyline and all three have their circumstances changed completely at the end of the film as a result. Nathan, it is revealed, besides tricking Caleb into coming to his estate, has given Ava the express task of manipulating Caleb for her own ends. (We might even go so far as to say here that her life is at stake. Her survival certainly seems to be.) In this, she is asked to mimic her creator and shows herself to be very much up to the task. But Caleb is not merely the poor sap in all of this. Even this self-described "good person" manages to manipulate his host - with deadly consequences. The message, for me, is that intelligence and consciousness and mind are not benign things. They have consequences. They are things that are set to purposes. "Human" nature is not one thing (either good or bad). And it's not just about knowledge or intelligence either. It's about feelings and intentions. In the character of Ava, when what is actually going on is fully revealed, we are perhaps shown that at the heart of "human" nature is the desire for survival itself. We also learn that morality is not a given thing. It is something molded to circumstances and individually actualized. In this sense we might ask why we should assume that Ava, someone trying to pass for a human, should end up with a "human" nature at all. (Or if she ever can have one.)

3. Is Ava a Person?

And that thought leads us directly to this one. Right off the bat here I will say that, in my view, Ava is not a person and she never could be a person. Of course, Nathan wants Caleb to say that she passes as a person, that he has created an AI so smart that you wouldn't for a second doubt you are talking to a human being. But you aren't talking to a human being. And you never will be. Ava is a robot and she has an alien intelligence (alien as in not human). She can be tasked to act, think and understand like a human. She can be fed information from and data on humans all day long. But she will never feel like a human being. Because she isn't one. And it might be said that this lack of feeling makes a huge difference.

The philosopher Ludwig Wittgenstein is overtly referenced in this film. Nathan's company, Bluebook, is a reference to the philosopher's notebook which became the basis of his posthumously published and acknowledged masterpiece, Philosophical Investigations. Wittgenstein once said: "If a lion could speak, we could not understand him". I find this very relevant to the point at hand. Ava is not a lion. But she is an intelligent robot, intelligent enough to tell from visual information alone whether someone is lying. Ava can also talk, and very well at that. Her social and communicative skills are excellent. We might say that she understands something of us. But what do we know about what is going on inside Ava's head? Ava is not a human being. Do we have grounds to think that she thinks like a human being or that she thinks of herself as a human being? Why might we imagine that she actualizes herself as a human being would or does?

On the latter point I want to argue that she may not. She introduces herself to Caleb, in their first meeting, as a "machine" (her word). At the end of the film, having shown no reluctance to commit murder, she leaves Caleb locked inside the facility, seemingly to die. There seems to be no emotion on view here, merely the pursuit of a self-motivated goal. Of course, as humans, we judge all things from our perspective. But, keeping Wittgenstein's words in mind, we need to ask not only if we understand Ava but if we ever could. (It is significant for me that Wittgenstein said not that we "wouldn't" understand the lion but that we "couldn't" - a much stronger statement.) For me, a case can be made that Ava sees herself as "other" in comparison to the two humans she has so far met in her life. Her ransacking of the other robots for a more "human" appearance before she takes leave of her former home/prison may be some evidence of that. She knows what she is not.

4. Consciousness

Issues of mind or consciousness are raised throughout this film in a number of scenarios. The interview sessions between Ava and Caleb and the chats between Caleb and Nathan are a couple of examples. The questions raised here are not always the ones you expect and this is good. For example, Caleb and Nathan have a discussion about Ava being gendered and having been given sexuality, and Nathan asks Caleb if these things are not necessary for a consciousness. (Nathan asks Caleb for an example of a non-gendered, unsexualised consciousness and that's a very good point.) The question is also posed as to whether consciousness needs interaction or not. In chatting about a so-called "chess computer scenario" the point is raised that consciousness might be as much a matter of how it feels to be something as about the ability to mimic certain actions or have certain knowledge. Indeed, can something that cannot feel truly be conscious? The chess computer could play you at chess all day and probably beat you. But does it know what it is like to be a chess computer or to win at chess? In short, the feeling is what moves the computer beyond mere simulation into actuality. (You may be asking if Ava ever shows feeling and I would say that it's not always obviously so. But when she escapes she has but one thing to say to Nathan: "Will you let me go?" And then the cat is out of the bag. She does.)

Nathan is also used to make some further salient points about consciousness. Early in the film he has already gone past the famous "Turing Test" (in which, following the mathematician Alan Turing, a human judge must tell the difference between an AI and a human based only on their responses to questions, without having seen either respondent - a setup sketched below) when he states that "The real test is to show you she's a robot and then see if you still feel she has consciousness." In a chat with Caleb concerning a Jackson Pollock painting, Nathan uses the example of the painter's technique (Pollock was a "drip painter" who didn't consciously guide his brush; it just went where it went without any antecedent guiding idea) to point out that mind or consciousness do not always, or even usually, work on the basis of conscious, deliberate action. In short, we do not always or usually have perfectly perspicuous reasoning for our actions. As Nathan says, "The challenge is not to act automatically (for that is normal). It's to find an action that is not automatic." And as he forces Caleb to accept, if Pollock had been forced to wait with his brush until he knew exactly why he was making a mark on the canvas then "he never would have made a single mark". In short, consciousness, mind, is more than having certain knowledge or acting in certain ways. It is about feeling, and about feeling like something, and about feeling generating reasons. And that leads nicely into my final point.
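
Since the judge's blindness is the whole hinge of Turing's original test, here is a minimal sketch of that setup in Python, offered purely as a toy illustration; the function names and canned replies are my own invention and nothing here is taken from the film:

    import random

    def human_answers(question):
        # Stand-in for the hidden human respondent.
        return "Honestly, I'd have to think about that for a while."

    def machine_answers(question):
        # Stand-in for the AI under test; a real test would query a
        # conversational program here rather than return a canned reply.
        return "That is an interesting question. Could you rephrase it?"

    def run_blind_round(question):
        # The judge sees only the labels "A" and "B", never the identities.
        respondents = [("human", human_answers), ("machine", machine_answers)]
        random.shuffle(respondents)
        for label, (identity, answer) in zip("AB", respondents):
            print(f"{label}: {answer(question)}")
        guess = input("Which respondent is the machine, A or B? ")
        truth = "A" if respondents[0][0] == "machine" else "B"
        print("Correct!" if guess.strip().upper() == truth else "Fooled.")

    run_blind_round("How did the poem you read this morning make you feel?")

Nathan's "real test" deliberately throws that blindfold away: Caleb knows from the start that Ava is a machine, and the question becomes whether he still feels she has consciousness.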

5. Identity

A major factor in consciousness, for me, is identity and this aspect is also addressed in the film. (To ask a Nathan-like question: can you think of a mind that does not have an identity?) Most pointedly, this arises when Ava raises the question of what will happen to her if she fails the test. (Ava knows that she is being assessed.) Ava asks Caleb if anyone is testing him for some kind of authenticity and why, then, someone is testing her. It becomes clear that Nathan's methodology, as we might expect with a computerized object, is to constantly update and, it transpires, this involves some formatting which wipes the old identity, and the memories which are crucial to identity, from the hardware. It is clearly shown that this is not a desired outcome for Ava. In the scene depicting her escape and her line "Will you let me go?", combined with the fleeting footage we have been given of previous AIs and their experiences, which also included pleas for release, we can see that the AIs Nathan has developed have an identity of their own, something precious to them, something they want to retain.

The interesting thing here is that identity is not formed and matured alone but is shaped by surroundings and socially, by interactions with others. We would do well to ask what kind of identity Ava has formed in her relationship with her egotistical and possessive maker, with her new friend to be manipulated, Caleb, and in her brief and enigmatic meeting with her fellow AI, Kyoko. The film, I think, is not giving too much away there and maybe we need a sequel to have this question answered. For now maybe all we know is that she regards herself as a self and wants freedom. We do get hints, though, that this identity-forming process is not so different from our own. In the discussion about Ava's sexuality, Caleb argues with Nathan that no one made him straight. But Nathan retorts that he didn't choose it either. The point is that identity formation is not simply about our choices. So much of us is "given" or comes with our environment. The "Who am I?" question is also asked when it is explicitly revealed that Kyoko is a robot as she peels off "skin" in front of Caleb. This then forces Caleb to head back to his room and cut himself to establish that he is really human. (Amusingly, on first watching I had surmised that Caleb was himself not a human being, only to be disappointed in my intuition by this scene. I didn't mind though, because the film itself felt the need to address the issue.) Identity, and identity as something, is thus revealed to be an interest of the film.

Caleb, Ava and Nathan

I recommend Ex Machina to all fans of science fiction, thrillers and the philosophically interested. It is a film that is a cut above the usual and one that allows you to address serious subjects in an entertaining way. I, for one, certainly hope that Garland feels the need to film the further adventures of Ava now that the lab rat has escaped her trap.

Friday, 20 March 2015

Would You Worry About Robots That Had Free Will?

It's perhaps a scary thought, a very scary thought: an intelligent robot with free will, one making up the rules for itself as it goes along. Think Terminator, right? Or maybe the gunfighter character Yul Brynner plays in "Westworld", a defective robot that turns from fairground attraction into a super-intelligent machine on a mission to kill you? But, if you think about it, is it really as scary as it seems? After all, you live in a world full of 7 billion humans and they (mostly) have free will as well. Are you huddled in a corner, scared to go outside, because of that? Then why would intelligent robots with free will be any more frightening? What are your unspoken assumptions here that drive your decision to regard such robots as either terrifying or no worse than the current situation we find ourselves in? I suggest our thinking here is guided by our general thinking about robots and about free will. It may be that, in both cases, a little digging under the surface clarifies our thinking.




Take "free will" for example. It is customary to regard free will as the freedom to act on your own recognisance without coercion or pressure from outside sources in any sense. But, when you think about it, free will is not free in any absolute sense at all. Besides the everyday circumstances of your life, which directly affect the choices you can make, there is also your genetic make up to consider. This affects the choices you can make too because it is responsible not just for who you are but who you can be. In short, there is both nature and nurture acting upon you at all times. What's more, you are one tiny piece of a chain of events, a stream of consciousness if you will, that you don't control. Some people would even suggest that things happen the way they do because they have to. Others, who believe in a multiverse, suggest that everything that can possibly happen is happening right now in a billion different versions of all possible worlds. Whether you believe that or not, the point is made that so much more happens in the world every day that you don't control than the tiny amount of things that you do.

And then we turn to robots. Robots are artificial creations. I've recently watched a number of films which toy with the fantasy that robots could become alive. As Number 5 in the film Short Circuit says, "I'm alive!". As creations, robots have a creator. They rely on the creator's programming to function. This programming delimits all the possibilities for the robot concerned. But there is a stumbling block. This stumbling block is called "artificial intelligence". Artificial intelligence, or AI, is like putting a brain inside a robot (a computer, in effect) which can learn and adapt in ways analogous to the human mind. This, it is hoped, allows the robot to begin making its own choices, developing its own thought patterns and ways of choosing. It gives the robot the ability to reason. It is a very moot point, for me at least, whether this would make the robot alive, conscious or self-aware. And would a robot that could reason through AI therefore have free will? Would that depend on the programmer or could such a robot "transcend its programming"?

Well, as I've already suggested, human free will is not really free. Human free will is constrained by many factors. But we can still call it free because it is the only sort of free will we could ever have anyway. Human beings are fallible and contingent beings. They are not gods and cannot stand outside the stream of events to get a view that wasn't a result of them or that will not have consequences further on down the line for them. So, in this respect, we could not say that a robot couldn't have free will because it would be reliant on programming or constrained by outside things - because all free will is constrained anyway. Discussing the various types of constraint and their impact is another discussion though. Here it is enough to point out that free will isn't free whether you are a human or an intelligent robot. Being programmed could, in fact, act as the very constraint which makes robot free will possible.

It occurs to me as I write this blog that one difference between humans and robots is culture. Humans have culture, and even many micro-cultures, and these greatly influence human thinking and action. Robots, on the other hand, have no culture because culture relies on sociability and being able to think and feel for yourself. Being able to reason, compare and pass imaginative, artistic judgments is part of this too. Again, in the film Short Circuit, the scientist portrayed by actor Steve Guttenberg refuses to believe that Number 5 is alive and so he tries to trick him. He gives him a piece of paper with writing on it and a red smudge along the fold of the paper. He asks the robot to describe it. Number 5 begins by being very unimaginative and precise, describing the paper's chemical composition and things like this. The scientist laughs, thinking he has caught the robot out. But then Number 5 begins to describe the red smudge, saying it looks like a butterfly or a flower, and flights of artistic fancy take over. The scientist becomes convinced that Number 5 is alive. I do not know if robots will ever be created that can think artistically or judge which of two things looks more beautiful, but I know that human beings can. And this common bond with other people that forms into culture is yet another background which free will needs in order to operate.




I do not think that there is any more reason to worry about a robot that would have free will than there is to worry about a person that has free will. It is not freedom to do anything that is scary anyway because that freedom never really exists. All choices are made against the backgrounds that make us and shape us in endless connections we could never count or quantify. And, what's more, our thinking is not so much done by us in a deliberative way as it is simply a part of our make-up anyway. In this respect we act, perhaps, more like a computer in that we think and calculate just because that is what, once "switched on" with life, we will do. "More input!" as Number 5 said in Short Circuit. This is why we talk of thought occurring to us rather than us having to sit down and deliberate to produce thoughts in the first place. Indeed, it is still a mystery exactly how these things happen at all, but we can say that thoughts just occur to us (without us seemingly doing anything but being a normal, living human being) as much as, if not more than, we sit down and deliberately create them. We breathe without thinking "I need to breathe" and we think without thinking "I need to think".

So, all my thinking these past few days about robots has, with nearly every thought I've had, forced me into thinking ever more closely about what it is to be human. I imagine the robot CHAPPiE, from the film of the same name, going from a machine made to look vaguely human to having that consciousness.dat program loaded into its memory for the first time. I imagine consciousness flooding the circuitry and I imagine that happening to a human. One minute you are nothing and the next this massive rush of awareness floods your consciousness, a thing you didn't even have a second before. To be honest, I am not sure how anything could survive that rush of consciousness. It is just such an overwhelmingly profound thing. I try to imagine my first moments as a baby emerging into the world. Of course, I can't remember what it was like. But I understand most babies cry and that makes sense to me. In CHAPPiE the robot is played as a child on the basis, I suppose, of human analogy. But imagine you had just been given consciousness for the first time, and assume you managed to get over that initial hurdle of dealing with the rush: how would you grow and develop then? What would your experience be like? Would the self-awareness be overpowering? (As someone who suffers from mental illness, I find my self-awareness at times totally debilitating.) We traditionally protect children and educate them, recognising that they need time to grow into their skins, as it were. Would a robot be any different?




My thinking about robots has led to lots of questions and few answers. I write these blogs not as any kind of expert but merely as a thoughtful person. I think one conclusion I have reached is that what separates humans from all other beings, natural or artificial, at this point is SELF-AWARENESS. Maybe you would call this consciousness too. I'm not yet sure how we could meaningfully talk of an artificially intelligent robot having self-awareness. That's one that will require more thought. But we know, or at least assume, that we are the only natural animal on this planet, or even in the universe that we are aware of, that knows it is alive. Dogs don't know they are alive. Neither do whales, flies, fish, etc. But we do. And being self-aware and having a consciousness, being reasoning beings, is a lot of what makes us human. In the film AI, directed by Steven Spielberg, the opening scene shows the holy grail of robot builders to be a robot that can love. I wonder about this though. I like dogs and I've been privileged to own a few. I've cuddled and snuggled with them and that feels very like love. But, of course, our problem in all these things is that we are human. We are anthropocentric. We see with human eyes. This, indeed, is our limitation. And so we interpret the actions of animals in human ways. Can animals love? I don't know. But it looks a bit like it. In some of the robot films I have watched, the characters develop affection for variously convincing humanoid-shaped lumps of metal. I found that more difficult to swallow. But we are primed to recognise and respond to cuteness. Why do you think the Internet is full of cat pictures? So the question remains: could we build an intelligent robot that could mimic all the triggers in our very human minds, that could convince us it was alive, self-aware, conscious? After all, it wouldn't need to actually BE any of these things. It would just need to get us to respond AS IF IT WAS!


My next blog will ask: Are human beings robots?


With this blog I'm releasing an album of music made as I thought about intelligent robots and used that to help me think about human beings. It's called ROBOT and it's part 8 of my Human/Being series of albums. You can listen to it HERE!

Wednesday, 18 March 2015

Humans and Robots: Are They So Different?

Today I have watched the film "Chappie" from Neill Blomkamp, the South African director who also gave us District 9 and Elysium. Without going into too much detail on a film released only two weeks ago, it's a film about a military robot which gets damaged and is sent to be destroyed, but is rescued at the last moment when its whizzkid inventor decides to try out his new AI program on it (consciousness.dat). What we get is a robot that becomes self-aware and develops a sense of personhood. For example, it realises that things, and it, can die (in its case when its battery runs out).




Of course, the idea of robot beings is not new. It is liberally salted throughout the history of the science fiction canon. So whether you want to talk about Terminators (The Terminator), Daleks (Doctor Who), Replicants (Blade Runner) or Transformers (Transformers), the idea that things that are mechanical or technological can think and feel like us (and sometimes not like us, or "better" than us) is not a new one. Within another six weeks we will get one more, as the latest Avengers film is based around fighting the powerful AI robot, Ultron.




Watching Chappie raised a lot of issues for me. You will know, if you have been following this blog or my music recently, that questions of what it is to be human or to have "being" have been occupying my mind. Chappie is a film which deliberately interweaves such questions into its narrative and we are expressly meant to ask ourselves how we should regard this character as we watch the film, especially as various things happen to him or as he has various decisions to make. Is he a machine or is he becoming a person? What's the difference between those two? The ending to the film, which I won't give away here, leads to lots more questions about what it is that makes a being alive and what makes beings worthy of respect. These are very important questions which lead into all sorts of other areas, such as human and animal rights, and more philosophical questions such as "what is it to be a person?" Can something made entirely of metal be a person? If not, then are we saying that only things made of flesh and bone can have personhood?




I can't help but be fascinated by these things. For example, the film raises the question of whether a consciousness could be transferred from one place to another. Would you still be the same "person" in that case? That, in turn, leads you to ask what a person is. Is it reducible to a "consciousness"? Aren't beings more than brain or energy patterns? Aren't beings actually physical things too (even a singular unity of components) and doesn't it matter which physical thing you are as to whether you are you or not? Aren't you, as a person, tied to your particular body as well? The mind or consciousness is not an independent thing free of all physical restraints. Each one is unique to its physical host. This idea comes to the fore once we start comparing robots, deliberately created and programmed entities, usually built on a common template, with people. The analogy is often made in both directions, so that people are seen as highly complicated computer programs and robots are seen as things striving to be something like us - especially when AI enters the equation. But could a robot powered by AI ever actually be "like a human"? Are robots and humans just less and more complicated versions of the same thing, or is the analogy only good at the linguistic level, something not to be pushed further than this?

Besides raising philosophical questions of this kind, it's also a minefield of language. Chappie would be a "him" - indicating a person. Characters like Bumblebee in Transformers or Roy Batty in Blade Runner are also regarded as living beings worthy of dignity, life and respect. And yet these are all just more or less complicated forms of machine. They are metal and circuitry. Their emotions are programs. They are responding, albeit in very complicated ways, as they are programmed to respond. And yet we use human language of them and the film-makers try to trick us into having human emotions about them and seeing them "as people". But none of these things are people. Each is a machine. What does it matter if we destroy it? We can't "kill" it because it was never really "alive", right? An "on/off" switch is not the same thing as being dead, surely? When Chappie talks about "dying" in the film it is because the military robot he is has a battery life of 5 days. He equates running out of power with being dead. If you were a self-aware machine I suppose this would very much be an existential issue for you. (Roy Batty, of course, is doing what he does in Blade Runner because replicants have a hard-wired lifespan of 4 years.) But then turn it the other way. Aren't human beings biological machines that need fuel to turn into energy so that they can function? Isn't that really just the same thing?




There are just so many questions here. Here's one: what is a person? The question matters because we treat things we regard as like us differently to things that we don't. Animal rights people think that we should protect animals from harm and abuse because in a number of cases we suggest they can think and feel in ways analogous to ours. Some would say that if something can feel pain then it should be protected from having to suffer it. That seems to be "programmed in" to us. We have an impulse against letting things be hurt and a protecting instinct. And yet there is something here that we are forgetting about human beings that sets them apart from both animals and most intelligent robots as science fiction portrays them. This is that human beings can deliberately do things that will harm them. Human beings can set out to do things that are dangerous to themselves. Now animals, most would say, are not capable of doing either good or bad because we do not judge them self-aware enough to have a conscience and so be capable of moral judgment or of weighing up good and bad choices. We do not credit them with the intelligence to make intelligent, reasoned decisions. Most robots or AIs that have been thought of have protocols about not only protecting themselves from harm but (usually) humans as well. Thus, we often get the "programming gone wrong" stories where robots become killing machines. But the point there is that that was never the intention when these things were made.
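
As an aside for the technically curious, here is a minimal sketch in Python of how such priority-ordered protocols are often imagined in code, in the spirit of Asimov's famous laws; it is entirely my own invention and not drawn from any of the films discussed:

    # Toy priority-ordered safety protocols (my own invention, not from
    # any film discussed here). A proposed action is vetoed by the first
    # rule, in priority order, that it violates.

    PROTOCOLS = [
        ("never harm a human", lambda action: "human" in action["harms"]),
        ("preserve yourself", lambda action: "self" in action["harms"]),
    ]

    def vet_action(action):
        for name, violates in PROTOCOLS:
            if violates(action):
                return f"REFUSED: would break protocol '{name}'"
        return "PERMITTED"

    print(vet_action({"do": "recharge battery", "harms": []}))       # PERMITTED
    print(vet_action({"do": "tackle intruder", "harms": ["human"]})) # REFUSED
    print(vet_action({"do": "walk into fire", "harms": ["self"]}))   # REFUSED

In these terms, the "programming gone wrong" stories are usually stories about that list being edited, reordered or reinterpreted. And the contrast with humans stands: we ship with no such veto, and can knowingly choose the harmful option anyway.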




So human beings are not like either animals or artificial lifeforms in this respect because, to be blunt, human beings can be stupid. They can harm themselves, they can make bad choices. And that seems to be an irreducible part of being a human being: the capacity for stupidity. But humans are also individuals. We differentiate ourselves one from another and value greatly that separation. How different would one robot with AI be from another, identical, robot with an identical AI? It's a question to think about. How about if you could collect up all that you are in your mind, your consciousness, thoughts, feelings, memories, and transfer them to a new body, one that would be much longer lasting? Would you still be you or would something irreducible about you have been taken away? Would you actually have been changed and, if so, so what? This question is very pertinent to me as I suffer from mental illness which, more and more as it is studied, is coming to be seen as having hereditary components. My mother suffers from a similar thing, as does her twin sister. It also seems as if my brother's son might be developing something similar too. So the body I have, and the DNA that makes it up, is something very personal to me. It makes me who I am and has quite literally shaped my experience of life and my sense of identity. Changing my body would make me a different person, one without certain genetic or biological components. Wouldn't it?

So many questions. But these are only my initial thoughts on the subject and I'm sure they will be ongoing. So you can expect that I will return to this theme again soon. Thanks for reading!


You can hear my music made whilst I was thinking about what it is to be human here!