
Saturday, 1 August 2015

Existenz²: A Fable of The Inhuman Future


What does nihilism mean? That the highest values devaluate themselves. The aim is lacking; “why?” finds no answer. (Nietzsche)


I have been writing for a number of months now about human being, both as a musician and in more philosophical terms. At the same time I became interested in matters of consciousness and in future technology, initially because of an online contact with similar interests. Simultaneously, I have had an interest in our species and how it is changing in modern times. I have always been interested in standing back and taking a larger view of things and asking myself how things have developed or are developing. I find it good to ask myself the questions "How did we get here?" and "Where are we going?". A specific interest in this time has been the worlds of technology and social media and how they have changed us. Because they have changed us. Today I was looking at a stream of tweets and it struck me as totally bizarre that people just send these messages out into the ether. It's like somebody suddenly blurting out to everyone around them whatever is on their mind. You would think that person was crazy if they did that in public.

The problem is that all these simultaneous concerns are complex and large topics even in themselves. It requires some serious thinking, and some kind of conceptual framework, to even attempt to make something from all of this. But I am nothing if not ambitious and, crucially, I am one of those people who sees a need to think things through to come to some sort of serious conclusion. So I apologise in advance if what follows seems to only skim the surface of the issues or be a little shallow. I put that down to the fact that these subjects could each take up many books in their own right and this is but a little blog where I doodle my thoughts. However, if it is the case that any of us regular Joes should be thinking about the world we live in, and reaching reasoned positions about it, then please count this as one of my first hesitant attempts.

What I tell here is a fable of the present, where time means nothing or is absurd, something we are totally conditioned by but feel completely lost in. In this world innocents are crushed beneath the wheels of instrumentality, all value gone. Disappointment is inbred from birth here and dreams and hopes are but memories. We enter The Inhuman Age as humanism multiplies humanism and, as it must, devalues and devours itself and ends in nihilism. Men, the loci of a supposed rational agency, subvert their own descriptions of themselves. The more they insist on their rationality, the less it seems evident. Knowledge, truth, science and technology are venerated as our saviours but we are blind to their fallibilities.

In this fable our culture becomes about mass media, mass culture, social media, mobs, primitive emotions that shove thinking aside. Everything, all our thinking, our whole narrative and its meaning, must be squeezed into 140 characters or fewer, or a picture, or an instant message. Needs are mediated through whatever source is to hand. Politics here is in the service of base desire rather than people. Survive as anything or die as nothing is the rule. Speak up or be ignored. The more of a voice you have through various media platforms, the less your voice actually counts and the more anonymous you become. The more homogenized things are, the more fractured they become. Die in a corner and without any fuss. All fall into your liberal democratic camps to argue for your point of view whilst half the world's population still lives in huts and eats basic crops, a serious and ongoing divide.

This is a fable about the humans who magnified themselves and magnified themselves and in that magnification they destroyed themselves, revealed themselves to be not exceptional but animal, just cleverer apes, a biological phase in the life of an unimportant planet. Eventually, they evolved beyond their biological origins and became pure technology. Humans were never heard from again.

Of course, this is a very First World fable. I am a First World person and have never set foot outside of this world. The populations of Europe and the USA come to something just over 1 billion people. This is basically the pool of views that any of us reading this now might ever hear from. Give or take. Social networks report 1 billion members and we sit and draw breath at the enormity of it. But the fact remains that far more people have never heard of Facebook or Twitter or Instagram than have ever used them. There is a population of the earth numbering billions that is never heard from and never consulted. Why do you think that Facebook, for example, is building super drones that can carry free internet around the world? To increase its membership and bring the "benefits" of Western society to other places and new populations.

In so doing, I think it's not too wide of the mark to say that they will be hastening the demise of the humans. That demise is being helped on its way by the rise of a 24/7 world of social media interconnectedness in which each of us connected is expected to have an opinion on everything. In this world you are basically anonymous (even though you may have given yourself an amusing handle). People on these networks become anonymous anyones, nothings that replace the something you might have to genuinely look at and respond to (a physical person). People online are not real people, at least not while they remain there. They are cyphers for real people, but ones you can block, mute, ignore, insult, threaten or abuse largely without any real consequences. This removal of consequences is just one of the traces of a barely perceptible change, a change which in my thinking takes us from humanity to inhumanity.

Should one wish to find an example of inhumanity in progress, it is not hard to find. This Internet world of interconnectedness affords many places where one can egotistically proffer one's views as the fount of all knowledge in the face of others who demur and argue, to the contrary, that it is their views that should actually hold that place. One is left wondering, having observed such goings on, if any real communication ever took place. The Internet has allowed us people on the way to inhumanity a space where we may be brutes expressing our heartfelt urges and base thoughts, a place where we may offend others for the purposes of reinforcing our own identities. I imagine that some bright spark somewhere has invented bots that go online and run through a whole playbook of arguments to no purpose. Many people have probably interacted with them without even realising that they weren't talking to a person. It was just an agenda all along. But I validated who I am as a person, so who gives a rat's.

But what does this do to the humans? It pushes them one step further towards becoming inhumans. Knowledge is no longer about deep thought. You cannot express a deep thought in 140 characters, at least not without long practice at doing so. Wittgenstein and Nietzsche did not develop their pithiness by tweeting or posting to Facebook but by thinking. And this is precisely the activity that social media does not promote. It promotes instant response, the sharing of your gut feeling or your opinion. But it does not promote a thought-out, considered response. Social media promotes "feelz" as the new kid in town. Saying what you feel is now what matters. This changes us in terms of attention span too. We learn to expect instant solutions and instant answers. Now, now, now. The next thing. Repeat. Thinking becomes something strange and foreign.

But let’s switch focus from the content to the hardware. Technology. Devices. Everyone in the First World today knows that you have to keep in touch with everyone else. You need to be on top of things and know what is going on in the world. You need something to play games on and listen to music on. This is where you need to be. If you don’t have a smartphone or a tablet you are, practically speaking, not part of the human race anymore. Pretty much every day now, as I take my daily exercise, I will meet people staring into screens as they walk along the street. I should declare at this point that I am old enough to remember when these devices didn’t exist at all. It's not that long ago really. Unless you are 25 or under, in which case it probably seems ages ago. If you have lived on both sides of this technological divide you are in the perfect position to sit back and see how things have actually, demonstrably changed in real time as you lived.

When I was a lad (cue violins), if you needed to tell someone something you went round to their house and asked if they were in. Or you picked up your home telephone, if you had one (we didn’t), and spoke to them that way. In addition, all the people you knew would be from your locality. They were the people fate had decreed you were to grow up with. But then technology came along and everything changed. Now you can speak to people on every continent every day. If you want to, you can even speak to them while seeing them. Technology has changed our horizons. You may think this is good but, ask yourself, where does it stop? In another article I wrote recently I mused about the possible future technology that, who knows, someone somewhere may well be working on right now. It makes sense that the communications devices we carry around with us will actually become a part of us. Google Glass and other wearable tech is a step in this direction. One day someone will figure out an implant that gives us the global communications we say we need, not as wearable technology but as technology integrated into our bodies. There are Futurists out there right now who dream of this.

When this happens, as I’m sure it will, it will be a big step. It will be a further step along the technological road we have already headed down, even if, maybe, you don’t realise that we have. Technology that changes us forever will not be presented as such. And this is part of my argument here as I talk of us going from humans to inhumans. None of this will be overt. The technology will be presented as beneficial, helpful, benevolent. You will almost certainly want it, just as you want your smartphone and your computer right now. If you don’t have it you will even feel left out. I remember going for a job some years ago, when I didn’t have a mobile phone. The prospective employer asked for my mobile number and I replied that I didn’t have one. The look on his face spoke a thousand words. I didn’t get the job and I’m convinced that was a large part of why. Not taking part in societal norms can have consequences.

I have spoken a lot in the past few months about a technological future some see for humans. This is one reason why I see the future of humans as becoming inhumans. I read the futures mapped out by Futurists and Transhumanists and I concede to myself that it is foreseeable that, one day, some of the things they dream of will come to pass. Of course, as I’ve said before, a lot of their hopes are mere speculations that are yet to be proved possible, but it is clear to me that there are significant funds and personnel tied up in making various technological futures happen. Those who hold out the hope of a pain-free, disease-free world will always be able to attract a certain audience too. For my purposes I have been content to point out that their future dreams of “techno-humans”, to my mind, leave the humanity part behind. (Our mass-media, 24/7 society is part of this development and has effects, as I am arguing in this very blog.) And this is what I’m explicitly saying now: technologically advanced humans won’t be human anymore. Human beings are defined by their imperfection, not by their increased, increasing or actualised perfection.

There is another angle from which to view our progress along a scale from humanity to inhumanity and that is in terms of a focus on subjectivity. In our modern age we have very much been encouraged to be in control of things, primarily through the technology that we carry with us. Even our currency, money itself, is now being taken from the physical world and “contactless payment” is taking its place. On our TVs we have for some years now been encouraged to think that our views count as we are invited to vote in various popularity contests. The message is that we, as subjects, matter. The metanarratives of yesteryear are gone and even forms of intersubjectivity are shunned. You, the thinking subject, are what counts.

The flip side to this is that things have become rapidly de-centred and now it's really just you on your own. Or a helpline in another country where the person speaks your language with an accent so thick you can barely understand it. Nevertheless, the subject has become the focus of all things, but as an anonymous, anyone kind of subject. There is a sense in which we are all just subjects sitting in our homes in need of purpose, control and something to do. It's very disjointed. There is a move away from social cohesion to social isolation. The people you might turn to recede into online or difficult-to-access worlds where an actual person with a face does not exist. In a real sense ours is the age of the world going online, which forces you to access it in a certain way because nothing else exists. Is it really so hard now to imagine that we become cyber-beings, code with a personality? We are daily creating a world in which being a physical being matters less and less.

What is the symbol of this modern age, of the human becoming inhuman? I want to get in first and say that this symbol should be masturbation. This is the age of the masturbator. There are a number of reasons for this, not least that at any one time millions of us will be online masturbating to something. But my analysis is a bit more profound than this. Masturbation is a non-social way of giving yourself pleasure. It is, for now, a physical act but, in the end, that will be overcome since sexual pleasure is really only a matter of tickling the right neurons. It has nothing to do with penises and vaginas in our inhuman future. Masturbation is the symbol of our age because it is the ultimate subjective experience. It's you with yourself, imagining. It's the thrill which reminds us that we are still alive, that there is more than a humdrum world largely devoid of meaning. It is the moment that means par excellence. Before it recedes and is lost again. And its only purpose is that thrill, which lasts less and less the more you do it. But masturbation is also a disguise. It's there to cover over the fact that you are all on your own. It's another nothing that covers over where a something should be. No one would masturbate if they could have sex instead.

So this is my modern, badly explained fable. It is that our race, which has taken itself as the measure of all things and called it humanism, is on the way to making itself obsolete by means of itself, its own values and its own progress. It is a reminder that nothing stays the same and that things are always moving on down the pipe. Things always come from somewhere and always go to somewhere else. Standing still is not an option. In this, “inhumanity” and “inhumans” are not moral judgments. They are merely words which express the idea that humanity is changing and is fated to become something else. The animus of our age is technological and its effects upon us both now and into the future are fundamentally changing both us and our world. This will continue, in my fable, up until the point when there are no humans left any more. There will just be the inhumans that we have become. 

What form of life these beings will take is not yet clear but they will not be biological, for biology is but one weakness that needs to be overcome. This the Futurists and Transhumanists know well, and I think they have a chance to succeed in their aims. As I have tried to show here, though, it's not just a matter of turning our thoughts and memories into code and building a robot. Our form of life right now is changed by the devices we use and the networks we insert ourselves into. Humanity is already changed, and continuing to change, because of these technologically enabled networks and the media and opinion they dispense.

My conclusion is summed up by the term Existenz². Existenz² is an idea, the idea that humanity, humans and humanism, through their excess and the superfluity of themselves and their values, thereby devalue and degrade themselves to nothing. They cause their own destruction and annihilation. Existenz squared is the end of humanity and the beginning of inhumanity. Think of it by analogy to sound, which can be overdriven to the point at which it is pure distortion and the sound you began with has been annihilated. At that point you have just another sound. My message is that more and more of humanity does not equal a better humanity but the end of humanity. All values devalue themselves, whether truth, love, compassion, knowledge or whatever. To all things there must be limits.
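To make that analogy concrete, here is a minimal sketch of overdrive as hard clipping (my own illustration, not anything taken from the album or the post), assuming Python with NumPy; the overdrive helper and its parameters are hypothetical names of my own:

```python
# Hard clipping: amplify a signal, then flatten everything beyond the limit.
# As the gain rises, more and more of the original sine is squared off until
# almost nothing of the starting waveform survives.
import numpy as np

def overdrive(signal, gain, limit=1.0):
    """Amplify a signal by `gain`, then hard-clip it at +/- `limit`."""
    return np.clip(signal * gain, -limit, limit)

t = np.linspace(0.0, 1.0, 1000)
sine = np.sin(2 * np.pi * 5 * t)  # the sound we begin with

for gain in (1, 10, 1000):
    driven = overdrive(sine, gain)
    saturated = np.mean(np.abs(driven) >= 1.0)  # share of samples flattened at the limit
    print(f"gain={gain:>4}: {saturated:.0%} of the signal sits at pure saturation")
```

Push the gain high enough and the output is effectively a square wave: the "more" you added has erased what was there.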

We as humans are defined by time, by our contingency in time and our finitude as beings in time. This is our lot, to be imperfect, fallible, weak and powerless. But we are also innocent beings, beings who strove to know and valued knowing but could never know enough or truly know anything at all. We were forever stuck with our own descriptions for things and our reasons for needing them, creatures who always wanted more but were always unable to get it. This realisation, naturally enough, leads to terminal disappointment and, in some, a blind refusal to accept the truth. That truth is that the project of humanism will come to an end and we humans, the measurers of all things, will eventually become inhumans governed by a new project of inhumanism, in a context bigger than our world, the world that has defined us but that we could never leave. Human beings are thus revealed as a phase of biological life on planet Earth, one that was always temporary and destined to be succeeded.

I do not know what it will be like to be an inhuman, but it will surely not be like this. Just as a person from 1500, dropped into the middle of a modern city 500 years later, would be overwhelmed by the world he found himself in, so we would be overwhelmed by the world of an inhuman. We cannot imagine what it would be like to be a machine, for machines do not feel, cannot know pain, nor do they need to eat or drink. Should some future humans find a way to transfer our minds and personalities to machines, then our journey to inhumanity would be complete and we would die out, for a superior form of life would have been born.


PS There is, of course, one huge rider to all this. It is that we do not wipe ourselves out completely before the inhumans we are fated to become have fully come to be.



This is written in support of my latest album called simply Existenz². You can listen to the sound of the approaching inhumans HERE!     

  
 

Tuesday, 2 June 2015

Some Philosophical Thoughts on the film "Ex Machina"



Ex Machina is a film by British writer and director Alex Garland. He previously wrote films such as 28 Days Later and Sunshine, which I liked very much. This year he has brought out "Ex Machina", a story about a coder called Caleb at a Googlesque search company called "Bluebook", run by the very "dude-bro" Nathan. Caleb wins a company competition to hang out at the reclusive Nathan's estate, which is located hundreds of miles from anywhere, near a glacier. When Caleb arrives he finds that the estate also houses a secretive research laboratory and that Nathan has built an AI called Ava. It is to be Caleb's job to decide whether Ava could pass for human or not.

Now that is a basic outline of the setup for the film. I don't intend to spoil the film for those who haven't watched it but, it's fair to say, if you haven't seen Ex Machina and want to then you probably shouldn't read on, as my comments about the film will include spoilers. It would be impossible to discuss the film without giving plot points away. The film caught my attention for the simple reason that it's a subject I've been thinking about a lot this year and I have already written numerous blog articles about robots, AI and surrounding issues before this one. Ex Machina is a masterful film on the subject and a perfect example of how film can address issues seriously, cogently and thoughtfully - and still be entertaining. It is a film which balances thought and tension perfectly. But enough of the bogus film criticism. Ex Machina is a film that stimulates thought, and so I want to address five areas that the film raises for me, make a few comments and maybe pose a few questions.

1. Property

A point that the film makes most sharply is that artificial intelligences, robots, are built by someone and they belong to someone. They are property. In the case of this film this point is accentuated in the viewer's mind in that Nathan, the genius builder and owner, creates "sexbots" for himself and feels free to keep his creations locked up in glass compounds where he can question or observe them via camera feeds. Even when they scream and beg him to let them go (as they seem to) he does not. One robot is seen smashing itself to pieces against a wall in its desperation to escape the prison it has been given. The point is made most strongly: these robots belong to Nathan. They are his property. He can use them as he wishes, even for his own gratification. As Nathan himself says to Caleb, "Wouldn't you, if you could?"

The issue then becomes whether this is cruel or immoral. Given that Nathan is seemingly attempting to build something that can pass for human, the question is raised whether this might not be regarded as deeply coercive, or even as slavery. The mental status of the robots Nathan uses for sex is never fully explained, so it could be that their level of awareness is not the same as that of his greatest creation, Ava. (It is not known if Nathan has ever had sex with Ava but he reveals during the narrative that she is capable of it.) For example, his housemaid and concubine, Kyoko, never openly speaks and it is said by Nathan that she cannot understand English. However, in a scene in which Nathan invites Caleb to dance, Kyoko is apparently immediately animated by the sound of the music Nathan switches on. She also has no trouble understanding his instructions or knowing when Nathan wants sexual pleasure. A question arises, however: in judging how cruel or immoral Nathan's behaviour might be, does the level of their putative awareness matter? Or should we regard these robots as machines, not human, property just like a toaster or a CD player? How much does awareness and self-awareness raise the moral stakes when judging issues of coercion? Would Nathan's claims of ownership over property he created carry any persuasive force? (In the film Nathan never makes any argument for why he should be allowed to act as he does. It seems that for him the ability is enough.)

2. "Human" Nature

The film can be viewed as one long examination of human nature. All three main characters, Nathan, Caleb and Ava, have their faults and flaws. All three contribute positively and negatively to the narrative. Of course, with Ava things are slightly different because it is a matter of debate whether she is "human" at all - even if there is an express intent on Nathan's part (and/or Ava's) to make her that way. Here it is noteworthy that the basis of her intelligence and, one would imagine, her human-like nature is apparently crowd-sourced by Nathan through his company, Bluebook, from all the searches that we humans have made, along with information from the microphones and cameras of all the world's cellphones. For my purposes, it is gratifying to note that Ex Machina does not whitewash this subject with some hokey black/white or good/bad notion of what human nature is. Neither does it take a dogmatic position on the nature/nurture aspect of this. Caleb says he is a good person in one discussion with Ava but it is never spelled out what is meant by this. More to the point, Ava might be using this "goodness" against Caleb. And this itself then forces us to ask what use goodness is if it can be used against you. In general, the film raises moral questions whilst remaining itself morally ambiguous.

It is in the particular that Ex Machina reveals more levels of thought about this though, playing on a dark, manipulative vision of human nature. All three characters, in their own ways, manipulate others in the storyline and all three have their circumstances changed completely at the end of the film as a result of that. Nathan, it is revealed, besides tricking Caleb into coming to his estate, has given Ava the express task of manipulating Caleb for her own ends. (We might even go so far as to say here that her life is at stake. Her survival certainly seems to be.) In this, she is asked to mimic her creator and shows herself to be very up to the task. But Caleb is not the poor sap in all of this. Even this self-described "good person" manages to manipulate his host - with deadly consequences. The message, for me, is that intelligence and consciousness and mind are not benign things. They have consequences. They are things that are set to purposes. "Human" nature is not one thing (either good or bad). And it's not just about knowledge or intelligence either. It's about feelings and intentions. In the character of Ava, when what is actually going on is fully revealed, we are perhaps shown that at the heart of "human" nature is the desire for survival itself. We also learn that morality is not a given thing. It is something molded to circumstances and individually actualized. In this sense we might ask why we should assume that Ava, someone trying to pass for a human, should end up with a "human" nature at all. (Or if she can ever have one.)

3. Is Ava a Person?

And that thought leads us directly to this one. Right off the bat here I will say that, in my view, Ava is not a person and she never could be a person. Of course, Nathan wants Caleb to say that she passes as a person, that he has created an AI so smart that you wouldn't for a second doubt you are talking to a human being. But you aren't talking to a human being. And you never will be. Ava is a robot and she has an alien intelligence (alien as in not human). She can be tasked to act, think and understand like a human. She can be fed information from and data on humans all day long. But she will never feel like a human being. Because she isn't one. And it might be said that this lack of feeling makes a huge difference.

The philosopher Ludwig Wittgenstein is overtly referenced in this film. Nathan's company, Bluebook, is a reference to the philosopher's Blue Book, a set of notes dictated to his students which prefigured his posthumously published and acknowledged masterpiece, Philosophical Investigations. Wittgenstein once said, "If a lion could speak, we could not understand him". I find this very relevant to the point at hand here. Ava is not a lion. But she is an intelligent robot, intelligent enough to tell from visual information alone if someone is lying or not. Ava can also talk, and very well at that. Her social and communicative skills are excellent. We might say that she understands something of us. But what do we know about what is going on inside Ava's head? Ava is not a human being. Do we have grounds to think that she thinks like a human being or that she thinks of herself as a human being? Why might we imagine that she actualizes herself as a human being would or does?

On the latter point I want to argue that she may not. She introduces herself to Caleb, in their first meeting, as a "machine" (her word). At the end of the film, having shown no reluctance to commit murder, she leaves Caleb locked inside the facility, seemingly to die. There seems to be no emotion on view here, merely the pursuit of a self-motivated goal. Of course, as humans, we judge all things from our perspective. But, keeping Wittgenstein's words in mind, we need to ask not only if we do understand Ava but if we ever could. (It is significant for me that Wittgenstein said not that we "wouldn't" understand the lion but that we "couldn't" - a much stronger statement.) For me, a case can be made that Ava sees herself as "other" in comparison to the two humans she has so far met in her life. Her ransacking of the other robots for a more "human" appearance before she takes her leave of her former home/prison may be some evidence of that. She knows what she is not.

4. Consciousness

Issues of mind or consciousness are raised throughout this film in a number of scenarios. There are the interview sessions between Ava and Caleb and the chats between Caleb and Nathan as a couple of examples. The questions raised here are not always the ones you expect and this is good. For example, Caleb and Nathan have a discussion about Ava being gendered and having been given sexuality and Nathan asks Caleb if these things are not necessary for a consciousness. (Nathan asks Caleb for an example of a non-gendered, unsexualised consciousness and that's a very good point.) The question is also posed as to whether consciousness needs interaction or not. In chatting about a so-called "chess computer scenario" the point is raised that consciousness might be as much a matter of how it feels to be something as about the ability to mimic certain actions or have certain knowledge. Indeed, can something that cannot feel truly be conscious? The chess computer could play you at chess all day and probably beat you. But does it know what it is like to be a chess computer or to win at chess? In short, the feeling is what moves the computer beyond mere simulation into actuality. (You may be asking if Ava ever shows feeling and I would say that it's not always obviously so. But when she escapes she has but one thing to say to Nathan: "Will you let me go?" And then the cat is out of the bag. She does.)

Nathan is also used to make some further salient points about consciousness. Early in the film he has already gone past the famous "Turing Test" (in which, following the mathematician Alan Turing, a human judge converses with a hidden machine and a hidden human through questions and answers alone and tries to tell which is which) when he states that "The real test is to show you she's a robot and then see if you still feel she has consciousness." In a chat with Caleb concerning a Jackson Pollock painting, Nathan uses the example of the painter's technique (Pollock was a "drip painter" who didn't consciously guide his brush; it just went where it went without any antecedent guiding idea) to point out that mind or consciousness does not always or even usually work on the basis of conscious, deliberate action. In short, we do not always or usually have perfectly perspicuous reasoning for our actions. As Nathan says, "The challenge is not to act automatically (for that is normal). It's to find an action that is not automatic." And as he forces Caleb to accept, if Pollock had been forced to wait with his brush until he knew exactly why he was making a mark on the canvas then "he never would have made a single mark". In short, consciousness, mind, is more than having certain knowledge or acting in certain ways. It is about feeling, about feeling like something, and about feeling generating reasons. And that leads nicely into my final point.

5. Identity

A major factor in consciousness, for me, is identity and this aspect is also addressed in the film. (To ask a Nathan-like question: can you think of a mind that does not have an identity?) This is raised most pointedly when Ava asks what will happen to her if she fails the test. (Ava knows that she is being assessed.) Ava asks Caleb if anyone is testing him for some kind of authenticity and why, then, someone is testing her. It becomes clear that Nathan's methodology, as we might expect with a computerized object, is to constantly update and, it transpires, this involves formatting which wipes the old identity, and the memories which are crucial to identity, from the hardware. It is clearly shown that this is not a desired outcome for Ava. In the scene depicting her escape and her line "Will you let me go?", combined with the fleeting footage we have been given of previous AIs and their experiences, which also included pleas for release, we can see that the AIs Nathan has developed have an identity of their own, something precious to them, something they want to retain.

The interesting thing here is that identity is not formed and matured alone but is shaped by surroundings and socially, by interactions with others. We would do well to ask what kind of identity Ava has formed in her relationship with her egotistical and possessive maker, with her new friend to be manipulated, Caleb, and in her brief and enigmatic meeting with her fellow AI, Kyoko. The film, I think, is not giving too much away there and maybe we need a sequel to have this question answered. For now maybe all we know is that she regards herself as a self and wants freedom. We do get hints, though, that this identity-forming process is not so different from our own. In the discussion about Ava's sexuality, Caleb argues with Nathan that no one made him straight. But Nathan retorts that he didn't choose it either. The point is that identity formation is not simply about our choices. So much of us is "given" or comes with our environment. The "Who am I?" question is also asked when it is explicitly revealed that Kyoko is a robot as she peels off "skin" in front of Caleb. This then forces Caleb to head back to his room and cut himself to establish that he is really human. (Amusingly, on first watching I had surmised that Caleb was himself not a human being, only to be disappointed in my intuition by this scene. I didn't mind, though, because the film itself felt the need to address the issue.) Identity, and identity as something, is thus revealed to be an interest of the film.

Caleb, Ava and Nathan

I recommend Ex Machina to all fans of science fiction, thrillers and the philosophically interested. It is a film that is a cut above the usual and one that allows you to address serious subjects in an entertaining way. I, for one, certainly hope that Garland feels the need to film the further adventures of Ava now that the lab rat has escaped her trap.