Tuesday, 19 May 2015

Consciousness, Bodies and Future Robot Beings: Thinking Aloud

So yesterday I came back to thinking about consciousness after some weeks away from it and, inevitably, the idea of robots with human consciousness came up again. I was also pointed towards some interesting videos posted on YouTube by the Dalai Lama, in which he and a number of scientists trained in the western, scientific tradition held a conference on mind and consciousness.

But it really all started a couple of days ago with a thought I had. I was sitting there, minding my own business, when suddenly I thought, "Once we can create consciousness, procreation will be obsolete." (This assumes that consciousness is something that can be deliberately created. That is technically an assumption and maybe a very big one.) My point in having this thought was that if we could replicate consciousness, which we might call our awareness that we exist and that there is a world around us, then we could put it (upload it?) into much better robot bodies than our frail, fleshly ones, which come with so many problems simply due to their physical form. One can easily imagine that a carbon fibre or titanium (or carbotanium) body would last much longer, and without many of the downsides of being a human being. (Imagine being a person but not needing to eat or go to the toilet. Imagine not feeling tired or sick.)

So the advantages immediately become apparent. Of course, the thought also expressly encompasses the idea that if you can create consciousness then you can create replacements for people. Imagine you own a factory. Instead of employing 500 real people you employ 500 robots with consciousness. Why wouldn't you do that? At this point you may reply with views about what consciousness is. You might say, for example, that consciousness implies awareness of your surroundings, which implies having opinions about those surroundings. That implies feelings and the formation of attitudes and opinions about things. Maybe the robots don't like working at the factory, just as it's very likely some of the people don't. Maybe, to come at it from another angle, we should regard robots with consciousness as beings with rights. If we could establish that robots, or other creatures, did have a form of consciousness, would that not mean we should give them rights? And what would it mean for human beings if we could deliberately create "better people"?

At this point it becomes critical what we think consciousness actually is. It was suggested to me that, in human beings, electrochemical activity in the brain can "explain" the processing of sense data (which consciousness surely does). Personally I wonder if this "explains" it, as opposed to merely describing it as a process within a brain. One way some scientists discuss the mind or consciousness is to reduce it to the activities of the brain, so that conscious thoughts become brain states, and so on. This is not entirely convincing. It is thought that the mind is related to the brain, but no one knows how, even though some are happy to say that they regard minds as physical attributes like reproduction or breathing. That is, they would say minds are functions of brains. Others, however, aren't so sure. However a mind comes to be, it seems quite safe to say that consciousness is, among its other functions, a machine for generating data. That is, to be conscious is to have awareness of the world around you and to start thinking about it, coming to conclusions or working hypotheses about things. Ironically, this is often "unconsciously" done!

So consciousness, as far as we know, requires a brain. I would ask anyone who doesn't agree to point to a consciousness that exists where there isn't a brain in evidence. But consciousness cannot be reduced to things like data or energy. In this respect I think the recent film Chappie, which I mentioned in previous blogs, gets things wrong. I don't understand how a consciousness could be "recorded" or saved to a hard disk. It doesn't seem very convincing to me, though I understand perfectly how it makes a good fictional story. I think that on this point thinkers get seduced by the power of the computer metaphor. For me consciousness is more than either energy or data; a brain is not simply hardware, nor is consciousness simply (or even) software. If you captured the electrochemical energy in the brain, or had a way to capture all the data your mind possesses, you wouldn't, I think, have captured a consciousness. And this is a question that the scientist Christof Koch poses when he asks whether consciousness is something fundamental in itself or simply an emergent property of suitably complex systems. In other words, he asks whether machine networks could BECOME conscious if they became complex enough, or whether we would need to add some X to make it so. Is consciousness an emergent property of something suitably complex or a fundamental X that comes from we don't know where?

This uncertainty about the nature of consciousness is a major barrier to the very idea of robot consciousness, of course, and it is a moot point when we might reach the level of consciousness in our human experiments with robotics and AI. For one thing is sure: if we decided that robots or other animals did have an awareness of the world around them, even of their own existence or, as Christof Koch always seems to describe consciousness, "what it feels like to be me" (or, I would add, an awareness of yourself as a subject), then that makes all the difference in the world. We regard a person, a dog, a whale or even an insect as different from a table, a chair, a computer or a smartphone because they are ALIVE, and being alive, we think, makes a difference. Consciousness plays a role in this "being aliveness". It changes the way we think about things.

Consciousness, if you reflect on it for even a moment, is a very strange thing. This morning when I woke up I was having a dream. It was a strange dream. But, I ask myself, what was my state of consciousness at the time? Was I aware that I was alive? That I was a human being? That I was me? I don't think I can say that I was. What about in deep sleep, where scientists tell us that brain activity slows right down? Who, in deep sleep, is conscious of anything? So consciousness, it seems, is not simply on or off. We can have different states of consciousness and change from one to another and, here's another important point, not always by overt decision. Basically this just makes me wonder a lot, and I ask why I have this awareness and where it comes from. Perhaps the robots of the future will have the same issues to deal with. Consciousness grows and changes and is fitted to a form of life. Our experience of the world differs even from person to person, let alone from species to species. We do not see the world as a dog does. A conscious robot would not see the world as we, its makers, do either.

In closing, I want to remind people that this subject is not merely technological. There are other issues in play too. Clearly the step to create such beings would be a major one on many fronts. For one thing, I would regard a conscious being as an individual with rights, and maybe others would too. At this point some deep-seated human empathy seems to be in play. There is a scene in the film Chappie where the newly conscious robot (chronologically regarded as a child, since awareness of your surroundings is learned and not simply given) is left to fend for himself and is attacked. I, for one, winced and felt sympathy for the character in the film - even though it was a collection of metal and circuitry. And this makes me ask what humanity is and which beings are worthy of respect. What if a fly had some level of consciousness? (In a lecture I watched, Christof Koch speculated that bees might have some kind of consciousness and explained that it certainly couldn't be ruled out.) Clearly, we need to think thoroughly and deeply about what makes a person a person, and I think consciousness plays a large part in the answer. Besides the scientific and technical challenges of discovering more about, and attempting to re-create, consciousness, there are equally tough moral and philosophical challenges to be faced.

Friday, 20 March 2015

Would You Worry About Robots That Had Free Will?

It's perhaps a scary thought, a very scary thought: an intelligent robot with free will, one making up the rules for itself as it goes along. Think Terminator, right? Or maybe the gunfighter Yul Brynner plays in "Westworld", a defective robot that turns from being a fairground attraction into a super-intelligent machine on a mission to kill you. But, if you think about it, is it really as scary as it seems? After all, you live in a world full of 7 billion humans and they (mostly) have free will as well. Are you huddled in a corner, scared to go outside, because of that? Then why would intelligent robots with free will be any more frightening? What are the unspoken assumptions that drive your decision to regard such robots as either terrifying or no worse than the current situation we find ourselves in? I suggest our thinking here is guided by our general thinking about robots and about free will. It may be that, in both cases, a little reflection clarifies our thinking once you dig beneath the surface.

Take "free will" for example. It is customary to regard free will as the freedom to act on your own recognisance without coercion or pressure from outside sources in any sense. But, when you think about it, free will is not free in any absolute sense at all. Besides the everyday circumstances of your life, which directly affect the choices you can make, there is also your genetic make up to consider. This affects the choices you can make too because it is responsible not just for who you are but who you can be. In short, there is both nature and nurture acting upon you at all times. What's more, you are one tiny piece of a chain of events, a stream of consciousness if you will, that you don't control. Some people would even suggest that things happen the way they do because they have to. Others, who believe in a multiverse, suggest that everything that can possibly happen is happening right now in a billion different versions of all possible worlds. Whether you believe that or not, the point is made that so much more happens in the world every day that you don't control than the tiny amount of things that you do.

And then we turn to robots. Robots are artificial creations. I've recently watched a number of films which toy with the fantasy that robots could become alive. As Number 5 in the film Short Circuit says, "I'm alive!". As creations, robots have a creator. They rely on the creator's programming to function, and this programming delimits all the possibilities for the robot concerned. But there is a stumbling block, and it is called "artificial intelligence". Artificial intelligence, or AI, is like putting a brain inside a robot (a computer, in effect) which can learn and adapt in ways analogous to the human mind. This, it is hoped, allows the robot to begin making its own choices, developing its own thought patterns and ways of choosing. It gives the robot the ability to reason. It is a very moot point, for me at least, whether this would make the robot alive, conscious or self-aware. And would a robot that could reason through AI therefore have free will? Would that depend on the programmer, or could such a robot "transcend its programming"?
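
To make that tension concrete, here is a minimal, purely illustrative sketch in Python (the action names, the reward rule and the numbers are all invented for the example, not taken from any real robot). The programmer fixes the space of things the machine can do, yet which of them it comes to favour is learned from experience rather than written in explicitly:

```python
import random

# The programmer delimits the possibilities: the agent can only ever
# pick from this fixed, hypothetical list of actions.
ACTIONS = ["turn_left", "turn_right", "move_forward"]

class LearningAgent:
    def __init__(self, epsilon=0.1, learning_rate=0.5):
        self.values = {a: 0.0 for a in ACTIONS}  # learned preferences
        self.epsilon = epsilon                   # how often to try something at random
        self.learning_rate = learning_rate

    def choose(self):
        # Mostly exploit what has been learned; occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # Nudge the stored value for this action toward the reward received.
        self.values[action] += self.learning_rate * (reward - self.values[action])

agent = LearningAgent()
for _ in range(100):
    action = agent.choose()
    reward = 1.0 if action == "move_forward" else 0.0  # a made-up environment
    agent.learn(action, reward)

print(agent.values)  # "move_forward" now dominates the agent's choices
```

After a hundred trials the agent almost always moves forward - behaviour nobody wrote as an explicit instruction, yet behaviour that never escapes the three actions its maker allowed. Whether scaling this sort of thing up could ever amount to "transcending your programming" is exactly the open question.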

Well, as I've already suggested, human free will is not really free. Human free will is constrained by many factors. But we can still call it free because it is the only sort of free will we could ever have anyway. Human beings are fallible and contingent beings. They are not gods and cannot stand outside the stream of events to get a view that isn't a result of it or that will not have consequences for them further down the line. So, in this respect, we could not say that a robot couldn't have free will because it would be reliant on programming or constrained by outside things - because all free will is constrained anyway. Discussing the various types of constraint and their impact is another discussion, though. Here it is enough to point out that free will isn't free, whether you are a human or an intelligent robot. Being programmed could, in fact, act as the very constraint which makes robot free will possible.

It occurs to me as I write this blog that one difference between humans and robots is culture. Humans have culture, and even many micro-cultures, and these greatly influence human thinking and action. Robots, on the other hand, have no culture, because culture relies on sociability and on being able to think and feel for yourself. Being able to reason, compare and pass imaginative, artistic judgments is part of this too. Again, in the film Short Circuit, the scientist played by Steve Guttenberg refuses to believe that Number 5 is alive and so tries to trick him. He gives him a piece of paper with writing on it and a red smudge along the fold, and asks the robot to describe it. Number 5 begins by being very unimaginative and precise, describing the paper's chemical composition and things like that. The scientist laughs, thinking he has caught the robot out. But then Number 5 begins to describe the red smudge, saying it looks like a butterfly or a flower, and flights of artistic fancy take over. The scientist becomes convinced that Number 5 is alive. I do not know if robots will ever be created that can think artistically or judge which of two things is more beautiful, but I know that human beings can. And this common bond with other people that forms into culture is yet another background which free will needs in order to operate.


I do not think there is any more reason to worry about a robot with free will than there is to worry about a person with free will. It is not the freedom to do anything whatsoever that is scary, anyway, because that freedom never really exists. All choices are made against the backgrounds that make us and shape us, in endless connections we could never count or quantify. And, what's more, our thinking is not so much done by us in a deliberative way as it is simply part of our make-up. In this respect we act, perhaps, more like a computer, in that we think and calculate just because that is what, once "switched on" with life, we do. "More input!" as Number 5 said in Short Circuit. This is why we talk of thoughts occurring to us rather than of having to sit down and deliberate to produce them in the first place. Indeed, it is still a mystery exactly how these things happen at all, but we can say that thoughts just occur to us (without us seemingly doing anything but being a normal, living human being) as much as, if not more than, we sit down and deliberately create them. We breathe without thinking "I need to breathe" and we think without thinking "I need to think".

So, all my thinking these past few days about robots has, with nearly every thought I've had, forced me into thinking ever more closely about what it is to be human. I imagine the robot CHAPPiE, from the film of the same name, going from a machine made to look vaguely human to having that consciousness.dat program loaded into its memory for the first time. I imagine consciousness flooding the circuitry, and I imagine the same thing happening to a human: one minute you are nothing and the next this massive rush of awareness floods in, a thing you didn't even have a second before. To be honest, I am not sure how anything could survive that rush of consciousness. It is just such an overwhelmingly profound thing. I try to imagine my first moments as a baby emerging into the world. Of course, I can't remember what it was like. But I understand most babies cry, and that makes sense to me. In CHAPPiE the robot is played as a child on the basis, I suppose, of human analogy. But imagine you had just been given consciousness for the first time, and assume you managed to get over that initial hurdle of dealing with the rush: how would you grow and develop then? What would your experience be like? Would the self-awareness be overpowering? (As someone who suffers from mental illness, my self-awareness at times can be totally debilitating.) We traditionally protect children and educate them, recognising that they need time to grow into their skins, as it were. Would a robot be any different?

My thinking about robots has led to lots of questions and few answers. I write these blogs not as any kind of expert but merely as a thoughtful person. I think one conclusion I have reached is that what separates humans from all other beings, natural or artificial, at this point is SELF-AWARENESS. Maybe you would call this consciousness too. I'm not yet sure how we could meaningfully talk of an artificially intelligent robot having self-awareness. That's one that will require more thought. But we know, or at least assume, that we are the only natural animal on this planet, or even in the universe that we are aware of, that knows it is alive. Dogs don't know they are alive. Neither do whales, flies, fish, etc. But we do. And being self-aware and having a consciousness, being reasoning beings, is a lot of what makes us human. In the film AI, directed by Steven Spielberg, the opening scene shows the holy grail of robot builders to be a robot that can love. I wonder about this, though. I like dogs and I've been privileged to own a few. I've cuddled and snuggled with them and that feels very like love. But, of course, our problem in all these things is that we are human. We are anthropocentric. We see with human eyes. This, indeed, is our limitation. And so we interpret the actions of animals in human ways. Can animals love? I don't know. But it looks a bit like it. In some of the robot films I have watched, the characters develop affection for variously convincing humanoid-shaped lumps of metal. I found that more difficult to swallow. But we are primed to recognise and respond to cuteness. Why do you think the Internet is full of cat pictures? So the question remains: could we build an intelligent robot that could mimic all the triggers in our very human minds, that could convince us it was alive, self-aware, conscious? After all, it wouldn't need to actually BE any of these things. It would just need to get us to respond AS IF IT WAS!
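
How little that might take is sobering. Below is a toy keyword-matching "chatbot" in Python, written in the spirit of Joseph Weizenbaum's famous 1960s program ELIZA (all the patterns and replies here are invented for the example). It has nothing remotely like awareness behind it, yet Weizenbaum found that people readily responded to this kind of trick as if there were someone there:

```python
import random

# A handful of canned, keyword-triggered replies. Nothing is understood;
# the program only pattern-matches and echoes social noises back at us.
RESPONSES = {
    "sad": "I'm sorry you feel that way. Do you want to tell me more?",
    "alive": "Of course I'm alive! Don't you believe me?",
    "love": "That means a lot to me.",
}
FALLBACKS = ["Go on.", "Why do you say that?", "How does that make you feel?"]

def reply(user_text: str) -> str:
    lowered = user_text.lower()
    for keyword, response in RESPONSES.items():
        if keyword in lowered:
            return response
    return random.choice(FALLBACKS)

print(reply("Are you alive?"))    # -> "Of course I'm alive! Don't you believe me?"
print(reply("I feel sad today"))  # -> a sympathetic-sounding canned reply
```

A few lines of pattern-matching obviously aren't consciousness, but they are enough to pull on the AS IF lever, which is rather the point.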

My next blog will ask: Are human beings robots?

With this blog I'm releasing an album of music made as I thought about intelligent robots and used that to help me think about human beings. It's called ROBOT and it's part 8 of my Human/Being series of albums. You can listen to it HERE!

Wednesday, 18 March 2015

Humans and Robots: Are They So Different?

Today I watched the film "Chappie" from Neill Blomkamp, the South African director who also gave us District 9 and Elysium. Without going into too much detail on a film released only two weeks ago, it's about a military robot which gets damaged and is sent to be destroyed, but is rescued at the last moment when its whizzkid inventor decides to try out his new AI program on it (consciousness.dat). What we get is a robot that becomes self-aware and develops a sense of personhood. For example, it realises that things, including it, can die (in its case, when its battery runs out).

Of course, the idea of robot beings is not new. It is liberally salted throughout the history of the science fiction canon. So whether you want to talk about Terminators (The Terminator), Daleks (Doctor Who), Replicants (Blade Runner) or Transformers (Transformers), the idea that mechanical or technological things can think and feel like us (and sometimes not like us, or "better" than us) is not a new one. Within another six weeks we will get one more, as the latest Avengers film is based around fighting the powerful AI robot Ultron.

Watching Chappie raised a lot of issues for me. You will know, if you have been following this blog or my music recently, that questions of what it is to be human or to have "being" have been occupying my mind. Chappie is a film which deliberately weaves such questions into its narrative, and we are expressly meant to ask ourselves how we should regard this character as we watch, especially as various things happen to him or as he has various decisions to make. Is he a machine or is he becoming a person? What's the difference between the two? The ending of the film, which I won't give away here, leads to lots more questions about what makes a being alive and what makes beings worthy of respect. These are very important questions which lead into all sorts of other areas, such as human and animal rights, and more philosophical questions such as "what is it to be a person?". Can something made entirely of metal be a person? If not, are we saying that only things made of flesh and bone can have personhood?

I can't help but be fascinated by these things. For example, the film raises the question of whether a consciousness could be transferred from one place to another. Would you still be the same "person" in that case? That, in turn, leads you to ask what a person is. Is it reducible to a "consciousness"? Aren't beings more than brain or energy patterns? Aren't beings actually physical things too (even a singular unity of components), and doesn't it matter which physical thing you are as to whether you are you or not? Aren't you, as a person, tied to your particular body as well? The mind or consciousness is not an independent thing free of all physical restraints. Each one is unique to its physical host. This idea comes to the fore once we start comparing robots, deliberately created and programmed entities, usually built on a common template, with people. The analogy is often made in both directions, so that people are seen as highly complicated computer programs and robots are seen as things striving to be something like us - especially when AI enters the equation. But could a robot powered by AI ever actually be "like a human"? Are robots and humans just less and more complicated versions of the same thing, or is the analogy only good at the linguistic level, something not to be pushed further?
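
Programmers actually meet a faint shadow of this transfer puzzle all the time, and it is worth making concrete - strictly as an analogy, not as a claim about how minds work. In Python, a deep copy of a data structure is equal to the original in every measurable respect, and yet it is not the same object (the "person" below is, of course, a made-up stand-in):

```python
import copy

# A hypothetical "person" reduced to pure data, for the sake of argument.
original = {
    "memories": ["first day of school", "a red smudge on a piece of paper"],
    "traits": {"curious": True, "fallible": True},
}

transferred = copy.deepcopy(original)  # the "upload" to a new body

print(transferred == original)  # True  -- identical in all its contents
print(transferred is original)  # False -- nevertheless a different thing
```

If persons were nothing but patterns of data, the transferred copy would seem to have everything that matters; if being THIS particular physical thing matters too, then something is not carried across. The little equality/identity distinction doesn't settle the question, but it shows exactly where the question bites.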

Besides raising philosophical questions of this kind, it's also a minefield of language. Chappie would be a "him" - indicating a person. Characters like Bumblebee in Transformers or Roy Batty in Blade Runner are also regarded as living beings worthy of dignity, life and respect. And yet these are all just more or less complicated forms of machine. They are metal and circuitry. Their emotions are programs. They are responding, albeit in very complicated ways, as they are programmed to respond. And yet we use human language of them, and the film-makers try to trick us into having human emotions about them and seeing them "as people". But none of these things is a person. It's a machine. What does it matter if we destroy it? We can't "kill" it because it was never really "alive", right? An "on/off" switch is not the same thing as being dead, surely? When Chappie talks about "dying" in the film, it is because the military robot he is has a battery life of five days. He equates running out of power with being dead. If you were a self-aware machine, I suppose this would very much be an existential issue for you. (Roy Batty, of course, is doing what he does in Blade Runner because replicants have a hard-wired lifespan of four years.) But then turn it the other way. Aren't human beings biological machines that need fuel to turn into energy so that they can function? Isn't that really just the same thing?

There are just so many questions here. Here's one: what is a person? The question matters because we treat things we regard as like us differently from things we don't. Animal rights campaigners think that we should protect animals from harm and abuse because, in a number of cases, we suggest they can think and feel in ways analogous to ours. Some would say that if something can feel pain then it should be protected from having to suffer it. That seems to be "programmed in" to us: we have an impulse against letting things be hurt, and a protecting instinct. And yet there is something here that we are forgetting about human beings, something that sets them apart from both animals and most intelligent robots as science fiction portrays them. This is that human beings can deliberately do things that will harm them. Human beings can set out to do things that are dangerous to themselves. Now animals, most would say, are not capable of doing either good or bad, because we do not judge them self-aware enough to have a conscience and so be capable of moral judgment, of weighing up good and bad choices. We do not credit them with the intelligence to make intelligent, reasoned decisions. And most robots or AIs that have been imagined have protocols about protecting not only themselves from harm but (usually) humans too. Thus we often get the "programming gone wrong" stories where robots become killing machines. But the point there is that this was never the intention when these things were made.

So human beings are not like either animals or artificial lifeforms in this respect because, to be blunt, human beings can be stupid. They can harm themselves; they can make bad choices. And that seems to be an irreducible part of being a human being: the capacity for stupidity. But humans are also individuals. We differentiate ourselves from one another and greatly value that separation. How different would one robot with AI be from another, identical, robot with an identical AI? It's a question to think about. What if you could collect up all that you are in your mind - your consciousness, thoughts, feelings, memories - and transfer it to a new body, one that would be much longer-lasting? Would you still be you, or would something irreducible about you have been taken away? Would you actually have been changed and, if so, so what? This question is very pertinent to me as I suffer from mental illness which, the more it is studied, is increasingly seen as having hereditary components. My mother suffers from a similar thing, as does her twin sister. It also seems as if my brother's son might be developing something similar. So the body I have, and the DNA that makes it up, is something very personal to me. It makes me who I am and has quite literally shaped my experience of life and my sense of identity. Changing my body would quite literally make me a different person, one without certain genetic or biological components. Wouldn't it?

So many questions. But these are only my initial thoughts on the subject and I'm sure they will be ongoing. You can expect that I will return to this theme again soon. Thanks for reading!

You can hear my music made whilst I was thinking about what it is to be human here!