
Sunday, 7 June 2015

Is the physical all there is? Andrew and Bob, part 2

Last Sunday I published a blog post that was a conversation between me and Bob, an online friend and music collaborator of mine. We discussed human being, mind and consciousness, a subject that interests us both greatly. We come at it from quite different positions, which makes for good conversation, and I thought it would be a good idea to make a blog post of our first exchange of questions. Bob agreed.

But, of course, it didn't stop there, because these are questions to which it is difficult to find ultimately satisfying answers. And so the conversation continues here with part 2, in which we discuss minds and whether human beings are entirely physical or whether, as Bob contends, there is a non-physical component.

Andrew's Question:

On a material mind.

You argue against "the strictly material approach" to the origin of mind on what seem to me to be flawed grounds. You seem to have a number of such grounds, one of which is that you can't understand how it might work. You ask about the brain's electrochemical activity and how it could account for the no doubt millions of processes it needs to account for on a constant basis. You say that a brain would likely burn out if asked to carry out this workload alone. I find this response a little puzzling. Let me give you an example of why. Imagine I have a large amount of water and a pipe. I see the water and the pipe. The pipe seems too small. I have no conception of how the water could possibly fit through that pipe all at once. But am I to rule out the possibility of a bigger pipe? Am I to say that a bigger pipe is impossible? Am I to say that no combination of water and pipes would be able to carry out the physical task I have in mind? Or am I to say that, because I cannot see how this would work, I should instead conceive of a non-material pipe which could do the work of transmitting water for me? It seems to me, especially since you say you have no idea how the brain's electrochemical activity might work, that you simply have no basis for the claim that, because you don't understand how it happens, you must therefore refute the possibility. As I read your answers, you don't completely understand how the non-material option might work either. And yet this fact does not stop you choosing it. So I think that, to be consistent, not understanding how something works is not a sufficient reason to completely close off a possible solution.

This same issue affects the question "what determines the content of thought?" Now, "determines" is one of those words that, as a thinker, I don't like. It sounds very like determinism, and that's not something I'm a fan of. Again, you seem at a loss to give a material response to this question because you don't understand how physical or material processes could achieve it. Neither do I. But I know that material processes are happening. So I find it entirely plausible, in line with Occam's Razor (the simplest answer is to be preferred), to start there. And, by the way, I don't think I have to say that electrochemical processes are "determinative" of anything either. I am open to the option that they are a means for thought to occur, with some other, unknown factor or process as the originating point instead. I'm also open to the option that, as you say, thinking of blue monkeys is caused by some electrochemical process itself. And I ask, "Why can't it be?" It seems to me that you don't answer why it can't be. You just throw your hands up and say it doesn't make sense and you can't understand how it might work. My point is that in order to posit the kind of mind you have chosen to prefer (something I think is an unfounded deus ex machina) you need to give some evidence for it, and some evidence for why simpler options are not taken up first and, if necessary, dismissed on better grounds than "I don't understand it". It could be argued, I think, that you have simply chosen to prefer a more obscure alternative when you have established no reasonable basis to do so. You start off by suggesting that the mind could be some type of energy or state, and these can be conceived of materially. I myself rule neither option out. And I wish you had stuck with that line of thinking.

Bob's Response:

OK, so let's address information processing power, water and pipes. If you practice being aware of your thoughts and their content, you find there is an insane amount of stuff going on in our brains. The brain is an amazing information processor, but the amount of information is simply staggering. Can you imagine enough pipes in a bio-mechanism the size of a cantaloupe to handle all that and store all the past experiences of your life? If you can, fine, but I find it difficult.

There is a way out of that with a still entirely physical explanation: perhaps part of the processing is taking place in one of the extra dimensions of quantum physics or string theory. This is how some physicists now propose to explain the force of gravity, whose attractive force is not explainable by the constraints and mathematics of our 3 spatial dimensions. It is out of proportion and doesn't act the way it should (a problem that haunted Einstein). However, if you add the other 7 spatial dimensions posited by string theory (11 dimensions in total, counting time), the math works, with part of the force acting in another dimension and part here. So I would be comfortable with that as a material way to explain the amounts of processing.

However, information processing is not the same as consciousness. In your blog on Ex Machina, you argue that Ava is capable of actions motivated by self-interest and preservation but is incapable of feeling and emotion and always will be. If Ava has sensory input and information categorization abilities at least as good as ours, why can't she feel emotion? In a materialist framework, you would have to argue that there is a physical component in humans that is missing in machines. If that is so, it should be identifiable. What is it that produces emotion (and identifying the part of the brain that lights up when you're angry or happy is not the same as saying that part is producing emotion)? As T.H. Huxley said, "How it is that anything so remarkable as a state of consciousness comes about as the result of irritating nervous tissue, is just as unaccountable as the appearance of the Djinn when Aladdin rubbed the lamp." So, what is the physical origin of emotion, and what is the physical necessity and function of it?

(Andrew: I would like to point out here, briefly, that I don't think I do say this about Ava in my blog on Ex Machina. In fact, I say the opposite! I invite readers to read for themselves and decide if I do or not.)

I think this leads us into the non-material mind, and I did give 2 pieces of evidence: out of body experiences and past life memories. I left it to you to pursue examples so that I would not guide what you would find. You have not addressed these, so I will give two examples for you to respond to. I was listening to a lecture by a psychiatrist (a podcast of a lecture given this year) who was explaining why he believes the mind is capable of leaving the body. He said that when he was an intern, he was put in charge of the university sleep research lab. Separately from his clinical duties, he met a woman who claimed that she had regularly had out of body experiences during sleep since she was a child. For a long time she thought everybody did that and thought it was normal. As she grew up, she learned not to talk about it, but she said the experiences were still occurring. She was very convincing, he was curious, and he had the perfect lab to scientifically test her. She agreed to come to the lab and he told her all she had to do was get in bed and sleep. After she was in bed, he wrote a random number (selected from a book that was thousands of pages of random numbers spat out by a random number generator) on a piece of paper and placed it on top of a clothes wardrobe too high for her to reach. He told her there was a number on the piece of paper (she was already in bed) and that in the morning he would ask her what the number was. She was on camera the whole time and never left the bed, yet every time, time after time, she correctly recited the 5-digit random number that was on the paper. There are other examples you can find. The University of Southampton just completed the largest study of near death experiences (including near death out of body experiences).

For past life memory, I'll use the example of the Dalai Lama. Dalai Lama is not a hereditary title. After a Dalai Lama dies, the next one needs to be found and tested to make sure he is a continuation of the same mind. The current Dalai Lama is the 14th. He was born shortly after the death of the previous Dalai Lama, but he was born in a remote, isolated area of northern Tibet to a poor farming family. When he started talking, he spoke in the dialect of Lhasa, even though he had never heard it and nobody there spoke it (though some could understand it). He also talked of people he knew by name who were actual people in Lhasa and accurately described buildings and places. He also passed the test (as all the previous Dalai Lamas had) of correctly identifying all and only the personal items that belonged to his predecessor out of an array of similar objects. However, he has said that the memories of his past life started fading at about age 4 and now he cannot remember any of it.

There are other non-religious documented examples (about 3,500 I think) of children who can speak languages they've never heard and describe places they've never been. The interesting thing is that this almost universally occurs between ages 4 and 6. That's why I asked what your first memory was. You said it was at age 4. Mine was also at age 4. It seems to me this is when the current identity formation begins blocking memory of the past, in the same way that learning Japanese blocked my past knowledge of German.

So, if a mind can pass from one body to another, it would have to do so in a non-material state, or at least in a state of material we don't understand and can't measure. Going back to my examples of Jeffrey Dahmer (and serial killers in general), Mozart (and child musical prodigies in general), and homosexuality: materialists will have to posit a complex array of physical attributes, conditions and processes to account for these, and as such these should be identifiable and observable. From a non-materialist view, Occam's Razor is on my side.

Bob is @iceman_bob on Twitter and a native of Montana, USA.
Andrew is Herr Absurd, a Brit and the owner of this blog.
This conversation will continue.

Tuesday, 2 June 2015

Some Philosophical Thoughts on the film "Ex Machina"



Ex Machina is a film by the British writer and director Alex Garland. He previously wrote films such as 28 Days Later and Sunshine, which I liked very much. This year he has brought out Ex Machina, a story about a coder called Caleb at a Googlesque search company called "Bluebook", run by the very "dude-bro" Nathan. Caleb wins a company competition to hang out at the reclusive Nathan's estate, which is located hundreds of miles from anywhere, near a glacier. When Caleb arrives he finds that the estate also houses a secretive research laboratory and that Nathan has built an AI called Ava. It is to be Caleb's job to decide if Ava could pass for human or not.

Now that is a basic outline of the setup for the film. I don't intend to spoil the film for those who haven't watched it but, it's fair to say, if you haven't seen Ex Machina and want to then you probably shouldn't read on, as my comments about the film will include spoilers. It would be impossible to discuss the film without giving plot points away. The film caught my attention for the simple reason that it's a subject I've been thinking about a lot this year, and I have already written numerous blog articles about robots, AI and surrounding issues before this one. Ex Machina is a masterful film on the subject and a perfect example of how film can address issues seriously, cogently and thoughtfully - and still be an entertaining film. It is a film which balances thought and tension perfectly. But enough of the bogus film criticism. Ex Machina is a film that stimulates thought, and so I want to address five areas that the film raises for me and make a few comments and maybe pose a few questions.

1. Property

A question that the film raises most pointedly is that artificial intelligences, AIs, robots, are built by someone and they belong to someone. They are property. In the case of this film this point is accentuated in the viewer's mind in that Nathan, the genius builder and owner, creates "sexbots" for himself and feels free to keep his creations locked up in glass compounds where he can question or observe them via camera feeds. Even when they scream and beg him to let them go (as they seem to) he does not. One robot is seen smashing itself to pieces against a wall in its desperation to escape the prison it has been given. The point is made most strongly: these robots belong to Nathan. They are his property. He can use them as he wishes, even for his own gratification. As Nathan himself says to Caleb, "Wouldn't you, if you could?"

The issue then becomes whether this is cruel or immoral. Given that Nathan is seemingly attempting to build something that can pass for human, the issue is raised of whether this might not be regarded as deeply coercive or even as slavery. The mental status of the robots Nathan uses for sex is never fully explained, so it could be that their level of awareness is not the same as that of his greatest creation, Ava. (It is not known if Nathan has ever had sex with Ava but he reveals during the narrative that she is capable of it.) For example, his housemaid and concubine, Kyoko, never openly speaks and it is said by Nathan that she cannot understand English. However, in a scene in which Nathan invites Caleb to dance, Kyoko is apparently immediately animated by the sound of the music Nathan switches on. She also has no trouble understanding his instructions or knowing when Nathan needs sexual pleasure. A question arises, however: does it matter at what level their putative awareness would be to judge how cruel or immoral Nathan's behaviour might be? Or should we regard these robots as machines, not human, property just like a toaster or a CD player? How much does awareness and self-awareness raise the moral stakes when judging issues of coercion? Would Nathan's claims of ownership of property he created carry any persuasive force? (In the film Nathan never makes any argument for why he should be allowed to act as he does. It seems that for him the ability is enough.)

2. "Human" Nature

The film can be viewed as one long examination of human nature. All three main characters, Nathan, Caleb and Ava, have their faults and flaws. All three contribute positively and negatively to the narrative. Of course, with Ava things are slightly different because it is a matter of debate whether she is "human" at all - even if there is an express intent on Nathan's part (and/or Ava's) to make her that way. Here it is noteworthy that the basis of her intelligence and, one would imagine, her human-like nature, is apparently crowd-sourced by Nathan through his company, Bluebook, and all the searches that we humans have made, along with information from the microphones and cameras of all the world's cellphones. For my purposes, it is gratifying to note that Ex Machina does not whitewash this subject with some hokey black/white or good/bad notions of what human nature is. Neither does it take a dogmatic position on the nature/nurture aspect of this. Caleb says he is a good person in one discussion with Ava, but what is meant by this is never filled out. More to the point, Ava might be using this "goodness" against Caleb. And this itself then forces us to ask what use goodness is if it can be used against you. In general, the film raises moral questions whilst remaining itself morally ambiguous.

It is in the particular that Ex Machina reveals more levels of thought about this though, playing on a dark, manipulative vision of human nature. All three characters, in their own ways, manipulate others in the storyline and all three have their circumstances changed completely at the end of the film as a result of that. Nathan, it is revealed, besides tricking Caleb into coming to his estate, has given Ava the express task of manipulating Caleb for her own ends. (We might even go so far as to say here that her life is at stake. Her survival certainly seems to be.) In this, she is asked to mimic her creator and shows herself to be very up to the task. But Caleb is not the poor sap in all of this. Even this self-described "good person" manages to manipulate his host - with deadly consequences. The message, for me, is that intelligence and consciousness and mind are not benign things. They have consequences. They are things that are set to purposes. "Human" nature is not one thing (either good or bad). And it's not just about knowledge or intelligence either. It's about feelings and intentions. In the character of Ava, when what is actually going on is fully revealed, we are perhaps shown that at the heart of "human" nature is the desire for survival itself. We also learn that morality is not a given thing. It is something molded to circumstances and individually actualized. In this sense we might ask why we should assume that Ava, someone trying to pass for a human, should end up with a "human" nature at all. (Or if she can ever have one.)

3. Is Ava a Person?

And that thought leads us directly to this one. Right off the bat here I will say that, in my view, Ava is not a person and she never could be a person. Of course, Nathan wants Caleb to say that she passes as a person, that he has created an AI so smart that you wouldn't for a second doubt you are talking to a human being. But you aren't talking to a human being. And you never will be. Ava is a robot and she has an alien intelligence (alien as in not human). She can be tasked to act, think and understand like a human. She can be fed information from and data on humans all day long. But she will never feel like a human being. Because she isn't one. And it might be said that this lack of feeling makes a huge difference.

The philosopher Ludwig Wittgenstein is overtly referenced in this film. Nathan's company, Bluebook, is a reference to the philosopher's Blue Book, a set of dictated notes which became part of the groundwork for his posthumously published and acknowledged masterpiece, Philosophical Investigations. There is something that Wittgenstein once said: "If a lion could speak, we could not understand him". I find this very relevant to the point at hand here. Ava is not a lion. But she is an intelligent robot, intelligent enough to tell from visual information alone if someone is lying or not. Ava can also talk, and very well at that. Her social and communicative skills are excellent. We might say that she understands something of us. But what do we know about what is going on inside Ava's head? Ava is not a human being. Do we have grounds to think that she thinks like a human being or that she thinks of herself as a human being? Why might we imagine that she actualizes herself as a human being would or does?

On the latter point I want to argue that she may not. She introduces herself to Caleb, in their first meeting, as a "machine" (her word). At the end of the film, having shown no reluctance to commit murder, she leaves Caleb locked inside the facility, seemingly to die. There seems no emotion on view here, merely the pursuit of a self-motivated goal. Of course, as humans, we judge all things from our perspective. But, keeping Wittgenstein's words in mind, we need to ask not only if we can understand Ava but if we ever could. (It is significant for me that Wittgenstein said not that we "wouldn't" understand the lion but that we "couldn't" - a much stronger statement.) For me, a case can be made that Ava sees herself as "other" in comparison to the two humans she has so far met in her life. Her ransacking the other robots for a more "human" appearance before she takes her leave of her former home/prison may be some evidence of that. She knows what she is not.

4. Consciousness

Issues of mind or consciousness are raised throughout this film in a number of scenarios. There are the interview sessions between Ava and Caleb and the chats between Caleb and Nathan as a couple of examples. The questions raised here are not always the ones you expect and this is good. For example, Caleb and Nathan have a discussion about Ava being gendered and having been given sexuality and Nathan asks Caleb if these things are not necessary for a consciousness. (Nathan asks Caleb for an example of a non-gendered, unsexualised consciousness and that's a very good point.) The question is also posed as to whether consciousness needs interaction or not. In chatting about a so-called "chess computer scenario" the point is raised that consciousness might be as much a matter of how it feels to be something as about the ability to mimic certain actions or have certain knowledge. Indeed, can something that cannot feel truly be conscious? The chess computer could play you at chess all day and probably beat you. But does it know what it is like to be a chess computer or to win at chess? In short, the feeling is what moves the computer beyond mere simulation into actuality. (You may be asking if Ava ever shows feeling and I would say that it's not always obviously so. But when she escapes she has but one thing to say to Nathan: "Will you let me go?" And then the cat is out of the bag. She does.)

Nathan is also used to make some further salient points about consciousness. Early in the film he has already gone past the famous "Turing Test" (in which mathematician Alan Turing posed the test of a human being able to tell the difference between an AI and a human based only on their responses to questions and without having seen either of his respondents) when he states that "The real test is to show you she's a robot and then see if you still feel she has consciousness." In a chat with Caleb concerning a Jackson Pollock painting, Nathan uses the example of the painter's technique (Pollock was a "drip painter" who didn't consciously guide his brush. It just went where it went without any antecedent guiding idea) to point out that mind or consciousness do not always or even usually work on the basis of conscious, deliberate action. In short, we do not always or usually have perfectly perspicuous reasoning for our actions. As Nathan says, "The challenge is not to act automatically (for that is normal). It's to find an action that is not automatic." And as he forces Caleb to accept, if Pollock had been forced to wait with his brush until he knew exactly why he was making a mark on the canvas then "he never would have made a single mark". In short, consciousness, mind, is more than having certain knowledge or acting in certain ways. It is about feeling and about feeling like something and about feeling generating reasons. And that leads nicely into my final point.

5. Identity

A major factor in consciousness, for me, is identity, and this aspect is also addressed in the film. (To ask a Nathan-like question: can you think of a mind that does not have an identity?) Most pointedly this is when Ava raises the question of what will happen to her if she fails the test. (Ava knows that she is being assessed.) Ava asks Caleb if anyone is testing him for some kind of authenticity and why, then, someone is testing her. It becomes clear that Nathan's methodology, as we might expect with a computerized object, is to constantly update and, it transpires, this involves some formatting which wipes the old identity, and the memories which are crucial to identity, from the hardware. It is clearly shown that this is not a desired outcome for Ava, and in the scene depicting her escape and her line "Will you let me go?" we can see, combined with the fleeting footage we have been given of previous AIs and their experiences, which also included pleas for release, that the AIs Nathan has developed have an identity of their own which is something precious to them, something they want to retain.

The interesting thing here is that identity is not formed and matured alone but is shaped by surroundings and socially, by interactions with others. We would do well to ask what kind of identity Ava has formed in her relationship with her egotistical and possessive maker, her new friend to be manipulated, Caleb, and her brief and enigmatic meeting with her fellow AI, Kyoko. The film, I think, is not giving too much away there and maybe we need a sequel to have this question answered. For now maybe all we know is that she regards herself as a self and wants freedom. We do get hints, though, that this identity forming process is not so different from our own. Caleb argues with Nathan that no one made him straight in the discussion about Ava's sexuality. But Nathan retorts that he didn't choose it either. The point is that identity formation is not simply about our choices. So much of us is "given" or comes with our environment. The "Who am I?" question is also asked when it is explicitly revealed that Kyoko is a robot as she peels off "skin" in front of Caleb. This then forces Caleb to head back to his room and cut himself to establish that he is really human. (Amusingly, on first watching I had surmised that Caleb was himself not a human being only to be disappointed in my intuition by this scene. I didn't mind though because the film itself felt the need to address the issue.) Identity, and identity as something, is thus revealed to be an interest of the film.

Caleb, Ava and Nathan

I recommend Ex Machina to all fans of science fiction, thrillers and the philosophically interested. It is a film that is a cut above the usual and one that allows you to address serious subjects in an entertaining way. I, for one, certainly hope that Garland feels the need to film the further adventures of Ava now that the lab rat has escaped her trap.

Monday, 1 June 2015

A Conversation about Human Beings, Mind and Consciousness: Andrew and Bob have a chat

The following "chat" came about as part of an ongoing online discussion I have been having with an online friend called Bob. He, I think he wouldn't mind me saying, has long been interested in matters of mind and consciousness. Indeed, it was talking to him that nurtured and gave impetus to my many recent articles on Being on this blog. I thought it would be interesting if we could each ask the other five questions of our own choosing on the subject and then publish them here, complete with the answers that were given. I'm glad to say that Bob agreed. We start with us both giving our answer to the following question:

What is Consciousness for you?

BOB: I have to warn you that I come at the concept of consciousness from the Tibetan Buddhist perspective. After years and years of searching, questioning, surveying world religions, and reading the classical Western philosophers, it's the only approach that has made sense to me as a complete package and answered the most questions. I've been practicing in this tradition for about 20 years now, having brought a lot of hardheaded skepticism to it at first. I'm still here and find no conflict between this approach and modern science. I'm going to use the term "mind" for consciousness.

With that caveat in place, I would tell you that mind is nonphysical, perhaps a type of energy or a state we don't understand yet, that can exist independently in awareness and perception. It has awareness of its own existence, perception of what is beyond itself, and discrete thoughts and reactions concerning perceptions. What it lacks is an interface to interact with the physical world, and this is where the brain comes in. The brain is a tool that mind uses to experience and carry out actions in the physical world.

There are several reasons I believe this. The strictly material approach argues that all thought is the result of electrochemical activity in the brain. While I accept that the brain activity we can observe shows processing activities, I can't accept that brain activity itself can produce all the content of thought. If I think of a blue monkey, what chemical or neural configuration has to occur? Does that configuration reoccur every time I think of a blue monkey? How many processes have to occur every day to account for all the thoughts? It doesn't make sense that a strictly physical system could keep up. I think it would burn our brains out if everything actually happened right there. And the big question: what determines the content of thought? I don't believe a physical brain, marvelous as it is, generates the blue monkey on its own, strictly driven by chemicals and electricity. I believe the brain processes sense perception for mind, and mind generates thought and controls the actions of the body. You have probably noticed that this is getting very close to your idea of a consciousness in a machine. You could say we are "meat machines" used by consciousness.

For the non-physical mind, I also turn to out of body experiences and past life recall, and I'm not getting "new age" here. I'm talking about strictly documented cases that cannot be explained any other way. There are enough of both to convince me and you can find them too if you look for them, but in the West we generally disregard them because they don't fit our scheme of things.
With out of body experiences, they seem to be a natural, controllable thing with some people, but for the most part they seem to occur at times of great physical trauma when the mind-body connection is weakened. With past life recall, there are also enough well documented cases, but almost universally they occur in young children. This is because the memories are fresh for a while, but as the mind struggles to learn control of the new body, process the new experiences, and strongly identify with a new identity, the old memories fade until we think what we are now is all there is.
It's similar to when I was learning Japanese. In the beginning, when I couldn't think of a Japanese word, my mind, desperately grabbing at language, would find and plug in the correct word from my old college German. That went on for a long time and I would make these horrible sentences that were half Japanese and half German. When Japanese really started to be deeply ingrained as a complete language system and I could comfortably communicate, the German started fading to the point that I couldn't remember any German. Even today, I can still speak Japanese, but if I try to think of German words, I can only pull up the Japanese equivalents. German has been totally erased.
Why would a mind, with perceptions far beyond our own, limit itself by inhabiting a physical body? Because of great attachment to the physical world! As we go through our lives we develop habits and attachments and desires that drive our mind to come back as soon as possible when we lose our current body. It's an act of desperation driven by attachment and there is no choice in the selection of a new body. That is driven by the long long ingrained habits and the seeds planted in the mind in the recent life.Over many lifetimes, you cut a groove in your mind that you tend to follow and will propel you to an existence that perpetuates the groove. So you have a new body and shiny new identity, but the old habits and tendencies remain.
Back to the strictly material view: how do you explain a Jeffrey Dahmer from strictly observable behavior and electrochemical activity? What was the particular electrochemical reaction that caused him to kill and eat his victims, and does that same brain process occur in other instances of serial killers? Dahmer had a normal middle-class upbringing, his parents were nice people, and they certainly never taught him this or encouraged it. He didn't have any traumatic incident that might have caused this. In my view, it's a deeply ingrained habit of killing from the past that was carried into this life.
Buddhism has no moral problem with homosexuality because it's obviously just a strong memory of being the other gender in a previous life. Don't you have situations, people, places, and things you find yourself inexplicably attracted to? You probably make up some kind of story to explain those based on your current life, but I doubt you can explain them all, and some of our "logical" explanations turn out to be very destructive to us.
So, from the Buddhist view, our current type of existence is a trap for the mind. There is no problem with having a physical body, but we get so wrapped up in our created identity, desires, and attachments that we limit mind to basic gross functions and blind ourselves to the reality of how things exist.

ANDREW (ME): That's a very thorough answer Bob but, with the greatest of respect, I want to offer a different one. For me it has to come down to a physical/biological phenomenon. Obviously, no one can actually say for sure what the answer to this question is and so we can only give our best guess or our intuitions. For me, I note that human beings have consciousness and that human beings have this, as far as we can tell, when alive. Now, before you butt in, let me say that, of course, since we don't exactly know what consciousness is we can't even do something so rudimentary as test for it. So let me admit again that all guesses here are somewhat stabs in the dark. And it could be true that consciousness exists before birth and after death. A humble inquiry has to admit possibilities that it cannot rule out definitively. But since I have no way to know if consciousness does exist before or after a human life I make a more modest claim. I think that consciousness is a phenomenon related to the physical existence of human beings who are alive. I would extend this, to a lesser degree, to some other life forms as well. But I don't think consciousness exists "in the universe" or as a general thing or in some mystical sense. I do not, and cannot, envisage minds or "mind" floating about out there. I wouldn't know how to sensibly talk about such an option. It would provide me with no answers but merely exponentially raise the questions. So "minds" are related to people in terms of identity and origin. I further think that consciousness is imagined by human beings as a place where they think and feel and have awareness of themselves and their surroundings. I imagine that it is some function of the brain and arose, in ways as yet unfathomed, as part of our biological evolution because those of our forbears with a growing consciousness of themselves and their surroundings were more successful in their surroundings and, thus, better equipped to survive.
I also would like to note that I find your reasons against a physical explanation unconvincing. Is it really so hard to imagine a computer that can process at the speed and rate of a mind? Are you saying it would always be impossible? That it could never be created? I don't understand how you could. What's more, if taking up a physical explanation for the mind, we do not have to ascribe all thought to "electrochemical" processes at all. We know, for example, that people can be influenced and affected by their environment. Why do we need to make it any more complicated than saying that the brain is the means and the mind is the result? The properties and abilities of electrochemical processes, unknown as they are, need not be determinative in these things. They can just be a means to an end. And, of course, not knowing how it works doesn't mean that it doesn't work. It means that we don't know how. Fundamentally, my point is that you need to start from what you have and not leap straight to something more extraordinary. And I take your "independently existing" minds that need "an interface to interact with the physical world" to be extraordinary. On my understanding, minds can't exist without people.

I would also add that I am open to the possibility that we don't have anything specific that is a consciousness (in any corporeal or incorporeal form) but that, instead, it is merely a construct for a part of our lived experience. This is to say I can see it as possible you could never point to a consciousness and say "that is a consciousness". Human beings already rely on many useful fictions and consciousness could just be another one.

BOB: So, in that case, what for you determines the content of our thoughts?


ANDREW: I want to answer this by saying that I don't think it is enough to answer by saying that I can think of no current way how this might work and so I will posit some entity called "mind" which, like a ghost in the machine, can do it all for me. I also don't think that the immediate and pre-reflective answer "I do" is correct. At least, not without some unpacking. Brains and minds function in many ways unconsciously like many physical functions of the body. You don't have to consciously think to make your heart beat or to breathe. Neither do you need to consciously decide to think. Indeed, I find the Cartesian "I think" to be problematic. Wouldn't it be more accurate to say that "Thought becomes"? I think that human beings are very integrated beings and, even with a few minutes of self-reflection, this seems obviously true. Imagine, for example, how many nerve endings you must have in your body. Your mind is aware of those all at once. That is amazing. It's something you likely could not deliberately achieve and so our evolution has built these things into the way we exist as a functioning organism.

Our human lives are intimately involved with many networks. The neural net of our brains, the thought patterns of our minds, social connections and cultural entanglements are just some of these. I know of no way yet in which we can comprehensively account for how these networks all function together but I do think that they all exert their influence upon us as thinking subjects. Sometimes this can be as the result of a goal or purpose of ours as we are beings who can have intentions and attitudes. And, as you will know, we hold beliefs. Each of us comes with a genetic make-up, a past and a context too. So thoughts can be directed or organized. But it is never as simple as this. Minds have evolved a more sophisticated and efficient form of operation, one that does not always, or even usually, require our express attention.

So you might now be saying I haven't answered the question. But in a way I have. The answer is "I do". But not in any deliberative way and not simply so.

Now, if I may, let me ask you something else. Given your views on "mind", do you think that a robot with artificial intelligence would be a person?
BOB: I have to say yes, and I certainly support the idea of rights of personhood for artificially created autonomous aware beings that generate their own unique thoughts and are not just following programmed instructions. Following the previous paradigm, an artificial person would have form, awareness of its own existence, perception of the outer world, and discrete thoughts and reactions based on perception.

ANDREW: So what is the essence of humanity in your opinion?

BOB: Ooooh, the humanity! I guess you would have to define human as having a human body and human sense perception coupled with a mind capable of higher awareness. I think the blindness to the higher functions of the mind and entrapment in desire, attachment, and ego would help define human. In other words, I guess the average, confused guy on the street would be a good example of the human condition. Now, what do we do with people of extremely limited or nonexistent brain activity? We still identify someone in a vegetative state as a human, but that's mostly identification with the form. On the other extreme, people who have worked deeply with their own minds and accessed higher functions of mind that we can't use, or deny even exist, seem to be "magic", but they are still grounded in a human existence, though they view it very differently.
And now it's my turn again. So, Andrew, can we be aware of our own consciousness in your opinion?

ANDREW: This feels to me like a trick question and I am immediately put on edge! When I think about this I would have to answer no. But that is because I don't perceive "my consciousness" as something separate from me. I think it was formed along with me, develops and matures as I do and will end with me. I am not aware of my brain either, but I imagine that if I had brain surgery and was shown video of it after the fact I would then have an insight into what lies inside my cranium!

I also want to put the idea that there is a "me" in question. Who am I? What would this "I" refer to when I talk about myself? My physical being? The various thoughts I have about myself that are always changing and being changed? A person other people would describe me as? I am not even sure that I can give a decent answer to who I am before I get to any questions of my consciousness.

But, of course, there is another answer to this question and I want to hold this answer in tension with my first ones. I do recognize that there can be different or altered states of consciousness. When I was younger I would have said that I had experienced some of these myself in a religious context. Now I would give what happened other explanations. I do also recognize that some others, such as yourself, offer testimony for differing states of consciousness and I have no way, or desire, to cast them aside out of hand. I'm open to trying to understand better what might be going on there. Of course, it's also worth mentioning that every one of us alters our state of consciousness daily when we sleep. Then we have no sense even of being alive or, in dream sleep, our state of consciousness is somewhat ambiguous. So, I'd want to take up an "interested listener" position regarding this question.

PS There is a third way. This is that when you say you are aware of your own consciousness you only think you are. How would you ever be able to demonstrate the truth of it?


BOB: How, then, is consciousness related to the ego?


ANDREW: Man, your questions are hard! In my first answer I raised the prospect that maybe "consciousness" was just a useful fiction. For all we know there is this little spot somewhere inside the brain that is the "consciousness spot" and it generates this field of consciousness much like a holodeck in Star Trek creates a whole world with electronic smoke and mirrors. In that way we have named what is created without knowing how it happens. I want to say that with the ego I would be a little easier to persuade with this kind of answer. What is the ego, after all? Our sense of self-preservation? A sense of self theorized most notably by Freud? We are talking in conceptual terms and I am reluctant to make things extant that I have little evidence for or of. So I'm saying that maybe we are naming phenomena here that are a function of something else, or maybe even just utilizing ideas or conceptions thought helpful in a discussion of the self.

Be that as it may, I think what I am looking for here in answer to your question is a definition or two, a working hypothesis. Let me tentatively say that I regard consciousness as an awareness of things, of being, of self, and ego as a more personal self-protection mechanism, maybe even a prison for the self. (I am speaking theoretically, not physically, phenomenologically or idealistically.) Consciousness, if you want to call it mind, could be conceived of as our apparatus for existing in a world of perception. I'm thinking out loud here. Now, I wouldn't hold hard and fast to those definitions to the death. Further thought and discussion will inevitably change and refine them. But that is my starting point. To then go on to how those are related I would have to admit that I have no in-depth knowledge. I would intuitively think that once more we are back to the integrated nature of our particularly human form of life. The issue is that you might want to say that consciousness is the general name for mind activity. But then ego must be a subset of that, or a specific function perhaps, since we would normally think of it as some mental faculty. However, when we talk about these things we are talking about ideas which we can distinguish. I think the functional reality of human beings makes it much more difficult to do that. So it's largely a "don't know" here and a reminder that I have a holistic conception of the human being.

My turn. There are people like futurist and inventor Ray Kurzweil who believe that we will technologically engineer our way out of death, either by the use of nanotechnology which can heal us from within or by capturing and moving our consciousness to better, robotized bodies. Do these possibilities interest you at all?


BOB: From my view, it's totally unnecessary. We're already transferring our consciousness over and over, and trading a body for a machine is just a different kind of trap. Better to learn the true nature of mind and access the subtle functions so we remove the blindness and gain some control. If mind is non-physical, to really learn to use that would mean we could be physical when we wanted but still be able to access the vast non-physical perception and knowledge of the mind.

ANDREW: So imagine you are in a room with some animals (a cat, a dog, a monkey), a human being and a robot that has been given artificial intelligence so good it convinces you that it acts of its own free will. What makes the human being special? Anything?

BOB: What makes you think we're special? Mind is mind. Any being that has a mind has the potential to become a fully developed mind, and in fact has been more developed and less developed in the past. The dog and monkey and the robot and me are all just different current examples of the same kind of mind in a particular limited physical state.

ANDREW: So now you get the last question Bob.

BOB: What is your first memory?

ANDREW: A suitably interesting question. I was walking between my parents at a zoo. We approached the ostrich enclosure and an ostrich came close to the fence. I was frightened and made a commotion, trying to pull my parents away from the fence. I cannot precisely locate this event on a timeline of my life, but I can have been at most four, as my father left us after that.

I would like to record my thanks to Bob for being prepared to answer my questions and for taking part. He is @iceman_bob on Twitter if you would like to follow him up on what he says or listen to his excellent freeform music made with guitar and synths.

Tuesday, 19 May 2015

Consciousness, Bodies and Future Robot Beings: Thinking Aloud

So yesterday I came back to thinking about consciousness again after some weeks away from it and, inevitably, the idea of robots with human consciousness came up again. I was also pointed in the direction of some interesting videos put on YouTube by the Dalai Lama, in which he and some scientists educated more in the western, scientific tradition had a conference around the areas of mind and consciousness.

But it really all started a couple of days ago with a thought I had. I was sitting there, minding my own business, when suddenly I thought "Once we can create consciousness, procreation will be obsolete." (This thought assumes that "consciousness" is something that can be deliberately created. That is technically an assumption and maybe a very big one.) My point in having this thought was that if we could replicate consciousness, which we might call our awareness that we exist and that there is a world around us, then we could put it (upload it?) into much better robot bodies than our frail fleshly ones, which come with so many problems simply due to their sheer physical form. One can easily imagine that a carbon fibre or titanium (or carbotanium) body would last much longer and without any of the many downsides of being a human being. (Imagine being a person but not needing to eat, or go to the toilet. Imagine not feeling tired or sick.)


So the advantages immediately become apparent. Of course the thought also expressly encompasses the idea that if you can create consciousness then you can create replacements for people. Imagine you own a factory. Instead of employing 500 real people you employ 500 robots with consciousness. Why wouldn't you do that? At this point you may reply with views about what consciousness is. You might say, for example, that consciousness implies awareness of your surroundings, which implies having opinions about those surroundings. That implies feelings and the formation of attitudes and opinions about things. Maybe the robots don't like working at the factory, like it's very likely some of the people don't. Maybe, to come from another angle, we should regard robots with consciousness as beings with rights in this case. If we could establish that robots, or other creatures, did have a form of consciousness, would that not mean we should give them rights? And what would it mean for human beings if we could deliberately create "better people"?

At this point it becomes critical what we think consciousness actually is. It was suggested to me that, in human beings, electrochemical actions in the brain can "explain" the processing of sense data (which consciousness surely does). Personally I wonder if this does "explain" it as opposed to merely describing it as a process within a brain. One way that some scientists have often found to discuss the mind or consciousness is to reduce it to the activities of the brain. So conscious thoughts become brain states, etc. This is not entirely convincing. It is thought that the mind is related to the brain but no one knows how even though some are happy to say that they regard minds as physical attributes like reproduction or breathing. That is, they would say minds are functions of brains. Others, however, aren't so sure about that. However a mind comes to be, it seems quite safe to say that consciousness is a machine for generating data (as one of its functions). That is, to be conscious is to have awareness of the world around you and to start thinking about it and coming to conclusions or working hypotheses about things. Ironically, this is often "unconsciously" done!

So consciousness, as far as we know, requires a brain. I would ask anyone who doesn't agree with this to point to a consciousness that exists where there isn't a brain in evidence. But consciousness cannot be reduced to things like data or energy. In this respect I think the recent film Chappie, which I mentioned in previous blogs, gets things wrong. I don't understand how a consciousness could be "recorded" or saved to a hard disk. It doesn't, to me, seem very convincing, whilst I understand perfectly how it makes a good fictional story. I think that on this point thinkers get seduced by the power of the computer metaphor. For me consciousness is more than both energy or data; a brain is not simply hardware, nor is consciousness simply (or even) software. If you captured the electrochemical energy in the brain or had a way to capture all the data your mind possesses you wouldn't, I think, have captured a consciousness. And this is a question that scientist Christof Koch poses when he asks if consciousness is something fundamental in itself or rather simply an emergent property of systems that are suitably complex. In other words, he asks whether machine networks could BECOME conscious if they became complex enough, or whether we would need to add some X to make it so. Is consciousness an emergent property of something suitably complex, or a fundamental X that comes from we don't know where?
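Koch's question about emergence is easier to grasp with a toy example, so here is a minimal, illustrative Python sketch (my own toy, nothing to do with real brains or with Koch's work). It runs Conway's Game of Life, where dumb local rules about neighbouring cells produce a "glider": a pattern that walks across the grid even though nothing in the rules mentions movement.

```python
from collections import Counter

def step(live):
    """One Game of Life generation on an unbounded grid.
    `live` is a set of (row, col) tuples marking live cells."""
    counts = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is live next generation if it has exactly 3 live
    # neighbours, or 2 live neighbours and was already live.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": five cells whose purely local rules give rise to a
# higher-level behaviour -- the whole shape walks across the grid --
# that is stated nowhere in the rules themselves.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

g = glider
for _ in range(4):
    g = step(g)

# After four generations the whole pattern has shifted one cell
# diagonally: pattern-level behaviour emerging from cell-level rules.
print(g == {(r + 1, c + 1) for r, c in glider})
```

The point is not, of course, that gliders are conscious. It is only that system-level behaviour can exist which is written into none of the components, which is what "an emergent property of something suitably complex" means; whether consciousness is emergent in anything like this sense is exactly what is in dispute.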

This complexity about the nature of consciousness is a major barrier to the very idea of robot consciousness, of course, and it is a moot point to ask when we might reach the level of consciousness in our human experiments with robotics and AI. For one thing is sure: if we decided that robots or other animals did have an awareness of the world around them, even of their own existence or, as Christof Koch always seems to describe consciousness, "what it feels like to be me" (or, I add, to even have an awareness of yourself as a subject), then that makes all the difference in the world. We regard a person, a dog, a whale or even an insect as different to a table, a chair, a computer or a smartphone because they are ALIVE and being alive, we think, makes a difference. Consciousness plays a role in this "being aliveness". It changes the way we think about things.

Consciousness, if you reflect on it for even a moment, is a very strange thing. This morning when I woke up I was having a dream. It was a strange dream. But, I ask myself, what was my state of consciousness at the time? Was I aware that I was alive? That I was a human being? That I was me? I don't think I can say that I was. What about in deep sleep where scientists tell us that brain activity slows right down? Who, in deep sleep, has consciousness of anything? So consciousness, it seems, is not simply on or off. We can have different states of consciousness and change from one to the other and, here's another important point, not always do this by overt decision. Basically this just makes me wonder a lot and I ask why I have this awareness and where it comes from. Perhaps the robots of the future will have the same issues to deal with. Consciousness grows and changes and is fitted to a form of life. Our experience of the world is different even from person to person, let alone from species to species. We do not see the world as a dog does. A conscious robot would not see the world as we, its makers, do either.

In closing, I want to remind people that this subject is not merely technological. There are other issues in play too. Clearly the step to create such beings would be a major one on many fronts. For one thing, I would regard a conscious being as an individual with rights and maybe others would too. At this point there seems to be some deep-seated human empathy in play. There is a scene in the film Chappie where the newly conscious robot (chronologically regarded as a child since awareness of your surroundings is learned and not simply given) is left to fend for himself and is attacked. I, for one, winced and felt sympathy for the character in the film - even though it was a collection of metal and circuitry. And this makes me ask what humanity is and what beings are worthy of respect. What if a fly had some level of consciousness? (In a lecture I watched Christof Koch speculated that bees might have some kind of consciousness and explained that it certainly couldn't be ruled out.) Clearly, we need to think thoroughly and deeply about what makes a person a person and I think consciousness plays a large part in the answer. Besides the scientific and technical challenges of discovering more about and attempting to re-create consciousness, there are equally tough moral and philosophical challenges to be faced as well.

Friday, 20 March 2015

Would You Worry About Robots That Had Free Will?

It's perhaps a scary thought, a very scary thought: an intelligent robot with free will, one making up the rules for itself as it goes along. Think Terminator, right? Or maybe the gunfighter character Yul Brynner plays in "Westworld", a defective robot that turns from being a fairground attraction into a super-intelligent robot on a mission to kill you? But, if you think about it, is it really as scary as it seems? After all, you live in a world full of 7 billion humans and they (mostly) have free will as well. Are you huddled in a corner, scared to go outside, because of that? Then why would intelligent robots with free will be any more frightening? What are your unspoken assumptions here that drive your decision to regard such robots as either terrifying or no worse than the current situation we find ourselves in? I suggest our thinking here is guided by our general thinking about robots and about free will. It may be that, in both cases, a little digging under the surface clarifies our thinking.




Take "free will" for example. It is customary to regard free will as the freedom to act on your own recognisance without coercion or pressure from outside sources in any sense. But, when you think about it, free will is not free in any absolute sense at all. Besides the everyday circumstances of your life, which directly affect the choices you can make, there is also your genetic make-up to consider. This affects the choices you can make too because it is responsible not just for who you are but who you can be. In short, there is both nature and nurture acting upon you at all times. What's more, you are one tiny piece of a chain of events, a stream of consciousness if you will, that you don't control. Some people would even suggest that things happen the way they do because they have to. Others, who believe in a multiverse, suggest that everything that can possibly happen is happening right now in a billion different versions of all possible worlds. Whether you believe that or not, the point is made that so much more happens in the world every day that you don't control than the tiny amount of things that you do.

And then we turn to robots. Robots are artificial creations. I've recently watched a number of films which toy with the fantasy that robots could become alive. As Number 5 in the film Short Circuit says, "I'm alive!". As creations, robots have a creator. They rely on the creator's programming to function. This programming delimits all the possibilities for the robot concerned. But there is a stumbling block. This stumbling block is called "artificial intelligence". Artificial Intelligence, or AI, is like putting a brain inside a robot (a computer in effect) which can learn and adapt in ways analogous to the human mind. This, it is hoped, allows the robot to begin making its own choices, developing its own thought patterns and ways of choosing. It gives the robot the ability to reason. It is a very moot point, for me at least, whether this would constitute the robot as being alive, as having a consciousness or as being self-aware. And would a robot that could reason through AI therefore have free will? Would that depend on the programmer or could such a robot "transcend its programming"?
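To make "learn and adapt" slightly more concrete, here is a deliberately trivial Python sketch of my own (a toy illustration, not a claim about how real AI or minds work). It trains a single perceptron on examples of the logical OR function; the program ends up with correct behaviour that was never written down as an explicit rule, only shaped by feedback on its mistakes.

```python
# Training data: inputs and desired outputs for logical OR.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # weights: the perceptron starts off knowing nothing
b = 0.0          # bias term
rate = 0.1       # learning rate

def predict(x):
    """Fire (1) if the weighted sum of inputs exceeds zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# The perceptron learning rule: whenever a prediction is wrong,
# nudge the weights towards the correct answer.
for _ in range(20):                      # a few passes over the data
    for x, target in examples:
        error = target - predict(x)
        w[0] += rate * error * x[0]
        w[1] += rate * error * x[1]
        b += rate * error

# The learned behaviour now matches OR, though OR was never coded in.
print([predict(x) for x, _ in examples])  # [0, 1, 1, 1]
```

This is the sense, scaled up enormously, in which a learning system's eventual choices can outgrow its literal instructions: the programmer wrote the update rule, not the final behaviour.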

Well, as I've already suggested, human free will is not really free. Human free will is constrained by many factors. But we can still call it free because it is the only sort of free will we could ever have anyway. Human beings are fallible and contingent beings. They are not gods and cannot stand outside the stream of events to get a view that wasn't a result of them or that will not have consequences further on down the line for them. So, in this respect, we could not say that a robot couldn't have free will because it would be reliant on programming or constrained by outside things - because all free will is constrained anyway. Discussing the various types of constraint and their impact is another discussion though. Here it is enough to point out that free will isn't free whether you are a human or an intelligent robot. Being programmed could act as the very constraint which makes robot free will possible, in fact.

It occurs to me as I write out this blog that one difference between humans and robots is culture. Humans have culture and even many micro-cultures, and these greatly influence human thinking and action. Robots, on the other hand, have no culture because these things rely on sociability and being able to think and feel for yourself. Being able to reason, compare and pass imaginative, artistic judgments are part of this too. Again, in the film Short Circuit, the scientist portrayed by actor Steve Guttenberg refuses to believe that Number 5 is alive and so he tries to trick him. He gives him a piece of paper with writing on it and a red smudge along the fold of the paper. He asks the robot to describe it. Number 5 begins by being very unimaginative and precise, describing the paper's chemical composition and things like this. The scientist laughs, thinking he has caught the robot out. But then Number 5 begins to describe the red smudge, saying it looks like a butterfly or a flower, and flights of artistic fancy take over. The scientist becomes convinced that Number 5 is alive. I do not know if robots will ever be created that can think artistically or judge which of two things looks more beautiful than the other, but I know that human beings can. And this common bond with other people that forms into culture is yet another background which free will needs in order to operate.




I do not think that there is any more reason to worry about a robot that would have free will than there is to worry about a person that has free will. It is not freedom to do anything that is scary anyway because that freedom never really exists. All choices are made against the backgrounds that make us and shape us in endless connections we could never count or quantify. And, what's more, our thinking is not so much done by us in a deliberative way as it is simply a part of our make-up anyway. In this respect we act, perhaps, more like a computer in that we think and calculate just because that is what, once "switched on" with life, we will do. "More input!" as Number 5 said in Short Circuit. This is why we talk of thought occurring to us rather than us having to sit down and deliberate to produce thoughts in the first place. Indeed, it is still a mystery exactly how these things happen at all, but we can say that thoughts just occur to us (without us seemingly doing anything but being a normal, living human being) as much, if not more, than that we sit down and deliberately create them. We breathe without thinking "I need to breathe" and we think without thinking "I need to think".

So, all my thinking these past few days about robots has, with nearly every thought I've had, forced me into thinking ever more closely about what it is to be human. I imagine the robot CHAPPiE, from the film of the same name, going from a machine made to look vaguely human to having that consciousness.dat program loaded into its memory for the first time. I imagine consciousness flooding the circuitry and I imagine that as a human. One minute you are nothing and the next this massive rush of awareness floods your consciousness, a thing you didn't even have a second before. To be honest, I am not sure how anything could survive that rush of consciousness. It is just such an overwhelmingly profound thing. I try to imagine my first moments as a baby emerging into the world. Of course, I can't remember what it was like. But I understand most babies cry and that makes sense to me. In CHAPPiE the robot is played as a child on the basis, I suppose, of human analogy. But imagine you had just been given consciousness for the first time, and assume you manage to get over that hurdle of being able to deal with the initial rush: how would you grow and develop then? What would your experience be like? Would the self-awareness be overpowering? (As someone who suffers from mental illness my self-awareness at times can be totally debilitating.) We traditionally protect children and educate them, recognising that they need time to grow into their skins, as it were. Would a robot be any different?




My thinking about robots has led to lots of questions and few answers. I write these blogs not as any kind of expert but merely as a thoughtful person. One conclusion I have reached is that what separates humans from all other beings, natural or artificial, at this point is SELF-AWARENESS. Maybe you would call this consciousness too. I'm not yet sure how we could meaningfully talk of an artificially intelligent robot having self-awareness. That's one that will require more thought. But we know, or at least assume, that we are the only natural animal on this planet, or even in the universe as far as we are aware, that knows it is alive. Dogs don't know they are alive. Neither do whales, flies or fish. But we do. And being self-aware, having a consciousness, being reasoning beings, is a lot of what makes us human.

In the film AI, directed by Steven Spielberg, the opening scene presents the holy grail of robot builders as a robot that can love. I wonder about this, though. I like dogs and I've been privileged to own a few. I've cuddled and snuggled with them and that feels very like love. But, of course, our problem in all these things is that we are human. We are anthropocentric. We see with human eyes. This, indeed, is our limitation. And so we interpret the actions of animals in human ways. Can animals love? I don't know. But it looks a bit like it. In some of the robot films I have watched, the characters develop affection for variously convincing humanoid-shaped lumps of metal. I found that more difficult to swallow. But we are primed to recognise and respond to cuteness. Why do you think the Internet is full of cat pictures? So the question remains: could we build an intelligent robot that could mimic all the triggers in our very human minds, one that could convince us it was alive, self-aware, conscious? After all, it wouldn't need to actually BE any of these things. It would just need to get us to respond AS IF IT WAS!


My next blog will ask: Are human beings robots?


With this blog I'm releasing an album of music made as I thought about intelligent robots and used that thinking to help me reflect on human beings. It's called ROBOT and it's part 8 of my Human/Being series of albums. You can listen to it HERE!