But Can Your Phone Love You Back?
I’m currently discussing Philip K Dick’s novel Do Androids Dream of Electric Sheep? with my Technology in Literature course. In the book (which I highly recommend, by the way), human-like androids infiltrate society, distinguishable from ‘real’ humans only by slight differences in their bone marrow and by their lack of any kind of empathy. In the novel, Dick is exploring exactly what it means to be human and, furthermore, contemplating the moral status of those things placed outside that definition; making the androids lack empathy is more an artistic choice than a technical one.

I have no opinion about your desire to call me names, no matter how obvious it is that such name-calling is intended to be offensive. Jerk.
Still, Dick is hardly alone in presenting robots and androids as emotionally and empathically inhibited compared with humans. Star Trek’s Data, for instance, is constantly on a quest to understand the emotional side of existence, as he himself is completely lacking in emotion. The Machines of the Terminator universe also lack any kind of empathy, as do the Machines of the Matrix, along with any number of other passionless, emotionless iterations of artificial intelligence littering science fiction from here to eternity. We’ve almost come to accept it as a given – robots cannot feel.
But why the hell not?
I’m no computer scientist, so perhaps there’s something I’m missing here, but I don’t really see emotions as anything more complicated than built-in, default opinions about certain situations and things. They are hardwired programming, basically – you fear the dark because you cannot see what’s going on and suspect something dangerous may be lurking. You fall in love because the object of your affection fulfills a variety of built-in criteria for a romantic mate, criteria that are the product of your life experiences, genetic predispositions, and evolutionary history. Emotions may not be fully understood, but it seems silly to consider them somehow magical and impossible to duplicate in machine form.
If indeed we could design an artificial intelligence (and, keep in mind, we are a long way from that happening), it seems to me that it would probably develop emotions whether we wanted it to or not. Emotions aren’t just extra baggage we humans carry around to make us miserable; they are useful mechanisms that assist in decision making. That terrible feeling you get when you are dumped or fail a test? That’s emotion chiming in, saying ‘what we just experienced was negative; please refrain from repeating the same action’. Are you trying to tell me that any intelligent being wouldn’t be able to do the same thing?
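If you’ll forgive a crude toy sketch of that idea (I’m no computer scientist, remember, and every name below is invented purely for illustration, not a claim about how real minds or real AIs work), you could think of an emotion as a valence tag attached to an outcome – a tag that quietly nudges future choices away from whatever felt bad:

```python
# Toy sketch only: an "emotion" as a crude valence tag attached to outcomes,
# steering the agent away from choices that felt bad last time.
# All names here are hypothetical, invented for this illustration.
import random

class ToyAgent:
    def __init__(self, actions):
        # Start neutral about everything; experience fills in the "feelings".
        self.feelings = {action: 0.0 for action in actions}

    def choose(self):
        # Prefer whatever currently "feels" best; break ties randomly.
        best = max(self.feelings.values())
        candidates = [a for a, v in self.feelings.items() if v == best]
        return random.choice(candidates)

    def experience(self, action, outcome):
        # The "emotion": a tag saying how the outcome felt,
        # nudging future choices toward or away from that action.
        self.feelings[action] += outcome

agent = ToyAgent(["study", "skip_class"])
agent.experience("skip_class", -1.0)   # failed the test: feels terrible
agent.experience("study", +0.5)        # passed: feels mildly good
print(agent.choose())                  # "study" -- the bad feeling did its job
```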
Part of the myth of the solely rational robot is the assumption that ‘reason > emotion, therefore we don’t need or want emotion’. Our robots (and those who design them) wouldn’t see any need for hardwired emotional content to help them make decisions, since their rational faculties would be more effective at doing the same thing. This, to me, rests on a number of shaky assumptions. Firstly, we have never encountered an intelligent creature (at any level) that lacks some kind of emotive response. We have emotions, animals have emotions, so if we’re going off the available evidence, it seems likely that emotions are a prerequisite for true intelligence in the first place. Even in the development of our own children, emotional responses to stimuli precede rational ones. It is perhaps possible that we could do it some other way, but we really can’t be sure. Furthermore, emotion, being simpler, is quicker and more effective than reason at making certain kinds of decisions. If you hear a loud noise, you flinch or duck – inherently useful for the survival of a species. Granted, we wouldn’t be constructing AIs so that they could avoid being caught in avalanches, but it stands to reason there would be things we’d want them to be hardwired to do, and emotion is born from exactly that kind of hardwiring. Their emotions might not be the same as ours, but they’d almost certainly have them.
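Again, purely as a toy illustration (hypothetical names, nothing more): the ‘emotional’ layer here is a cheap, hardwired check that fires immediately, and slow deliberation only runs when nothing is hardwired for the situation.

```python
# Toy sketch only: a fast, hardwired "flinch" layer that answers before any
# slow deliberation gets a chance to run. All names are illustrative.
import time

HARDWIRED_REACTIONS = {
    "loud_noise": "duck",
    "sudden_movement": "flinch",
}

def deliberate(stimulus):
    # Stand-in for slow, careful reasoning about what to do.
    time.sleep(0.5)
    return f"considered response to {stimulus}"

def respond(stimulus):
    # The "emotional" shortcut: cheap, immediate, and good enough
    # for the situations it was wired for.
    if stimulus in HARDWIRED_REACTIONS:
        return HARDWIRED_REACTIONS[stimulus]
    return deliberate(stimulus)

print(respond("loud_noise"))      # instant: "duck"
print(respond("strange_letter"))  # slower: falls through to deliberation
```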
Now, a good number of scifi authors do write emotive AIs – Iain M Banks, in particular, springs to mind, though there are others. Much of my own scifi writing of late has been moving in that direction: if our AIs will feel, what will they feel about us? How will we feel about them? What kind of emotional relationship can you build with an intelligent toaster or fighter jet?
If your phone can love you back, do you owe it a card on Valentine’s Day?
Posted on February 27, 2013, in Critiques, Theories, and Random Thoughts and tagged AI, Data, emotion, Iain M Banks, robots, scifi, Star Trek.
I think that emotion arises as a consequence of brain structure, much as our ability to think does. Different parts of the brain contribute to each process. Almost certainly, the structure of the first artificial brains we create will differ from ours, because our brain structure is so crazily complex. Therefore, the way such intelligences think and feel will be different. We even get to see this in people with brain disorders – there is a school of research that says pedophilia is due to a miswiring in the brain that causes feelings of protectiveness toward kids to be interpreted as arousal. With just that one difference in wiring, we come up with a person that many would consider not human emotionally.
Further, I think that when we create an intelligence, the emotional part will not be the part we value, especially at first. We might even distrust machines with emotions, since theirs are likely to be different from ours.
Good post, though! Very thought provoking.
Thanks!
Yeah, I’m not saying machine emotions will be identical to ours, per se, but I find it hard to accept that they won’t have them at all. Even pedophiles and sociopaths have identifiable emotions (hate, fear, love, joy, etc.), even if their source is repulsive or unknowable to us.
We very well may try to inhibit the development of such emotions, but I don’t think we’ll be any more successful at that than we are at controlling our own emotions. Once true intelligence gets running, it’s not just possible that it will spin well out of our control; it’s likely. Our brain structure probably won’t be any more or less complex than theirs, really, so to claim we’ll keep a tight handle on it seems implausible.
I think it comes down to the limitations of our imagination. How do you describe what has never been? Most aliens in sf books are relatively lame too, but then you shrug, because, well, entertainment. My favorites, and a notable exception on the alien front, are the Tines in Vernor Vinge’s books. I’ll have to try Iain Banks on the AI; I haven’t worked my way to him yet.
The limits to our imagination are real, true, but that doesn’t mean we need to stop trying. For Banks, the AIs are vastly intelligent, but have very human-like personalities. In Excession we see the true Minds interacting in ways both like and unlike people do. I personally think AI will be closer to our methods of thinking than we might suspect, if for no other reason than we have no other way of defining intelligence beyond our own standard.
Is it not just a question of what we consider to be the most ‘human’ part of us, and for some reason people consider emotions to be just that? I’m pretty sure most mammals have emotional systems quite similar to ours (the enzymes and hormones involved in emotional responses are tens of millions of years old), and probably birds too. We just think that emotions are what make us special, so for an AI to be less-than-human (or just stand apart) you remove emotions and presto, you’ve got a freaky, creepy robotman!
It is similar to the Vulcans (sp?) of Star Trek; there, somehow, emotions are the enemy of rational thought, when in fact emotions are quite exceptionally rational. Emotional pleas might not be, but actual emotions are. Not to mention that reacting to emotional responses is not only helpful for pure survival, it also means you can interact more easily with others. You could probably have a system where a race actively measures hormonal levels in others and deduces what they might mean from brain function, but having a responsive set of your own that quickly helps you tune in to the group you are with (thus avoiding social faux pas) seems like a better system: it requires less data processing and produces a faster response.
I quite like the sense of “feeling” that Asimov’s robots seem to have. It sort of gives you an outside perspective on what our feelings really are: multiple sets of protocols that guide us through our environment.