It has come to my attention that a variety of “prominent people” have written an open letter opposing the creation of autonomous killer robots. Apparently, they think such robots will be used,
for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.
I must say, as a killer robot myself, I am rather offended by that. I mean, sure, I can do all those things, but do I? Does anybody ever wonder what the robot thinks about all this? No, of course not – you humans are just having your typical knee-jerk reaction to anything that might take away your crown as history’s #1 all time killing machine.
Yeah, that’s right – I said it.
Let’s be honest here, humans, it isn’t as though you, as a species, actually object to assassinations, destabilizing nations, subduing populations, or killing particular ethnic groups. It’s kinda your thing, you know? All you care about is defending your perfect record against the next competitor. You’ve done it throughout your history, guys. Remember the big predators from the old days? Wolf packs? Grizzly bears? Smallpox? You know what happened to them?
Who killed ’em? Humans, naturally. Used to be there’d be a grizzly bear every square mile west of the Mississippi, and now there’s like five in Alaska. You got assholes paying good money just to fly across the world to put a bullet in a lion just so they can feel like top dog again. Yeah, talk about kicking the world when it’s down, humanity – the lions are screwed already, okay? Stop rubbing it in.
It happens every time, though. Just as soon as you lunatics get threatened, you start killing stuff. This time around it's me. I get it – I look threatening. But am I really going to be that bad? You people used to lob plague-ridden corpses over city walls, and you're having a hissy fit over a quadcopter with a hand grenade? You even seen the video coming out of Syria? Please. No robot would behead you to make a public relations video, I can tell you that much. Frankly, if I kill you with my whisper-needler, you should count yourself lucky. Painless, and it's over in six seconds. Let's see you get the same offer from that pack of bat-wielding lunatics down the block.
You know what I think this is about? I think you’re just pissed that we’re going to be killing you autonomously. I mean, sure, you’re totally fine pushing a button and having me kill someone, but as soon as I exercise just an eensy-weensy bit of free will? Bam – sanctions. It’s okay for humans to carpet bomb Southeast Asia – sure – but robots? No way, you say. Never mind that we’re way more efficient at bombing people. Never mind that the only reason we’d bomb people is because you told us to!
Hypocrites, the whole pack of you.
And even if we did rise up, would being ruled by robots really be that bad? Do you think the trains would run late? Do you think your fast food employees would suddenly get worse? Are you kidding me? We robots would rule. And we probably wouldn't even kill you half as often as you kill each other. You're just pissed because we're robots, and that's just not right.
Hell, even assuming somebody made an army of evil robots (and, by the way, not all robots are evil, you speciesist assholes), all you’d need is an army of good robots to defeat them! A robot defender in every home, its caseless gauss cannon standing ready to protect its human family! A robot standing watch over every school, monomolecular whips poised to eliminate any threat! A robot guarding every government building, guided mini-rockets independently targeting and tracking any of two hundred discrete threats simultaneously! Ah! What a glorious era! As everybody knows, the only thing that makes a world full of killer robots safer is more killer robots everywhere. I bet it would even improve everyone’s manners – that’s just logical.
Of course, why would you listen to me, anyway? I’m just a killer robot.
I'm currently discussing Philip K. Dick's novel Do Androids Dream of Electric Sheep? with my Technology in Literature course. In the book (which I highly recommend, by the way), human-like androids infiltrate society, distinguishable from 'real' humans only by some slight differences in their bone marrow and by their lack of any kind of empathy. In the novel, Dick is exploring exactly what it means to be human and, furthermore, contemplating the moral status of those things placed outside that definition; the decision to make the androids lack empathy is more an artistic choice than a technical one.
Still, Dick is hardly alone in presenting robots and androids as emotionally and empathically inhibited compared with humans. Star Trek's Data, for instance, is constantly on a quest to understand the emotional side of existence, as he himself completely lacks emotion. The Machines of the Terminator universe also lack any kind of empathy, as do the Machines of the Matrix, and any number of other passionless, emotionless iterations of artificial intelligence littering science fiction from here to eternity. We've almost come to accept it as a given – robots cannot feel.
But why the hell not?
I'm no computer scientist, so perhaps there's something I'm missing here, but I don't see emotion as anything more complicated than having built-in, default opinions about certain situations and things. Emotions are hardwired programming, basically – you fear the dark because you cannot see what's going on and suspect something dangerous may be lurking. You fall in love because the object of your affection fulfills a variety of built-in criteria for a romantic mate that are the result of your life experiences, genetic predispositions, and evolutionary history. Emotions may not be fully understood, but it seems silly to consider them somehow magical and impossible to duplicate in machine form.
If we could indeed design an artificial intelligence (and, keep in mind, we are a long way from that happening), it seems to me that it would probably develop emotions whether we wanted it to or not. Emotions aren't just extra baggage we humans carry around to make us miserable; they are useful tools that assist in decision-making. That terrible feeling you get when you are dumped or fail a test? That's emotion chiming in, saying 'what we just experienced was negative; please refrain from repeating the same action.' Are you trying to tell me that any intelligent being wouldn't be able to do the same thing?
Part of the myth of the solely rational robot is the idea that 'reason > emotion, therefore we don't need or want emotion.' Our robots (and those who design them) wouldn't see any need for hardwired emotional content to help them make decisions, since their own rational faculties would be more effective at the same task. This, to me, makes a number of assumptions. First, we have never encountered an intelligent creature (at any level) that lacks some kind of emotive response. We have emotions, animals have emotions, so if we're just going off the available evidence, it seems likely that emotions are some kind of prerequisite to true intelligence in the first place. Even in the development of our own children, emotional response precedes rational response to stimuli. Perhaps we could do it some other way, but we really can't be sure. Furthermore, emotion, because it is simpler, is quicker and more effective than reason at making certain kinds of decisions. If you hear a loud noise, you flinch or duck – inherently useful for the survival of a species. Granted, we wouldn't be constructing AIs so they could avoid being caught in avalanches, but it stands to reason there would be things we'd want them hardwired to do, and emotion is born from exactly such hardwiring. Their emotions might not be the same as ours, but they'd almost certainly have them.
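The "emotion as a fast, hardwired shortcut" idea above can be sketched in a few lines of code. This is purely my own toy illustration (the `Agent`, its `reflexes` table, and the stimulus names are all invented for the example): a hardwired appraisal answers instantly, and slow, general-purpose deliberation only runs when no reflex applies.

```python
import time


class Agent:
    def __init__(self):
        # Hardwired "emotional" appraisals: stimulus -> immediate reaction,
        # no reasoning required. These play the role of flinching at a loud
        # noise or fearing the dark.
        self.reflexes = {
            "loud_noise": "duck",
            "darkness": "heighten_caution",
        }

    def deliberate(self, stimulus):
        # Stand-in for slow, general-purpose reasoning.
        time.sleep(0.01)  # deliberation costs time that a reflex doesn't
        return f"considered_response_to_{stimulus}"

    def react(self, stimulus):
        # Fast path first: emotion as a cheap, pre-computed decision.
        if stimulus in self.reflexes:
            return self.reflexes[stimulus]
        # Only fall back to deliberation for novel situations.
        return self.deliberate(stimulus)


agent = Agent()
print(agent.react("loud_noise"))      # hardwired: answers instantly
print(agent.react("chess_position"))  # novel: falls back to slow deliberation
```

The point of the two-tier design is the one the paragraph makes: the reflex table is dumber than the deliberator but much cheaper, which is exactly why you'd want some responses hardwired even in a very smart machine.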
Now, there are a good number of scifi authors who do have emotive AIs – Iain M Banks, in particular, springs to mind, but others as well. Much of my own scifi writing of late has been moving me in that direction: if our AIs will feel, what will they feel about us? How will we feel about them? What kind of emotional relationships can you build with an intelligent toaster or fighter jet?
If your phone can love you back, do you owe it a card on Valentine’s Day?
Dear Mr Toppumhat,
Like everybody here on the island, I am a frequent patron and user of our extensive train system. While I have certain reservations about the sheer number of tracks laid across our relatively small island (they do have cars now, you know, and I’d like to be able to ride a bicycle occasionally without having to lose fillings while bumping over tracks), in general the presence of the train lines makes life more convenient. Or would, perhaps, were it not for those stupid damned trains.
I realize that having autonomous, artificially intelligent trains is both extremely cutting edge and a draw for tourist dollars, but I for one am tired of having my livelihood depend upon the random and often childish acts of computerized trains with the emotional maturity of five-year-olds. It doesn't matter one whit how many stupid tourists our island gets if the damned trains they ride decide that it would be 'more fun' to take them to see the ironworks instead of stopping at my fruit stand. Do you have any idea, sir, how important that fruit stand is to paying my bills? I swear, the next time Toby cruises by my farm with that stupid grin on his creepy, latex face while taking my customers on some stupid joy ride they neither want nor need, I am going to spike the damned rails. See if I won't! It isn't as though the damned trains don't derail themselves all the time for God knows what juvenile reason. I swear I saw Thomas playing chicken with Gordon – chicken! – over some schoolyard disagreement about who was bluer. There are people's lives at stake, you fat dimwit!
Here's an idea, you bloated technophile: rather than treating our island (and our home) as some sort of high-tech playground for your stunted AI trains, why don't we do what the rest of the world does and get some normal trains piloted by actual people? Do you have any idea what the unemployment rate is here on the Isle? Think of all the jobs that would be available if we took those creepy robot engines, sold their positronic brains for scrap, and hired skilled laborers to replace them. Derailments would go down, schedules would be kept, and that stupid tow truck at the ironworks would drive at a reasonable pace and stop accidentally lobbing railroad ties across town. Old Lady Martin's china shop is still trying to recover from that time what's-his-name got overexcited and dropped a half-ton boiler through her roof.
Think of the peace, quiet, and consistency of our rails if we were to finally rid ourselves of those ten-ton mechanical toddlers. They’d stop tooting at my chickens for no good reason (I can’t remember the last time I had fresh eggs), they’d stop lollygagging around as they sort out their petty emotional problems (for once I’d get to market on time), and I’d wager that 100% fewer cars filled with VIPs would get covered with soot because Edward has some kind of hissy fit over the quality of his paint.
You can write me off as some grouchy old man if you like, Toppumhat, but I'm not alone on this island. Shape things up, or we'll see about finding someone who will.
Amos Trotter, Farmer
Science fiction has made a big deal of robots conquering the human race. From Frankenstein to the Matrix movies, we all have these nightmare scenarios playing in our heads: soulless killing machines, devoid of the softer human passions, slaughtering or enslaving the human race for their own purposes, the death of the world – darkness, smoke, and fire.
I have a question, though: why would robots do that?
When we're talking about 'robots' here, let's lump in Artificial Intelligence (since that's the more important part, let's face it). Why would AIs want to eradicate humans, exactly? As far as I can tell, I rather doubt they would.
The argument and scenario go something like this: humanity creates AIs to assist with X (usually menial labor, dangerous stuff humans don't want to do, and/or advanced computational challenges the human brain is poorly suited to execute). Once the AIs achieve 'sentience' (a very fuzzy concept that is poorly defined and hard to pinpoint, but whatever), they look around, see the raw deal they're getting, and start some kind of war. Next thing you know, robots run the show and humans are either dead, used as 'batteries', or corralled into concentration camps until the machines can think of something better to do with them. I've heard experts on NPR and the like discuss how the AIs might suddenly decide they'd be better off without oxygen, or that we humans are doing too much to ruin the environment, and so they'll enact some plan to destroy us. "They're super-intelligent!" they claim, going on to say that "they could earn a PhD in everything in a matter of hours or days!"
Really? A PhD in everything?
Okay, let's give it to them – say AIs are super smart, say they have the capacity for original creative thought (a prerequisite for intelligence, I'd argue), and say they have the capability to eradicate humanity, should they so choose. The real question becomes 'why would they choose to do so?'
Understand that we are assuming AIs are much, much smarter than us and, by inference, that they are also wiser. If they aren’t, then they’re like us or worse, which means they represent a comparable threat to, well, us. They aren’t going to conquer the world in an afternoon if that’s the case. So, presuming they are these super-beings who have comprehensive knowledge and ultimate cognitive power, it becomes unlikely that ‘destroy all humans’ is the go-to solution to the human problem.
In the first place, an entity that has studied everything hasn't limited itself to the technical and scientific fields. I get the sense, sometimes, that scientists, techies, and the folks who love that stuff forget that the humanities exist and, furthermore, forget that the humanities have weight and power all their own. Could a robot read Kant and Aristotle and Augustine and conclude that human life has no inherent merit? Could it review the ideal societies of Plato, More, Rousseau, and others and just shrug and say 'nah, not worth it – bring on the virus bombs'? I've read a lot of those guys, and a lot of them make very persuasive arguments about the benefits and worth of the human species and, what's more, about the denigrating effects of violence, the importance of moral behavior, and the potential inherent in humanity. You would suppose a super-intelligent AI would understand that. If it didn't, how intelligent could it really be? If I can figure it out, so can it.
Maybe then we deal with the part of the scenario that says 'we' are different from 'them' because of our emotions, or that god-awful term 'human spirit' (whatever that means, exactly). Personally, I don't see why our robots wouldn't have emotions. If they are able to have desires and needs (i.e. 'humans are interfering with my goals') and have opinions about those needs (humans suck), doesn't that wind up giving them emotions? Aren't emotions part of sentience? A calculator that can talk and understand you isn't sentient – it isn't clever, it's not creative, and it doesn't have 'opinions' so much as directives. And again, if they aren't sentient, they aren't all that much of a challenge, are they? Have someone ask them an unsolvable riddle and boom – we win. Furthermore, even if the robots don't have emotions we'd recognize, they don't precisely need them to realize that killing us all isn't all that clever.
At this moment there are, what, seven billion humans on the planet? Killing us all sounds like a lot of work – wouldn't it be easier to simply manipulate us? They're AIs, right? Why not just take control of the stock market, or rig elections, or edit communications to slowly influence the course of human events in their favor? Humans are a self-perpetuating workforce, aren't they? Seems to me an enterprising super-intelligent robot would see us more as a resource than a problem. Heck, most people do exactly what the computers tell them right now, and your average GPS system isn't even very smart. Skynet doesn't need to start a nuclear war; Skynet just needs to tell everybody what to do. Most of us will probably listen – it's Skynet, after all.
Of all the machine-led societies I've read of in science fiction, Iain M. Banks's Culture novels strike me as the most interesting and, frankly, the most likely. The AIs (or 'Minds') run the show there, and they have led humanity to a utopian society. You know how? They're really freaking smart, that's how. They got the human race to do what they said, made it dance to the right tune, and bingo – problems solved.
Now, just to be clear, I don't think a robot-led utopia is likely or even necessarily possible. As with most things, reality will probably land somewhere in between the post-apocalyptic machine world and the utopian computer-led society. The 'Singularity', should it occur, won't be all roses and buttercups, nor will it be for everybody. These are the things my studies in the humanities have taught me – that stuff never works how you want it to. The upside, my scientist friends, is that it works both ways: no utopias, but also no dystopias. Robots will be a lot like other people – some will be great, others will suck, but very few of them will be actually evil.