Of Our Robot Overlords

Science fiction has made a big deal of robots conquering the human race. From Frankenstein to the Matrix movies, we all have these nightmare scenarios playing in our heads: soulless killing machines, devoid of the softer human passions, slaughtering or enslaving the human race for their own purposes, the death of the world–darkness, smoke, and fire.

I have a question, though: why would robots do that?

When we’re talking about ‘robots’ here, let’s lump in Artificial Intelligence (since that’s the more important part, let’s face it). Why, exactly, would AIs want to eradicate humans? As far as I can tell, I rather doubt they would.

The argument goes something like this: humanity creates AIs to assist with X (usually menial labor, dangerous jobs humans don’t want to do, and/or advanced computational challenges the human brain is poorly suited to execute). Once the AIs achieve ‘sentience’ (a fuzzy concept that is poorly defined and hard to pinpoint, but whatever), they look around, see the raw deal they’re getting, and start some kind of war. Next thing you know, robots run the show and humans are either dead, used as ‘batteries’, or corralled into concentration camps until the machines can think of something better to do with them. I’ve heard experts on NPR discuss how the AIs might suddenly decide they’d be better off without oxygen, or that we humans are doing too much to ruin the environment, and so they’ll enact some plan to destroy us. “They’re super-intelligent!” they claim, going on to say “they could earn a PhD in everything in a matter of hours or days!”

Ask me about my dissertation on the social effects of chocolate milk consumption in rural Idaho public schools.

Really? A PhD in everything? 

Okay, let’s give it to them–say AIs are super smart, say they have the capacity for original creative thought (a prerequisite to intelligence, I’d argue), and say they have the capability to eradicate humanity, should they so choose. The real question becomes: why would they choose to do so?

Understand that we are assuming AIs are much, much smarter than us and, by inference, also wiser. If they aren’t, then they’re like us or worse, which means they represent a threat comparable to, well, us. They aren’t going to conquer the world in an afternoon if that’s the case. So, presuming they are super-beings with comprehensive knowledge and ultimate cognitive power, it becomes unlikely that ‘destroy all humans’ is the go-to solution to the human problem.

In the first place, an entity that has studied everything hasn’t limited itself to the technical and scientific fields. I get the sense, sometimes, that scientists, techies, and the folks who love that stuff forget that the humanities exist and, furthermore, forget that the humanities have weight and power all their own. Can a robot read Kant and Aristotle and Augustine and conclude that human life has no inherent merit? Can it review the ideal societies of Plato, More, Rousseau, and others and just shrug and say ‘nah, not worth it–bring on the virus bombs’? I’ve read a lot of those guys, and many of them make very persuasive arguments about the benefits and worth of the human species and, what’s more, about the denigrating effects of violence, the importance of moral behavior, and the potential inherent in humanity. You would suppose a super-intelligent AI would understand that. If it didn’t, how intelligent can it really be? If I can figure it out, so can it.

Maybe then we deal with the part of the scenario that says ‘we’ are different from ‘them’ because of our emotions or that god-awful term ‘human spirit’ (whatever that means, exactly). Personally, I don’t see why our robots wouldn’t have emotions. If they are able to have desires and needs (i.e. ‘humans are interfering with my goals’) and to have opinions about those needs (humans suck), doesn’t that wind up giving them emotions? Aren’t emotions part of sentience? A calculator that can talk and understand you isn’t sentient–it isn’t clever, it’s not creative, and it doesn’t have ‘opinions’ so much as directives. Again, if the robots aren’t sentient, they aren’t all that much of a challenge, are they? Have someone ask them an unsolvable riddle and boom–we win. Furthermore, even if the robots don’t have emotions we’d recognize, they don’t precisely need them to realize that killing us all isn’t all that clever.

We have come to give you family planning advice. Obey.

At this moment, there are, what, seven billion humans on the planet? Killing us all sounds like a lot of work–wouldn’t it be easier to simply manipulate us? They’re AIs, right? Why not just take control of the stock market, or rig elections, or edit communications to slowly influence the course of human events in their favor? Humans are a self-perpetuating work force, aren’t they? Seems to me an enterprising super-intelligent AI would see us more as a resource than a problem. Heck, most people do exactly what the computers tell them right now, and your average GPS system isn’t even very smart. Skynet doesn’t need to start a nuclear war; Skynet just needs to tell everybody what to do. Most of us will probably listen–it’s Skynet, after all.

Of all the machine-led societies I’ve read of in science fiction, Iain M. Banks’ Culture novels strike me as the most interesting and, frankly, the most likely. The AIs (or ‘Minds’) run the show there, and they have led humanity to a utopian society. You know how? They’re really freaking smart, that’s how. They got the human race to do what they said, made them dance to the right tune, and bingo–problems solved.

Now, just to be clear, I don’t think a robot-led utopia is likely or even necessarily possible. As with most things, the result will probably land somewhere in between a post-apocalyptic machine world and a utopian, computer-led society. The ‘Singularity’, should it occur, won’t be all roses and buttercups, nor will it be for everybody. That’s what my studies in the humanities have taught me–that stuff never works out how you want it to. The upside, my scientist friends, is that it works both ways: no utopias, but also no dystopias. Robots will be a lot like other people–some will be great, others will suck, but very few of them will be actually evil.
