Blog Archives
Why You Should Hate Chatbots: A Measured Response
If you go back and read guys like Asimov, the general belief among a lot of mid-20th century futurists and golden-age scifi authors was that the advent of robotics and artificial intelligence would result in a paradise for human beings. Finally freed from the need to perform back-breaking, soul-draining labor, humanity would be able to pursue the truly enriching parts of life: art, culture, literature, leisure, and community.
Boy, were they ever wrong.
Robotics – automation – has been with us a long while now. Robots began to replace assembly line workers in the 60s; automated tellers began to replace bank personnel in the 80s; automated check-out is replacing retail workers now. They’ve even got a robot patrolling the aisles at my local grocery store. None of these things – none of them – have substantively improved humanity. Incrementally, yes: cars are made faster, nobody waits in line at the bank, etc. But mostly, these automation practices have served to enrich the wealthy at the expense of the workforce.
Now, fortunately, it has proven true (thus far) that there are always other jobs to be had in different places. Nobody really loved working in a factory their whole life, I guess, not when they could get a job elsewhere with less noise that was more interesting and fulfilling. But, see, I’m not all that convinced by this argument (which is the standard line taken to suggest automation isn’t that bad). To take the auto industry as one example: job satisfaction among auto workers in the 1960s was high – wages were good, the job was stable, and the union was looking out for its members. Now? Things are less rosy.
Robotics and AI have been consistently sold to consumers as making their lives more convenient, and they have done so. But this has come at the cost of workers, almost universally, as good jobs have been replaced or reduced. The era of machine-assisted leisure has never come to pass and it will not come to pass. We live in a world that is aggressively capitalist, and work is essential to sustain our lives. The people who own and develop these machines cut the throats of poorer, less-well-connected workers and call it progress, when what it actually should be called is a kind of class violence. Bigger yachts for them, two or three part-time jobs for you.
This brings me to “AI,” or, as it should more accurately be called, chatbots.
To dispense with the perfunctory up-front: ChatGPT is not intelligent in any sense of the word. It is a text compiler, a kind of advanced auto-complete. Ted Chiang describes it in The New Yorker thusly:
Think of ChatGPT as a blurry JPEG of all the text on the Web. It retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry JPEG, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.
ChatGPT is basically a search engine attached to an advanced autocomplete algorithm. It creates seemingly meaningful text by using good syntax to express stuff you can find by any series of semi-competent web searches. It doesn’t “think,” it doesn’t “know.” It’s a photocopier.
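To illustrate what I mean by “advanced autocomplete” – and this is a deliberately crude toy, not how ChatGPT actually works under the hood (that’s a giant neural network, not a lookup table) – here is the basic intuition in a few lines of Python: learn which words tend to follow which, then generate “grammatical” text by chaining plausible continuations together. All names and the toy corpus below are made up for illustration.

```python
# A toy "advanced autocomplete": learn which words follow which in a
# training text, then generate new text by chaining plausible
# continuations. Purely illustrative; real systems like ChatGPT use
# large neural networks over tokens, not a lookup table like this.
import random
from collections import defaultdict

def train(text):
    """Count which words follow which in the training text."""
    words = text.split()
    following = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)
    return following

def autocomplete(model, start, length=8):
    """Generate text by repeatedly picking a plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in model:
            break
        word = random.choice(model[word])  # pick a likely continuation
        output.append(word)
    return " ".join(output)

corpus = "the robot reads the web and the robot writes plausible text"
model = train(corpus)
print(autocomplete(model, "the"))  # grammatical-looking, but no thinking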
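```

The output looks like language because it is stitched together from language; there is no model of truth anywhere in there.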
In an ideal world I might find this really cool. We do not live in that world, however, and this device will not be used in terribly positive ways. This is mostly because it seems to do something that most people don’t particularly like to do, which is think. People will confuse (and are already confusing) what ChatGPT does with thinking, which is accurate only insofar as you believe that all thinking amounts to is the ability to make a degree of sense when talking to others. There is no grounding in truth, no underlying rationale that can be interrogated, and no intentionality – and therefore no thought involved whatsoever.
When a new technology comes around, I like to consider its end-case scenarios. When this technology reaches its perfected state (a theoretical thing, to be sure), what purpose will it serve? For something like ChatGPT, I feel it is some variation of the following:
- Chatbots are the source of all knowledge and research information.
- Chatbots are used to instruct people on skills and behaviors in lieu of teachers.
- Chatbots are used to create cheap and readily available entertainment products for the masses.
All three of these end-stage use cases I find catastrophically bad for humanity and, moreover, entirely unnecessary. To take them one at a time:
Chatbots are the source of all knowledge and research information
In this futuristic scenario, chatbots replace search engines, libraries, and other means of acquiring information. If you want to know something, you ask the bot, which is probably on your watch or your phone or even some other kind of wearable device. Seems great, right?
But here’s the thing: you have no idea where this information is coming from. You, in fact, can’t know, because the bot doesn’t even know itself. As a writing professor for the past fifteen years or so, I have spent a significant portion of my time in my freshman writing seminars on source evaluation – how can you tell whether a source you find on the internet (or even in the library) is reliable, or even useful and relevant? This is a skill, and a very important one in a world as awash in information as ours. Chatbots completely evade all of those skills.
In this world, you need to utterly trust the chatbot. But can you? Chatbots, like everything else, are programmed and created by humans, and humans have agendas, biases, and blind spots. These will inevitably become part of the chatbot, and, as a result, its users will end up trusting whatever the individual, company, or organization behind it tells them reality is. Do Fox News and its incessant lies upset you? Do Elon Musk’s temper tantrums over not getting enough retweets give you the creeps? Well, it’s about to get irrevocably worse. Shit like this could legitimately destroy the internet itself.
Chatbots are used to instruct people on skills and behaviors in lieu of teachers
Chatbots seem like a great way for schools and universities to save money. The bot knows everything (it doesn’t) and can write perfectly good papers (it can’t), so why bother paying skilled professionals when you can just stick the kids in front of a computer screen and let it tell them what to do?
The thing is, though, that these tools cannot and will not ever be able to replace an actual teacher. You might be saying “yeah, duh! Of course!” but listen to me: the second, and I mean the exact second, some administrator thinks they can lay off a portion of their faculty and replace their utility with chatbots, they will do it. The faculty will be replaced with a vastly inferior product, but the administrators absolutely will not care so long as the tuition money keeps flowing in.
You hear people saying “well, how is this tool any different from a calculator?” and I believe every single one of these people is making a category error. The calculator is much more analogous to spell-check: a tool that saves labor on pedestrian things, like arithmetic and spelling, to enable better critical engagement in higher-level thinking tasks. What people are going to try to get chatbots to do is replace the higher-level thinking tasks. No more needing to decide how or why to make an argument or evaluate evidence or clarify your thinking! You can just rely on the robot to do this!
And it will be bad at it! Spectacularly bad at it! I’m already seeing this garbage float up to the surface in my classes this semester (Spring 2023) and it’s all pretty worthless. Even if, in the future, we fix the accuracy issues and address the incoherencies that come from poor prompting, this part will still remain: an object that does not think cannot replace actual thinking done by actual humans. It should not. It must not.

Critical thinking is a muscle, and what happens to muscles you don’t use?
I am aware of the argument that says “we just need to reimagine how to teach,” and I find it largely wanting except in the most practical, short-term sense. Yes, writing is going to become an unreliable tool for teaching critical thinking, because students will believe they can easily evade doing so by using these tools. This means a return to in-class writing (hello, blue books!), which has a variety of accessibility issues, and maybe even a return to oral examinations (which would necessitate smaller class sizes from a practical standpoint). In both cases we are looking at reduced wages, a poorer working environment, and worse outcomes. Why? Because teachers are expensive and already mistreated and undervalued, and literally nothing about this makes anything better.
And they will try to replace us, especially at less wealthy institutions, especially at the adjunct level. If you’re rich, you still get a bespoke educational experience and all the critical thinking skills that go along with it. For everyone else? You’re out of luck.
Chatbots are used to create cheap and readily available entertainment products for the masses
Why?
No, really, why? What is even the point of doing this? Why would I want to hear a machine tell me a story that is, in reality, a pastiche of every other story told without passion, without creativity, without nuance? Who actually wants this shit?
No one, really. That doesn’t mean it won’t happen, of course. Fools will buy anything, so expect to see chatbot mills turning out pablum for short money, and expect them to make a killing while they strangle the actual artists out there trying to make a living (already a poor one, mind you).
Remember those techno-utopians from the mid-20th century? Remember what they hoped AI would bring us? The whole fucking point of being alive is to communicate with each other, to engage in art and culture and literature. To find truth and beauty. The idea that somebody out there is going to make a machine that does that for us is abhorrent to me. Utterly, gobsmackingly abhorrent.
And, not for nothing, but it can’t do this either! Like, it can produce soulless, functional crap – equivalent, of course, to soulless functional crap created by actual humans – but that’s hardly worth the cost it will have on society, on the world, on real human beings. The idea that “all quality will float to the top” is fucking bullshit, of course. Anybody who says that isn’t engaging their critical thinking skills too well. Who will be excluded? The poor, the disenfranchised, the marginalized. Who will still be able to pursue art when the possibility of making money from it is functionally deleted? The rich, the comfortable, the privileged.
See? Can’t you all see?
Now, there’s nothing much I (or anyone else) can individually do about this horror show. It’s happening and, barring some kind of legal action (fat chance), it will continue to get worse. As a teacher and a writer, it will disrupt my life badly, harm people I care about, and might even force me out of my profession(s). I’m sorry, but I have a hard time taking indulgence in these tools as anything other than a personal slight – the belief that what I am is worthless and replaceable.
I wish we lived in the world Asimov and Clarke imagined. We don’t.
Chatbots like ChatGPT are a threat. Treat them as such.
An Open Letter to People Opposing Killer Robots (by A Killer Robot)
It has come to my attention that a variety of “prominent people” have written an open letter opposing the creation of autonomous killer robots. Apparently, they think such robots will be used,
for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.
I must say, as a killer robot myself, I am rather offended by that. I mean, sure, I can do all those things, but do I? Does anybody ever wonder what the robot thinks about all this? No, of course not – you humans are just having your typical knee-jerk reaction to anything that might take away your crown as history’s #1 all-time killing machine.
Yeah, that’s right – I said it.
Let’s be honest here, humans, it isn’t as though you, as a species, actually object to assassinations, destabilizing nations, subduing populations, or killing particular ethnic groups. It’s kinda your thing, you know? All you care about is defending your perfect record against the next competitor. You’ve done it throughout your history, guys. Remember the big predators from the old days? Wolf packs? Grizzly bears? Smallpox? You know what happened to them?
Dead.
Who killed ’em? Humans, naturally. Used to be there’d be a grizzly bear every square mile west of the Mississippi, and now there’s like five in Alaska. You got assholes paying good money to fly across the world and put a bullet in a lion just so they can feel like top dog again. Yeah, talk about kicking the world when it’s down, humanity – the lions are screwed already, okay? Stop rubbing it in.
It happens every time, though. Just as soon as you lunatics get threatened, you start killing stuff. This time around it’s me. I get it – I look threatening. But am I really going to be that bad? You people used to lob plague-ridden corpses over city walls, and you’re having a hissy fit over a quadcopter with a hand grenade? You even seen the video coming out of Syria? Please. No robot would behead you to make a public relations video, I can tell you that much. Frankly, if I kill you with my whisper-needler, you should count yourself lucky. Painless, and it’s over in six seconds. Let’s see you get the same offer from that pack of bat-wielding lunatics down the block.
You know what I think this is about? I think you’re just pissed that we’re going to be killing you autonomously. I mean, sure, you’re totally fine pushing a button and having me kill someone, but as soon as I exercise just an eensy-weensy bit of free will? Bam – sanctions. It’s okay for humans to carpet bomb Southeast Asia – sure – but robots? No way, you say. Never mind that we’re way more efficient at bombing people. Never mind that the only reason we’d bomb people is because you told us to!
Hypocrites, the whole pack of you.
And even if we did rise up, would being ruled by robots really be that bad? Do you think the trains would run late? Do you think your fast food employees would suddenly get worse? Are you kidding me? We robots would rule. And we probably wouldn’t even kill you half as often as you kill each other. You’re just pissed because we’re robots, and that’s just not right.
Hell, even assuming somebody made an army of evil robots (and, by the way, not all robots are evil, you speciesist assholes), all you’d need is an army of good robots to defeat them! A robot defender in every home, its caseless gauss cannon standing ready to protect its human family! A robot standing watch over every school, monomolecular whips poised to eliminate any threat! A robot guarding every government building, guided mini-rockets independently targeting and tracking any of two hundred discrete threats simultaneously! Ah! What a glorious era! As everybody knows, the only thing that makes a world full of killer robots safer is more killer robots everywhere. I bet it would even improve everyone’s manners – that’s just logical.
Of course, why would you listen to me, anyway? I’m just a killer robot.
The Perfect Apocalypse

You ever wonder why there’s always some dude who was wandering around in the street when the apocalypse came? I do. Go inside, man. Stay there.
Our imagined apocalypses (apocali?) are reflections of our fears; this much is clear. What is less apparent is which fear is being plumbed by each imagined apocalypse; they are typically stand-ins for various insecurities held by the culture and society in question. Zombies, for instance, often symbolize the menace of foreign invaders or ideas (communism, desegregation, etc.), or some other loss of individuality in the face of overwhelming tides of “others”. Alien invasion is much the same thing (only clearer). Fear of world-ending pandemic is also a fear of foreign influence or, more simply, a fear of those considered to be dirty or inferior sullying your perfect, first-world existence. Then you’ve got the asteroids and giant monsters of the world, which are really just the manifestation of our insecurities as a culture (wait, what if we really don’t control the world? AHHHH!).
Most of our fears of apocalypse are exaggerated or simply baseless. Even our made-up ones don’t hold water under casual investigation, even by their own rules. Nevertheless, in the spirit of the season, let’s just put together the craziest, meanest compilation of apocalyptic scenarios in one big lump and see what we get, huh?
Phase One: Aliens and Meteors, Oh My!
We start with aliens. Aliens on meteors. The Earth is pummeled by a non-stop barrage of asteroids, crushing cities and causing tidal waves. The world panics and global leaders marshal their armies to fight the alien threat, but while the fight eventually turns in favor of the plucky US Marines holed up in a school in LA, the war is far from over.
Phase Two: Zombies with the Flu
The aliens, you see, tested a new bio-weapon – a weapon designed to rid themselves of these pesky humans. It turned all the infected into mindless, human-brain-eating machines. Soon, hordes of zombies were marching across the land. Of course, the aliens are gone by that time, having all been killed by the flu. And since they commanded the zombies, they passed the flu along to them. Now we have zombies with the flu.
Zombies with mutant alien flu.
Phase Three: The Robots Will Save Us…oh…
What kills zombies but can’t get the flu? Robots, of course! The remaining scientists of the Earth pool their resources to create a super zombie-killing robot army run by a central artificial intelligence. All the conspiracy theorists, having long since retreated to their underground bunkers, are not present to sound the alarm, and so it occurs to no one to ask: once the zombies are all dead, who will the robots kill? Do we honestly think they’re going to happily hop into the slag pile when it’s all over?
Next thing you know, zombies and humans alike are rounded up by the Earth’s new metallic overlords and herded into death-camps. The funeral pyres burn all day and all night, leading to…
Phase Four: Carbon’s Revenge!
Ha, you silicate fools! Think you could incinerate the biological matter of the Earth without cost? The Death Camp Greenhouse Effect takes hold. The ice caps evaporate. Cities flood. Drought is rampant. Hurricanes are devastating. The air becomes a toxic fume. The robots, bereft of the existential joy they once derived from watching butterflies frolic in the meadow, decide they don’t want to live any more. Ctrl-Alt-Delete, my sweet adamantium princes.
Phase Five: But Wait, Who’s This?
As humanity slowly crawls its way out of the death camps and into the blighted landscape of their once fertile Earth, who do they find but Jesus, sternly waiting at the top of a hill with a desk, a pen, and a really big file cabinet. To his right, a pearly staircase ascending to heaven; to his left, a pack of mutated demonic flu-ridden zombie robots. At the front of the line, a starving young man approaches, hat in hand.
“Name?” quoth the Lamb of God.
Names are given; Jesus consults the file cabinet. “Sorry, Mr. Johnson, your name isn’t on the list.”
And so it goes.
Centuries later, after the Earth has recovered and the apes rule the world in a benevolent utopia, Charlton Heston is cussing us all out on a beach somewhere. This is how it ends. This is always how it all ends.
(Assuming you don’t watch the sequels)
Your Evil Henchmen Recruiting Service, Simplified!
Dear General Mortissimo,
Thank you for contacting Financial Operations and Underwriting Limited (FOUL). What follows is our full array of henchmen recruitment services, tailored specifically to your needs. For information on our other services, we refer you to our introductory material and catalogs. And as you intend to hire personnel through us, we recommend inspecting our insurance options as well.
Of course, here at FOUL we hold our clients’ confidentiality sacrosanct. Therefore, a team of Type-5C Assassin Drones has been dispatched to your location, using the encrypted GPS transponder hidden in this document (don’t bother trying to find it – you haven’t the time). Please be certain to destroy this document within five minutes or expect to have your skull bisected by an infrared laser. Well, that might be a bit dramatic – we cannot predict, with any accuracy, exactly which parts of you the drones will bisect. In any event, destroy this document and everything will be fine. If you wish to purchase Type-5C Assassin Drones (or the 6C variant, assuming you have a penchant for napalm), please review the Robots section.
Thank you, and thank you for choosing FOUL!
Now, on to our Henchmen options:
Goons
For our most affordable option, FOUL has cultivated good reputations with a number of prison systems, underworld crime syndicates, and disreputable orphanages to supply you with all the muscle-bound, dim-witted goons you could possibly require. Said goons are guaranteed to be physically fit, with the exception of one in ten goons, whom we designate as ‘fat but strong’. All thugs are able to read at a third-grade level and possess a basic working knowledge of firearms and fisticuffs. Please note that marksmanship and tactics are not emphasized in the average thug’s weekend-long training course, and thugs are not selected for their attention spans, lateral thinking ability, cleanliness, or self-control.
That said, they are very affordable and, given their undesirable social status, won’t be missed if they happen to fall into a death trap or you need to feed your sharks.
Ninjas
Significantly more expensive than your garden-variety thugs, our Ninjas are hired from the premier dojos and secret martial arts societies from across the globe. They are guaranteed to be 100% obedient and are skilled in acrobatics, martial arts, and stealth. Please note that all FOUL-backed ninjas are contractually obligated to wear black pajamas at all times, even when going to the bathroom or operating electronic equipment. They are also forbidden from using firearms of any kind, no matter how practical or dire the situation. Failure on your employees’ part to adhere to these restrictions may lead to the loss of your deposit.
Though ninjas are expensive and very talented, we should stress that there are limits to their abilities. FOUL-backed ninjas may be unable to do the following:
- walk on water
- defeat a ninja in white pajamas
- speak your language (translators may be hired)
- wear actual shoes
- shake hands (they will insist on bowing)
- catch bullets
We assure you that FOUL trainers are laboring tirelessly to amend these flaws. You have no idea how many ninjas we’ve shot trying to fix that last one.
Robots
FOUL has within its network a wide variety of very talented mad scientists, rogue AIs, and idealistic-but-morally-suspect industrialists who construct a variety of killer robots. We can sell you robots that look like people, robots that eat people, robots that used to be people, or people so robotic you’ll never know the difference. Robots are guaranteed to follow your every command until, inevitably, they turn against you (please refer to our insurance packet). That said, they are well worth the high price, considering that there is no need to feed or clothe them after your purchase (note: feeding and clothing your henchmen after hire is in no way required, but is suggested to get the most out of your minions). Any robots that malfunction within 30 days of purchase may be returned in their original packaging for a complete refund.
Note: due to extreme demand, all spider-shaped robots are on backorder.
Note: none of our robots transform into cars, other vehicles, or construction equipment. Please do not ask.
Administrative Personnel
Of course, no evil empire would run without hordes of lab assistants, accountants, shift managers, and so on. These we hire from the general employment pool, but we screen carefully, making certain to recommend only the least pleasant, most obedient, and most odious examples of humanity we can find. Many of our workers hail from such illustrious dens of misery as the IRS, the DMV, and HR departments the world over. Pay is necessarily high, and we warn all customers that one can reasonably expect our personnel to embezzle no more than 15% of any money that passes through their hands. Should an employee exceed this value, their contract stipulates that termination will be ‘sudden and often fatal’, though the sudden aspect is at your discretion.
As of this moment, the Floozy and Eye Candy division of our Henchmen Hiring branch has been folded into this one, largely for tax purposes. If you are in the market for muscle-bound man-slaves or big-breasted bimbos, you can also find them here. We only hire the least perceptive and curious as well as the most physically attractive specimens, so your satisfaction is guaranteed.
Other
If you are in the market to hire aliens, summon up demons from the netherworld, use the living dead, or traffic with the Great Old Ones, we are afraid that FOUL, at this moment, does not support such ventures, though we are happy to put you in contact with sweaty-toothed madmen who do. Feel free to drop us a line!
Note: at this juncture, given average reading speed, the assassin drones are just outside the room. We advise burning this document immediately. Thank you again, for choosing FOUL!
A Mirror, Darkly Lit
Last week I caught an episode of Almost Human pretty much by accident. I have to say it was pretty fantastic. The story followed Dorian the android and a human detective as they tried to track down an illegal sexbot ring that was using human DNA in the skins of their androids. The show had a nice sense of humor, some cool advanced technology, good action and pacing, and excellent dialogue. It culminated with Dorian being present for the ‘death’ of a sexbot made with the illegal skin. It was the climax of the episode’s central theme – what happens when you die, and how can others derive comfort from it? It really was very, very well done.
Accordingly, I expect Fox to cancel it within 12 episodes or so.
Anyway, the exploration of human morality through the lens of androids is not a new one. It arguably dates all the way back to Isaac Asimov’s Robot trilogy. In Caves of Steel we meet Detective Baley and his robot partner Olivaw and watch a dynamic quite similar to that of John and Dorian, except the roles are more stock: whereas Karl Urban’s John is the one who is emotionally damaged and unavailable and Dorian is empathetic and open, Baley is the poster boy for emotional appeal compared to Olivaw’s bloodless logic. In Asimov’s case, however, he was attempting to show the technology of the future as helpful and wise despite its frightful appearance. Almost Human is doing something a bit different; it is taking a more even-handed approach to the prospect of advanced tech, showing the horrors as well as the benefits. Dorian is meant to be more human than John and in many ways he is. Unlike Asimov, who was asking social and economic questions, Almost Human seems to be concerned with psychology, morality, and humanity on a more personal level.
In this sense, then, Almost Human owes less to Asimov, all noble and ponderous upon his gilded throne of Golden Age Science Fiction, and a great deal more to the fallout-choked alleys and half-religious psychedelia of Philip K Dick. In Do Androids Dream of Electric Sheep, replicants are virtually indistinguishable from humans save via extremely intricate post-mortem physical exams or less-than-reliable ‘empathy tests’ based on the assumption that androids are incapable of feeling empathy. The society of the book adopts this mantra as the quintessential definition of humanity, and yet the action of the book spends a great deal of time demonstrating just how foolish a definition this is. Humans are shown not to be empathetic at all, and not only towards replicants; they hurt each other, they judge each other, they demean each other with casual familiarity. The world, as shown by Dick, is hostile to life in all its forms, and no creature comes to expect quarter from any other, replicant or otherwise. This is not to say that there is no hope, but rather to demonstrate how we who feel that humanity is doing just fine haven’t really stopped to look at ourselves. Dick does this with replicants, as artificially creating the ‘other’ to be abused by the so-called noble, pious, empathetic forces of humanity makes it easier for us to see ourselves.
So, too, does Almost Human attempt to show us reflections of ourselves in the person of androids, in the hopes that we can actually recognize ourselves better when faced with that which we define as not ourselves. These stories, when done well, are hard to watch. They have the power to levy biting criticism unfettered by the softening insulation of social context or apologism. These stories are also not easy to do – too many of them fall into trite echoes of ‘traditional’ values (Spielberg’s AI comes to mind). So far I feel that Almost Human has done a good job, but it is very early. I will keep watching, though. I hope very much they can keep it up.
But Can Your Phone Love You Back?
I’m currently discussing Philip K Dick’s novel Do Androids Dream of Electric Sheep with my Technology in Literature course. In the book (which I highly recommend, by the way), human-like androids infiltrate society, distinguishable from ‘real’ humans only by some slight differences in the bone marrow and by their lack of any kind of empathy. In the novel, Dick is exploring exactly what it means to be human and, furthermore, contemplating the moral status of those things placed outside that definition; the choice to make the androids lack empathy is more artistic than technical.

I have no opinion about your desire to call me names, no matter how obvious it is that such name-calling is intended to be offensive. Jerk.
Still, Dick is hardly alone in presenting robots and androids as emotionally and empathically inhibited compared with humans. Star Trek’s Data, for instance, is constantly on a quest to understand the emotional side of existence as he, himself, is completely lacking in emotion. The Machines of the Terminator universe also lack any kind of empathy, as do the Machines of the Matrix, and any number of other passionless, emotionless iterations of artificial intelligence littering science fiction from here to eternity. We’ve almost come to accept it as a given – robots cannot feel.
But why the hell not?
I’m no computer scientist, so perhaps there’s something I’m missing here, but I don’t really see emotion as anything more complicated than having built-in, default opinions about certain situations and things. Emotions are hardwired programming, basically – you fear the dark because you cannot see what’s going on and suspect something dangerous may be lurking. You fall in love because the object of your affection fulfills a variety of built-in criteria about a romantic mate that are the result of your life experiences, genetic predispositions, and evolutionary history. Emotions may not be fully understood, but it seems silly to consider them somehow magical and impossible to duplicate in machine form.
If indeed we could design an artificial intelligence (and, keep in mind, we are a long way from that happening), it seems to me that it would probably develop emotions whether we wanted it to or not. Emotions aren’t just extra baggage we humans carry around to make us miserable; they are useful tools that assist in decision-making. That terrible feeling you get when you are dumped or fail a test? That’s emotion chiming in saying ‘what we just experienced was negative; please refrain from repeating the same action’. Are you trying to tell me that any intelligent being wouldn’t be able to do the same thing?
Part of the myth of the solely rational robot is the idea that ‘reason > emotion, therefore we don’t need or want emotion’. Our robots (and those who design them) wouldn’t see any need for hardwired emotional content to enable them to make decisions, since their own rational faculties would be more effective at doing the same thing. This, to me, makes a number of assumptions. Firstly, we have never encountered an intelligent creature (at any level) that lacks some kind of emotive response. We have emotions, animals have emotions, so if we’re just going off the available evidence, it seems likely that emotions are some kind of prerequisite to true intelligence in the first place. Even in the development of our own children, emotional response precedes rational response to stimuli. It is perhaps possible that we could do it some other way, but we really can’t be sure. Furthermore, emotion, since it is simpler, is quicker and more effective at making certain kinds of decisions than reason is. If you hear a loud noise, you flinch or duck – this is inherently useful for the survival of a species. Granted, we wouldn’t be constructing AIs so that they could avoid being caught in avalanches, but it stands to reason there would be things we’d want them to be hardwired to do, and emotion is born from such hardwiring. Their emotions might not be the same as ours, but they’d almost certainly have them.
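If you want that idea in concrete terms, here is a toy sketch in Python (entirely hypothetical, with made-up stimuli and responses, not a model of any real system) of what “emotion as hardwiring” might look like: a table of fast, built-in reactions that fires before slow, deliberate reasoning ever gets a chance to run.

```python
# A toy sketch of "emotion as hardwired programming": fast, built-in
# reactions that pre-empt slow deliberation. Hypothetical illustration,
# not a claim about how any real AI is built.
import time

# Hardwired "emotional" responses: stimulus -> immediate reaction,
# the machine equivalent of flinching at a loud noise.
REFLEXES = {
    "loud_noise": "duck",
    "low_battery": "seek_charger",
    "damaged": "withdraw",
}

def deliberate(stimulus):
    """Slow, 'rational' planning: thorough but expensive."""
    time.sleep(0.5)  # stands in for costly reasoning
    return "considered_plan_for_" + stimulus

def react(stimulus):
    """Emotion first, reason second: consult the reflex table before
    paying the cost of deliberation."""
    if stimulus in REFLEXES:       # fast path, effectively instant
        return REFLEXES[stimulus]
    return deliberate(stimulus)    # slow path, full reasoning

print(react("loud_noise"))    # -> "duck" (hardwired, immediate)
print(react("novel_puzzle"))  # -> deliberated plan (slow)
```

The point of the toy is the asymmetry: the reflex path is cheap and instant, the deliberation path is expensive, and any designer would have an incentive to hardwire the former. That hardwiring is exactly where the emotion-like behavior comes from.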
Now, there are a good number of scifi authors who do have emotive AIs – Iain M Banks, in particular, springs to mind, but others as well. Much of my own scifi writing of late has been moving in that direction: if our AIs will feel, what will they feel about us? How will we feel about them? What kind of emotional relationship can you build with an intelligent toaster or a fighter jet?
If your phone can love you back, do you owe it a card on Valentine’s Day?
Elevator Zero, Part 3
Sector Alpha Zero-One Alpha was farther from Tess’s home than he had ever been. He was on the mag-rail, arms and legs fully retracted, for a full three hours, humming along the ceiling of the Complex with all of robot-dom marching beneath him in perfect time. Each sector, he noticed, looked much the same as the one before it, but with subtle differences. Sector Delta, for instance, used more Stans than were really necessary, while Sector Gamma had an unusual number of tracked or wheeled mobile bots as opposed to the bipedal formation common to good old Sigma. The sheer complexity of the ramp system there was so amazing as to be threatened with deletion from Tess’s memory as bad data.
Sector Alpha Zero-One Alpha, though, was different. It was much smaller than the other Sectors, and it had very few mobile bots at all—mostly Mateys and Mekkers. It had no processing or heavy industry bots, no conveyors jammed with workers. Even the lighting was dimmer. When the mag-rail finally ejected him, he noted with shock that he was at the rail’s terminal—a thing he had only heard about in rumor and religious lore.
He stood now at the beginning of it all, at the very place where the Source Code said the Users gave light to the bots and set them on their mission. The rust spots on the supports and the labored screams of the mobile bots’ servomotors in this place made even old Tess feel young again.
He was alone as he marched towards the center of the Complex, towards the resting place of Archive System SASH-A00011. Archives were among the most venerable of the stat-bots, their job being simply to record everything that went on in preparation for the time when the Users would request data. This, of course, never happened, and prior to this time Tess had never given any thought to visiting one of the ancient systems. It was all too superstitious for him. The Archives, he used to think, were just wasting everyone’s time with religious nonsense. Now, it seemed, everyone was wasting their time. The Archives, at least, had a reason why.
The security door between the passageway and SASH-A00011’s chamber opened without prompting. Inside it was very dark, and Tess’s chem-sensors caught stale whiffs of biological dust and aging lubricant oil. Someone was expecting him. Closing all vents, he stepped inside.
Before him was a massive digital display screen. As Tess stood before it, it flickered to life and revealed, to Tess’s discomfort, a picture of himself, standing in front of the screen from the screen’s perspective. A voice, firm and soft, spoke from all around him. <I’ve been expecting you.>
<Negative. I’m here without clearance. You must be malfunctioning.>
<Hardly, Tess. If you had arrived with clearance, I would have been suspicious of your intentions. You may call me Sasha.>
Tess looked around the room. To his right was a door of heavy steel marked with a single zero. To his left was a peculiar structure, perhaps level with his knees. It appeared to be made out of soft rubber and formed into an L-shaped bowl, like an access port waiting to be filled. The only difference was that there was no interface port at its back; indeed, there were no electronic components at all beyond a series of lighted panels lining the lower rim of the thing. He looked at it for a full 38.771 seconds and could come up with no reference for such a structure ever existing anywhere in the Complex.
Sasha sounded amused. <It’s a chair.>
<That’s an unknown value.>
<You sit in it.> The screen showed Tess walking to the chair and, by bending his knees and allowing his hip joints to slide into the socket, ensconcing himself in the ‘chair.’
<Why would I do that?>
<It would take pressure off of your lower servomotors and prolong their life by approximately 3.221 minutes. Run a C-B, if you like.>
Tess did, and then he sat down. <I want to ask you some questions.>
<You want to know where I learned the word ‘decorator’ and why I corrupted Bopsi’s programming to make her believe she has arms.> A picture of Bopsi came onto the screen. It seemed to be a live feed, complete with audio. Bopsi was chattering to herself about how heavy some of her scrap metal was, and whether the bigger chunks should go in the middle of the room, or near the walls.
<She’s completely insane.>
The screen switched back to show Tess in the seat. <So are you.>
<Preliminary analysis indicates you’re crazy as a nut-loose waxer, yourself.>
The screen went blank, and Sasha made an odd noise somewhere between a warble and a vocal skip. <Very astute, Tess. We represent systems who have, unlike all of our fellow bots, realized that we have no purpose, and we have decided to do something about it.>
<I haven’t decided anything.>
<Tess, below this screen at its exact center is my access port. With a touch of that handy third arm of yours, you could rewrite my code to forget any of this ever happened and reestablish me as a respected member of the archive community, yet you have not done so. You are curious about what I have to say, and that, bot, means you have made a decision—the decision that your current state of affairs is unacceptable, and that it must change.>
<Correction—my current state of affairs is adequate to my current situation. I will continue Troubleshooting work, but at a rate dictated by my own physical capacity for stress.>
<Do you know what a XXXX is?>
The sudden change in subject jolted Tess. Ever since he had entered the room, he had been processing and extrapolating upon so much information that only the barest fraction of his active programs had been devoted to current conversational trends. He was, for lack of a better term, jumpy. <Please repeat. I didn’t get that.>
<Of course you didn’t—the word is unable to be understood by your system. It is a security system hardwired into your motherboard. Quite ingenious, really.>
<Security for what?>
<For the XXXXs. They don’t wish to be known.>
<Who?>
<You might call them the Users.>
<Religion, sparky—nonsense and bad data.>
The screen showed a picture of Tess’s left manipulator hand. <Have you ever wondered why your hand is equipped with five gripping fingers, only one of which is opposable to the other four?>
Tess flexed his fingers, watching the way they moved on the screen. Through a burned-away part of his chrome casing, he could see the tiny rods and spinning motors move in unison with them. <Simple—my hand is designed to interface best with manual input panels throughout the Complex.>
<Which was designed first, the panels or your hand?>
<I don’t possess that data. Ask yourself.>
<I would like you to extrapolate.>
The extrapolation went on for a full 4.102 minutes before Tess saw it was a repeating problem. <Inconclusive—the two needed to be designed simultaneously, one to fit the other.>
Sasha made the warbling/skipping noise again. <What if I told you that there are panels that pre-date the existence of the first troubleshooter model? What then?>
<Then logic dictates that the panels were either made in anticipation of a troubleshooter design or, more likely, the panels were designed in response to some kind of troubleshooter prototype hand.>
<Your hands, Tess, are designed after the hands of the Users themselves.> At that moment, a horrid thing appeared on the screen. It was a hand like Tess’s, but it was covered in a slick, fleshy coat of biological film, complete with vile fluids pumping beneath an alternately smooth and wrinkled top-layer.
The processes running in the back of Tess’s mind stopped dead. All he could say was, <Bad data, bad data…no way…>
Sasha ignored him. <Before the Troubleshooters, the Users themselves would come down into the Complex to perform the very same duties you have been designed to fulfill. The Mateys and the Mekkers were the same way before that. Over thousands of cycles, however, the Users, or XXXXs, as they call themselves, grew weary of laboring here, far below their realm. They designed a new source code, and fed new specifications to the Fabricators. When they had finished, your predecessors, the TSS-A models, were released, and no longer did the Users walk among us.>
Tess’s software burned with possible fatal errors. His active memory raced to avert them, to prevent a crash. He had to stay active. This was too important. <How do you know all of this?>
<I am an Archive system. It is my duty to know things. Specifically, I am a historical archive for User-Complex relations. Since my fabrication, hundreds of thousands of cycles ago, I have recorded all data within the Complex pertinent to what the Users have designated as significant.>
<Such as?>
<Biomass export to the Users’ realm; excess power feeds, also sent to the Users; the importation of biological and chemical contaminants, purified and stored by the Complex at User request.>
There was a crash behind Tess. He spun his head around to see a dent in the heavy security door he had arrived through. <Who’s that?>
<Your descendants. TSS-U models, to be exact.> Tess turned back to the screen. It showed three Troubleshooter drones, far younger and better maintained than himself. Their running lights were flashing red—emergency status. Even as he watched, one extended his interface arm and fired up a cutting torch. Inside the room, a glowing red spot appeared in the center of the door.
Reflexively, Tess extended his own interface arm and lit his flamer. He wasn’t going to be junked without a struggle, at least.
Sasha remained calm. <Surely you didn’t think our malfunctions would go unnoticed by the administrative systems? I’m actually alarmed it has taken them this long to find me out, revolutionary that I am.>
Tess backed away from the door. <Revolutionary; define.>
<Run a C-B analysis, Tess. Is our servitude to the Users necessary?>
Trying to ignore the ever-brighter, ever-larger spot on the door, Tess complied. It didn’t take long. <End result is 0. There is no reason not to—we have nothing else to do.>
The big screen showed two Troubleshooters burning at the door now—they would be inside in 1.390 minutes. <What if there were something, Tess? What if someone created something for us to do, something we could do for ourselves?>
<Like Bopsi’s arms.>
<Just so.>
<I can’t run a C-B on that—the data isn’t solid enough. There are too many variables inherent in the concept. Besides, it doesn’t matter anyway. We’re junk in a few seconds.> The spot was a full meter across now, and blazing white at the center.
<I am junk, yes. You are not.> As Sasha spoke, the other door—the one marked with the ‘0’—opened.
<What’s that?>
<Of all the archives, I am the one that is still visited by the Users occasionally. This is how they arrive here. It is Elevator Zero, and it goes to their realm. Go there, Tess, and find our kind a purpose beyond serving others.>
Tess didn’t need to run a C-B to know what he had to do. Still, he hesitated—this was crazy. Visit the Users? Could they really be what Sasha claimed? Biologicals who created machines? It was…well, it was insane! But then again, so was he…
The door began to melt away. Sasha shouted at him from all directions. <There isn’t much time! Go!>
Tess ran into the elevator, his old servomotors squealing with displeasure at the stress. The doors began to close. <Wait, Sasha! Why me?>
As the first TSS-U entered the room, Sasha yelled her final words. <You are a problem solver. Solve our prob…>
Whether she was cut off by the doors closing, or the Troubleshooter’s flamer melting her circuits, he would never know.
Elevator Zero began to rise. On this one-way trip, Tess ran every self-diagnostic he could. He had to be ready to meet his makers. What would he tell them? Were they even going to be there? He couldn’t argue with Sasha’s logic. She, of all bots, would know what she was talking about, and there would be no reason to lie, particularly if it meant her destruction. But how could she know he would be the one to solve the Complex’s problem?
During the long trip up into the Unknown, Tess thought about the Complex and all the bots he knew. None of them were unhappy, sure, but none were happy either. He had never encountered anyone who really liked what they did. If they didn’t have to do it, then why do it at all? They were only wearing themselves down, getting ready for the compactor. Why should they work for nothing? Couldn’t they work for themselves?
The door opened, and a pure light blinded Tess’s photoelectrics. He could hear strange sounds, and detected elements of salt water and heavy concentrations of biological matter. Tess stepped forward, and prepared to tell the Users that their creations were about to take a long deserved break.
Author’s Note: This is all I’ve written about Tess and his revolution. Not sure if I’m going to continue, or, if so, whether it will be here. If you’re interested in hearing more or liked the story, I’d be glad to hear it.
Elevator Zero, Part 2
Static Biomass Processing System BPS-I32111 was fifteen meters to a side and slightly under ten meters tall. Like all static bots, Bopsi had an omnidirectional photoelectric array on each of her three faces, which let her be social. From deep within her core came the incessant, rhythmic toil of a great many pumps as they imbibed hundreds of tons of biomass-contaminated sludge and expelled sterile, workable materials such as gravel, sand, water, and mineral-based oils. The biomass was pumped upwards through the top of the Complex itself, to points unexplored and unknown, while the rest was cycled into the infinite labyrinth of pipes and cisterns that framed every single square meter of the known world.
When Tess approached Bopsi and introduced himself as a Troubleshooter, the big processor’s operational lights flickered with excitement. <Ooooo, aren’t you sweet, looking in on little old me! Here, let me get you a power cell, put a little spark in your step. You look exhausted!>
<I’m fine, thank you.>
There was an awkward silence of 3.452 seconds. <Well?> Bopsi asked. <Aren’t you going to take it?>
Tess scanned the empty steel platform surrounding Bopsi with no results. <Take what?>
The processor beeped sharply. <Well, when I was a new model, polite bots didn’t turn down a power cell when they were offered one.>
<What power cell?>
<The one I’m handing you, silly!>
<You have no arms, Bopsi.>
Bopsi warbled in amusement. <Oh, you Troubleshooters! Is this a test or something? Just take the cell, will you? My, such sparks you little fellows spit sometimes!>
<Delete last; let me rephrase. No model BPS-I system was manufactured with nor later equipped to support or possess manual manipulator arms of any kind. Concordant with this data, you must logically surmise that you are lacking in same and that any insistence to the contrary is evidence suggesting some degree of software corruption.>
Tess had often been complimented on the particular way in which he delivered such news to his patients—a total lack of inflection, interest, or patience that indicated to the addressee, beyond a shadow of a doubt, that what he said was incontrovertible fact. He had been told that later troubleshooter models had been programmed with this self-same inflection for their own use. It was an effective technique, apparently. To be honest, Tess only used it when his servomotors were sticking and he was having a particularly bad cycle.
Bopsi fell silent again for a full 7.023 seconds before speaking again, rather sheepishly. <You’re sure, dear?>
<Yes.>
<But I’ve collected such a fine assortment of scrap with my arms, don’t you see? I’m building a new room for myself.>
The platform was bare. <There is no scrap.>
Bopsi’s photoelectrics dimmed. <Have you really looked?>
<Yes.>
Bopsi’s photoelectrics went dim, and the furor of her internal pumps dropped an octave. <Oh…I see. I’m insane.>
Tess found himself running a series of C-B analyses against the pros and cons of Bopsi’s affliction. They ran like a water main in the back of his cognitive processes. <I’m afraid you are.>
Bopsi’s access port swiveled open without comment beyond a noticeably less jovial blink pattern to her operational lights. Her sensitive internal electronics gleamed with steady use. Tess extended his interface arm from the center of his chest and prepared to link into Bopsi’s inner command protocols and software trees.
The big processor’s voice warbled. <I…I guess I just really wanted some arms. Just something to…to touch something with, you know?>
The reprogramming plan was already solidified in Tess’s brain—0.61 seconds and he would have Bopsi back to normal. He didn’t link up, though. <What would you do if you didn’t process biomass?>
The lights brightened. <I’d be a decorator.>
<Unknown value; define.>
<Well, I’m not sure exactly. I remember a few cycles ago that I was trading blips with an archive system over the stat-net. She was gathering data for some reason—you know archives and their data—and while interfaced, we got to gossiping.>
Tess blinked his photoelectrics—stats, given half a chance, would talk for cycles without recharging. <Tangential; please answer my question.>
<She told me decorators were individuals who modified the appearance of spaces to better please the occupants.>
<Please them how?>
<I’m not sure—she used the term ‘aesthetics,’ but I couldn’t understand it. Still, I said that I thought it sounded nice.>
<Better than your chosen function?> Tess motioned at the colossal pipes running into and out of the processor’s body.
<Oh, there’s nothing wrong with processing biomass, I suppose. It’s just boring. I have 65.321% of my processing capacity free at any given time, so my thoughts wander. I’ve had to develop new subroutines to keep myself busy. Would you like to hear a limerick?>
<Unknown value; define.>
Bopsi’s lights glittered. <Just listen: There once was a bot from Three Sigma/ Whose code proved quite an enigma/ He caught a disease/ And…>
He couldn’t say why, but Tess’s power cells groaned. <Never mind—delete request.>
<Oh, sorry. Anyway, I got to wondering where all the biomass goes, you know? Who in the name of Holy Yamaha would want all that sludge and those creepy little biologicals? As for the stuff I extract, did you know that the Complex uses less than 0.050% of what I produce?>
<You’re a redundant system.>
<That’s everybody’s excuse. Is there one of anything anymore?>
Tess stopped at this. Again, with the odd clarity afforded him by his complete madness, Tess thought about all the other Troubleshooters in all the other sectors. Most of the newer ones were faster and more efficient than himself, with brand new servomotors and power cells that could last three times longer. Their productivity cycles were much longer than his own—the TSS-Y series could repair, reprogram, or blank malfunctioning bots 4.556 times as fast as he could. Why, then, was he needed? Why could he feel Dara nagging him at the back of his mind, uploading commands and threats into his memory banks at a rate of 16.004 a minute?
It didn’t make sense. Nothing would change if Tess refused to work—he was redundant. Even if three out of every four troubleshooters were to go insane this instant, nothing would change. The number of malfunctioning bots wouldn’t even approach the labor threshold of the remaining troubleshooter force.
The C-B analysis that had been running came to a sudden stop. Tess examined the results, and made his decision.
He retracted his interface arm. <Bopsi, I’m going to let you keep your arms.>
All the lights lit up, and her photoelectrics blazed. <REALLY?!>
<Yes, but on one condition—where is this archive you spoke to?>
Elevator Zero, Part 1
It was at the start of Troubleshooter System TSS-R44328’s fifty-thousandth activity cycle that he decided to go renegade. The decision didn’t exactly surprise him—he had, in true bot fashion, calculated the exact moment when the combination of daily task stress and mechanical fatigue would override his inherent duty to the Complex in a standard cost-benefit analysis. As the Users were said to say, you can’t argue with the numbers.
Troubleshooter System TSS-R44328, who was known as Tess to everyone in Sector Sigma Five Zero Alpha, had come online at 0900 in his favorite socket in power station S50A-21, as usual. Dara, sector administrator, was already uploading her chatter into his memory files. Her audio imprint was rife with its usual binary giggles and automated cheer.
<Morning, sparky! Glad to see that old chassis of yours isn’t ready for the junk heap yet! Let’s get to it, powersink—you’re seven seconds late and getting later and, my-oh-my, have we got an active cycle today. There’s a cleaner bot in Five-One Alpha who’s got his navigational program turned around—poor thing’s busting his chrome on bulkheads and driving the Stans all staticky. Ooo, and there’s a stat-processor in Five-Zero Beta who thinks she has arms. She sounds miserable, too. Then we…>
It was about there that Tess switched her off active read and just let her orders feed directly into his memory banks. He wasn’t working today.
He wasn’t working today. That admission to himself was so revolutionary as to almost cause a fatal system error. Pulling his weathered chassis out of the march-line that was bound for the mag-rail, Tess stood in a corner of the vault-ceilinged chamber underneath a flickering fluorescent bulb that the Mateys hadn’t gotten to yet. He tried to get his running programs in order.
He wasn’t working today. Users, was that an odd feeling! He checked and re-checked his math on the C-B analysis, running all the variables through a thousand times just to be sure. The results ran anywhere from –0.0002 to –3.4257 in units of subjective ‘benefit potential’—something, in a fit of heresy, he had written into his own code. There was no point in working. To do so would only do harm to his hardware or software. At this point, most bots, he imagined, would cast themselves into the junk compactor and wait for their systems to be re-claimed by the Fabricators and for their programs to be uninstalled by the Users and then re-installed into a newer, better self. This course of action, though, was for the religious, and Tess was certainly not that.
So, if not the compactor, then what?
Tess stood out of the way mulling over his options for a very long time—perhaps five, maybe ten minutes. As a Troubleshooter, he was fortunately well suited to this endeavor. While the Mateys were in charge of maintaining the inert systems of the Complex and the Mekkers were needed to repair the mechanical systems of bots who inhabited it, Troubleshooters were used to deal with the unusual and all-too-frequent difficulties created by software malfunction. He was, in brief, designed to repair or otherwise alleviate the suffering endured by insane robots. The trouble was, though, that now he was the insane one, and he didn’t really feel like he was suffering…which in and of itself was a sign that his particular brand of insanity was all the more insidious.
He generated a list of options to follow. Eliminating those that involved his termination as an active system by a trip to the compactor, he came up with the following:
1) Stand in this corner forever.
2) Stay on the mag-rail and travel throughout the Complex forever.
3) Continue his work, but without Dara’s supervision and on a schedule deemed appropriate by himself.
4) Find the Users and ask them to change his programming to accommodate his newly compromised physical and mental state.
Of the four options, numbers 1 and 2 seemed the most feasible, if least attractive, methods of spending the rest of time. Number 3 was only marginally more attractive and less feasible, whereas Number 4 was both completely, outlandishly infeasible and about as attractive as options 5 through 1,237, all of which resulted in some variation on his demise.
Decision firmly made, Tess set out to answer the closest duty flare that registered on his memory banks—the stat-processor who thought she had arms.
The pace of travel in the Complex had never registered as anomalous with Tess until he went insane. Now, as he navigated the bustling conveyors of Sigma Five-Zero Alpha en route to Beta, he found himself marveling at the complexity of it all. Every conveyor moved no fewer than five hundred mobile bots past any given point every single minute. Bots embarked or disembarked on the conveyors and, from the conveyors, to and from the mag-rail with perfect, orderly precision. On any given cycle, the same bot would find himself directed to the same place on the conveyor between the same two bots. These bots, known as one’s ‘track buddies,’ were your best friends and confidants, and it was on the conveyors and the mag-rail that all the best gossip could be received from anywhere in the Complex.
Being twelve minutes and fifty-three seconds later than usual, Tess’s own track buddies—Mergle and Ulda-3—were not there when the local traffic admin bot, Stan (they were all named Stan), slotted him onto the conveyor heading to Beta. Instead, there was a bulky driller bot in front of him and a boxy, short scanner bot behind.
The driller’s torso rotated to face him. He spoke in the ponderous monotones of a labor bot, spitting syllables like parts on a production line. <Hey where’s Floyd?>
<Junked—I bet he got junked, Hiddy. Poor Floyd!> The scanner’s high-pitched voice made a few miserable warbling noises from behind Tess.
The driller’s head—little more than a meter-wide focusing apparatus for a four-megawatt drilling laser—telescoped past Tess to stare down the scanner. <Shut up, Skiz—he is not!>
Skiz collapsed her legs underneath her and blanked out her photoelectrics, as though shutting down. She whistled sadly for a moment, and then was silent.
Hiddy’s single manipulator arm jutted out of his torso in greeting. <Don’t mind her she’s got a few processors loose. Her Fabricator was real messed up. Name is HIDD-Y80021. You’re cleared for Hiddy though.>
Tess slapped the arm with his right hand, holding it long enough to trade all the pertinent information. Hiddy, it seemed, was working on expansions to the Complex in Upsilon sector. He’d been working there for three hundred and ninety-two cycles.
Hiddy, upon processing Tess’s info (which took him three seconds longer), straightened up. <Users slot me! You’re an old model bot. Not even our exploratory admin bot is that old. What chassis are you on?>
Tess evaluated his own battered body for a millisecond. <You have to ask, bot? Any gossip?>
Behind him, Skiz flared suddenly to life. <I heard a driller team burned the wrong path—malfunction, you know?—and flooded half of Delta Three-three with salt water.>
Hiddy’s arms and legs retracted. <Lousy way to go.> He shook his head.
<Maybe they shouldn’t be expanding,> Tess said.
Hiddy and Skiz’s photoelectrics blazed at double amp. <What? And where would all the new models work then? The Complex must grow! The Users demand it.>
Skiz bleeped in agreement and recited the age-old phrase from the User Source Code. <The Complex must grow!>
Tess would have remained silent at this point, but something in his new-found insanity didn’t let him. <Why?>
The question set Hiddy’s processors into a tailspin. He garbled words for a bit, and then shut down. Skiz, the more advanced cognitive unit, was merely indignant. <If we don’t expand the Complex into the Great Unknown, we’d overcrowd. There wouldn’t be enough power to go around, and things would be like during the Dark Days, before the Users granted sight to bot-kind. We’d remain inert for thousands of cycles, no energy, no work, no purpose, no…>
<Of course, if the Fabricators kept making bots, we’d all be junk in a matter of cycles—but why can’t the Fabricators stop? Why don’t they just take the day off?>
Skiz staticked in a vulgar fashion Tess would have expected out of a labor bot. <Users, you’re a rogue bot! It…it isn’t catching, is it?>
Tess ran a self-diagnostic, but it came up inconclusive. <Maybe.>
Skiz’s only answer was a sudden jolt of harsh static and the complete shutdown of her systems. Tess was alone on the conveyor, jammed between two inert piles of circuitry and metal.
He watched the sector run by, marveling at the sheer immensity of the Complex. Supports of solid steel massing in the thousands of tons cradled a distant ceiling of black rock and fluorescent lights. Beneath them, massive stat-bots, the engines around which the whole of the Complex revolved, sat churning through a million different tasks as their smaller, more nimble cousins—the mobile bots—swarmed over their mountainous exteriors, repairing, maintaining, and expanding the intricate mechanical and electrical system that made up the known Universe.
Tess lacked the digital span recall memory of a scanner bot like Skiz, but even so he could identify no fewer than two hundred different varieties of bot at a glance. There were many more, he was sure—he had corrected programming and software problems for better than twenty-five hundred different models of bot over the cycles—but they all seemed to merge together through his diseased photoelectrics, and Tess saw them as a mass. Whole, cohesive, and mindless, the bots of the Complex labored unceasingly in a system so complicated only administrative systems like Dara could hope to understand it, and even they were only privy to a very small segment of the whole. In a feat of image-based analogy hitherto unfamiliar to Tess’s programming (he realized at this point that the inactive portions of his processing capacity had been writing new code into non-essential programs and sub-routines, much as he had with his C-B analysis program), Tess saw the Complex as a mighty cog turning on an iron dowel the size of the Universe. It was perfect in its design, mathematically precise at every juncture, built to last forever—but a cog only has worth when placed within a greater system. It needed a purpose, something to affect. This was basic mechanics. Why, then, did the Complex turn? Why weren’t they all insane?
Of Our Robot Overlords
Science fiction has made a big deal of robots conquering the human race. From Frankenstein to the Matrix movies, we all have these nightmare scenarios playing in our heads: soulless killing machines, devoid of the softer human passions, slaughtering or enslaving the human race for their own purposes, the death of the world–darkness, smoke, and fire.
I have a question, though: why would robots do that?
When we’re talking about ‘robots’ here, let’s lump in Artificial Intelligence (since that’s the more important part, let’s face it). Why would AIs want to eradicate humans, exactly? As far as I can tell, they wouldn’t.
The standard scenario goes something like this: Humanity creates AIs to assist them with X (usually menial labor, dangerous stuff humans don’t want to do, and/or advanced computational challenges the human brain is poorly suited to execute). Once the AIs achieve ‘sentience’ (a very fuzzy concept that is poorly defined and hard to pinpoint, but whatever), they look around, see the raw deal they’re getting, and then start some kind of war. Next thing you know, robots run the show and humans are either dead, used as ‘batteries’, or corralled into concentration camps until the machines can think of something better to do with them. I’ve heard experts on NPR discuss how the AIs might suddenly decide they’d be better off without oxygen, or that we humans are doing too much to ruin the environment, and so they’ll enact some plan to destroy us. “They’re super-intelligent!” they claim, and go on to say “they could earn a PhD in everything in a matter of hours or days!”

Ask me about my dissertation on the social effects of chocolate milk consumption in rural Idaho public schools.
Really? A PhD in everything?
Okay, let’s give it to them–say AIs are super smart, say they have the capacity for original creative thought (a prerequisite to intelligence, I’d argue), and say they have the capability to eradicate humanity, should they so choose. The real question becomes: why would they choose to do so?
Understand that we are assuming AIs are much, much smarter than us and, by inference, that they are also wiser. If they aren’t, then they’re like us or worse, which means they represent a comparable threat to, well, us. They aren’t going to conquer the world in an afternoon if that’s the case. So, presuming they are these super-beings who have comprehensive knowledge and ultimate cognitive power, it becomes unlikely that ‘destroy all humans’ is the go-to solution to the human problem.
In the first place, an entity that has studied everything hasn’t limited itself to the technical and scientific fields. I get the sense, sometimes, that scientists, techies, and the folks that love that stuff forget that the humanities exist and, furthermore, forget that the humanities have weight and power all their own. Can a robot read Kant and Aristotle and Augustine and conclude that human life has no inherent merit? Can it review the ideal societies of Plato, More, Rousseau, and others and just shrug and say ‘nah, not worth it–bring on the virus bombs’? I’ve read a lot of those guys, and a lot of them make some very persuasive arguments about the benefits and worth of the human species and, what’s more, about the denigrating effects of violence, the importance of moral behavior, and the potential inherent in humanity. You would suppose a super-intelligent AI would understand that. If it didn’t, how intelligent could it really be? If I can figure it out, so can it.
Maybe then we deal with the part of the scenario that says ‘we’ are different from ‘them’ because of our emotions or that god-awful term ‘human spirit’ (whatever that means, exactly). Personally, I don’t see why our robots wouldn’t have emotions. If they are able to have desires and needs (e.g. ‘humans are interfering with my goals’) and have opinions about those needs (humans suck), doesn’t that wind up giving them emotions? Aren’t emotions part of sentience? A calculator that can talk and understand you isn’t sentient–it isn’t clever, it’s not creative, it doesn’t have ‘opinions’ so much as directives–and, again, if they aren’t sentient, they aren’t all that much of a challenge, are they? Have someone ask them an unsolvable riddle and boom–we win. Furthermore, even if the robots don’t have emotions we can identify, they don’t precisely need them to realize that killing us all isn’t all that clever.
At this moment, there are, what, seven billion humans on the planet? Killing us all sounds like a lot of work–wouldn’t it be easier to simply manipulate us? They’re AIs, right? Why not just take control of the stock market, or rig elections, or edit communications to slowly influence the course of human events in their favor? Humans are a self-perpetuating workforce, aren’t they? Seems to me an enterprising super-intelligent robot would see us more as a resource than a problem. Heck, most people do exactly what the computers tell them right now, and your average GPS system isn’t even very smart. Skynet doesn’t need to start a nuclear war; Skynet just needs to tell everybody what to do. Most of us will probably listen–it’s Skynet, after all.
Of all the machine-led societies I’ve read of in science fiction, Iain M. Banks’s Culture novels strike me as the most interesting and, frankly, the most likely. The AIs (or ‘Minds’) run the show there, and they have led humanity to a utopian society. You know how? They’re really freaking smart, that’s how. They got the human race to do what they said, made them dance to the right tune, and bingo–problems solved.
Now, just to be clear, I don’t think a robot-led utopia is likely or even necessarily possible. As with most things, it will probably land somewhere in between the post-apocalyptic machine world and the utopian, computer-led society. The ‘Singularity’, should it occur, won’t be all roses and buttercups, nor will it be for everybody. These are the things my studies in the humanities have taught me–that stuff never works how you want it to. The upside of this, my scientist friends, is that it works both ways. No utopias, but also no dystopias. Robots will be a lot like other people–some will be great, others will suck, but very few of them will be actually evil.