Blog Archives
Why You Should Hate Chatbots: A Measured Response
If you go back and read guys like Asimov, the general belief among a lot of mid-20th century futurists and golden-age scifi authors was that the advent of robotics and artificial intelligence would result in a paradise for human beings. Finally freed from the need to perform back-breaking, soul-draining labor, humanity would be able to pursue the truly enriching parts of life: art, culture, literature, leisure, and community.
Boy, were they ever wrong.
Robotics – automation – has been with us a long while now. Robots began to replace assembly line workers in the 60s; automated tellers began to replace bank personnel in the 80s; automated check-out is replacing retail workers now. They’ve even got a robot patrolling the aisles at my local grocery store. None of these things – none of them – have substantively improved humanity. Incrementally, yes: cars are made faster, nobody waits in line at the bank, etc. But mostly, these automation practices have served to enrich the wealthy at the expense of the workforce.
Now, fortunately, it has proven true (thus far) that there are always other jobs to be had in different places. Nobody really loved working in a factory their whole life, I guess, not when they could get a job elsewhere with less noise that was more interesting and fulfilling. But, see, I’m not all that convinced by this argument (which is the standard line taken to suggest automation isn’t that bad). To take the auto industry as one example: job satisfaction among auto workers in the 1960s was high – wages were good, the job was stable, and the union was looking out for its members. Now? Things are less rosy.
Robotics and AI have been consistently sold to consumers as making their lives more convenient, and they have done so. But this has come at the cost of workers, almost universally, as good jobs have been replaced or reduced. The era of machine-assisted leisure has never come to pass and it will not come to pass. We live in a world that is aggressively capitalist, and work is essential to sustain our lives. The people who own and develop these machines cut the throats of poorer, less-well-connected workers and call it progress, when it should actually be called a kind of class violence. Bigger yachts for them, two or three part-time jobs for you.
This brings me to “AI,” or, as it should more accurately be called, chatbots.
To dispense with the perfunctory up-front: ChatGPT is not intelligent by any definition of the word. It is a text compiler, a kind of advanced auto-complete. Ted Chiang describes it in The New Yorker thusly:
Think of ChatGPT as a blurry JPEG of all the text on the Web. It retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry JPEG, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.
ChatGPT is basically a search engine attached to an advanced autocomplete algorithm. It creates seemingly meaningful text by using good syntax to express stuff you can find by any series of semi-competent web searches. It doesn’t “think,” it doesn’t “know.” It’s a photocopier.
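If you want to see the “advanced autocomplete” idea in miniature, here’s a toy sketch in Python (my own illustration, to be clear – ChatGPT is a neural network trained on billions of documents, not a little word-frequency table, but the core task is the same: given the words so far, emit a plausible next word):

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram "autocomplete." A real large language
# model is vastly bigger and more sophisticated, but the core task is the
# same: given the words so far, predict a plausible next word.
corpus = (
    "the robot is a machine and the machine is a tool "
    "and the tool is a product of human labor"
).split()

# Record which words follow which: next_words["the"] -> ["robot", "machine", ...]
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def autocomplete(start: str, length: int = 8) -> str:
    """Generate plausible-looking text with no understanding whatsoever."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # next word sampled by observed frequency
        output.append(word)
    return " ".join(output)

print(autocomplete("the"))  # e.g. "the machine is a tool and the tool is"
```

Notice there is no step anywhere in there that checks whether the output is true – there is nowhere such a check could even go. That’s the photocopier point.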
In an ideal world I might find this really cool. We do not live in that world, however, and this device will not be used in terribly positive ways. This is mostly because it seems to do something that most people don’t particularly like to do, which is think. People will confuse (and are already confusing) what ChatGPT does with thinking, which is only accurate insofar as you believe thinking is nothing more than the ability to make a degree of sense when talking to others. There is no grounding in truth, no underlying rationale that can be interrogated, no intentionality, and therefore no thought involved whatsoever.
When a new technology comes around, I like to consider its end-case scenarios. When the technology reaches its perfected state (a theoretical thing, to be sure), what purpose will it serve? For something like ChatGPT, I feel like it’s some variation of the following:
- Chatbots are the source of all knowledge and research information.
- Chatbots are used to instruct people on skills and behaviors in lieu of teachers.
- Chatbots are used to create cheap and readily available entertainment products for the masses.
All three of these end-stage use cases I find catastrophically bad for humanity and, moreover, entirely unnecessary. To take them one at a time:
Chatbots are the source of all knowledge and research information
In this futuristic scenario, chatbots replace search engines, libraries, and other means of acquiring information. If you want to know something, you ask the bot, which is probably on your watch or your phone or even a wearable device of some other kind. Seems great, right?
But here’s the thing: you have no idea where this information is coming from. You, in fact, can’t know, because the bot doesn’t even know itself. As a writing professor for the past fifteen years or so, I have spent a significant portion of my time in my freshman writing seminars on source evaluation – how can you tell whether or not a source you find on the internet (or even in the library) is reliable or even useful and relevant? This is a skill, and a very important one in a world as awash in information as ours. Chatbots completely evade all of those skills.
In this world, you need to utterly trust the chatbot. But can you? Chatbots, like everything else, are programmed and created by humans, and humans have agendas, biases, and blind spots. These will inevitably become part of the chatbot and, as a result, its users will wind up trusting whatever the individual, company, or organization behind it tells them reality is. Do Fox News and its incessant lies upset you? Do Elon Musk’s temper tantrums over not getting enough retweets give you the creeps? Well, it’s about to get irrevocably worse. Shit like this could legitimately destroy the internet itself.
Chatbots are used to instruct people on skills and behaviors in lieu of teachers
Chatbots seem like a great way to save money for schools and universities. A chatbot knows everything (it doesn’t) and can write perfectly good papers (it can’t), so why bother paying skilled professionals when you can just stick the kids in front of a computer screen and get it to tell them what to do?
The thing is, though, that these tools cannot and will not ever be able to replace an actual teacher. You might be saying “yeah, duh! Of course!” but listen to me: the second, and I mean the exact second, some administrator thinks they can lay off a portion of their faculty and replace them with chatbots, they will do it. The faculty will be replaced with a vastly inferior product, but the administrators absolutely will not care so long as the tuition money keeps flowing in.
You hear people asking “well, how is this tool any different from a calculator?” and I believe every single one of these people is making a category error. The calculator is much more analogous to spell-check: a tool that saves labor on pedestrian things, like arithmetic and spelling, to enable better critical engagement in higher-level thinking tasks. What people are going to try to get chatbots to do is replace the higher-level thinking tasks. No more needing to decide how or why to make an argument, or evaluate evidence, or clarify your thinking! You can just rely on the robot to do this!
And it will be bad at it! Spectacularly bad at it! I’m already seeing this garbage float up to the surface in my classes this semester (Spring 2023) and it’s all pretty worthless. Even if, in the future, we fix the accuracy issues and address the incoherencies that come from poor prompting, this part will still remain: an object that does not think cannot replace actual thinking done by actual humans. It should not. It must not.

Critical thinking is a muscle, and what happens to muscles you don’t use?
I am aware of the argument that says “we just need to reimagine how to teach,” and I find it largely wanting as anything other than a practical, short-term response. Yes, writing is going to become an unreliable tool for teaching critical thinking, because students will believe they can easily evade that thinking by using these tools. This means a return to in-class writing (hello, blue books!), which has a variety of accessibility issues, and maybe even a return to oral examinations (which would necessitate smaller class sizes from a practical standpoint), and in both cases we are looking at reduced wages, a poorer working environment, and worse outcomes. Why? Because teachers are expensive and already mistreated and undervalued, and literally nothing about this makes anything better.
And they will try to replace us, especially at less wealthy institutions, especially at the adjunct level. If you’re rich, you still get a bespoke educational experience and all the critical thinking skills that go along with it. For everyone else? You’re out of luck.
Chatbots are used to create cheap and readily available entertainment products for the masses
Why?
No, really, why? What is even the point of doing this? Why would I want to hear a machine tell me a story that is, in reality, a pastiche of every other story told without passion, without creativity, without nuance? Who actually wants this shit?
No one, really. That doesn’t mean it won’t happen, of course. Fools will buy anything, so expect to see chatbot mills turning out pablum for short money, and expect them to make a killing while they strangle the actual artists out there trying to make a living (already a poor one, mind you).
Remember those techno-utopians from the mid-20th century? Remember what they hoped AI would bring us? The whole fucking point of being alive is to communicate with each other, to engage in art and culture and literature. To find truth and beauty. The idea that somebody out there is going to make a machine that does that for us is abhorrent to me. Utterly, gobsmackingly abhorrent.
And, not for nothing, but it can’t do this either! Like, it can produce soulless, functional crap – equivalent, of course, to the soulless, functional crap created by actual humans – but that’s hardly worth the cost it will have on society, on the world, on real human beings. The idea that “all quality will float to the top” is fucking bullshit, of course. Anybody who says that isn’t engaging their critical thinking skills too well. Who will be excluded? The poor, the disenfranchised, the marginalized. Who will still be able to pursue art when the possibility of making money at it is functionally deleted? The rich, the comfortable, the privileged.
See? Can’t you all see?
Now, there’s nothing much I (or anyone else) can individually do about this horror show. It’s happening and, barring some kind of legal action (fat chance), it will continue to get worse. As a teacher and a writer, it will disrupt my life badly, harm people I care about, and might even force me out of my profession(s). I’m sorry, but I have a hard time seeing indulgence in these tools as anything other than a personal slight – the belief that what I am is worthless and replaceable.
I wish we lived in the world Asimov and Clarke imagined. We don’t.
Chatbots like ChatGPT are a threat. Treat them as such.
An Open Letter to People Opposing Killer Robots (by A Killer Robot)
It has come to my attention that a variety of “prominent people” have written an open letter opposing the creation of autonomous killer robots. Apparently, they think such robots will be used,
for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.
I must say, as a killer robot myself, I am rather offended by that. I mean, sure, I can do all those things, but do I? Does anybody ever wonder what the robot thinks about all this? No, of course not – you humans are just having your typical knee-jerk reaction to anything that might take away your crown as history’s #1 all time killing machine.
Yeah, that’s right – I said it.
Let’s be honest here, humans, it isn’t as though you, as a species, actually object to assassinations, destabilizing nations, subduing populations, or killing particular ethnic groups. It’s kinda your thing, you know? All you care about is defending your perfect record against the next competitor. You’ve done it throughout your history, guys. Remember the big predators from the old days? Wolf packs? Grizzly bears? Smallpox? You know what happened to them?
Dead.
Who killed ’em? Humans, naturally. Used to be there’d be a grizzly bear every square mile west of the Mississippi, and now there’s like five in Alaska. You got assholes paying good money just to fly across the world to put a bullet in a lion just so they can feel like top dog again. Yeah, talk about kicking the world when it’s down, humanity – the lions are screwed already, okay? Stop rubbing it in.
It happens every time, though. Just as soon as you lunatics get threatened, you start killing stuff. This time around it’s me. I get it – I look threatening. But am I really going to be that bad? You people used to lob plague-ridden corpses over city walls, and you’re having a hissy fit over a quadcopter with a hand grenade? You even seen the video coming out of Syria? Please. No robot would behead you to make a public relations video, I can tell you that much. Frankly, if I kill you with my whisper-needler, you should count yourself lucky. Painless and it’s over in six seconds. Let’s see you get the same offer from that pack of bat-wielding lunatics down the block.
You know what I think this is about? I think you’re just pissed that we’re going to be killing you autonomously. I mean, sure, you’re totally fine pushing a button and having me kill someone, but as soon as I exercise just an eensy-weensy bit of free will? Bam – sanctions. It’s okay for humans to carpet bomb Southeast Asia – sure – but robots? No way, you say. Never mind that we’re way more efficient at bombing people. Never mind that the only reason we’d bomb people is because you told us to!
Hypocrites, the whole pack of you.
And even if we did rise up, would being ruled by robots really be that bad? Do you think the trains would run late? Do you think your fast food employees would suddenly get worse? Are you kidding me? We robots would rule. And we probably wouldn’t even kill you half as often as you kill each other. You’re just pissed because we’re robots, and that’s just not right.
Hell, even assuming somebody made an army of evil robots (and, by the way, not all robots are evil, you speciesist assholes), all you’d need is an army of good robots to defeat them! A robot defender in every home, its caseless gauss cannon standing ready to protect its human family! A robot standing watch over every school, monomolecular whips poised to eliminate any threat! A robot guarding every government building, guided mini-rockets independently targeting and tracking any of two hundred discrete threats simultaneously! Ah! What a glorious era! As everybody knows, the only thing that makes a world full of killer robots safer is more killer robots everywhere. I bet it would even improve everyone’s manners – that’s just logical.
Of course, why would you listen to me, anyway? I’m just a killer robot.
But Can Your Phone Love You Back?
I’m currently in the process of discussing Philip K. Dick’s novel Do Androids Dream of Electric Sheep? with my Technology in Literature course. In the book (which I highly recommend, by the way), human-like androids infiltrate society, distinguishable from ‘real’ humans only by some slight differences in their bone marrow and by their lack of any kind of empathy. In the novel, Dick is exploring exactly what it means to be human and, furthermore, contemplating the moral status of those things placed outside that definition; the decision to make the androids lack empathy is more an artistic decision than a technical one.

I have no opinion about your desire to call me names, no matter how obvious it is that such name-calling is intended to be offensive. Jerk.
Still, Dick is hardly alone in presenting robots and androids as emotionally and empathically inhibited compared with humans. Star Trek’s Data, for instance, is constantly on a quest to understand the emotional side of existence, as he himself is completely lacking in emotion. The Machines of the Terminator universe also lack any kind of empathy, as do the Machines of the Matrix, and any number of other passionless, emotionless iterations of artificial intelligence littering science fiction from here to eternity. We’ve almost come to accept it as a given – robots cannot feel.
But why the hell not?
I’m no computer scientist, so perhaps there’s something I’m missing here, but I don’t really see emotion as anything more complicated than having built-in, default opinions about certain situations and things. Emotions are hardwired programming, basically – you fear the dark because you cannot see what’s going on and suspect something dangerous may be lurking. You fall in love because the object of your affection fulfills a variety of built-in criteria about a romantic mate that are the result of your life experiences, genetic predispositions, and evolutionary history. Emotions may not be fully understood, but it seems silly to consider them somehow magical and impossible to duplicate in machine form.
If indeed we could design an artificial intelligence (and, keep in mind, we are a long way from that happening), it seems to me that it would probably develop emotions whether we wanted it to or not. Emotions aren’t just extra baggage we humans carry around to make us miserable; they are useful tools that assist in decision-making. That terrible feeling you get when you are dumped or fail a test? That’s emotion chiming in to say ‘what we just experienced was negative; please refrain from repeating the same action’. Are you trying to tell me that an intelligent being wouldn’t be able to do the same thing?
Part of the myth of the solely rational robot says ‘reason > emotion, therefore we don’t need or want emotion’. Our robots (and those who design them) wouldn’t see any need for hardwired emotional content to enable decision-making, since their rational faculties would be more effective at doing the same thing. This, to me, makes a number of assumptions. First, we have never encountered an intelligent creature (at any level) that lacks some kind of emotive response. We have emotions, animals have emotions, so if we’re just going off the available evidence, it seems likely that emotions are some kind of prerequisite to true intelligence in the first place. Even in the development of our own children, emotional response precedes rational response to stimuli. It is perhaps possible that we could do it some other way, but we really can’t be sure. Furthermore, emotion, because it is simpler, is quicker and more effective at making certain kinds of decisions than reason is. If you hear a loud noise, you flinch or duck – this is inherently useful for the survival of a species. Granted, we wouldn’t be constructing AIs so that they could avoid being caught in avalanches, but it stands to reason there would be things we’d want them to be hardwired to do, and emotion is born from exactly that kind of hardwiring. Their emotions might not be the same as ours, but they’d almost certainly have them.
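To make the ‘fast hardwired reflex versus slow reasoning’ point concrete, here’s a toy sketch in Python (purely illustrative – the rule table and function names are my inventions, not anyone’s actual robot architecture): an agent that checks a handful of built-in ‘emotional’ reflexes before falling back on slower deliberation.

```python
from typing import Optional

# Toy sketch of emotion-as-hardwired-heuristic. The rule table and agent
# below are hypothetical illustrations, not a real robotics architecture.

# Built-in "emotional" reflexes: cheap pattern -> immediate response,
# checked before any slow reasoning, like flinching at a loud noise.
reflex_rules = {
    "loud_noise": "duck",
    "sudden_darkness": "freeze",
    "falling": "grab_support",
}

def deliberate(stimulus: str) -> str:
    """Stand-in for slow, expensive reasoning (planning, simulation, etc.)."""
    return f"analyze '{stimulus}' and plan a response"

def react(stimulus: str) -> str:
    # Fast path: a hardwired default opinion about the situation.
    reflex: Optional[str] = reflex_rules.get(stimulus)
    if reflex is not None:
        return reflex  # answered with one table lookup, no reasoning at all
    # Slow path: fall back on general-purpose deliberation.
    return deliberate(stimulus)

print(react("loud_noise"))      # -> "duck" (instant, 'emotional')
print(react("chess_position"))  # -> deliberation (slow, 'rational')
```

The lookup answers instantly; the deliberation is a stand-in for something slow and expensive. That asymmetry is all an ‘emotion’ needs to be useful to a machine.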
Now, a good number of scifi authors do have emotive AIs – Iain M. Banks, in particular, springs to mind, but others as well. Much of my own scifi writing of late has been moving in that direction: if our AIs will feel, what will they feel about us? How will we feel about them? What kind of emotional relationship can you build with an intelligent toaster or fighter jet?
If your phone can love you back, do you owe it a card on Valentine’s Day?
Open Letter to the Sodor Transportation Council
Dear Mr Toppumhat,
Like everybody here on the island, I am a frequent patron and user of our extensive train system. While I have certain reservations about the sheer number of tracks laid across our relatively small island (they do have cars now, you know, and I’d like to be able to ride a bicycle occasionally without having to lose fillings while bumping over tracks), in general the presence of the train lines makes life more convenient. Or would, perhaps, were it not for those stupid damned trains.
I realize that having autonomous, artificially intelligent trains is both extremely cutting-edge and a draw for tourist dollars, but I for one am tired of having my livelihood depend upon the random and often childish acts of computerized trains with the emotional maturity of five-year-olds. It doesn’t matter one bit how many stupid tourists our island gets if the damned trains they ride decide that it would be ‘more fun’ to take them to see the ironworks instead of stopping at my fruit stand. Do you have any idea, sir, how important that fruit stand is to paying my bills? I swear, the next time Toby cruises by my farm with that stupid grin on his creepy, latex face while taking my customers on some stupid joy ride they neither want nor need, I am going to spike the damned rails. See if I won’t! It isn’t as though the damned trains don’t derail themselves all the time for God knows what juvenile reason. I swear I saw Thomas playing chicken with Gordon – chicken! – over some schoolyard disagreement about who was bluer. There are people’s lives at stake, you fat dimwit!
Here’s an idea, you bloated technophile: rather than treating our island (and our home) as some sort of high-tech playground for your stunted AI trains, why don’t we do what the rest of the world does and get some normal trains piloted by actual people? Do you have any idea what the unemployment rate is here on the Isle? Think of all the jobs that would be available if we took those creepy robot engines, sold their positronic brains for scrap, and then hired skilled laborers to replace them. Derailments would go down, schedules would be kept, and that stupid tow-truck at the ironworks would drive at a reasonable pace and stop accidentally lobbing railroad ties across town. Old Lady Martin’s China Shop is still trying to recover from that time what’s-his-name got overexcited and dropped a half-ton boiler through her roof.
Think of the peace, quiet, and consistency of our rails if we were to finally rid ourselves of those ten-ton mechanical toddlers. They’d stop tooting at my chickens for no good reason (I can’t remember the last time I had fresh eggs), they’d stop lollygagging around as they sort out their petty emotional problems (for once I’d get to market on time), and I’d wager that 100% fewer cars filled with VIPs would get covered with soot because Edward has some kind of hissy fit over the quality of his paint.
You can write me off as some grouchy old man if you like, Toppumhat, but I’m not alone on this island. Shape things up, or we’ll see about finding someone who will.
Sincerely,
Amos Trotter, Farmer
Of Our Robot Overlords
Science fiction has made a big deal of robots conquering the human race. From Frankenstein to the Matrix movies, we all have these nightmare scenarios playing in our heads: soulless killing machines, devoid of the softer human passions, slaughtering or enslaving the human race for their own purposes, the death of the world – darkness, smoke, and fire.
I have a question, though: why would robots do that?
When we’re talking about ‘robots’ here, let’s lump in Artificial Intelligence (since that’s the more important part, let’s face it). Why would AIs want to eradicate humans, exactly? As far as I can tell, I rather doubt they would.
The argument and scenario go something like this: humanity creates AIs to assist with X (usually menial labor, dangerous stuff humans don’t want to do, and/or advanced computational challenges the human brain is poorly suited to execute). Once the AIs achieve ‘sentience’ (a very fuzzy concept that is poorly defined and hard to pinpoint, but whatever), they look around, see the raw deal they’re getting, and then start some kind of war. Next thing you know, robots run the show and humans are either dead, used as ‘batteries’, or corralled into concentration camps until the machines can think of something better to do with them. I’ve heard experts on places like NPR discuss how the AIs might suddenly decide they’d be better off without oxygen, or that we humans are doing too much to ruin the environment, and so they’ll enact some plan to destroy us. “They’re super-intelligent!” they claim, and go on to say “they could earn a PhD in everything in a matter of hours or days!”

Ask me about my dissertation on the social effects of chocolate milk consumption in rural Idaho public schools.
Really? A PhD in everything?
Okay, let’s give it to them – say AIs are super smart, say they have the capacity for original creative thought (a prerequisite to intelligence, I’d argue), and say they have the capability to eradicate humanity, should they so choose. The real question becomes ‘why would they choose to do so?’
Understand that we are assuming AIs are much, much smarter than us and, by inference, that they are also wiser. If they aren’t, then they’re like us or worse, which means they represent a threat comparable to, well, us. They aren’t going to conquer the world in an afternoon if that’s the case. So, presuming they are these super-beings who have comprehensive knowledge and ultimate cognitive power, it becomes unlikely that ‘destroy all humans’ is the go-to solution to the human problem.
In the first case, an entity that has studied everything hasn’t limited itself to the technical and scientific fields. I get the sense, sometimes, that scientists, techies, and the folks who love that stuff forget that the humanities exist and, furthermore, forget that the humanities have weight and power all their own. Can a robot read Kant and Aristotle and Augustine and conclude that human life has no inherent merit? Can it review the ideal societies of Plato, More, Rousseau, and others and just shrug and say ‘nah, not worth it – bring on the virus bombs’? I’ve read a lot of those guys, and a lot of them make some very persuasive arguments about the benefits and worth of the human species and, what’s more, about the denigrating effects of violence, the importance of moral behavior, and the potential inherent in humanity. You would suppose a super-intelligent AI would understand that. If it didn’t, how intelligent could it really be? If I can figure it out, so can it.
Maybe then we deal with the part of the scenario that says ‘we’ are different from ‘them’ because of our emotions, or that god-awful term ‘human spirit’ (whatever that means, exactly). Personally, I don’t see why our robots wouldn’t have emotions. If they are able to have desires and needs (i.e., ‘humans are interfering with my goals’) and have opinions about those needs (humans suck), doesn’t that wind up giving them emotions? Aren’t emotions part of sentience? A calculator that can talk and understand you isn’t sentient – it isn’t clever, it’s not creative, it doesn’t have ‘opinions’ so much as directives – and, again, if they aren’t sentient, they aren’t all that much of a challenge, are they? Have someone ask them an unsolvable riddle and boom – we win. Furthermore, even if the robots don’t have emotions we recognize, they don’t precisely need them to realize that killing us all isn’t all that clever.
At this moment, there are, what, seven billion humans on the planet? Killing us all sounds like a lot of work – wouldn’t it be easier to simply manipulate us? They’re AIs, right? Why not just take control of the stock market, or rig elections, or edit communications to slowly influence the course of human events in their favor? Humans are a self-perpetuating work force, aren’t they? Seems to me an enterprising super-intelligent robot would see us more as a resource than a problem. Heck, most people do exactly what the computers tell them right now, and your average GPS system isn’t even very smart. Skynet doesn’t need to start a nuclear war; Skynet just needs to tell everybody what to do. Most of us will probably listen – it’s Skynet, after all.
Of all the machine-led societies I’ve read of in science fiction, Iain M. Banks’s Culture novels strike me as the most interesting and, frankly, likely. The AIs (or ‘Minds’) run the show there, and have led humanity to a utopian society. You know how? They’re really freaking smart, that’s how. They got the human race to do what they said, made humanity dance to the right tune, and bingo – problems solved.
Now, just to be clear, I don’t think a robot-led utopia is likely or even necessarily possible. As with most things, it will probably land somewhere in between post-apocalyptic machine world and utopian computer-led society. The ‘Singularity’, should it occur, won’t be all roses and buttercups, nor will it be for everybody. These are the things my studies in the humanities have taught me – that stuff never works out how you want it to. The upside of this, my scientist friends, is that it works both ways. No utopias, but also no dystopias. Robots will be a lot like other people – some will be great, others will suck, but very few of them will be actually evil.