Why You Should Hate Chatbots: A Measured Response
If you go back and read guys like Asimov, the general belief among a lot of mid-20th century futurists and golden-age scifi authors was that the advent of robotics and artificial intelligence would result in a paradise for human beings. Finally freed from the need to perform back-breaking, soul-draining labor, humanity would be able to pursue the truly enriching parts of life: art, culture, literature, leisure, and community.
Boy, were they ever wrong.
Robotics – automation – has been with us a long while now. Robots began to replace assembly line workers in the 60s; automated tellers began to replace bank personnel in the 80s; automated check-out is replacing retail workers now. There’s even a robot patrolling the aisles at my local grocery store. None of these things – none of them – have substantively improved humanity. Incrementally, yes: cars are made faster, nobody waits in line at the bank, etc. But these automation practices have primarily served to enrich the wealthy at the expense of the workforce.
Now, it has fortunately proven true (thus far) that there are always other jobs to be had elsewhere. Nobody really loved working in a factory their whole life, I guess, not when they could get a quieter, more interesting, more fulfilling job somewhere else. But, see, I’m not that convinced by this argument (which is the standard line used to suggest automation isn’t that bad). Take the auto industry: job satisfaction among auto workers in the 1960s was high – wages were good, the job was stable, and the union was looking out for its members. Now? Things are less rosy.
Robotics and AI have been consistently sold to consumers as making their lives more convenient, and they have done so. But this has come at the cost of workers, almost universally, as good jobs have been replaced or reduced. The era of machine-assisted leisure has never come to pass, and it will not come to pass. We live in a world that is aggressively capitalist, and work is essential to sustain our lives. The people who own and develop these machines cut the throats of poorer, less-well-connected workers and call it progress, when what it actually amounts to is a kind of class violence. Bigger yachts for them, two or three part-time jobs for you.
This brings me to “AI” or, as it should more accurately be called, chatbots.
To dispense with the perfunctory up-front: ChatGPT is not intelligent by any definition of the word. It is a text compiler, a kind of advanced auto-complete. Ted Chiang describes it in The New Yorker this way:
Think of ChatGPT as a blurry JPEG of all the text on the Web. It retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry JPEG, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.
ChatGPT is basically a search engine attached to an advanced autocomplete algorithm. It creates seemingly meaningful text by using good syntax to express stuff you could find with a few semi-competent web searches. It doesn’t “think,” it doesn’t “know.” It’s a photocopier.
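To make the “advanced auto-complete” point concrete, here’s a toy sketch of my own – emphatically not how ChatGPT is actually built (the real thing is a huge neural network trained on an enormous pile of text), but the basic move is the same: look at what came before and spit out a plausible next word.

```python
import random
from collections import defaultdict

# Toy autocomplete: count which word tends to follow which, then
# generate text by repeatedly sampling a plausible next word.
# (Illustrative only – real chatbots use large neural networks,
# but the core trick is still next-token prediction.)

corpus = (
    "the robot replaced the worker and the worker found another job "
    "and the robot replaced that worker too"
).split()

# Record every word that follows each word in the corpus.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def autocomplete(start, length=10):
    """Produce text by always picking a plausible next word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(autocomplete("the"))
# Prints something fluent-looking, e.g.
# "the robot replaced the worker found another job and the robot"
# – grammatical-ish output with no understanding behind it.
```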
In an ideal world I might find this really cool. We do not live in that world, however, and this device will not be used in terribly positive ways. This is mostly because it seems to do something that most people don’t particularly like to do, which is think. People will confuse (and already are confusing) what ChatGPT does with thinking, which is only accurate insofar as you believe that thinking is nothing more than the ability to make a degree of sense when talking to others. There is no grounding in truth, no underlying rationale that can be interrogated, no intentionality, and therefore no thought involved whatsoever.
When a new technology comes around, I like to consider its end-stage scenarios. When the technology reaches its perfected state (a theoretical thing, to be sure), what purpose will it serve? For something like ChatGPT, I think it’s some variation of the following:
- Chatbots are the source of all knowledge and research information.
- Chatbots are used to instruct people on skills and behaviors in lieu of teachers.
- Chatbots are used to create cheap and readily available entertainment products for the masses.
All three of these end-stage use cases I find catastrophically bad for humanity and, moreover, entirely unnecessary. To take them one at a time:
Chatbots are the source of all knowledge and research information
In this futuristic scenario, chatbots replace search engines, libraries, and every other means of acquiring information. If you want to know something, you ask the bot, which is probably on your watch or your phone or some other wearable device. Seems great, right?
But here’s the thing: you have no idea where this information is coming from. You, in fact, can’t know, because the bot doesn’t even know itself. As a writing professor for the past fifteen years or so, I have spent a significant portion of my time in my freshman writing seminars on source evaluation – how can you tell whether a source you find on the internet (or even in the library) is reliable, or even useful and relevant? This is a skill, and a very important one in a world as awash in information as ours. Chatbots evade that skill entirely.
In this world, you need to utterly trust the chatbot. But can you? Chatbots, like everything else, are programmed and created by humans, and humans have agendas, biases, and blind spots. These will inevitably become part of the chatbot, and, as a result, its users will end up trusting whatever the individual, company, or organization behind it tells them reality is. Do Fox News and its incessant lies upset you? Do Elon Musk’s temper tantrums over not getting enough retweets give you the creeps? Well, it’s about to get irrevocably worse. Shit like this could legitimately destroy the internet itself.
Chatbots are used to instruct people on skills and behaviors in lieu of teachers
Chatbots seem like a great way for schools and universities to save money. The bot knows everything (it doesn’t) and can write perfectly good papers (it can’t), so why bother paying skilled professionals when you can just stick the kids in front of a computer screen and let it tell them what to do?
The thing is, though, that these tools cannot and will not ever be able to replace an actual teacher. You might be saying “yeah, duh! Of course!” but listen to me: the second, and I mean the exact second, some administrator thinks they can lay off a portion of their faculty and replace them with chatbots, they will do it. The faculty will be replaced with a vastly inferior product, but the administrators absolutely will not care so long as the tuition money keeps flowing in.
You hear people saying “well, how is this tool any different from a calculator,” and I believe every single one of these people is making a category error. The calculator is much more analogous to spell-check: a tool that saves labor on pedestrian things, like arithmetic and spelling, to enable better critical engagement with higher-level thinking tasks. What people are going to try to get chatbots to do is replace the higher-level thinking tasks themselves. No more needing to decide how or why to make an argument, or evaluate evidence, or clarify your thinking! You can just rely on the robot to do this!
And it will be bad at it! Spectacularly bad at it! I’m already seeing this garbage float up to the surface in my classes this semester (Spring 2023) and it’s all pretty worthless. Even if, in the future, we fix the accuracy issues and address the incoherencies that come from poor prompting, this part will still remain: an object that does not think cannot replace actual thinking done by actual humans. It should not. It must not.

Critical thinking is a muscle, and what happens to muscles you don’t use?
I am aware of the argument that says “we just need to reimagine how to teach,” and I find it largely wanting as anything other than a practical, short-term fix. Yes, writing is going to become an unreliable tool for teaching critical thinking, because students will believe they can easily evade doing it by using these tools. That means a return to in-class writing (hello, blue books!), which has a variety of accessibility issues, and maybe even a return to oral examinations (which, practically speaking, would require smaller class sizes). In both cases we are looking at reduced wages, a poorer working environment, and worse outcomes. Why? Because teachers are expensive, already mistreated and undervalued, and literally nothing about this makes anything better.
And they will try to replace us, especially at less wealthy institutions, especially at the adjunct level. If you’re rich, you still get a bespoke educational experience and all the critical thinking skills that go along with it. For everyone else? You’re out of luck.
Chatbots are used to create cheap and readily available entertainment products for the masses
Why?
No, really, why? What is even the point of doing this? Why would I want to hear a machine tell me a story that is, in reality, a pastiche of every other story told without passion, without creativity, without nuance? Who actually wants this shit?
No one, really. That doesn’t mean it won’t happen, of course. Fools will buy anything, so expect to see chatbot mills churning out pablum for short money, and expect them to make a killing while they strangle the actual artists out there trying to make a living (already a poor one, mind you).
Remember those techno-utopians from the mid-20th century? Remember what they hoped AI would bring us? The whole fucking point of being alive is to communicate with each other, to engage in art and culture and literature. To find truth and beauty. The idea that somebody out there is going to make a machine that does that for us is abhorrent to me. Utterly, gobsmackingly abhorrent.
And, not for nothing, but it can’t do this either! Like, it can produce soulless, functional crap – equivalent, of course, to the soulless, functional crap created by actual humans – but that’s hardly worth the cost it will have on society, on the world, on real human beings. The idea that “all quality will float to the top” is fucking bullshit, of course. Anybody who says that isn’t engaging their critical thinking skills too well. Who will be excluded? The poor, the disenfranchised, the marginalized. Who will still be able to pursue art when the possibility of making money from it is functionally deleted? The rich, the comfortable, the privileged.
See? Can’t you all see?
Now, there’s nothing much I (or anyone else) can individually do about this horror show. It’s happening and, barring some kind of legal action (fat chance), it will continue to get worse. As a teacher and a writer, it will disrupt my life badly, harm people I care about, and might even force me out of my profession(s). I’m sorry, but I have a hard time taking indulgence in these tools as anything other than a personal slight – the belief that what I am is worthless and replaceable.
I wish we lived in the world Asimov and Clarke imagined. We don’t.
Chatbots like ChatGPT are a threat. Treat them as such.