I’m currently discussing Philip K Dick’s novel Do Androids Dream of Electric Sheep? with my Technology in Literature course. In the book (which I highly recommend, by the way), human-like androids infiltrate society, distinguishable from ‘real’ humans only by slight differences in their bone marrow and by their lack of any kind of empathy. In the novel, Dick explores exactly what it means to be human and, furthermore, contemplates the moral status of those things placed outside that definition; the decision to make the androids lack empathy is more an artistic choice than a technical one.
Still, Dick is hardly alone in presenting robots and androids as emotionally and empathically inhibited compared with humans. Star Trek’s Data, for instance, is constantly on a quest to understand the emotional side of existence, as he himself completely lacks emotion. The Machines of the Terminator universe also lack any kind of empathy, as do the Machines of the Matrix, and any number of other passionless, emotionless iterations of artificial intelligence littering science fiction from here to eternity. We’ve almost come to accept it as a given – robots cannot feel.
But why the hell not?
I’m no computer scientist, so perhaps there’s something I’m missing here, but I don’t really see emotion as anything more complicated than having built-in, default opinions about certain situations and things. They are hardwired programming, basically – you fear the dark because you cannot see what’s going on and suspect something dangerous may be lurking. You fall in love because the object of your affection fulfills a variety of built-in criteria about a romantic mate that are the result of your life experiences, genetic predispositions, and evolutionary history. Emotions may not be fully understood, but it seems silly to consider them somehow magical and impossible to duplicate in machine form.
If indeed we could design an artificial intelligence (and, keep in mind, we are a long way from that happening), it seems to me that it would probably develop emotions whether we wanted it to or not. Emotions aren’t just extra baggage we humans carry around to make us miserable; they are useful mechanisms that assist in decision-making. That terrible feeling you get when you are dumped or fail a test? That’s emotion chiming in, saying ‘what we just experienced was negative; please refrain from repeating the same action’. Are you trying to tell me that an intelligent being wouldn’t be able to do the same thing?
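For what it’s worth, that ‘negative experience discourages repetition’ idea is more or less what machine-learning folks call reinforcement. A toy sketch (every name and number here is invented purely for illustration, not a claim about how real AI research does it):

```python
class ToyAgent:
    """A toy agent whose 'feelings' are just learned scores for actions."""

    def __init__(self, actions):
        # Every action starts out emotionally neutral.
        self.scores = {action: 0.0 for action in actions}

    def choose(self):
        # Prefer whatever currently 'feels' best.
        return max(self.scores, key=self.scores.get)

    def experience(self, action, outcome):
        # A bad outcome makes the action 'feel' worse next time;
        # a good one makes it more attractive.
        self.scores[action] += outcome


agent = ToyAgent(["study", "skip_class"])
agent.experience("skip_class", -1.0)  # failed the test: feels bad
agent.experience("study", +1.0)       # passed: feels good
print(agent.choose())                 # prints "study"
```

Nothing magical there – just stored evaluations nudging future decisions, which is roughly the functional role I’m claiming emotions play.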
Part of the myth of the solely rational robot says ‘reason > emotion; therefore we don’t need or want emotion’. Our robots (and those who design them) wouldn’t see any need for hardwired emotional content to enable decision-making, since their rational faculties would be more effective at the same task. This, to me, rests on a number of assumptions. Firstly, we have never encountered an intelligent creature (at any level) that lacks some kind of emotive response. We have emotions, animals have emotions, so, going off the available evidence, it seems likely that emotions are a prerequisite to true intelligence in the first place. Even in the development of our own children, emotional response precedes rational response to stimuli. Perhaps we could do it some other way, but we really can’t be sure. Furthermore, emotion, being simpler, is quicker and more effective at making certain kinds of decisions than reason is. If you hear a loud noise, you flinch or duck – this is inherently useful for the survival of a species. Granted, we wouldn’t be constructing AIs so that they could avoid being caught in avalanches, but it stands to reason there would be things we’d want them hardwired to do, and emotion is born from such hardwiring. Their emotions might not be the same as ours, but they’d almost certainly have them.
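The speed argument can be made concrete with another toy sketch (again, every name and threshold below is invented): a hardwired reflex is a single cheap check that pre-empts slower deliberation.

```python
LOUD_THRESHOLD = 90  # hypothetical cutoff, in decibels

def reflex(noise_db):
    # The 'emotional' path: one hardwired comparison, effectively instant.
    return "duck" if noise_db > LOUD_THRESHOLD else None

def deliberate(noise_db, world_model):
    # The 'rational' path: enumerate options and score each one (slow).
    best_action, best_score = None, float("-inf")
    for action in world_model["actions"]:
        score = world_model["evaluate"](action, noise_db)
        if score > best_score:
            best_action, best_score = action, score
    return best_action

def decide(noise_db, world_model):
    # Hardwired responses fire first, just as a flinch beats conscious thought.
    quick = reflex(noise_db)
    return quick if quick is not None else deliberate(noise_db, world_model)

world_model = {
    "actions": ["ignore", "investigate"],
    "evaluate": lambda action, db: db if action == "investigate" else 0,
}
print(decide(120, world_model))  # prints "duck" - the reflex wins
print(decide(40, world_model))   # prints "investigate" - deliberation runs
```

The point isn’t the code itself; it’s that any designer who wants fast, reliable responses ends up building exactly this kind of hardwired shortcut – which is all I’m saying emotion is.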
Now, there are a good number of scifi authors who do have emotive AIs – Iain M Banks, in particular, springs to mind, but others as well. Much of my own scifi writing of late has been moving me in that direction: if our AIs will feel, what will they feel about us? How will we feel about them? What kind of emotional relationships can you build with an intelligent toaster or fighter jet?
If your phone can love you back, do you owe it a card on Valentine’s Day?
Say the technology of cybernetics/genetic engineering gets to the point where it isn’t just available to replace the stuff you need (new organs, prosthetic limbs, injury healing, disease resistance, etc.), but you can, electively, enable yourself to surpass the human norm. Super bionic strength, running for days without fatigue, seeing in the dark, breathing water…
Do you get them?
Let’s skip past some of the practical scientific problems here (which are legion) and go straight to the ethical/moral/aesthetic concerns. What kind of stuff would sell? What kind of stuff would be developed? Given our current sporting culture, it’s safe to assume that the sporting world wouldn’t embrace augmented superhumans into their ranks. So, if you can’t dunk from half-court in the NBA, why do you need the super-jumping? Seeing in the dark would be nice, I suppose, but wouldn’t going under the knife in surgery (and all the risk that entails) be a rather large inconvenience for a problem that, let’s be honest, you don’t really have? Couldn’t we just make night-vision goggles cheaper, maybe stick them in regular glasses or even contact lenses? If we wanted to jump really high, wouldn’t it be easier to build a pair of super-jump pants or something?
Well, okay, maybe/maybe not. I, personally, find the idea of augmenting the human form past its normal physical parameters to be mildly distasteful. I confess that I don’t find the idea of boosting one’s mental capacity quite so problematic, so my problem is more aesthetic than it is anything else. Then again, I also wouldn’t get a tattoo (I cannot think of a word or picture that I would love to see on my body until the day I die), don’t find body piercing all that attractive, and am perfectly fine with the color my hair is right now, thank you. Perhaps I’m a poor sampling for the kinds of things people are willing to do to their bodies just for the hell of it.
Cyberpunk is full of the concept of human alteration. It’s used, primarily, as a symbol of the machine corrupting the human temple with its hard, passionless, lifeless influence. I would be surprised if many of the authors in that subgenre actually found such alterations to be positive things, since so much of the genre demonstrates, in the moment of catharsis, the primacy of living humanity over lifeless silicon. When sci-fi authors want to present body augmentation in an exclusively positive light, they tend to do it with biological agents – characters grow new glands, produce stronger antibodies, develop stronger natural muscles (Banks’ Culture novels spring to mind here, as do some aspects of Herbert’s Dune saga). Still, the sci-fi audience’s fascination with bionic improvements, all the way from Lee Majors to Keanu Reeves, seems to imply that we do, in fact, like the idea of having kung-fu implanted into our brains. The symbolic appeal is clear, of course (we get to be stronger, better, faster, defeat our enemies, be better looking, etc., etc.). I do wonder, though: if this stuff were actually available and you could actually afford it, how many of us would go for it?
I guess it would be similar to the amount of cosmetic plastic surgery done in this country, which totaled about 1.5-ish million in 2011. This isn’t a huge number, really. Now, granted, if they were offering things somewhat more impressive than big boobs, maybe that number would go up. Maybe if they made it cheaper, the number would go up. I don’t know, though. It sounds cool on paper, but in practice I feel like it gets creepier and creepier. Would a kind of cybernetic arms race develop among the population? Would all the cool kids in school be able to change their skin-tone at will thanks to sub-dermal pigmentation generators? Would our standards of beauty change? Maybe. Probably, even.
Still, though, there would probably be a pretty sizable chunk of ‘norms’ standing on the sidelines, shaking their heads and muttering to themselves “when that Logan kid gets to be eighty and it’s a cold day, he’s going to wish his skeleton weren’t made of metal. God, think of the arthritis!”
Just how long can you go getting everything you want?
The simpleton answer is ‘forever’, but you need to think a bit harder than that. Consider how human beings use their time at the moment. In the so-called Third World, much time is spent surviving – getting food, getting water, maintaining shelter, and so on. Proportionally less time can be spent enjoying oneself, thanks to the insecurity of the situation. Move up to the so-called First World, and ‘survival’, as such, is generally easier. We spend a lot of our time working to make money, yes, but we have more opportunity to entertain ourselves and much greater ability to acquire whatever it is we want, though that is limited by income. Still, when compared to the huge number of people in the world who make less than a dollar a day, your ~$700-a-week job is pretty sweet.
Still, we in the First World aren’t satisfied – we want more money, more property, better vehicles, better skin, bigger muscles, smaller waists. We want the train to show up at the exact moment we step onto the platform, we want our iPhones to function while miles above or beneath the surface of the Earth, we want our fridge to re-fill itself with ice cream all by itself, and for that ice cream to be somehow healthy for us. These are, in common parlance, “First World Problems”.
Okay, so say we solve all those problems. Eternal youth and health. Unlimited fun and games. No work at all. No danger.
Cancer cured, traffic eliminated, energy for free, and all the healthy ice cream you can eat forever and ever and ever and ever. Then what?
In Utopia, we probably start complaining about even smaller things. We want to re-arrange the freckles on our face into a pattern more aesthetically pleasing. We want our dogs to talk to us in Scottish accents that are more realistic than the ones we genetically engineered them to talk in now. We think it’s really inconvenient having to hold our breath underwater, so we push for federal legislation mandating all children be able to breathe water.
So, eventually, say we get all that. Then what?
If you take away all the challenge, all the struggle, all the potential for failure…what do you have left? Iain M Banks explores this (somewhat) in his Culture series, and Arthur C Clarke goes through Utopian ennui in Childhood’s End. Others have covered it, as well. Even Idiocracy, to some extent, wonders what a society of near-perfect comfort would do to us. To my mind, it isn’t positive. It would have negative social effects we have difficulty imagining.
I write this just as Johns Hopkins is discovering a way to reprogram adult blood cells into embryonic-like stem cells. It’s still unclear what this might mean for humanity, of course, but it has great potential to make the comfortable even more comfortable. I think about that a lot – and talk about it often on this blog. How much comfort do we really need, anyway? When did living into your seventies or eighties and dying come to equal ‘dying too young’?
What I hope for these technologies is that they aren’t simply used to make the wealthy and the powerful (in which I include most residents of the First World) immortal – they really, really don’t need to be. What I’d rather see is these technologies deployed so that all of us – all humanity – can live in the state of relative comfort that we First Worlders do now. I think this because, ultimately, First World Problems are good problems to have – not too terrible, but not so easy that we forget what it means to be alive, to struggle, and to achieve.
I just finished my syllabus for my Technology in Literature elective this coming semester. Students will have to write two short research papers and, just for fun, I thought I’d post the assignment here and see what folks think of it. Heck, if you like, go ahead and write the papers (don’t you dare send them to me to grade, though – I’ve got enough of that already). Anyway, here we go:
The overall focus of this course is the portrayal of science and technology in literature and how those portrayals illuminate the concerns and hopes of humans living in a certain era. Such portrayals tell us a lot about how people thought and what they believed, and can also tell us something about how we have changed, if at all, since those times. In class we will be discussing certain individual works from certain time periods and analyzing them closely, but we won’t be able to fully explore everything. Your task, in two short research papers, is to expand upon our class discussions and deepen your understanding of one or several of the works we are studying, bringing in outside sources and other contemporary works to develop a unique and compelling thesis regarding the cultural and, perhaps, even scientific significance of your chosen work.
Accordingly, your precise topic is left to your discretion. I will provide suggestions below, but you needn’t be bound by them—if you can come up with a different topic that interests you more, please explore that. In general, however, you are writing an in-depth literary analysis of one or more works from either the first half (for paper 1) or the second half (for paper 2) of the twentieth century. All papers should incorporate at least 6 sources (including the primary sources), be 6-8 pages in length (approximately 1700-2400 words), be double-spaced in 12-point Times New Roman, have numbered pages, be stapled, and include a works cited page in MLA format. A rough draft for each paper is permitted but strictly optional. If you wish to receive your rough draft back in time to make revisions for your final draft, be certain to submit it a week or more before the due date. Papers may be handed in at any time during the semester up until the due date. Late work is not accepted.
Paper 1 (pre-1960)
- How did the idea of British world supremacy influence HG Wells’ Time Machine?
- Is The Time Machine racist? If so, why and how? How is it related to Kipling’s “White Man’s Burden”?
- How does Asimov’s opinion of the Soviet Union affect the themes inherent in Foundation?
- Does the Galactic Empire in Foundation symbolize Ancient Rome? If so, why does Asimov choose Rome as the analog? If not, what does it symbolize instead and why?
- Is Heinlein aware of the fascist undertones to his society in Starship Troopers? What is his attitude towards fascism as depicted in the book? How does it differ, if at all, from the kind of fascism demonstrated by the Nazis in 1940s-era Germany?
Paper 2 (post-1960)
- Gibson’s depiction of cyberspace in Neuromancer represents a kind of ‘wild frontier’, in a sense (Case is a ‘cowboy’, those who operate in the matrix are apart from society, etc.). What is the meaning of this metaphor? Where does Gibson think the ‘matrix’ (what we now know as the Internet) will lead us?
- Explain and explore the role of religion and spirituality in Neuromancer. What does it mean? Why does Gibson include it?
- In Snow Crash, why does Stephenson choose to use the Mafia as protagonists, and how does this depiction differ from other late-20th-century portrayals of the mob?
- What is the symbolic significance of Hiro and Raven’s shared heritage in Snow Crash? What, if anything, is Stephenson trying to say about the future of America?
- In Banks’ Culture, he shows us a ‘perfect’ symbiosis between man and machine. How does Banks choose to portray this symbiosis? Why?
- Explore the significance of gender roles in The Player of Games and how it parallels the changing understanding of those roles in late-20th-century Western culture.
This is more me thinking out loud than expositing a theory: Do/Have/Will Social Constructions (i.e. governments, political ethos, economic theory, social mores) constitute a kind of technology?
The knee-jerk answer is ‘no’. Technology is most commonly applied to engineering and the harder sciences – it involves tools, gizmos, or arrangements of the same in ways that ease our lives. If we consider technology in a wider sense, however – from the Greek tekhnologia, meaning ‘systematic treatment’ – couldn’t social constructions fit? The modern postal service, for instance, is a systematic treatment involving, at its heart, a societal convention of what constitutes ‘mail’, how it should be treated, and who is responsible for it. Yes, the crunchier kind of technology is involved, but those are merely time-savers. The inherent social construction of ‘mail’ is something else and, I feel, somehow technological.
I’m thinking about this for two reasons at the moment. First, I’m teaching a class on Technology in Literature this spring, featuring a lot of science fiction works that we will be analyzing in historical context, and I’m noticing just how much society dictates technology and vice versa (more on that in a minute). Second, given all the social upheaval in the world (Libya, Syria, Italy, the OWS movement, etc.), one is forced to wonder whether there isn’t a better system we could implement to organize ourselves. Science fiction is awash in such theories, from Heinlein’s various and sundry new societies in novels like Starship Troopers and The Moon is a Harsh Mistress all the way to Iain M. Banks’ Culture novels or Star Trek’s Federation of Planets. Could any of that stuff work, one wonders? Is the reason it hasn’t worked so far that we just haven’t ‘invented’ such a society yet?
Getting back to my first point above, it’s fairly clear that technology has a formidable influence over social constructions (just look at Facebook or, hell, look at the compass) AND that social constructions have a formidable influence over technology. After all, the reason Europe wound up conquering most of the Earth isn’t that Europeans were inherently smarter or better; it’s that they had a fractured social landscape that encouraged warfare and emphasized the acquisition of land in a way that drove the growth and development of military technology until they were simply the best at it. (And please don’t start pleading the skill and mastery of this or that indigenous people at warfare – the results speak for themselves. The British pound still trades favorably against international currencies, while the Zulu nation is a disaffected minority group in a mid-level African country holding a mere fraction of Britain’s much-faded influence and power. Guess who won that conflict?)
One of the problems, perhaps, with thinking about social structures in terms of technology is that we like to think of technology as a linear progression, no matter how many technological dead-ends and reversals have shown themselves throughout the millennia. Societies, we have been trained to think, are not better or worse than each other so much as they are different. You can’t sit there in judgment of Russia’s predilection for vodka and insist it is ‘less advanced’ than the cultural constructions of other places. Society doesn’t really work that way, does it? We aren’t taking steady strides towards the Social Singularity, are we?
Or is it the other way around? Is technology not actually striding towards anything so much as following one of many, many possible paths – paths that may or may not pay off, and that don’t indicate the ‘right’ way to do anything? What kind of world would we live in, then, if Betamax had trounced VHS, or if Tesla had overcome Edison? Better still: what kind of world would we have lived in where that was possible?
Wheels within wheels within wheels…
So, recently my attention was drawn to this diagram floating around the internet that traces the history of science fiction. If you haven’t seen it, you should check it out. I agree with much if not all of its suggestions (it gets a bit muddy towards the end, but that is to be expected) and, in particular, I am drawn to the two words crouching atop its very beginnings: Fear and Wonder. Since I prefer ‘terror’, I’m going to talk about them as Wonder and Terror.
Speculative fiction of all types derives its power, chiefly, from those two basic human emotions. Interestingly, both primarily relate to what could be, not what is. Wonder is being stunned by something new you had never imagined before; Terror is dreading the manifestation of the same thing. These emotions led to the creation of pantheons of gods, endless cycles of mythology, sea monsters, HG Wells, Jules Verne, the drawings of da Vinci, and so on and so forth. Wonder and Terror – what could be and what we hope won’t be.
These emotions are the engines of human progress. They have brought us from bands of nomadic hominids staring up at a night sky all the way to this – the Internet. The endless tales we have told one another throughout the aeons about what we wonder at and live in terror of have inspired humanity to strive for change and to avoid the many pitfalls our progress may afford us. Though we haven’t been successful in all our endeavors, we still try. We try because we can’t stop wondering, and we can’t stop quailing in terror at our collective futures.
The balance of these forces changes, as well, as time marches on. Our relationship with technology and progress – whether we live in awe of its possibilities or in fear of its consequences – is in constant flux, dependent not only on the power of the technology itself but also on the mood of society. In the times of Jules Verne, for instance, science was the great gateway to a better world – the engines of technology would wipe away the injustices of man, clear up the cloudy corners of his ignorance, and lead him to a bright new tomorrow. That tomorrow wound up being the early 20th century, with its horrifying wars and human atrocities, and so we read the works of Orwell and Huxley and even HG Wells, who cautioned us against unguarded optimism and warned of the terrible things to come. The cycle repeated, with the optimism of the 1950s (Asimov, Clarke) giving way to the dark avenues of writers like Philip K Dick and, later, William Gibson.
Where do we stand now? I’m not sure; I’m inclined to say this is a dark age for speculative fiction. We look to the future with pessimism, not optimism. Our visions of apocalypse (zombie or otherwise) are numerous and bleak. Every era has its bright lights of hope – its Federations of Planets and its Cultures – insisting that yes, one day humankind will rise to meet its imagined destiny with wonders of glorious power, but for every Player of Games there seems to be a World War Z or The Road. Perhaps I’m wrong.
This coming spring, if all works out (and it looks like it might), I will be exploring this idea in much greater detail in a class I’ll be teaching on Technology and Literature. I’ve been wanting to teach this elective for a long time, and I can’t wait to see what I can teach and, more importantly, what I’ll learn in the process.