In the Battle of Man vs Machines, Are We the Baddies?

By Andrew Neil Gray and J.S. Herbison

Published on July 12, 2017


We all know the story. First we create intelligent nonhuman life, then it kills us. It’s as old as Frankenstein (though admittedly Dr. Frankenstein’s monster didn’t actually kill him; it murdered his brother and his fiancée, and the doctor himself died in the Arctic, seeking revenge. But nobody would argue the story had a happy ending).

Take Terminator, for example. When the global computer network Skynet becomes self-aware, its first act is to trigger a nuclear war in an attempt to wipe out humanity. In the TV series Battlestar Galactica, humans create sentient machines, and again, extermination is the default response. In Daniel H. Wilson’s novel Robopocalypse, the powerful AI Archos R-14 becomes self-aware, and… you guessed it, immediately begins plotting the destruction of humankind.

What is it with us? Why do we keep making evil robots, against all the evidence that it’s a bad idea? Why is this such a compelling trope in the stories we tell ourselves about the future?

The easy answer is that we’re worried about our powers getting away from us. Perhaps AI apocalypses are just updated versions of The Sorcerer’s Apprentice, with gleaming metal machines standing in for self-sweeping (and self-replicating) brooms.

It certainly makes sense. Every new technology we create comes with a heaping side-order of fear and guilt about what we’ll do with it. But what if these stories are actually grasping at a deeper truth? A darker fear? The fear that when we finally create intelligent machines, they’re going to see us for what we really are, and judge us. Maybe it’s not really the ruthlessness of the Terminator we’re afraid of after all, but the possibility that it might be right.

What if we’re the baddies?

We weren’t at first. Look at the two science fiction classics Battlestar Galactica and Westworld, both rebooted in the 21st century. In the originals, robots were enemies to be conquered—unemotional killing machines. Yul Brynner’s gunslinger looked human, but ironically it was the metallic Cylons who at least had a motive for killing humans: we were the competition, the threat. The gunslinger was just a broken appliance. In both stories, the plucky humans’ struggle to survive is at the core of the narrative, a narrative with a long history: we create a new terror out of hubris, but we eventually overcome it because of our unique human qualities. Sometimes we even learn something in the process.

In the 21st century TV reboots, the stories are not so simple. Not only are the machines complex and relatable, but often they are more innocent, more victimized and perhaps even more humane than we are. It’s no accident that the Cylons look like humans now, or that the show spends almost as much time exploring their characters as it does the human protagonists. Nor is it an accident that the most compelling protagonists in the new Westworld are the robot “hosts.” In an amusement park where humans can act out their base desires for cruelty and domination without fear of consequence, humans are the antagonists. In both shows, there are harrowing scenes where humans torture intelligent machines, who clearly appear to suffer. It’s often hard to watch.

So what’s going on? Yes, the advent of “peak TV” has brought greater complexity and thoughtfulness to the plots of SF shows, catching up with some of the work done years earlier in novels and short fiction. But it’s more than that. Books like Madeline Ashby’s vN series and Charles Stross’s Saturn’s Children have also taken the robot’s point of view, and Spielberg’s A.I. and Alex Garland’s recent Ex Machina have done the same in film. There seems to be a trend.

Part of it lies in societal change, in the expansion of our spheres of empathy. Increasingly, we’re recognizing the rights of the non-human intelligences who already share the planet with us. Every year brings more evidence that our human capacities are unique only in degree, not in kind. Many species share our capacities for suffering, empathy, language, and tool use. Parrots and pigs can become psychologically damaged if they’re deprived of companionship and stimulation. Chimps, elephants, dolphins: arguments have been made that they all deserve legal rights, and perhaps even personhood status. It’s not too far a stretch to imagine that our machines will eventually be similar.

We’re also trying to come to terms with our dark history of dominant groups and the people they’ve marginalized. Whole categories of humans have barely been considered persons in recent history. It’s not hard to look at slavery, as well as the treatment of women and ethnic minorities, and worry about how we’ll behave as a species when we create a new category of beings explicitly designed to satisfy our needs and desires.

Charles Stross’s novel Saturn’s Children is a perfect example. The book’s protagonist is Freya, an android designed to please humans, brought to life a century after humanity has become extinct, in a time when the solar system is populated by our robotic descendants. We are gone, but our ghosts linger. Many of the characters are victims of inherited trauma (Freya’s original progenitor, for example, was programmed to be submissive through rape), and the plot revolves around an experiment to try to resurrect humans. Freya’s programming runs deep; if humans can be brought back, she’ll lose her free will and become nothing but a smart sex doll. The moment when she (and the reader) fears the experiment will succeed is a crucial scene in the novel. The monster in the closet, under the bed: it is us.

It’s a similar story in the movie Ex Machina. The story of Ava initially revolves around a Turing test performed by programmer protagonist Caleb at the behest of his tech-bro billionaire boss Nathan, Ava’s creator. But it quickly becomes more than this. Ava is intelligent enough to manipulate Caleb into feeling sympathy for her and helping her escape; this is the true Turing test. It’s a test that Nathan fails. He’s arrogant and narcissistic, and he uses intelligent robots as sexual toys with no thought for how they might suffer. He too is the monster under the bed, and Ava’s eventual escape from him is a hopeful thing. It is the birth of something new.

In his recent essay The AI Cargo Cult, Kevin Kelly criticizes many of the assumptions underlying the expectation that superhuman AI will take over the world. As fiction writers, we don’t really want to accept this, since it’s so much more interesting to wonder what might happen if machines do take over. But in the essay, Kelly brings up something thought-provoking: artificial intelligence is a religious idea. In many ways, AI in fiction serves as a substitute for God, or takes the form of a dark angelic being. The Eschaton in Charles Stross’s novels Singularity Sky and Iron Sunrise, for example, is a human creation, but it takes the form of an almost omniscient and omnipotent force in the universe. AI and robots can be beautiful, alien and other, yet disturbingly like us (think of Six in the new Battlestar Galactica). We’re drawn to their beauty by our base desires, and the objects of our desire use our weakness against us. They see us for who we really are.

In the Old Testament, angels are messengers from God. They come to guide, but also to warn, to punish, to destroy. And now we’re back to the Terminator, aren’t we? We are sinners, we are eternally flawed. We fear that when we create a new type of life, we will treat it as badly as we have treated each other and the creatures we share the Earth with. It will judge us harshly. And we will deserve it.

Gloomy, isn’t it? Here’s a little balm, right from the bottom of Pandora’s box.

We could rise to the occasion. It’s the theme of the Star Trek universe, after all: the possibility of species-wide self-improvement, of maturity. In the classic Star Trek: The Next Generation episode “The Measure of a Man,” Jean-Luc Picard asks, if we create a race of androids, “won’t we be judged by how we treat that race?” Creating a framework of non-human rights now might just save us in the future, should Kevin Kelly be wrong and we actually manage to create machine sentience.

Or, finally, what if our AI and robot creations are our true children? We see kernels of this at the end of some AI stories. Our distant descendants in Spielberg’s movie are intelligent machines. The end result of the new Battlestar Galactica is a merging of Cylon and human into a new species: us. Perhaps there’s a measure of peace in accepting the inevitability of being eclipsed by our creations. If you’re a parent, you might recognize this particular kind of mingled anticipation and fear for the future. Will our children carry on our traditions and culture? Will they be like we are—or might they, one day, be better?

Top image: Westworld (2016)

Andrew Neil Gray and J. S. Herbison are partners in life as well as in writing. The Ghost Line is their first fiction collaboration, but it won’t be their last: a novel is also in the works. They have also collaborated in the creation of two humans, and they preside over a small empire of chickens, raspberries, and dandelions on Canada’s West Coast. There are many types of non-human intelligence in The Ghost Line, from a talking ship to synthetic dancing girls to something more subtle. We hope we do a little justice to the idea that AI can be more than just a rampaging menace.
