Beyond the Torment Nexus: How Does Science Fiction Help Us Prepare for the Future?


Fans and creators of science fiction have often suggested that the genre offers a unique advantage: it prepares readers to face the future before it arrives. Like a telescope scanning for potential asteroids, it lets us deflect, prevent, or simply brace for impacts well in advance. Framed positively, we can aim for more appealing technologies and social structures; more cynically, we have many options for how to handle blueprints for the Torment Nexus.

Science fiction is now old enough to track its own record of future preparedness, and it’s time we did so. Other communities, such as Silicon Valley, self-mythologize their ability to shape the future—and from outside, it’s easy to see that this sometimes reshapes more egos than asteroids. At the same time, science fiction has at least anecdotally staffed NASA, helped society question gender norms, and inspired technological research and development from smartphones to satellites. People alarmed by imminent dystopia push 1984 and The Handmaid’s Tale onto bestseller lists, then attend protests bearing the heraldry of Leia Organa and Katniss Everdeen.

Ursula Le Guin famously argued that the goal of science fiction is not to predict. She made this argument in the context of The Left Hand of Darkness, a book which Jo Walton not-quite-so-famously pointed out “is one of those books that changed the world, so that reading it now, in the world it helped grow, it isn’t possible to have the same experience as reading it in the world it was written in and for.” Prediction, under such circumstances, becomes irrelevant. Le Guin helped readers prepare for a world of relaxed gender roles and presentations, and in so doing helped them build such a world. But this was rare at the time and remains a challenge even today, when science fiction draws as often on social science as physics. And preparation for the future was unevenly distributed even among Le Guin’s readers—as anyone can report who’s attended a talk by a man who can’t name any female authors other than Le Guin.

Or to put it another way: if we’re so well-prepared, why are we so bad at creating utopias? In Washington, DC, where I live, it was until recently difficult to join a policy planning discussion where a Doctor Who joke wouldn’t be welcome—but people still made contracts with Palantir, a company essentially named Use Our Tempting Product And We Will Spy On You. It’s hard to prepare for the future, well-read or otherwise, when a predictable 12-month budget has become entirely speculative. On the flip side, there are companies that make millions for being wrong about the future of their proposed technologies—again, an incentive that overrides meaningful speculation.

Even when readers pay attention, science fiction has a mixed track record for telling us what to prepare for—and the most popular tropes often undermine that planning. Take the future of the ’60s, which has now had plenty of time to let the dice settle. A core tenet of SFnal faith comes from Heinlein: “Interplanetary travel is waiting at your front door—C.O.D. It’s yours when you pay for it.” Several years later he complained that we still didn’t have O’Neill Colonies, a thing we don’t have because physics doesn’t actually work that way, or zero-gravity nursing homes, a thing we don’t have because biology doesn’t actually work that way. Kelly and Zach Weinersmith’s A City on Mars makes a strong case that space hawks have prepared for exactly the wrong interplanetary future. Also yet to pan out from the mid-century canon: strong Sapir-Whorf languages that limit thought or create geniuses, non-moving-target sapient AI, psychohistory, and the preservation of ’50s-style nuclear families.

Of course, many of us living in more futuristic family shapes can also cite Heinlein, who proposed about fifty different new marriage norms and didn’t bet on any of them. It may be that we get the best preparation, not from stories of specific technologies or social structures, but from rich exploration of how things can change. Bujold and Martine and Leckie and Pinsker all write about very different kinds of neural implants, more concerned with risks and benefits than specific blueprints—and this is an area where the real technology shows promise. These broad and varied ideas about human-machine cyborging also have the potential to help us think more deeply about how we use, and depend on, non-implanted technologies.

The potential, but not the automatic ability. As someone who’s written about technology designed to support rather than undermine mental health, I’d obviously like to see us prepare for that. But Sarah Pinsker’s We Are Satellites will only encourage you to stop doomscrolling if that’s what your own imagination takes from her tale of poorly-tested enhancements.

It’s easy to come up with a long list of things depicted in science fiction that later happened (in some form) in real life, and a longer list of things that didn’t happen. But my real question—the one that’s harder to answer—isn’t about prediction, or about inventors and social reformers inspired by favorite stories, but about the vast majority of us who mostly brace ourselves for whatever comes next. About preparation itself.

For example: How many of us, at the start of the COVID pandemic, thought of Mira Grant’s Feed and took pandemic mitigation a little more seriously? How many decided that since there weren’t actual zombies, it was fine to go around unmasked—then or later? How many thought of stories where an authoritarian government uses a crisis to limit freedoms, and got themselves swept up in QAnon conspiracies?

Pandemic response is a relatively easy place to ask concrete questions about what “preparation” means, even if we don’t have the data to answer them. When we get to more nebulous questions, like whether reading stories about resistance in dystopia makes it easier to fight back when an actual dystopia rears its head, answers get even fuzzier. Or smaller ones: are science fiction readers any better at learning new technologies, or at telling which ones represent transformative change versus passing hype, or at figuring out which ones are actually a terrible idea?

The question that makes me really nervous is: Where does preparation go wrong? Grumbling about the continued lack of L5 colonies is relatively harmless, provided you don’t volunteer as an indentured servant in a hypothetical Martian colony. Treating a large language model like a sapient being, or even like a functional search engine, might ruin your legal career. Making moral decisions based on the overwhelming value of a hundred billion far-future perfectly-happy uploaded brains… could be very bad for a lot of not-so-perfectly-happy people who exist right now.

The best answer I have, so far, is that we prepare best when we take from science fiction some humility about our future plans, combined with confidence that real change—good or bad or just confusing—is not only possible but probable. If you’re very sure of what comes next, whether that’s your five-year career roadmap or the Singularity, you’re likely to get tripped up. If you go forward with the conviction that the unexpected is a certainty in some form, you’re more likely to notice and admit when things actually have changed. Maybe, just maybe, that could prepare you to catch waves when they start to form—or to recognize the flood before it overwhelms you and try to change the world, again, into something better.
