I’m becoming a bigger and bigger fan of Evgeny Morozov, author of “To Save Everything, Click Here: The Folly of Technological Solutionism.” He’s not the first person to come up with these ideas – there’s hardly anything about culture change that the great Philip Rieff didn’t say first – but he presents the issues so well, forcing me to reconsider points I’ve been flogging for years.
His New York Times op-ed “Machines of Laughter and Forgetting” is a case in point.
On the one hand, Morozov is suggesting that we need to re-evaluate our relationship with technology – and how it affects our habits of thought – in critical ways (I’m going to quote at length):
“(T)echnology can save us a lot of cognitive effort, for “thinking” needs to happen only once, at the design stage. We’ll surround ourselves with gadgets and artifacts that will do exactly what they are meant to do — and they’ll do it in a frictionless, invisible way. “The ideal system so buries the technology that the user is not even aware of its presence,” announced the design guru Donald Norman in his landmark 1998 book, “The Invisible Computer.” But is that what we really want?
The hidden truth about many attempts to “bury” technology is that they embody an amoral and unsustainable vision. Pick any electrical appliance in your kitchen. The odds are that you have no idea how much electricity it consumes, let alone how it compares to other appliances and households. This ignorance is neither natural nor inevitable; it stems from a conscious decision by the designer of that kitchen appliance to free up your “cognitive resources” so that you can unleash your inner Oscar Wilde on “contemplating” other things. Multiply such ignorance by a few billion, and global warming no longer looks like a mystery.
Whitehead [who famously claimed that “civilization advances by extending the number of important operations which we can perform without thinking about them”], it seems, was either wrong or extremely selective: on many important issues, civilization only destroys itself by extending the number of important operations that we can perform without thinking about them. On many issues, we want more thinking, not less.”
Yes … yes … absolutely, but … by the same token, there’s a difference between staying alert to our habits of thought (or of thoughtlessness, in this case) and having to reopen every otherwise settled question every time we want to make a phone call.
There’s a biological parallel here: most of our “thinking” in the world takes the form of habitual activity. We don’t think about how we walk once we’ve learned to do it – we just walk. We don’t think about how to drive a car once we’ve learned to do it – we just drive. We don’t think about what the nature of empathy and compassion might be in most social situations: we just say “Oh, I’m so sorry” in a modulated tone meant (without even thinking!) to convey sympathy.
The trouble comes when we are incapable of examining our own habits: indeed, the essence of many forms of meditation is the mindfulness not to fall into habitual thinking. That option must always be available to us, and I’m grateful to Morozov for pointing out the compelling fact that our technology encourages habits of thought that we are generally not mindful of – habits that are invisible to us.
This is important. Vital.
But … how much are we supposed to think about our toaster in order to get an English muffin?
Here’s the future Morozov envisions:
“(D)esigned differently, our digital infrastructure could provide many more opportunities for reflection. In a recent paper, a group of Cornell researchers proposed that our browsers could bombard us with strange but provocative messages to make us alert to the very information infrastructure that some designers have done their best to conceal. Imagine being told that “you visited 592 Web sites this week. That’s .5 times the number of Web pages on the whole Internet in 1994!”
The goal here is not to hit us with a piece of statistics — sheer numbers rarely lead to complex narratives — but to tell a story that can get us thinking about things we’d rather not be thinking about. So let us not give in to technophobia just yet: we should not go back to doing everything by hand just because it can lead to more thinking.
…
Recently, designers in Germany built devices — “transformational products,” they call them — that engage users in “conversations without words.” My favorite is a caterpillar-shaped extension cord. If any of the devices plugged into it are left in standby mode, the “caterpillar” starts twisting as if it were in pain.
Does it do what normal extension cords do? Yes. But it also awakens users to the fact that the cord is simply the endpoint of a complex socio-technical system with its own politics and ethics. In the past, designers have tried to conceal that system. In the future, designers will be obliged to make it visible.”
Do I want that? I’m deeply conflicted. At the loftiest level, I want to use my technology to construct newer and better narratives (such as a short story), rather than constantly wrestle with the narratives a do-gooding designer thinks it would be helpful to thrust upon me. No man is an island, but these well-intentioned interruptions are really a kind of commercial message – the equivalent of my phone pausing a text to my girlfriend to ask whether I’ve called my mother lately, and then chiming “Knowing is half the battle!” That seems an unwarranted intrusion on my own capacity to use technology to make meaning in the world. The well-meaning designer is not actually on my side as an artist.
At a lower level … look … any feelings of sympathy I have toward my extension cord are misplaced, not to say wasted. “Compassion fatigue” is a very real problem in the world today, as is “outrage fatigue”: we are exhausted trying to offer the correct response to an infinite number of deserving stimuli. At some level, my reserve of human compassion is limited, and if a designer is going to play with my head, I’d much rather he get me into a conversation with an old person who has no one to talk to than with the extension cord that runs across my study.
Indeed, isn’t “having conversations with our devices” the exact opposite of humanism? Isn’t it actually better if our devices are transparent and intangible – so that they don’t take attention away from the real people we can connect with?
I don’t know. I fear Morozov may be wrong on this one, but I hope he keeps writing. He’s churned up what once were settled questions for me, and I’m grateful.