Back in 1997, when IBM’s computer “Deep Blue” beat the world’s (human) champion at chess, the news world erupted: were human beings on the way out?
Well, were they?
Today it doesn’t seem like it. I doubt you can come up with a single substantive way that a computer being better at chess than Garry Kasparov has affected your life. Sure, you use computers even more now, in even more ways, than you did ten years ago … but that no longer feels threatening. In fact, when IBM’s newest supercomputer “Watson” beat the all-time (human) Jeopardy champion in an exhibition match, nobody panicked.
The success of Artificial Intelligence (AI) doesn’t seem to threaten humanity at all.
But the failure of AI may be doing lasting and terrible damage.
Artificial Intelligence is much better understood through its failures than its successes. Sure, we’ve taught a computer to win at Jeopardy, but was that actually something we set out to do? No – the field set out to create true “thinking machines.” In 1963 the scientists at Stanford’s Artificial Intelligence Laboratory (SAIL) anticipated that making a computer capable of truly understanding the world as people do would take about a decade. Alan Turing expected AI machines to be able to make moral judgments.
Today we’re not even close – even Watson, the Jeopardy-winning computer, doesn’t “understand” the world; it just mines enormous stores of text for terms that are linked together. It’s found that “Jericho” is the link between “Joshua,” “city,” and “walls fell.”
But rather than admitting failure and thereby celebrating what human intelligence is, AI researchers … and the business world … are trying to pull human intelligence down to the level of a machine.
They don’t see it this way, of course. Futurists like Ray Kurzweil and Hans Moravec always think we’re just … this … close to a true AI, because, they say, all the human brain is doing is really complicated computing, so we just need slightly better computers.
Do you see what just happened there? In order to justify creating better computers, they’ve tried to tear down human intelligence: human beings have gone from being people capable of creative thought, moral judgment, and artistic impulses, to really complicated machines.
That’s not the same thing. It’s nowhere near the same thing. But increasingly both scientists and the popular media are conflating the two. When programmers taught computers to write music that was explicitly derivative, they defended it by saying that everything human composers write is derivative too. AI’s failures to keep its promises aren’t seen as a result of the extraordinary nature of people, but as a simple failure of computing power. They’re saying, in effect, that Ptolemy would have been right about the solar system if he’d just had more epicycles.
Unfortunately, this attitude has consequences: dehumanizing people in theory leads them to be dehumanized in practice.
Our increasing reliance on standardized testing in schools is one manifestation: we’ve taken complicated concepts like “learning” and “understanding” and reduced them to multiple-choice answers. Can you fill in the correct bubble? Yes? Then you must understand the material. This, of course, reduces “understanding” to regurgitation … and also ensures that only questions that are easily measured will ever be asked in the classroom. But if all learning is a computational process, why does that matter?
Social networking is another example: it’s a wonderful tool, but it also explicitly limits the kinds of interactions participants can have. That’s no problem if it supplements and enhances real human interaction, but growing evidence suggests that children who grow up with it believe the hype that a “friend” on Facebook is no different than a “friend” in person – and are therefore becoming less comfortable with messy, complicated human interactions. Social networking has failed to come close to capturing what makes human friendships meaningful, and so it’s dragging the definition of friendship down to the level of the screen.
Sherry Turkle, MIT professor of the social studies of science and technology, has identified another disturbing possibility. Advances in robotics and AI are making it increasingly possible to create devices (from robotic human heads to robotic puppies) that simulate love and affection. They are now being introduced into senior centers and retirement homes as a way of giving companionship to lonely elders. And it works: people desperate for love and affection are taking to the machines … machines that can never really love back.
But if “love” is only a series of expressions and signals, what does it matter?
It matters, Turkle points out, because we’re using it as a cheap and synthetic way of ignoring our real responsibility – to take care of people. Robot pets and computer therapists are a way of coping with loneliness, but they’ll never actually solve it.
“We have invented inspiring and enhancing technologies, and yet we have allowed them to diminish us,” Turkle writes in her latest book, Alone Together.
The lesson may be that human beings are only as human as the things we compare ourselves to. Once, we were “made” in the image of God; now, we tell ourselves we are essentially no different than our own plastic imitations.
It’s not anti-technology to know human beings are better than that.
— Benjamin Wachs