Fascinating Blog

Science with Bad Metaphors

July 5, 2015 (updated December 20, 2015)

In a recent post I suggested that Dungeons & Dragons is the most under-appreciated cultural force of this era.  One impact I didn’t suggest it had was on science – but I suspect it’s true.

A number of years ago I noticed that when you read the work of theoretical physicists – positing hypothetical entities that could exist, given a limited view of reality and a few mathematical tools – it sounds *exactly* like it was lifted from the Dungeon Masters Guide.

Seriously:  “The Multiverse?”  “Branes?”  Quarks that travel back and forth through time?  “Dark Matter” and “Dark Energy?”  You could find all of these concepts in D&D handbooks in the 80s.

Is that a coincidence?  Remember that the current crop of theoretical physicists (at least in the U.S.) was the one that grew up with D&D – and that the upcoming generation came of age with the terms and concepts firmly established in geek culture. 

I was thinking about the impact of the metaphoric “frames” through which we unknowingly view the universe when I read an interview with Daniel Dennett, an undoubtedly brilliant mind (and one far older than D&D) who nevertheless seems to exhibit everything I find disturbing about the way in which scientific hubris has leapt ahead of scientific fact.

For years, Dennett has championed “homuncular functionalism” – an idea most famously popularized by Steven Pinker – suggesting that the brain is made up of different “modules,” each responsible for certain kinds of functions (language, sensory data, etc.), and each essentially an algorithmic computer running equations across neurons.  Consciousness, on this view, is the result of the modules interacting with each other.

It was (or should have been) easily recognizable as wrong from the beginning:  the data has never really indicated that the brain has easily separable “modules,” and there were plenty of documented manifestations of brain plasticity that blew this theory all to hell.  It was a theoretical failure, too, because it never really addressed the nature of consciousness as we experience it:  there was neither a reason the interplay of “modules” should have led to consciousness, nor any explanation for how it could in the first place.

Dennett acknowledges that now (without ever tipping his hat to those of us who were calling it nonsense 20 years ago).  From the interview:

“I’m trying to undo a mistake I made some years ago… Everybody knew it was an over-simplification, but people didn’t realize how much, and more recently it’s become clear to me that it’s a dramatic over-simplification.”

This is honorable, and I appreciate his willingness to go there.  What’s troubling is that even in the face of this model’s collapse, he still can’t let go of the underlying idea – that the “brain” is a “computer.”  He recognizes it’s problematic, but he clings to it like a college sophomore defending his first thesis in front of the class.

“The brain’s a computer, but it’s so different from any computer that you’re used to. It’s not like your desktop or your laptop at all, and it’s not like your iPhone except in some ways. It’s a much more interesting phenomenon.”

So … the brain isn’t like any computer that exists, or ever has existed.  But it’s still a computer.  Got it?

But why?    Why is it better to say “the brain is like a computer that I someday hypothesize will exist, even though I have no idea how it will work or what it will be like,” than to just drop the metaphor?

It’s because Dennett, for all his genius, comes at the problem of consciousness through an agenda, and the computer metaphor serves that agenda.  As long as the “brain” is a “computer,” he can study consciousness in reductive, mechanistic terms.  Take away the metaphor, and the next step might be to admit that a reductive, mechanistic approach doesn’t cover the essential bases.  “Computers” are “scientific” and “technical” – who knows what a new metaphor would suggest?  Dennett has no intention of taking any chances.

Which is to say that scientists are no less given to metaphoric overreach than any other part of the population, and they often cling to these metaphors in the face of new evidence.  This isn’t a new insight – it’s the essence of Thomas Kuhn’s philosophy of science – but it’s always worth pointing out when it appears.