Tag Archives: Science

Is the study of Bullshit itself bullshit?

Bullshit may be the dominant form of expression in the early 21st century:  we’ve reached a point where it’s impossible to have any cultural literacy at all if bullshit isn’t your second language.

So I was one of the people who celebrated when Harry Frankfurt published “On Bullshit,” his philosophical study of the unique language characteristics of bullshit.  I’m not sure he really added anything to Orwell’s take on the subject, but the more rigorous looks we get at bullshit – and at why it’s a weed infesting our language and choking our culture – the better.

Except that a recent study out of the University of Waterloo (Canada) illustrates just how careful we have to be when interrogating this subject.  One of bullshit’s most dangerous characteristics is that it’s sticky – and if we get it on our hands we have a hard time not spreading it around.

Continue reading Is the study of Bullshit itself bullshit?

What “Big Data” doesn’t understand about literature could fill a book that it would never read

Hey, remember how the internet was going to end racism?  How the digital revolution would close the gaps between the haves and have-nots?  Maybe eliminate money altogether?

Remember that?

It’s cute when little children assign their toys superpowers.  It’s nothing but trouble when grown-ups do it.

Today we’re told that digital technology will change everything about the study of literature:  quantifying it, taking out all the messy subjectivity, and revealing stunning new insights.

Promises, promises.

The case is made, most recently, by Marc Egnal writing in the New York Times.

“Can the technologies of Big Data, which are transforming so many areas of life, change our understanding of American novels?” he asks.

Notice how no one who asks that question ever says “no.”  It’s a giveaway that we’re playing games rather than engaged in serious scholarship.  Serious scholars do not ask questions to which they are already messianically convinced of the answer, unless it involves ordering off a menu or tenure.

Sure enough:  “After conducting research with Google’s Ngram database, which tabulates the frequency of words used in more than five million books, I believe the answer is yes.” Continue reading What “Big Data” doesn’t understand about literature could fill a book that it would never read

My Favorite Luddite – and the nature of cultural suicide

Complaining about new technology is a genre at least as old as the printing press, which doesn’t mean any given complaint is invalid but does suggest there’s a high standard for thinking anyone needs to read your particular screed.  For all the trouble with Twitter, it beats writing out a copy of the Bible by hand, in ink, on vellum.

Bearing that in mind, what exactly is your problem with the digital revolution?

Progressives tend to be at a disadvantage in a debate over new technology because – hey, they’re in favor of progress, right?  For a liberal critique of technology to make any sense at all it has to be grounded in first principles, penetrating, and possessed of a sense of irony at the machinations of history.

For the most part, contemporary liberalism isn’t up to the task.  First principles aren’t organic enough, penetration is gender normative, and the machinations of history depended upon oppressed labor.   If you want a critique of technology done right, you have to go to a conservative – if you can find one with anything approaching a sense of irony.

My favorite contemporary Luddite is unquestionably Matt Labash, longtime writer at The Weekly Standard, a magazine whose noxious spewings of Movement Conservatism (which is to real conservatism what “playing doctor” is to either surgery or sex) hide a small stable of brilliant cultural writers.

Labash’s (relatively) recent article on Twitter is surprisingly good to read, given how little there is to say about Twitter that hasn’t already been bit.ly’d;  but it was reading his superb article on a “Meme Conference” that gave me a crystal-clear insight into the nature of our cultural decline. Continue reading My Favorite Luddite – and the nature of cultural suicide

Reason I don’t trust Google #4 – they have no sense of the limitations of their own culture

It seems obvious in hindsight that Google’s infamously difficult job interview questions weren’t actually going to separate the wheat from the chaff.

Here’s the thing:  it was pretty obvious in foresight, too.

I mean, come on:  “How many piano tuners are there in the entire world?”  “How much should you charge to wash all the windows in Seattle?”  “Design an evacuation plan for San Francisco.”

What exactly were these ever going to accomplish?  Given their “on the spot” nature, how could they possibly reward anything besides the ability to spout bullshit – or to over-simplify the world down to a level at which it would never actually work in practice?
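
For the record, here’s what the classic back-of-the-envelope answer to the piano tuner question looks like (the puzzle is usually attributed to Enrico Fermi, who asked it about Chicago).  A minimal sketch – every number in it is a guess I’ve invented for illustration, which is exactly the point:

```python
# Classic "Fermi estimate" of the number of piano tuners in a city.
# Every figure below is an invented assumption, not data.

population = 3_000_000          # assume a Chicago-sized city
people_per_household = 2        # assumed
piano_ownership_rate = 0.05     # assume 1 in 20 households owns a piano
tunings_per_piano_per_year = 1  # assume an annual tuning

pianos = population / people_per_household * piano_ownership_rate
tunings_needed = pianos * tunings_per_piano_per_year

tunings_per_day = 4             # assume ~2 hours per job, plus travel
working_days_per_year = 250     # assume a standard work year

tuners = tunings_needed / (tunings_per_day * working_days_per_year)
print(f"Pianos: {pianos:,.0f}; tuners needed: {tuners:,.0f}")
# -> Pianos: 75,000; tuners needed: 75
```

Stack half a dozen guesses, multiply, and out comes a confident-sounding number that nobody in the room can check.  Halve or double any single assumption and the answer halves or doubles with it – which is why, as an interview instrument, the question mostly measures how smoothly a candidate performs the stacking.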

That should have been obvious to anyone:  but Google persisted in asking these questions for years.  They even made it a selling point:  THIS is how exclusive we are.  THIS is how difficult it is to break into our culture.  It made them think their wheat had extra fiber.  As Laszlo Bock (Google’s sinisterly named “senior vice president of people operations”) told the New York Times:  “They don’t predict anything. They serve primarily to make the interviewer feel smart.”

Google loves to feel smart.  More than actually being smart, apparently.  Because isn’t it fascinating how a company that is so driven by “data” – everything has to have a quantifiable rationale – instituted a series of major hiring policies that never had any data to back them up?

How did this happen?  How could the company that epitomizes Big Data spend years on a course that never had any research backing it up, and was (self-evidently) flat-out wrong?

Because:  Google loves to feel smart.  And it doesn’t get that about itself:  it doesn’t understand just how dangerous its assumptions about its own intelligence are.  Continue reading Reason I don’t trust Google #4 – they have no sense of the limitations of their own culture

Let’s map the cracks in reality!

William Egginton has an essay in the New York Times’ “The Stone” blog about the way in which Borges and Kant (among others) prefigured the Heisenberg uncertainty principle.

Now that’s not supposed to happen.  You’re not supposed to be able to predict the world through theory, philosophy, and imagination – but Kant and others clearly did it, and it’s been remarked many times (so many as to be an unfortunate trope) that quantum mechanics is predicted in startling detail by Buddhist and Hindu epistemology.

He quotes Borges:  “(W)e have dreamt the world. We have dreamt it resistant, mysterious, visible, ubiquitous in space and firm in time; but we have left in its architecture tenuous and eternal interstices of unreason, so that we know it is false.”

We are always observing the world, Egginton notes, as subjective beings who piece together evidence that we pick and choose – and this creates cracks in the world, ones that philosophers and artists are often far ahead of scientists at mapping.

The problem of accurately observing the world is one that science has profitably ignored in order to make advances – but it has also mistakenly come to believe that those very advances solved the problem.  Nothing could be less true.  Continue reading Let’s map the cracks in reality!

“Innovation” is for poor people

I’d been meaning to write about the so-called “essay grading” software programs in the context of Evgeny Morozov’s concept of “solutionism” – the idea that we use technology to solve “problems” that aren’t really problems at all, merely things that happen to be accessible to technology.

After all, the “problem with essays” isn’t that they require someone to understand them:  that’s the whole point.  Ideally, the writer is offering an at least somewhat original take on the subject under discussion.  To “grade” an essay without “understanding” it isn’t an improvement at all.  What will happen – inevitably – is that you’ll get students writing (and eventually thinking) down to the things the software can process.
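
To make that concrete, here’s a toy sketch of the kind of surface features such software can process – entirely hypothetical, not any vendor’s actual algorithm.  Notice that nothing in it ever touches meaning:

```python
import re

def toy_essay_score(essay: str) -> float:
    """Hypothetical surface-feature grader, for illustration only.
    It counts and weighs proxies; it never understands a word."""
    words = re.findall(r"[a-z']+", essay.lower())
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    if not words or not sentences:
        return 0.0

    length_score = min(len(words) / 500, 1.0)   # longer reads as "developed"
    vocab_score = len(set(words)) / len(words)  # varied vocabulary reads as "sophisticated"
    syntax_score = min(len(words) / len(sentences) / 20, 1.0)  # long sentences read as "complex"

    # A weighted sum of proxies, standing in for judgment.
    return round(100 * (0.4 * length_score + 0.3 * vocab_score + 0.3 * syntax_score), 1)

coherent = "Reading this essay requires genuine understanding of the argument."
word_salad = "Argument this understanding requires genuine essay of the reading."
print(toy_essay_score(coherent) == toy_essay_score(word_salad))
# -> True: same words, so same score - meaning is never consulted
```

Real products presumably use subtler proxies than these, but the structural blindness is the same:  features stand in for meaning, and a student who learns the features can write straight to them.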

After all, it’s not like the software is capable of evaluating what it’s reading in any original sense.  It’s not even like it’s particularly good at it.  From the New York Times article:  Continue reading “Innovation” is for poor people

How transparent do we want our technology to be?

I’m becoming a bigger and bigger fan of Evgeny Morozov, author of “To Save Everything, Click Here: The Folly of Technological Solutionism.”  He’s not the first person to come up with these ideas – there’s hardly anything about culture change that the great Philip Rieff didn’t say first – but he presents the issues so well, forcing me to reconsider points I’ve been flogging for years.

His New York Times editorial “Machines of Laughter and Forgetting” is a case in point.

On the one hand, Morozov is suggesting that we need to re-evaluate our relationship with technology – and how it affects our habits of thought – in critical ways.  (I’m going to quote at length):

“(T)echnology can save us a lot of cognitive effort, for “thinking” needs to happen only once, at the design stage. We’ll surround ourselves with gadgets and artifacts that will do exactly what they are meant to do — and they’ll do it in a frictionless, invisible way. “The ideal system so buries the technology that the user is not even aware of its presence,” announced the design guru Donald Norman in his landmark 1998 book, “The Invisible Computer.” But is that what we really want?

The hidden truth about many attempts to “bury” technology is that they embody an amoral and unsustainable vision. Pick any electrical appliance in your kitchen. The odds are that you have no idea how much electricity it consumes, let alone how it compares to other appliances and households. This ignorance is neither natural nor inevitable; it stems from a conscious decision by the designer of that kitchen appliance to free up your “cognitive resources” so that you can unleash your inner Oscar Wilde on “contemplating” other things. Multiply such ignorance by a few billion, and global warming no longer looks like a mystery.

Whitehead, it seems, was either wrong or extremely selective: on many important issues, civilization only destroys itself by extending the number of important operations that we can perform without thinking about them. On many issues, we want more thinking, not less.”

Yes … yes … absolutely, but … by the same token, there’s a difference between not falling unaware into habits of thought (or lack of thought, in this case) and not needing to go over every otherwise settled question every time we want to make a phone call. Continue reading How transparent do we want our technology to be?

If Aristotle covered it, it’s not news

The great value of science is in its capacity to prove counter-intuitive concepts.  But like any pearl of great price, such events are rare and hard to find.  Far more often we see a scientific sheen being put over common sand, forming the epistemological equivalent of costume jewelry.

Exhibit A is Matthew Hutson’s article in the New York Times, “Our Inconsistent Ethical Instincts.”

Hutson, author of “The 7 Laws of Magical Thinking,” is one of those thinkers who delights in using all the formidable capacities gained by achieving a B.S. in cognitive neuroscience (and an M.S. in science writing) to show us how little we know ourselves.

“We like to believe that the principled side of (morality) is rooted in deep, reasoned conviction,” he writes in the Times.  “But a growing wealth of research shows that those values often prove to be finicky, inconsistent intuitions, swayed by ethically irrelevant factors. What you say now you might disagree with in five minutes.”

A “growing wealth of research” shows that our ethical intuitions can be shaped by subjective factors?  Wow, that would be shocking … except that Aristotle already covered it.

The whole point of the Nicomachean Ethics is that ethical behavior is formed by and through subjective states of mind, rather than an abstract knowledge of “the good.”  Aristotle writes about how young people “are in a condition like permanent intoxication, because youth is sweet and they are growing,” and that “With regard to excellence, it is not enough to know, but we must try to have and use it.”  The idea that someone’s judgment was susceptible to how hungry they were, or the mood they were in, was a given.  Continue reading If Aristotle covered it, it’s not news

What have you done for a cyborg lately?

I’m not the least surprised by this article noting how vital human judgment is to the smooth working of so many of the processes we assume have been fully automated by now:

“People evaluate, edit or correct an algorithm’s work. Or they assemble online databases of knowledge and check and verify them — creating, essentially, a crib sheet the computer can call on for a quick answer. Humans can interpret and tweak information in ways that are understandable to both computers and other humans.

Question-answering technologies like Apple’s Siri and I.B.M.’s Watson rely particularly on the emerging machine-man collaboration. Algorithms alone are not enough.

Twitter uses a far-flung army of contract workers, whom it calls judges, to interpret the meaning and context of search terms that suddenly spike in frequency on the service.”

The increasing dependence upon machine intelligence to run our world isn’t really “dependence” in the classical sense at all:  human intelligence is becoming ever more vital to making the thing work.  Continue reading What have you done for a cyborg lately?

Have scientists invented rat telepathy?

I was shocked when I read media reports about an experiment in which a lab rat with brain implants was able to mentally send instructions to a second rat with brain implants.  It creates dizzying vistas of a coming world.

Here’s Slate’s description of the experiment:

Researchers implanted one set of electrodes in the brain of the rat in Brazil, and another set of electrodes in the brain of a second rat at Duke University. Via an Internet connection, they set it up so that a signal from the brain of the rat in Brazil would be sent, in simplified form, directly to the brain of the rat in North Carolina. The rat in North Carolina also faced two levers, but had no information to go on as to which one to press—except for the signal coming from the first rat’s brain.

And here’s how the reporter described the implications, in talking with the experimenter, neuroscientist Miguel Nicolelis of Duke:

Nicolelis believes this opens the possibility of building an “organic computer” that links the brains of multiple animals into a single central nervous system, which he calls a “brain-net.” Are you a little creeped out yet? In a statement, Nicolelis adds:

We cannot even predict what kinds of emergent properties would appear when animals begin interacting as part of a brain-net. In theory, you could imagine that a combination of brains could provide solutions that individual brains cannot achieve by themselves.

It’s fantastic to think about, and I hope a sci-fi novel is already being written.  But after reading the report itself, I felt like I’d witnessed an expert game of three-card monte, rather than a scientific breakthrough. Continue reading Have scientists invented rat telepathy?