It seems obvious in hindsight that Google’s infamously difficult job interview questions weren’t actually going to separate the wheat from the chaff.
Here’s the thing: it was pretty obvious in foresight, too.
I mean, come on: “How many piano tuners are there in the entire world?” “How much should you charge to wash all the windows in Seattle?” “Design an evacuation plan for San Francisco.”
What exactly were these ever going to accomplish? Given their “on the spot” nature, how could they possibly reward anything besides the ability to spout bullshit – or to over-simplify the world down to a level at which it would never actually work in practice?
That should have been obvious to anyone, but Google persisted in asking these questions for years. They even made it a selling point: THIS is how exclusive we are. THIS is how difficult it is to break into our culture. It made them think their wheat had extra fiber. As Laszlo Bock (Google’s sinisterly named “senior vice president of people operations”) told the New York Times: “They don’t predict anything. They serve primarily to make the interviewer feel smart.”
Google loves to feel smart. More than actually being smart, apparently. Because isn’t it fascinating how a company that is so driven by “data” – everything has to have a quantifiable rationale – instituted a series of major hiring policies that never had any data to back them up.
How did this happen? How could the company that epitomizes Big Data spend years on a course that never had any research backing it up, and was (self-evidently) flat out wrong?
Because: Google loves to feel smart. And it doesn’t get that about itself: it doesn’t understand just how dangerous its assumptions about its own intelligence are.
Look at the questions themselves: the idea that someone with no relevant experience or history could come up with a reasonable approximation (or approach to an approximation) for how many piano tuners there are in the entire world (or how much to charge to wash all the windows in Seattle) carries in it the assumption that if you’re smart enough to work for Google, you can figure out the world on your terms, not its terms. You don’t need to know anything about the history of piano tuners, or the different cultures that produce pianos, or the instrument itself. In fact (from what one hears about these interviews) an attempt to say “Well, the question is too complicated to really get a handle on without a lot of involved research” was exactly what the interviewers didn’t want to hear. No no: you can figure out how the world works based on a knowledge of math, a couple of baseline assumptions, and a little calculation. It’s not that the world is a complicated, unpredictable place: it’s that you just don’t simplify it enough.
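For the record, the kind of back-of-envelope “Fermi estimate” these interviews rewarded looks something like the sketch below. Every number in it is an invented assumption (household size, piano ownership rates, a tuner’s workload) – change any one of them and the answer swings by an order of magnitude, which is precisely the problem:

```python
# A classic "Fermi estimate" of the piano-tuner question.
# Every input is a guess pulled from thin air; the method produces
# a confident-looking number regardless of how wrong the guesses are.

world_population = 8_000_000_000       # rough order-of-magnitude figure
households = world_population / 4      # assume ~4 people per household
pianos = households / 50               # assume 1 in 50 households owns a piano

tunings_needed = pianos * 1            # assume one tuning per piano per year
tunings_per_tuner = 2 * 250            # assume 2 tunings/day, 250 workdays/year

tuners = tunings_needed / tunings_per_tuner
print(f"Estimated piano tuners worldwide: {tuners:,.0f}")
# → Estimated piano tuners worldwide: 80,000
```

The arithmetic is trivial; all the actual knowledge about the world has been smuggled into the assumptions.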
The result is a self-perpetuating culture of people who think they’re smart enough to figure out the world, but lack the imagination to realize that it might be more complex than they think – and who assume that anyone who tells them otherwise just isn’t smart enough to understand how smart they are.
This is more than just a data failure: it’s a cultural failure. An inevitable one, given how little Google values self-critique. Has Google learned anything from it?
They’ve dropped the puzzle questions, at least. They’ve switched to a way more conventional set of interview questions and approaches. And there’s more:
“One of the things we’ve seen from all our data crunching,” Bock said, “is that G.P.A.’s are worthless as a criteria for hiring, and test scores are worthless — no correlation at all except for brand-new college grads, where there’s a slight correlation. Google famously used to ask everyone for a transcript and G.P.A.’s and test scores, but we don’t anymore, unless you’re just a few years out of school. We found that they don’t predict anything.”
This is good too – a noticeable improvement. Although again one that was already well known to people who follow the research on testing. And one has to give Google credit for at least admitting that it can make a mistake.
Except … has it admitted that it’s made a mistake?
They’ve acknowledged that there’s room for improvement … but in all the articles I’ve read about this issue, I haven’t heard anyone say “Yeah, it was weird that we spent so much time getting so invested in these quiz questions and school metrics that never meant a thing. I guess we screwed that up.” Or even “We were wrong about that.”
That’s the problem. Because if Google can’t admit the size and scope of the failure … or even that there was one … it will never examine the underlying assumptions about big data and quantifiability and how the world works. Or how these things play out in its own culture.
You could say these things don’t matter, except that this very failure led to years of terrible hiring practices and worthless assumptions.
If Google can’t examine its own cultural assumptions, this is the kind of mistake it’s inevitably going to repeat.
It’s probably happening right now.