Because most philosophies that frown on reproduction don't survive.

Monday, November 26, 2012

When A Middling Amount of Knowledge is a Dangerous Thing

In a long post which is mostly a review of Jim Manzi's Uncontrolled and Nate Silver's The Signal and the Noise (of the two he recommends Uncontrolled, and, being three quarters of the way through it, I'd certainly second that), Razib makes an interesting side point about the much discussed questions of polling and forecasting in the recent election:
But after soft-pedaling my confidence in polling averages, why did I think the pro-Romney people were delusional? The simple answer is 2004 and 2008. When the polling runs against you consistently and persistently, motivated reasoning comes out of the woodwork. There’s a particularly desperate stink to it, and I smelled it with the “polls-are-skewed” promoters of 2012. In 2004 there were many plausible arguments for why the polls underestimated John F. Kerry’s final tally. And in 2008 there were even weirder arguments for why McCain might win. In 2012 it went up to a whole new level, with a lot of the politically conservative pundit class signing on board because of desperation.
...
After the election was over I actually started reading some of the arguments about why the polls were skewed, and I find that they are extremely plausible to me. And not just me, John Hawks owes me a drink because he simply didn’t believe the turnout models which suggested a demographic more like 2008. The reality is that my instinct was to go with John. I too was very skeptical of the proposition that Obama could turn out the same voters as he did in 2008. And yet he did turn out those voters!
What struck me as interesting here is that Razib made a choice to stick with one basic but fairly well established thing that he knew: averages of large numbers of polls are pretty good at predicting elections. Knowing that he wasn't going to be a true expert in all the mitigating factors which might cause one to question that rule, he chose simply not to look at all the argumentation and to stick with his one basic piece of knowledge. Looking at the situation after the fact, he found that the arguments put forward against the basic rule were indeed fairly convincing, but this just reinforced his judgement that he had not been in a position to correctly weigh the merits of counterarguments to the basic rule.
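The rule Razib leaned on, that an average of many polls beats trusting any one poll, is just the ordinary statistics of noise reduction. A quick simulation (purely illustrative, with made-up numbers rather than actual 2012 data) shows why:

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

TRUE_MARGIN = 3.0  # hypothetical "true" lead in points (invented for the example)
POLL_NOISE = 3.5   # hypothetical per-poll error in points (also invented)

def one_poll():
    """Simulate a single poll: the true margin plus random sampling error."""
    return random.gauss(TRUE_MARGIN, POLL_NOISE)

polls = [one_poll() for _ in range(20)]
average = sum(polls) / len(polls)

# A single poll can easily miss by several points, while the average
# of many independent polls typically lands much closer to the truth:
# the error of the mean shrinks roughly with the square root of the
# number of polls (here, about 3.5 / sqrt(20) ≈ 0.8 points).
print("single poll:", round(polls[0], 1))
print("poll average:", round(average, 1))
```

The caveat, and the whole ground of the "skewed polls" argument, is that averaging only cancels *independent* errors; if every poll shares the same bias (say, a wrong turnout model), the average inherits that bias in full.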

This is really interesting from a more general point of view, and I think it underscores some of the pitfalls of having a middling amount of knowledge about a subject. If our knowledge of a topic is basic, and we know it's basic, and so all we do is apply well agreed upon rules, it's likely that at the very least we won't embarrass ourselves by coming up with something way outside the mainstream. The point where we often get into trouble is when we study a topic a bit more and start to think we know enough to get creative, enough to know when things won't behave in the way that the most basic rules would suggest. It's at this middling stage of knowledge that we're likely to make big mistakes, and to make them with excessive confidence.

3 comments:

Jenny said...

My clarinet professor's favorite saying was, "He knows enough to be dangerous."

Josiah Neeley said...

My experience was a little different from Razib's. I spend most of my time around professional conservatives in one form or another, so it really wasn't possible for me not to learn the substance of the conservative anti-polling arguments. As tends to be the case with such things, the arguments ranged in quality from the absurdly awful (e.g. making fun of Nate Silver for being gay) to the quite sophisticated. The stuff about the polls oversampling Democrats sounded plausible to me, but there really was no explanation for why the polls had all suddenly started to show this bias at the same time and by roughly the same amount. The idea that, for example, Fox News was deliberately skewing its polls in favor of Obama was just laughable.

In addition, I've been a fan of prediction markets like Intrade, and while Intrade tended to give higher odds of a Romney victory than Nate Silver did, it still showed that Obama was more likely to win. Of course Intrade might also have been wrong, but the arguments I was seeing didn't seem strong enough to overcome this presumption.

And, like Razib, I did notice something a bit frantic about the nature of the conservative attacks that didn't inspire a lot of confidence. If you are sure Silver is wrong, why spend so much energy attacking him in October, when if you just wait a couple of weeks you can beat him over the head with the actual election results? (There is, of course, the argument that Silver's predictions might become a self-fulfilling prophecy, but notably this is only the sort of argument one hears before an election; to my knowledge no one has claimed that Silver's predictions caused Obama's re-election, even though there is nothing in the election returns per se that disproves the idea.)

As someone who counts himself a member of Team Conservative, I found the whole experience quite troubling. My assumption is that if the situation were reversed liberals would have made similar fools of themselves, such that the problem here is one of human psychology rather than conservative ideology. Still, I wish we had done better.

Darwin said...

The skewed poll arguments I pretty much always found laughable. While the basic complaint that polls appeared to be oversampling Democrats would be persuasive on any one poll, the idea that it was operative in all polls just seemed like a reach -- and far too much like 2004 when we had Michael Moore running around insisting that Kerry would be ahead in the polls if only they would call more people without landlines.

Where I feel like I fell into the plausibility trap was in the last weeks of the campaign, when Romney was first ahead in the polls for a few weeks and then Obama pulled into a very slim lead. That Romney had actually been ahead in the national polling averages opened up a lot of plausible explanations: why one might rely on the national numbers rather than the state numbers, how the enthusiasm gap would come into play, etc. Even when Obama pulled back ahead by a few tenths of a point, since Romney had been ahead for a couple of weeks it was easy to embrace the idea that the resumed Obama lead didn't represent a decisive shift in the race.

Plus, there's the side bias that my work tends to make me lean against (usually with good reason) big proprietary Excel models like Silver's, in which some whiz-bang analyst pulls together huge numbers of factors and his own "secret sauce" of weighting in order to try to make predictions better than the basic data (in this case, polling averages). In my experience, these usually have a short run of looking really smart and then crash and burn rather embarrassingly. So given the nature of Silver's approach, I really _wanted_ to see it fail, even aside from wanting to see Obama lose.