Buzz Blog: How did Nate Silver Get the Election Odds so Wrong?

What if you look at all the battleground states? What are the chances that Obama would win every one of the states that he was favored in, and Romney would win all the ones he was favored in? That is, what are the chances that the most likely thing happened in every battleground state? The answer, it turns out, is 14.5%. On the flip side, there was an 85.5% chance that at least one of the battleground states should have gone to the less favored candidate. That means Silver only had a one in seven chance of getting them all right, if by 'right' we mean that the candidate the model favored, by even the slimmest of margins, actually won.
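To see where a number like 14.5% comes from, multiply together each battleground state's chance of going to its favorite; if you treat the states as independent, that product is the chance of a clean sweep. Here's a minimal sketch in Python. The per-state probabilities are illustrative placeholders, not Silver's actual published numbers, but they land in the same ballpark:

```python
# Chance that every battleground state goes to the candidate the model
# favors, treating the states as independent coin flips.
# These probabilities are made-up stand-ins, not Nate Silver's numbers.
battleground_odds = {
    "Florida":        0.50,
    "North Carolina": 0.74,
    "Virginia":       0.79,
    "Colorado":       0.80,
    "Iowa":           0.84,
    "New Hampshire":  0.85,
    "Ohio":           0.91,
    "Nevada":         0.93,
    "Wisconsin":      0.97,
}

p_all_favorites_win = 1.0
for state, p in battleground_odds.items():
    p_all_favorites_win *= p

print(f"P(every favorite wins) = {p_all_favorites_win:.1%}")   # ~14%
print(f"P(at least one upset)  = {1 - p_all_favorites_win:.1%}")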

This is where things start to get suspicious. In the 2012 presidential election, in every state in the nation (assuming Florida goes for Obama as expected), the candidate that the model favored won. Not even once did either candidate beat the odds. That’s why this map of nationwide results as reported by CNN . . .

[Map: 2012 nationwide election results as reported by CNN]

Nate screwed up his odds by assuming that people make their decisions independently. That assumption is what lets you use the normal distribution. Wrong! Remember the heavy positive feedback loop I always talk about: people do not make decisions independently. All kinds of things, from other people to outside events, influence their decisions. That means people can easily act as a group under the right conditions, and the independence assumption goes right out the window.
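Here's a toy Monte Carlo, with made-up numbers, showing why that matters. In both settings each state's favorite has roughly the same individual chance of winning; the only difference is whether the polling errors are independent or share a common national shock:

```python
import random

# Expected vote margin for the favored candidate in nine toy battleground
# states (illustrative numbers, ordered from closest to safest).
MEANS = [0.001, 0.02, 0.025, 0.025, 0.03, 0.03, 0.04, 0.04, 0.05]

def sweep_probability(shared_sd, state_sd, trials=200_000):
    """Chance every favorite wins when each state's error is a shared
    national shock (shared_sd) plus its own independent noise (state_sd)."""
    sweeps = 0
    for _ in range(trials):
        shock = random.gauss(0.0, shared_sd)  # one national shock per trial
        if all(m + shock + random.gauss(0.0, state_sd) > 0 for m in MEANS):
            sweeps += 1
    return sweeps / trials

random.seed(1)
# Both settings give each state about the same total error (~0.03),
# so the per-state odds barely change; only the correlation does.
print(f"independent errors: P(sweep) = {sweep_probability(0.000, 0.030):.1%}")
print(f"correlated errors:  P(sweep) = {sweep_probability(0.025, 0.017):.1%}")
```

When the errors are correlated, the sweep probability jumps well above the independent-errors figure: if the closest state's favorite survives, the shared shock was probably favorable, and the safer states ride along with it.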

Nate used the normal distribution in his calculations instead of a power-law distribution.

The geniuses who develop derivatives models also use the normal distribution. I've had to derive some derivatives models in the past, so I know about this distribution problem. Some big event happens, people act as a group, and the models start blowing up. What they thought was a 200 year event is in fact a 20 year event.
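Here's a rough illustration of that "200 year event" claim, with assumed parameters: calibrate a normal model and a power-law (Pareto) tail so they agree on an ordinary 1-in-10-year event, then ask each one how rare the normal model's 1-in-200-year event really is. The tail exponent alpha = 1.5 and the calibration point are arbitrary choices for the sketch, not values from any real risk model:

```python
from statistics import NormalDist

normal = NormalDist()                # unit normal model for the yearly shock
x10 = normal.inv_cdf(1 - 1 / 10)     # size of a 1-in-10-year event (~1.28 sigma)
x200 = normal.inv_cdf(1 - 1 / 200)   # size of a 1-in-200-year event (~2.58 sigma)

# Power-law tail: P(X > x) = (xm / x) ** alpha. Choose xm so the power-law
# model agrees with the normal model on the everyday 10-year event.
alpha = 1.5                          # illustrative tail exponent
xm = x10 * (1 / 10) ** (1 / alpha)

p_heavy = (xm / x200) ** alpha       # power-law odds of the "200-year" event
print("normal model return period:    200 years")
print(f"power-law model return period: {1 / p_heavy:.0f} years")
```

With these made-up numbers, the loss the normal model calls "once in 200 years" shows up roughly every 28 years: the two models agree on everyday fluctuations and disagree by an order of magnitude exactly where it hurts.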
