Wednesday, November 14, 2012

The Limits of Models

Okay, I missed my call on the Presidential election. Readers of The Naked Portfolio Manager know that I'm a huge proponent of using empirically-based rule sets to make decisions and predictions. So, it seems, are Kenneth Bickers and Michael Berry, two University of Colorado professors. The two developed an election forecasting model that was widely cited by the media and back-tested it against every election since 1980, identifying the economic factors most important in determining election results. So what went wrong with their model to cause them to miss the election too?

As they developed their model, Bickers and Berry failed to consider a major factor: the ethnicity of the candidate. Despite anemic job growth and an extremely weak economy, President Obama received 93% of the black vote and 71% of the Hispanic vote.

As mentioned, the two professors developed their model using data from only the last eight elections. Eight is a really small sample when building a model. Quite frankly, there were no data points in which an incumbent minority candidate was running for re-election in an extremely weak economy.

When Billy Beane, subject of the famous baseball economics book Moneyball, was developing models for the Oakland Athletics to identify winning baseball players, he employed thousands of data points.

Which brings me to an important point about models: models developed from relatively few data points are much less reliable than models developed from larger data samples. Sounds like an obvious statement, but as you can see with the University of Colorado election forecasting model, models built on thin data are often widely cited, to the detriment of those who rely on them.
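To see why eight data points is so treacherous, here is a minimal simulation (my own illustration, not anything from the Colorado model) that fits the same simple linear relationship many times, once with samples of 8 points and once with samples of 1,000, and compares how much the estimated coefficient bounces around:

```python
import random
import statistics

random.seed(42)

TRUE_SLOPE = 2.0   # the real relationship we are trying to recover
NOISE_SD = 5.0     # noise in each observation

def fit_slope(n):
    """Least-squares slope estimated from n noisy (x, y) points."""
    xs = [random.uniform(0, 10) for _ in range(n)]
    ys = [TRUE_SLOPE * x + random.gauss(0, NOISE_SD) for x in xs]
    mx = statistics.mean(xs)
    my = statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Re-fit the model 500 times at each sample size.
small = [fit_slope(8) for _ in range(500)]
large = [fit_slope(1000) for _ in range(500)]

print("n=8    slope spread (stdev):", round(statistics.stdev(small), 3))
print("n=1000 slope spread (stdev):", round(statistics.stdev(large), 3))
```

Both sets of fits are centered on the true slope, but the 8-point estimates scatter roughly ten times more widely than the 1,000-point estimates. A model fit on eight elections can look perfect in back-testing and still be mostly noise.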
posted by Bob Fischer

Monday, November 5, 2012

Why Obama Will Lose The Election

Watching the news programs on Sunday, you would have thought that President Obama was a lock to win the election. But I have a very different prediction.

In The Naked Portfolio Manager, I argued that statistical prediction, or using rule sets developed from empirical data, is the most reliable method of making a prediction. I watched the commentators drone on about how the recent epic storm made President Obama "look presidential" and how his visit to New Jersey and the big bear hug he got from Governor Christie made him look like a bi-partisan leader. The pundits argued this display could prove significant in a close election.

One of the problems we have as thinkers, often called recency bias, is that we assign more weight to recent facts and discount data we received in the past, even when the past data should carry more weight. I think the political commentators predicting an Obama victory are discounting the economic data which, although less current, should carry much more weight in their predictions.

Two political scientists at the University of Colorado have been predicting a Romney victory for several weeks now, based on an empirical model developed through rigorous mathematical testing. If we are interested in predicting how uncertain future events will unfold, we should give much more weight to this type of research than to the talking heads on television.

Could President Obama still pull off a win on Tuesday? Sure, no model is right 100 percent of the time. But based on the University of Colorado model, Romney should be a heavy favorite.
posted by Bob Fischer
