My source for most of my polling data is Real Clear Politics. They do excellent work in collecting polling data and also calculate their own poll averages. For national polling I currently use their average.
For individual states I do my own calculation based on a weighted average of polls. Newer polls are given more weight than older polls, bigger polls are given more weight than smaller polls, and likely voter polls are given more weight than registered voter polls.
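The post doesn't give the exact formula, but the three weighting rules above can be sketched as follows. The decay rate, the likely-voter multiplier, and the poll records are all hypothetical illustrations, not the author's actual parameters:

```python
# Hypothetical poll records: (margin, sample_size, days_old, is_likely_voter).
# Margin is the candidate's lead in points; values are made up for illustration.
polls = [
    (+4.0, 1200, 2, True),    # recent, large, likely-voter poll
    (+1.0, 600, 10, False),   # older, smaller, registered-voter poll
    (+3.0, 900, 5, True),
]

def weighted_average(polls, decay=0.9, lv_bonus=1.5):
    """One plausible scheme: weight proportional to sample size,
    decayed exponentially with age, with a multiplier for LV polls."""
    num = den = 0.0
    for margin, n, age, lv in polls:
        w = n * (decay ** age) * (lv_bonus if lv else 1.0)
        num += w * margin
        den += w
    return num / den

print(round(weighted_average(polls), 2))  # → 3.42
```

Note how the recent likely-voter poll dominates: it carries roughly seven times the weight of the ten-day-old registered-voter poll, pulling the average toward it.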
For both national and state polls I estimate a confidence interval around the number: given the current poll average, how far could the real result actually stray from it? In a state with little polling that could be 5-10 points in either direction; in more heavily polled states the ranges are narrower. In short, the more polling data I have, the more confident I am.
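The post doesn't specify how the interval is computed; one common choice, shown purely as an illustration, is a standard-error-style half-width that shrinks with the square root of the total sample but never drops below some floor of irreducible uncertainty:

```python
import math

def interval_halfwidth(total_sample, base_spread=50.0, floor=2.0):
    """Hypothetical: half-width in points shrinks like 1/sqrt(n),
    never below a floor reflecting irreducible polling error.
    The constants are assumptions chosen to roughly match the
    5-10 point range quoted for lightly polled states."""
    return max(floor, base_spread / math.sqrt(total_sample))

print(interval_halfwidth(100))   # sparse polling → 5.0 points either way
print(interval_halfwidth(5000))  # heavy polling  → hits the 2.0-point floor
```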
This information (what we know about the race in each state, what we know about the national race) is fed into the model, which uses it to simulate a national election. I repeat the simulation many times and report the aggregate results. Some examples from the 9/13 simulation:
- President Obama won 84% of the time
- He won the popular vote in Wisconsin 78% of the time
- He won the popular vote in Indiana 2% of the time
- There was an electoral college tie 0.2% of the time
- 91% of the time, the winner of Michigan was the same as the winner of the election overall.
The simulation is repeated many times because although the model doesn't know what will happen, it does know how likely various outcomes are. Perhaps the right mix of voters will turn out for Romney in OH and FL, or perhaps a different mix turns out and Obama wins both states. Perhaps something will happen that swings the election dramatically in Romney's favor. The model doesn't know which will happen, but it can estimate how likely each of these events is. You have to repeat the simulation many times to capture all the possible realities and see how they play out together, on average.
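A toy version of this Monte Carlo loop looks like the sketch below. Everything here is a stand-in: just three swing states instead of fifty, made-up margins and interval widths, and invented "safe" electoral-vote totals for the states not simulated. Each trial draws a plausible "real" margin for every state from around its poll average and tallies electoral votes; repeating the trial many times turns those per-state uncertainties into an overall win probability:

```python
import random

random.seed(0)

# Hypothetical inputs: (state, electoral votes, margin, interval half-width).
# The real model covers every state plus a national component.
states = [
    ("OH", 18, +2.0, 4.0),
    ("FL", 29, -1.0, 4.0),
    ("WI", 10, +5.0, 5.0),
]
SAFE_OBAMA, SAFE_ROMNEY = 247, 234  # invented locked-in EV totals (sum to 538 with the above)

def simulate_once():
    """One simulated election: draw a 'real' margin near each poll
    average and award that state's electoral votes accordingly."""
    ev = SAFE_OBAMA
    for _, votes, margin, spread in states:
        if random.gauss(margin, spread / 2) > 0:
            ev += votes
    return ev

trials = 20000
wins = sum(simulate_once() >= 270 for _ in range(trials))
print(f"Obama won {100 * wins / trials:.0f}% of simulations")
```

No single trial is a prediction; only the aggregate across all trials says anything, which is exactly why the simulation has to be run many times rather than once.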