Jed was reading crop circles...

On Wednesday, November 7, 2012, Jed Rothwell wrote:

> OrionWorks - Steven V Johnson <[email protected]> wrote:
>
> Well, Jed, you predicted final count would be 303.
>>
>> U wuz right.
>>
>
>
> Plus I said 2% of the popular vote, which was right on the nose.
>
> Obama will probably take FL so I was too conservative. But FL is amazingly
> close. You might as well say both sides won it.
>
> I can't really take credit. I was just picking the pollsters I trust most,
> and Nate Silver, who sure knows a lot about statistics and s/n ratios.
>
> Gallup veered far from the other pollsters, but in the end fell almost
> back into line. You have to hand it to those people. As I suspected,
> their definition of "likely voter" (LV) was a little too conservative.
> See:
>
> http://www.gallup.com/poll/election.aspx
>
> Their LV is off by -3%; their registered voter number is off by +1%. The
> LV polling ended on Nov. 4. The gap was closing. If they had continued to
> Nov. 6 the gap might have been within 1% instead of 3%.
>
> I think their main mistake was to underestimate the youth vote and
> the Hispanic vote.
>
> It is uncanny how good modern polling has become. It is scary. The truth
> is, there were practically no surprises in this election for people who
> understand polling, statistics, margin of error, sample size and other
> issues.
>
> You should try to understand these issues. They are important in cold
> fusion and other experimental science.
>
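As a sketch of the margin-of-error arithmetic alluded to above, here is the standard formula for a polled proportion under a simple random sample. The function name and the 1,000-respondent example are illustrative, not from the email:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a proportion p estimated from n respondents,
    at roughly 95% confidence (z ~ 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll: 1,000 respondents in a near 50/50 race.
print(f"{margin_of_error(0.5, 1000):.1%}")  # roughly 3.1%
```

This is why a 1-point lead in a single 1,000-person poll is statistically meaningless, while the same lead across many independent polls is not.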
> Rasmussen is controversial. I don't think they are so bad. Their numbers
> are reliable if you add +3% to the Democratic side. In other words, they
> have a fixed bias. This means they have good methodology for collecting
> data, with a proper random set of respondents, a good set of questions,
> and a large enough sample, but they introduce a bias in post-interview
> processing. This seems clear to me when I read the actual questions they
> asked during the telephone interviews and the processing methodology
> they describe in their literature. The questions and interview
> procedures themselves seem well-designed.
>
> Every pollster has to do some degree of postprocessing or the answers
> will be meaningless. For one thing, you have to adjust the responses to
> fit the population. For example, if you poll people at random and reach
> only ~5% Hispanics, you have to weight their responses to represent ~10%
> of the likely voter (LV) population. Overly conservative pollsters put
> them at 8% instead of 10%, because they assumed Hispanic turnout would
> be lower than it was. This was a judgment call. This -- plus random
> variation -- is why there are differences between poll estimates.
>
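The reweighting step described above can be sketched as follows. The 5% and 10% shares come from the email's example; the function name and toy response counts are my own illustration:

```python
def demographic_weight(sample_share, population_share):
    """Weight so a group's responses count at its assumed share of the
    likely-voter population rather than its share of raw respondents."""
    return population_share / sample_share

# Hispanics reached: ~5% of raw respondents, assumed ~10% of likely voters,
# so each of their responses counts double.
w = demographic_weight(0.05, 0.10)

# Toy candidate split within that group (hypothetical counts):
raw = {"candidate_a": 30, "candidate_b": 70}
weighted = {name: count * w for name, count in raw.items()}
```

Whether to assume 8% or 10% turnout is exactly the judgment call the email describes, and it shifts the weighted totals accordingly.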
> - Jed
>
>
