I’ve gotten an absolutely unprecedented number of requests to write about RFK Jr’s Rolling Stone article about the 2004 election.
RFK Jr’s article tries to argue that the 2004 election was stolen. It does a wretched, sloppy, irresponsible job of making the argument. The shame of it is that I happen to believe, based on the information that I’ve seen, that the 2004 presidential election was stolen. But RFK Jr’s argument is just plain bad: a classic case of how you can use bad math to support any argument you care to make. As a result, I think that the article does just about the worst thing that it could do: it utterly discredits anyone who questions the results of that disastrous election, and makes it far more difficult for anyone who wants to do a responsible job of looking into the question.
Let’s get right into it. He starts his argument by claiming that the exit polls indicated a different result than the official election results:
The first indication that something was gravely amiss on November 2nd, 2004, was the inexplicable discrepancies between exit polls and actual vote counts. Polls in thirty states weren’t just off the mark — they deviated to an extent that cannot be accounted for by their margin of error. In all but four states, the discrepancy favored President Bush.
The key sentence that indicates just how poorly RFK Jr understands the math? “they deviated to an extent that cannot be accounted for by their margin of error”. That is a statement that is, quite simply, nonsensical. The margin of error in a poll is a statistical measure based on the standard deviation. Contrary to popular opinion, a poll with a margin of error of “4%” doesn’t mean that the actual quantity being measured must be within plus or minus 4% of the poll result.
A margin of error is measured to within a level of confidence. Most of the time, the MoE that we see cited is the MoE with 95% confidence. What this means is that 95% of the time, the sampled (polled) result is within that +/- n% range. But no result is ever impossible: the margin of error is an expression of how confident the pollster is in the quality of their measurement, nothing more than that. Like any other measurement based on statistical sampling, the sample can deviate from the population by any amount: a sample can be arbitrarily bad, even if you’re careful about how you select it.
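To see what “95% confidence” actually means, here’s a quick simulation sketch. The 52% true share, sample size, and trial count are all made-up numbers for illustration; the point is that even a perfectly conducted poll lands outside its margin of error about one time in twenty.

```python
import math
import random

random.seed(0)

# Made-up numbers for illustration: a population where the true share
# for candidate A is 52%, polled with samples of 1,000 voters.
TRUE_SHARE = 0.52
N = 1000
TRIALS = 5000

# Standard 95% margin of error for a sampled proportion: 1.96 * sqrt(p(1-p)/n)
moe = 1.96 * math.sqrt(TRUE_SHARE * (1 - TRUE_SHARE) / N)  # about 3.1%

outside = 0
for _ in range(TRIALS):
    # A flawless, unbiased sample of N voters.
    sample_share = sum(random.random() < TRUE_SHARE for _ in range(N)) / N
    if abs(sample_share - TRUE_SHARE) > moe:
        outside += 1

print(f"Margin of error: +/- {moe:.1%}")
print(f"Polls outside the MoE: {outside / TRIALS:.1%}")
# Roughly 5% of these flawless polls deviate "beyond the margin of error".
# Such deviations are expected, not impossible.
```

Results outside the MoE aren’t evidence of anything by themselves; they’re the expected behavior of the 5% tail.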
Elections have consistently shown a bias in the exit polls: a bias in favor of the democratic vote. For some reason, which has not been adequately studied, exit polls almost always err on the side of sampling too many democratic voters. This could be the result of any number of factors: it could be a question of time (when were the polled people asked?); it could be a question of location (where were the pollsters located relative to the polling place?); it could be a social issue (the group of people that consistently votes for the democratic party may be more willing, or have more time, to answer pollsters’ questions); it could be something else entirely.
But you can’t conclude that an election was stolen on the basis of a discrepancy between official election results and exit polls. The best you can do is conclude that you need to look at both the election and the exit polling process to try to determine the reason for the discrepancy.
According to Steven F. Freeman, a visiting scholar at the University of Pennsylvania who specializes in research methodology, the odds against all three of those shifts occurring in concert are one in 660,000. “As much as we can say in sound science that something is impossible,” he says, “it is impossible that the discrepancies between predicted and actual vote count in the three critical battleground states of the 2004 election could have been due to chance or random error.”
That entire quote is, to put it crudely, utter bullshit. Anyone who would make that statement should be absolutely disqualified from ever commenting on a statistical result.
Now, thanks to careful examination of Mitofsky’s own data by Freeman and a team of eight researchers, we can say conclusively that the theory is dead wrong. In fact it was Democrats, not Republicans, who were more disinclined to answer pollsters’ questions on Election Day. In Bush strongholds, Freeman and the other researchers found that fifty-six percent of voters completed the exit survey — compared to only fifty-three percent in Kerry strongholds.(38) “The data presented to support the claim not only fails to substantiate it,” observes Freeman, “but actually contradicts it.”
Again, nonsense. There are two distinct questions in that paragraph, which are being deliberately conflated:
- In each given polling place, what percentage of people who voted were willing to participate in exit polls?
- In each given polling place, what percentage of the people who were willing to participate in exit polls were voters for each of the major parties?
The fact that, overall, a smaller percentage of people were willing to participate in exit polls in places that tended to vote for the democratic candidate is entirely independent of whether, within a specific polling place, a larger percentage of democratic voters than republican voters were willing to participate. This is a deliberate attempt to mislead readers about the meanings of the results – aka, a lie.
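To make the independence of those two questions concrete, here’s a sketch with hypothetical response rates. The numbers are chosen purely to reproduce the 56%/53% completion figures quoted above; they are not Freeman’s actual per-party rates.

```python
# Hypothetical numbers: in EVERY precinct, democratic voters respond at a
# higher rate than republican voters -- yet the Bush stronghold still has
# the higher overall completion rate, because its (more numerous)
# republicans respond more often than the Kerry stronghold's republicans.

def overall_completion(party_shares, response_rates):
    """Overall exit-poll completion rate in one precinct,
    given (dem share, rep share) and (dem rate, rep rate)."""
    return sum(share * rate for share, rate in zip(party_shares, response_rates))

# Bush stronghold: 80% republican; dems respond at 64%, reps at 54%.
bush_stronghold = overall_completion((0.20, 0.80), (0.64, 0.54))
# Kerry stronghold: 80% democratic; dems respond at 55%, reps at 45%.
kerry_stronghold = overall_completion((0.80, 0.20), (0.55, 0.45))

print(f"Bush stronghold completion:  {bush_stronghold:.0%}")   # 56%
print(f"Kerry stronghold completion: {kerry_stronghold:.0%}")  # 53%
# Within both precincts, democrats out-respond republicans by 10 points,
# exactly the kind of partisan response bias the quote claims is refuted.
```

The precinct-level completion rates Freeman cites simply can’t settle the within-precinct question.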
What’s more, Freeman found, the greatest disparities between exit polls and the official vote count came in Republican strongholds. In precincts where Bush received at least eighty percent of the vote, the exit polls were off by an average of ten percent. By contrast, in precincts where Kerry dominated by eighty percent or more, the exit polls were accurate to within three tenths of one percent — a pattern that suggests Republican election officials stuffed the ballot box in Bush country.
It could indicate that. It could also indicate that democratic voters were consistently more willing to participate in exit polls than republican voters. And therefore, in polling places that were strongly democratic, the sampling was quite representative; but in polling places that were strongly republican, the sampling was lousy.
Just to give an idea of how this works, suppose we have two polling places, each of which has 20,000 voters. Suppose in district one there is a 60%/40% split in favor of democratic voters, and in district two there’s the opposite: 60% republican, 40% democratic. And to keep the numbers simple, suppose that in both polling places 60% of democrats were willing to participate in exit polls, and 40% of republicans were. What’s the result?
- District one will have 12,000 votes for the democrat and 8,000 for the republican. The exit polls will sample 7,200 democratic voters and 3,200 republican voters, predicting about 69% of the vote for the democrat, versus 60% actual.
- District two will have the opposite number of votes: 8,000 for the democrat and 12,000 for the republican. The exit polls will sample 4,800 democrats and 4,800 republicans – predicting a 50/50 split, versus the actual 40/60.
The democratic margin of victory in the democratic district was inflated, from 20 points to about 38; the republican margin in the republican district was shrunk by slightly more, from 20 points to zero. A uniform difference in willingness to be polled produces exactly this pattern of larger discrepancies in republican strongholds, with no fraud at all.
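The two-district example can be checked directly. The 60%/40% response rates are the made-up numbers from the example above, not measured figures:

```python
def exit_poll(dem_votes, rep_votes, dem_rate=0.60, rep_rate=0.40):
    """Return (actual dem share, exit-poll dem share) for one district,
    given vote counts and each party's willingness to be polled."""
    polled_dem = dem_votes * dem_rate
    polled_rep = rep_votes * rep_rate
    actual = dem_votes / (dem_votes + rep_votes)
    polled = polled_dem / (polled_dem + polled_rep)
    return actual, polled

d1_actual, d1_poll = exit_poll(12_000, 8_000)   # district one: 60/40 dem
d2_actual, d2_poll = exit_poll(8_000, 12_000)   # district two: 60/40 rep

print(f"District one: actual {d1_actual:.1%}, exit poll {d1_poll:.1%}")
print(f"District two: actual {d2_actual:.1%}, exit poll {d2_poll:.1%}")
# District one: actual 60.0%, exit poll 69.2%
# District two: actual 40.0%, exit poll 50.0%
```

The same response bias in both districts skews the strongly republican district’s poll more, which is the author’s point: the size pattern of the discrepancies doesn’t distinguish ballot stuffing from differential response.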
It continues very much in this same vein: giving undue weight to unreliable samples; statistical bait-and-switch; and claiming that ordinary sampling error is impossible. But none of the mathematical arguments hold up.
Was there fraud in the election? Almost certainly. Particularly in Ohio, there are some serious flaws that we know about. But this article manages to mix the facts of partisan manipulation of the election with so much gibberish that it discredits the few real facts that it presents.
RFK Jr. should be ashamed of himself. But based on his past record, I rather doubt that he is.