Cognitive Dissident

Thursday, January 27

Stochastic Error and Election Do-overs

There's been a lot of hue and cry from both Democrats and Republicans about the gubernatorial election in Washington state. I haven't paid much attention other than to note that it does seem an awful lot like the Democrats have stolen it, but I stumbled on (OK, I followed a link from Instapundit) this the other day:

It is built on the principle that government is subservient to the will of the people. Elections are merely a tool for measuring the will of the people. If an election doesn’t measure the will of the people and is just a contest about counting pieces of paper, you might as well just let the candidates pick a winner by playing a game of Rock, Scissors, Paper. And if the elections system that we have today isn’t good enough to measure the will of the people within the margin of sloppiness, incompetence and illegal voting, then we don’t just suck it up for four years with a governor we don’t want. We say enough is enough, this will not stand, we fix the elections system and we measure the will of the people again.

It's an interesting point that elections are simply a tool for measuring the will of the people. I work as a software engineer for a firm whose software wouldn't need to exist if companies didn't have trouble accurately performing large numbers of relatively simple measurements. It's a central truth of applied statistics that all measurement processes have a certain level of random error associated with them. You can do lots of things to reduce it, but you can't ever get rid of it entirely. With any measurement, you never know the true value of what it is you're measuring.

Which is why you need to know about margin of error. GE's vaunted "six sigma" total-quality-management program is built around measuring the error rate and systematically reducing it to less than "3.4 defects per million opportunities," which works out to an error rate of only 0.00034%. The reason the six sigma program is so highly regarded is that it's really, really hard to get error rates that low. (Note for those who will point out that a measured error rate and the margin of error in a measurement aren't exactly the same thing: I know, but they're sufficiently analogous for this discussion.)
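To make that arithmetic concrete, here's a quick sketch (Python is my choice for illustration; nothing here comes from GE) converting a defects-per-million-opportunities figure into a percentage:

```python
def dpmo_to_percent(dpmo):
    """Convert defects per million opportunities (DPMO) to a percentage."""
    return dpmo / 1_000_000 * 100

# The six-sigma target of 3.4 defects per million opportunities:
print(dpmo_to_percent(3.4))  # 0.00034 (percent)
```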

So if we get back to the idea that elections are a measurement tool, we have to acknowledge that elections have a margin of error in the same way that any other measurement process does. If the results of an election are within the margin of error*, then the true result is literally unknowable.

So what do we do then? I'm sure that lots of people would be tempted to say, "Give it to the person who actually won the vote," which, while intuitively tempting, has some problems. First, we don't know what the actual count is. As we've seen in elections since 2000, the result changes every time the votes are counted. We're supposed to imagine that each successive count brings us closer to the "real" count, but I haven't seen any evidence to support that. To my mind it's entirely plausible that hand recounts are less accurate than the machine counts that precede them. The second problem with giving the winner of the last count the election is that it creates an incentive to cheat in a close election, particularly when it comes to recounts.
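Here's a toy simulation of that point (entirely my own illustration; the 0.1% misread rate and the vote totals are made up). Each count independently misreads a small fraction of ballots, so a razor-thin true margin produces a different total every time you count:

```python
import random

def count_ballots(true_votes_a, true_votes_b, misread_rate=0.001, seed=None):
    """Simulate one count of candidate A's ballots: each ballot is
    independently credited to the wrong candidate with probability
    misread_rate."""
    rng = random.Random(seed)
    # A's ballots read correctly, plus B's ballots misread as A's.
    a = sum(1 for _ in range(true_votes_a) if rng.random() >= misread_rate)
    a += sum(1 for _ in range(true_votes_b) if rng.random() < misread_rate)
    return a

# A razor-thin true margin: 100,050 vs. 99,950 (100 votes out of 200,000).
counts = [count_ballots(100_050, 99_950, seed=s) for s in range(5)]
# Five "recounts" give five different totals for candidate A; with a
# misread noise of roughly +/-14 votes (one standard deviation) on a
# 100-vote margin, successive counts can and do flip the winner.
print(counts)
```

The point isn't the particular numbers; it's that when the true margin is comparable to the counting noise, no single count (hand or machine) tells you who "really" won.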

Alternatively we could flip a coin. This is the solution I favor and is functionally identical to giving the election to the winner of the last count without the incentive to cheat. But it's not very satisfying to have your governor, representative or councilperson chosen by a coin flip, particularly in bitterly contested elections. In a close election where the electorate doesn't perceive much difference between the candidates, this could be a viable solution, but in situations like Florida 2000, or Washington 2004, you're going to have a hard time selling this to the party that ends up losing, and it could exacerbate the perception that the winner didn't really win.

Which, as far as I can tell, leaves us with only one other option, namely holding a new election and hoping that the new election gives a result that is outside the margin of error. I think a good solution would be to pass laws triggering an automatic re-vote in the case where (a) the margin of victory is within some pre-specified range, perhaps 0.1% of the votes cast, and (b) a single recount reduces the margin of victory (which includes changing the victor).

One would hope that a second election would lead to an undisputed victory, but if that were not the case, one could just continue re-voting until a clear victor emerged.

*It would be nice if we knew what the margin of error in a given election was, but unless people were willing to fill out a separate error-check ballot, we'd initially have to guess. If we were determined, we'd be able to measure the various components of the process of balloting and tabulating results, and come up with a good estimate of its margin of error.


  • Flipping a coin, although fair, would always result in the (approximate) plurality feeling as if they'd been cheated, and could easily lead to coups. In fact, even in disputed votes, when there have been allegations of cheating (I'm thinking of the kind of things that Greg Palast got worked up about in the last two elections), most Americans still seemed to believe that the result was fair. A coin toss would destroy the illusion of the fairness of the process, which is all you have sometimes.

    By Anonymous kyb, at 3:56 PM  

  • Of course, there is also the aspect that some electoral systems encourage two-party systems and adversarial politics, while others reward people who can appeal to people with a range of views. I'm a big fan of Condorcet methods. More at

    By Anonymous kyb, at 3:58 PM  
