Bernie Would Have Lost

How the rigging of the 2020 Democratic Primary demonstrates why Bernie Sanders could never have won the general election


It’s September 2020. The DNC has selected Joe Biden as the Democratic nominee.

Try to think back to the beginning of 2020 for a moment.

Are you left with a vague sense of déjà vu?

Do you remember when a “brokered convention” seemed like a legitimate worry?

Are you wondering how a candidate with all the momentum was beaten by a candidate who looked poised to fade away after a third failed bid to become the nominee?

What the hell happened?

Ask the media and the ex-post facto explanation is that, in a packed field of candidates, the moderate wing coalesced a few days before Super Tuesday while the progressive wing remained split between Sanders and a defiant Warren.

If you’re somewhat more critically minded, you’ll no doubt also attribute some of Biden’s surge to a biased corporate media landscape that relentlessly undermined Sanders and his broad base of support during the campaign.

And if you voted in a major city on Super Tuesday, particularly if you voted in a lower-income area or around a college, there’s a good chance you experienced frustratingly long lines, missing registrations, and closed or relocated polls.

And the truth is, all of these factors combined to create a seemingly insurmountable hurdle for the Sanders campaign. But what if, in spite of all these efforts, Sanders was still on track to win the damn thing?

The answer: you rig the election.

Much noise has been made about claims of election rigging over the past 4 years. But the question remains: who can you trust? On one hand, you have articles and opinion pieces from news outlets whose names you recognize, confidently dismissing exit polls as notoriously unreliable and assuring you, with not-so-subtle condescension, that there was no rigging.

And on the other, you have a handful of weird nerds on websites you’ve never heard of posting statistical analysis peppered throughout pages upon pages of unintelligible screeds that claim to show incontrovertible proof of rigging.

Even if you find yourself sympathetic to the latter, there’s only so far these sympathies can take you before you shrug your shoulders and move on, resigned to the fact that you’ll never really know the truth, so why bother? After all, what is someone with limited hours in a day to do?

Which brings us to the goal for this essay: To allow someone with one hour and no background in polling or statistics to come away with an intimate understanding of the real story behind the 2020 Democratic Primary. If you are someone with even a kernel of doubt about the legitimacy of the results, this is, without hyperbole, the most important document you will ever read.

The sections are laid out as follows:

Part 1 - Exit Polls
Part 2 - Adjustments
Part 3 - Discrepancies
Part 4 - Margins of Error
Part 5 - Early Voting / Mail-In Ballots
Part 6 - Young Voters and Enthusiasm
Part 7 - The 2016 Primaries
Part 8 - Caucus States
Part 9 - Electronic Voting
Part 10 - History of Electronic Voting
Part 11 - Audits
Part 12 - Bernie would have lost

Part 1 – Exit Polls

First off, what do you mean by “rig”?

The term rig, or rigging, is often used to refer to a whole suite of methods of voter disenfranchisement. These methods range from the underhanded and illegal, like purges of voter rolls and uncounted ballots, to the systemic yet perfectly legal, like restrictive voter ID laws or the felony disenfranchisement laws that have stripped voting rights from 2.5% of Americans.

But for the purposes of this essay, the term rigging is used to refer to the direct manipulation of tabulated vote totals. The key piece of evidence we’ll use to show this occurred are exit polls.

Right, but I already know not to trust the polls…

Exit polls are NOT phone polls, or other types of mass polling of voter intent. Exit polls are a measure of what voters actually did. Exit polls are typically so reliable and the methodology so well understood that the UN, the EU, and USAID all rely on exit polls as the gold standard for determining if a foreign election result has been tampered with, a process known as “election verification”. Discrepancies between the exit polls and the vote count have even been used to overturn election results in other countries.

Alright, hold on. What the hell are exit polls then?

Exit polls are conducted at the actual voting locations as you leave. They typically involve a voluntary anonymous questionnaire that asks your age, gender, race, who you voted for, and several other questions to gauge broad voter sentiment about the issues that motivated your vote.

In the US, exit polls have two purposes. The main purpose is to provide demographic data on voters and their choices. But they’re also useful for projecting a winner that media can broadcast prior to the votes being counted.

The official organization in charge of conducting the exit polls is called the National Election Pool (NEP) and was created by a bunch of media orgs that includes Fox, CNN, ABC, CBS, NBC and the AP. Before 2004 it was a wild west of each media org conducting its own exit polling, but now they’ve pooled the effort and all rely on the NEP to provide a common set of data. The NEP runs the exit polls for both the Dems and the GOP during both the primaries and the general election. In true American fashion, the actual on-the-ground job of conducting the polling is contracted out to a separate entity named Edison Media Research.

On election day, Edison sends pollsters out to a selection of precincts across the state. As voters leave the booths, pollsters systematically approach a subset of departing voters (say, every fifth voter) and ask if they would like to participate in an interview. If they agree, they’re handed the questionnaire. Completed questionnaires are anonymized by depositing them in an exit poll ballot box, and are tallied at a few predetermined points during the day. Response rates are typically between 40–50% of people asked.
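The sampling procedure just described can be sketched as follows. This is a toy model, not Edison’s actual software; the interval, response rate, and voter count are illustrative:

```python
import random

def systematic_sample(voters, interval=5, response_rate=0.45, seed=42):
    """Approach every `interval`-th departing voter from a random starting
    offset; each approached voter agrees to the interview with probability
    `response_rate` (typical real-world rates are 40-50%)."""
    rng = random.Random(seed)
    start = rng.randrange(interval)
    approached = voters[start::interval]
    return [v for v in approached if rng.random() < response_rate]

day_of_voters = list(range(2000))        # stand-ins for departing voters
respondents = systematic_sample(day_of_voters)
print(len(respondents))                  # roughly 2000/5 * 0.45, i.e. ~180
```

Sampling at a fixed interval rather than hand-picking respondents is what keeps the sample close to random, which is why the resulting polls can be as accurate as they are.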

In states with caucuses or a high proportion of vote-by-mail, alternative methods of polling are used (more on these later).

By mid-afternoon, Edison releases demographic data to the NEP, which is then disseminated to news orgs. This data comprises approximately two-thirds of all interviews for the day. Immediately after polls close in the evening, and before the first results are reported, Edison calls in the first full wave of exit poll data. The data disseminated at this point typically comprises nearly all of the interviews conducted during the day.

Oh those exit polls. Aren’t those wrong all the time? And didn’t you just say US exit polls are only designed to send data to media orgs, not detect rigging?

This is an outright lie relentlessly pushed by these same media orgs in an attempt to hand-wave away unexplained discrepancies that have been occurring in elections for quite some time. The occasional spectacular failure of their own tools to project a winner is a cause for embarrassment. A blanket dismissal of exit polls is a means of papering over this embarrassing phenomenon.

In actual fact, well-conducted exit polls are typically incredibly accurate, even in a packed field. We don’t need to look further than the 2016 Republican Primary to show this. Despite the “silent Trump voter” being a common refrain at the time, exit polls proved very reliable and showed close alignment with the results, as explained by Joe Lenski, Edison’s VP:

“While everyone is talking about the Democratic side, we went out at 9 o’clock saying that Donald Trump was going to get 58 percent of the vote. He got just about 60 percent of the vote. Everything we did on the Republican side hit the mark.” (source)

Despite their shortcomings, exit polls are one of the last remaining tools we have to corroborate election results in this country. And while it’s true that US exit polls haven’t been explicitly designed to uncover fraud, there is no fundamental reason preventing them from doing so. Claiming they can’t is akin to saying you can’t determine the weight of a single grain of rice using a bathroom scale. It requires that we have a solid understanding of the limitations of the scale (and a lot of counting!), but it most certainly can be done.

But I read the exit polls matched the results fairly closely. Shouldn’t we expect to see a difference between the exit polls and the result if there was rigging?

In many of the early states, the exit polls did NOT match the results closely. Screen captures of the exit polls for individual states from before the first results were announced on election night can be found here:

New Hampshire
South Carolina
California
Massachusetts
Texas
Vermont
Michigan
Missouri

However, if you were to look up the exit polls now, or even an hour after the results were originally released, you’d see very different numbers than these. This is because the exit polls have all been adjusted to closely align with the results.

Since this late adjustment occurs, it’s important to understand exactly which exit polls we’ll be examining. What we want are the exit polls that are released prior to the adjustments made after the initial results are released, since this is the only time the exit polls are still truly independent of the result. This difference between the official result and these unadjusted exit polls is what we’ll call the discrepancy.

Ok if you’re saying we saw exit poll discrepancies, how come I haven’t heard anything about this? Wouldn’t this be big news?

As we said before, the exit polls conducted in the United States are designed by media organizations, for media organizations. They’re primarily intended to provide demographic data on voters but, as a convenient bonus, they also allow the media to accurately project the winner on election night prior to the results being announced.

You’ll often hear that exit polls in the US aren’t designed to provide election verification. And that’s true. Election verification is provided by a different type of exit poll, called an Election Verification Exit Poll (EVEP) or Parallel Vote Tabulation (PVT). These generally refer to more rigorous exit polling methods which differ from our election exit polls in the following ways:

By the way, from here on in, if your eyes are glazing over and you just want the gist of a particular section, skip to the end of the section for the summary.


Section Summary

  1. Exit polls are a survey of actual voters and are different from pre-election opinion polls.
  2. Exit polls come in different varieties, with some designed to verify election integrity and some designed around collecting data on voter demographics and sentiment.
  3. US exit polls are typically quite accurate, but have been spectacularly wrong in high profile instances, leading to a poor public reputation.
  4. In the US, exit polls are conducted on behalf of media organizations by a company called Edison Research. Once the initial results are released on election day, Edison adjusts the exit polls to match the results.

Part 2 - Adjustments

You’re saying the exit polls get adjusted to the results? Aren’t they supposed to be independent?

Yes. Adjusted to the results. Joe Lenski again explains it here:

Shortly after poll closing, we can quickly compare what the exit poll of that precinct said the votes were and what the actual votes were. So that’s when you’ll see a fairly quick adjustment to the exit poll estimates after poll closing. Like in New York, we were showing a four-point margin in the exit poll at 9 o’clock, but by 9:45 we were showing a 12-point margin. That’s because we can quickly compare precinct-by-precinct what the exit poll results were and what the full results for that precinct were. So we’re seeing precinct-by-precinct that the actual results were that Hillary Clinton was doing four points better than she did in the exit poll in that precinct, we will adjust the results [of the exit poll] accordingly.

You read that correctly. The exit poll numbers are adjusted to match preliminary reported vote fractions. This can be a tricky concept to grasp, so here it is represented by three simple elections with the same exit poll fraction:


Election 1: Normal election. This one is straightforward. The exit poll agrees closely with the initial results and thus no adjustment occurs. At 100% reporting, results match exit polls closely.

Election 2: Tampered election. Same exit poll fraction as Election 1; however, the initial results show a discrepancy with the exit poll. No adjustment to the exit poll is made, and the discrepancy persists through to 100% reporting.

Election 3: Tampered election with adjusted exit poll. Same as Election 2, only this time the exit polls are quickly adjusted to reflect the early results despite the discrepancy. No discrepancy remains after 100% of votes are reported.

Election 3 demonstrates exactly what Edison does and why US exit polls are fundamentally incapable of detecting election tampering. The adjustments rely on the primacy of an incontrovertible result, and work backwards to adjust the exit poll numbers to match. Put simply, the exit polls assume the results are “correct” by definition, and thus any disagreement between the two is resolved by changing the exit polls.

If this seems absurd and ludicrously unscientific to you, then you’ve understood it correctly.
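To make the circularity concrete, here is a minimal sketch of the arithmetic such an adjustment implies (not Edison’s actual procedure, and the candidate shares are hypothetical): each candidate’s exit-poll share is reweighted by the ratio of their reported share to their polled share, which by construction reproduces the reported result exactly.

```python
def adjust_to_result(poll_shares, reported_shares):
    """Reweight exit-poll shares to conform to the reported result.
    Each candidate's share is scaled by reported/polled, so the
    adjusted poll equals the reported result by construction."""
    weights = {c: reported_shares[c] / poll_shares[c] for c in poll_shares}
    adjusted = {c: poll_shares[c] * weights[c] for c in poll_shares}
    total = sum(adjusted.values())
    return {c: round(adjusted[c] / total, 3) for c in adjusted}

exit_poll = {"A": 0.53, "B": 0.47}   # hypothetical unadjusted poll
reported  = {"A": 0.47, "B": 0.53}   # hypothetical reported result
print(adjust_to_result(exit_poll, reported))  # {'A': 0.47, 'B': 0.53}
```

Whatever the initial discrepancy was, the adjusted poll matches the result perfectly, which is why the published post-adjustment polls can never reveal tampering.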

Wait but don’t they make adjustments to the polls for things like voter bias?

Yes. But these adjustments for systematic biases are made continuously throughout the day and are uncontroversial. It’s the adjustments made after the polls close, to align with the first official results, that are the issue. And this means that not only do the exit polls get adjusted to the results, all the answers to the exit poll questionnaires and the associated demographic data get scaled along with them! More on the implications of this later.

If Edison collects and records all this data, surely someone could just check the raw data and settle this once and for all?

Well in fact, the raw exit poll data is available through an org called the Roper Center at Cornell University. There are just a few small caveats:

  1. You must pay a small fee according to your affiliation.
  2. You must wait until February 2021, the month when data related to the 2020 Primaries is first made available.
  3. You must submit your request for data access along with a written affidavit detailing exactly what research you intend to use specific data for, and, if found sufficiently deserving by the Roper Center administration, your request will be approved. Hard to imagine a more transparent system!

So that’s it then? The exit polls are adjusted, or maybe they aren’t. In either case, we have no way of knowing what the exit polls showed since they’ve been changed and we can’t see the original data. So how can we say there’s rigging?

You’d think so, but we’re in luck. On election night, Edison publishes the exit poll numbers after polling officially closes but before the first results are announced. So for a brief window of time, the unadjusted exit polls are made public for anyone to see on most major news networks.

Hold on. Is this about that site “TDMS Research”? Hasn’t that been debunked?

TDMS Research is a website maintained by an unaffiliated individual who had the foresight to record the exit poll data news orgs made publicly available on election night, prior to the post-result adjustments. TDMS then performed a straightforward comparison between these unadjusted exit polls and the official results.

So in short, no, it wasn’t debunked, because the unadjusted data is still available for anyone to view using internet archive tools such as the Wayback Machine.


Section Summary

  1. If exit polls show discrepancy with the initial results, Edison quickly adjusts the exit polls to conform with the results on election night, and no further analysis or forensics is performed. While this adjustment is not malicious, the implications of this adjustment are often downplayed by the same media who sponsor the polls.
  2. This adjustment renders the exit polls unable to identify instances of election rigging.
  3. Unadjusted exit polls are our most accurate tool to independently validate the election results.
  4. Unadjusted exit polls are publicly available via news orgs for a brief window after polls close and before the initial results are published.
  5. These unadjusted exit polls have been preserved for us to now analyze.

Part 3 - Discrepancies

Alright then, what do the unadjusted exit polls show?

Glad you asked. Here they are summarized:

Since this format isn’t very revealing, let’s try to plot these discrepancies one state at a time. For now, what we’re interested in is not how each candidate performed, but how they performed relative to the exit polls, i.e. the numbers shown in the “Dis.” columns. Let’s start with the first non-caucus state, New Hampshire.

Discrepancy between unadjusted exit polls and vote percentage in NH

Here we can see Biden’s and Sanders’ results represented by the blue and red bars. The smaller the bar, the smaller the discrepancy between the exit poll and the vote count. A bar extending upward means that the candidate overperformed their exit polls, while a downward bar means they underperformed.

Exit polls for NH had Biden at 10.0%, Sanders at 25.9%, and Buttigieg at 21.4%, but the results ended up being Biden at 9.2%, Sanders at 25.6%, and Buttigieg at 24.3%. This means Sanders and Biden both came quite close to the vote share predicted by the exit polls, but Buttigieg did 2.9% better than the exit polls predicted.
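Using the NH numbers above, the “Dis.” value is simply the official result minus the unadjusted exit poll, with a positive value meaning the candidate overperformed the poll:

```python
exit_poll = {"Biden": 10.0, "Sanders": 25.9, "Buttigieg": 21.4}  # unadjusted NH exit poll (%)
result    = {"Biden":  9.2, "Sanders": 25.6, "Buttigieg": 24.3}  # official NH result (%)

# discrepancy = result - exit poll; positive means overperformance
discrepancy = {c: round(result[c] - exit_poll[c], 1) for c in exit_poll}
print(discrepancy)  # {'Biden': -0.8, 'Sanders': -0.3, 'Buttigieg': 2.9}
```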

Remember, the bars don’t represent candidate performance. They represent performance relative to exit polls. Or alternatively: the bars represent the discrepancy. If a bar is large, in either a positive or a negative direction, there is very good reason for your eyebrows to rise. Alright, now that we have a sense for how this works, let’s see what the rest of the early states look like:

Discrepancy between unadjusted exit polls and vote percentage in early states

In VT, CA, MA, TX, and MI, Sanders received SIGNIFICANTLY fewer votes than the exit polls indicated he would. Conversely, in SC, VT, CA, and MA, Biden received SIGNIFICANTLY more votes than the exit polls indicated he would.

But there is more going on here than just Sanders and Biden: the over-performance of Buttigieg in NH, the under-performance by Warren in both CA and her home state of MA, and some questionable over-performance by Bloomberg on Super Tuesday.

Notice a pattern? The one thing all these anomalies have in common is that they hurt the progressive wing and boosted moderate candidates. It’s not surprising that the discrepancies aren’t all the same, since any competent rigging strategy would need to take into account the unique degree of organic support the candidates had in each state. But if we were to speculate on the machinations:

In short, if Biden needed to lock in a win, his votes were boosted. Otherwise, Sanders’ (and occasionally Warren’s) votes were redistributed to other candidates or simply deleted.

But what about NH? There was no Biden or Sanders discrepancy there.

Prior to South Carolina, the anointed nominee appears to have been Buttigieg. But after the Iowa disaster and Buttigieg’s failure to generate any black voter support in the south, he was abandoned in favor of Biden, the begrudging backup who could lean on his legacy as Obama’s VP.

And before we move on, keen-eyed readers will have also noticed that Colorado, in contrast with seemingly everywhere else, appeared to have almost zero discrepancies across the board. Keep this in the back of your mind, as we’ll explore a theory for why this might be in Part 11.

What about this FactCheck.org article? It says all of these exit poll conspiracies were debunked.

Back in March, FactCheck.org wrote an article on how a number of viral posts highlighting the exit poll discrepancies were not factual. In it, they claim:

[…] the exit polling numbers in Massachusetts showed 34% support for former Vice President Joe Biden. That’s the same percentage of the final vote that he ended up winning.

This is an inane statement that reveals a lack of understanding of the discrepancy being highlighted. It is no surprise that the exit polls agreed with the final vote when we are quite clearly told the exit polls are deliberately adjusted to match the vote! Unfortunately, the article continues:

But thinking that interim exit poll results is a better indicator of voter preference is misguided, said Daron Shaw, a professor at the University of Texas at Austin who has worked on political campaigns and polling.

Exit polls are weighted throughout the day, said Shaw, not just at the end. He called the analysis done by TDMS Research “misleading at best and corrosive at worst.”

Let’s ignore for a second that the Mr. Shaw they quote was a strategist for the 2000 and 2004 Bush election campaigns, and also serves on Fox News Channel’s national decision team, and focus on the objection. The objection being raised does not in any way address the discrepancy. No one is arguing that ALL adjustments of the exit poll data are fraudulent. At issue are the adjustments made specifically to align with the early results, since this conceals evidence of rigging, albeit with the understanding these adjustments are not themselves malicious.

The article then goes on to nitpick one of the alternate methods TDMS used to try to illustrate the magnitude of the “margin of error” or MoE, calling it “misleading” and non-standard. While there is some merit to the idea that any reporting of discrepancies should stick closely to standard definitions to avoid confusion, at no point does TDMS attempt to misrepresent or conceal this additional peek into proportional differences.

In short, the FactCheck article expertly vanquishes a number of straw men, while at no point addressing the crux of the argument: that exit polls showed a clear pattern of discrepancies that far exceeded their stated MoE, in a direction that benefitted Biden and/or disadvantaged Sanders.


Section Summary

  1. Almost all early non-caucus states showed significant discrepancies between exit polls and the reported vote that either helped Biden, or hindered Sanders, or both.

  2. Exit polls showed Sanders consistently had MORE support than the actual result in all Super Tuesday states.

  3. Exit polls showed Biden consistently had the same or LESS support than the actual results in all Super Tuesday states.

  4. Exit polls correctly predicted votes for Sanders in Colorado, the only state to conduct a full post-election audit of the vote.

  5. In most cases these discrepancies far exceeded the reported Margin of Error, making the observed pattern nearly impossible to attribute to random chance.

  6. Articles claiming to “debunk” viral analyses of the exit poll discrepancies are themselves a source of much misinformation and misunderstanding.

Part 4 - Margins of Error

Is MoE a measure of how accurate the poll is?

Kind of. With every discussion of exit polls, you’re bound to hear mention of the “margin of error” or MoE. A MoE is a statistical phenomenon that accompanies any effort to sample a population. The MoE is a combination of two distinct things:

  1. A base level of statistical noise, an unavoidable artifact of sampling. We’ll call this the standard error.
  2. Systematic biases introduced by non-randomness in the sampled group.

The standard error cannot be avoided and is merely a function of the sample size. But with good design, systematic bias can be. Mostly.

Let’s first expand on the “standard error is only a function of sample size” part, since it’s probably the most misunderstood aspect of how sample sizes, and by proxy polls, work.

Here it is again: the standard error is merely a function of the sample size. But what does this mean? It means that, all other things being equal, the expected accuracy when sampling, or polling, a population will be roughly the same for a population of 10,000 as it will be for 10 million. This may seem intuitively wrong because of a common misconception that a sample needs to make up a large portion of the population to be accurate. This is not the case. This statistical quirk comes about due to something called the Law of Large Numbers, or LLN.

The LLN describes the tendency of an average obtained from a sample to approach the true value as the sample size grows. Whew, that’s a mouthful. But you can think of it like randomly drawing marbles from a very large bag filled with both black and white marbles to figure out the ratio of black to white. Initially, you will have low confidence that your sample is close to the fractions contained within the bag. But as the number of marbles you draw increases, your confidence in how representative your sample is of the bag “population” increases. Once you’ve drawn a considerable number of marbles, you’ll be quite confident you are close to the true ratio, no matter how large the bag is.

Bringing this back to exit polls, put simply it means that the standard error depends only on the number of people polled and is independent of the total number of people who voted. For example, an exit poll with 3,000 respondents would have the same MoE in California as it would in Wyoming. If this still doesn’t make sense, here’s a great video that explains this concept in more detail.
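The claim is easy to check with a toy simulation (hypothetical 40% support, fixed seed, not real polling data): the same 1,500-person sample is roughly equally accurate whether the “state” has ten thousand voters or a million.

```python
import random

def poll_population(pop_size, true_share, sample_size, seed):
    """Draw `sample_size` voters without replacement from a population in
    which a `true_share` fraction supports our candidate, and return the
    sampled support fraction."""
    rng = random.Random(seed)
    supporters = int(pop_size * true_share)
    population = [1] * supporters + [0] * (pop_size - supporters)
    sample = rng.sample(population, sample_size)
    return sum(sample) / sample_size

small = poll_population(10_000,    0.40, 1_500, seed=1)
large = poll_population(1_000_000, 0.40, 1_500, seed=1)
print(small, large)   # both land close to the true 0.40
```

The error in both cases is governed by the 1,500-person sample, not by the size of the population being polled.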

Yeah, I got it. The standard error depends only on the sample size. So how big a sample size do we need to make sure the polls are accurate?

In the context of Exit Polls, the Margin of Error (MoE) should be thought of as the range within which we expect the actual result to land 95% of the time. For example, if Candidate A receives 30% of the vote, we would expect an exit poll with a ±2% MoE to predict a result in the range of 28% - 32%, nineteen times out of twenty. There’s only a 5% chance that an exit poll would predict a result outside of this range. Put another way, if you could run this same poll 19 additional times while randomly sampling different people, only one of those times would your exit poll predict a result of less than 28% or more than 32%.
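A quick simulation (a hypothetical candidate at 30% support, 1,500 respondents per poll, fixed seed) confirms the “nineteen times out of twenty” behavior:

```python
import math
import random

def coverage(true_p=0.30, n=1_500, trials=2_000, seed=7):
    """Run many simulated exit polls of a candidate with true support
    `true_p` and return the fraction whose estimate lands within the
    95% margin of error."""
    rng = random.Random(seed)
    moe = 1.96 * math.sqrt(true_p * (1 - true_p) / n)  # 95% MoE
    hits = 0
    for _ in range(trials):
        estimate = sum(rng.random() < true_p for _ in range(n)) / n
        if abs(estimate - true_p) <= moe:
            hits += 1
    return hits / trials

print(coverage())  # close to 0.95
```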

Exit poll MoEs are reported, if they are reported at all, almost always in the form of a single value (“The MoE for the poll is 2.5%”) when in fact, that’s a simplification. In reality, the MoE is slightly different for each candidate within a single poll. There is also a different MoE when comparing one candidate’s share of the exit poll against another candidate’s. But for our purposes, we’re going to assume the most conservative interpretation of this error just to be safe, and call it the Standard Error (SE). We calculate the SE via the following formula, where p is a candidate’s share of the vote, n is the sample size, and z = 1.96 for 95% confidence:

SE = z × √( p(1 − p) / n )

For p = 0.5 and a large sample size, this simplifies down to:

SE ≈ 1 / √n

Running a range of sample sizes through this formula, we see the phenomenon of the Law of Large Numbers expressed as the diminishing returns for larger and larger samples:
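As a sketch of that calculation (p = 0.5, z = 1.96), computing the error for a range of typical sample sizes makes the diminishing returns plain:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a share p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1_000, 2_000, 3_000, 5_000, 10_000):
    print(f"n = {n:>6}: ±{100 * margin_of_error(n):.1f}%")
```

Going from 1,000 to 3,000 respondents only shrinks the error from about ±3.1% to ±1.8%; quadrupling the sample merely halves the error.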

Most state-level exit polls sample between 1,000 and 3,000 people, meaning the standard error ranges from about 2–3%. This means that 95% of the time, the actual observed error is less than these values; the most likely error we would observe is typically less than 1%. And this is backed up by plenty of real-world data. For example, exit poll discrepancies for many of the 2016 GOP candidates hovered around 1–2%.

This is wrong though! It says right here that Edison’s polls are ±4% MoE!

Indeed it does! Straight from Edison themselves:

The margin of error for a 95% confidence interval is about +/- 3% for a typical characteristic from the national Exit Poll and +/-4% for a typical state Exit Poll.

Edison claim that, far from being a matter of simply calculating a standard error as a function of sample size, there are other contributions to the MoE that cause it to be larger. By this, Edison are referring to the systematic biases we mentioned earlier. And the 4% number they offer appears to be the sum of standard error and estimated systematic bias.

Unfortunately, we are left to speculate on the exact makeup of this 4%, since Edison do not publish any literature that explains the methodology they use to calculate and weight systematic biases.

You’ve mentioned “systematic biases” a few times now. What are they?

Systematic biases are introduced when the sample of people chosen for the exit poll is skewed in such a way that they aren’t representative of the whole. Choosing a truly “random” sample is harder than it sounds, as there are a number of ways biases can creep in. These include:

  1. Selection bias: the precincts chosen for polling may not be representative of the state as a whole.
  2. Non-response bias: supporters of one candidate may be more willing to stop and fill out a questionnaire than supporters of another.
  3. Coverage bias: voters who vote early, by mail, or at unusual hours may be missed entirely.

Since all of these biases can combine to result in exit polls preferencing one candidate over another, a lot of effort is made to reduce systematic biases and improve the accuracy of the poll. Even so, some error is inevitable. The pollsters take all of these effects and more into account to reduce their influence and ensure as representative a sample as possible.

Sounds like a lot of guess work, no wonder they have big errors! Couldn’t you just ask everyone who they voted for instead of trying to counteract these biases with guesses?

Sure, you could. But it would require an absurd number of trained pollsters stationed at every single polling location, and you would still have to account for the non-respondents. In short, it’d be a lot more expensive for a very modest improvement in accuracy. Believe it or not, despite all the biases that need to be controlled for, exit polls are actually surprisingly accurate. Shockingly so, actually.

International election monitors auditing foreign elections are confident enough to declare fraud with as little as a 2% discrepancy between well-conducted exit polls and tabulated results. But fundamentally, exit polls alone are not enough to conclusively confirm rigging. They are best used the same way you use a carbon monoxide detector: you don’t just switch it off when it beeps, you immediately bring in professionals to investigate!


Section Summary

  1. Margin of error is a measure of the accuracy of a poll.
  2. Margin of error is a combination of both standard error and systematic biases.
  3. Standard error cannot be eliminated, only reduced by increasing sample size.
  4. Systematic biases can be controlled for and minimized with careful design.
  5. Edison claims that their exit polls have a 3-4% MoE, nineteen times out of twenty.

Part 5 - Early Voting / Mail-In Ballots

What about early voting / mail-in ballots? Are they even polled? Couldn’t they be the cause of the discrepancy, since people who mail their votes in early are more likely to be older or more conservative?

Yes, early / mail-in voters are polled. Firstly, to account for the fact that in some states a significant portion of people vote by mail, Edison adapts its exit polling strategy:

“Edison has two methods of reaching people who voted early or by mail. They conduct a regular telephone survey in the week or two leading up to the election to reach those who have already voted by mail, specifically geared toward states where bigger groups of the population vote by mail (like Arizona, Washington, Colorado and more).
They also place interviewers at early voting locations in states where majorities vote before election day (like Tennessee, North Carolina and Texas).” (Source)

Note that these phone surveys are not the same as normal opinion polls, where voters are asked who they intend to vote for. Edison polls only those who have already voted, who are as fair a group to sample as those leaving the voting booth. Edison then combines these early-voter / mail-in phone interviews with their election day interviews to create a complete picture of the electorate.

Secondly, while 2020 exit polls showed that older voters strongly preferred Biden and younger voters overwhelmingly voted for Sanders, studies done on the use of vote-by-mail during the 2016 election showed that older voters were indeed more likely to vote by mail.

Aha! So older people do use mail-in more. Which means that the discrepancy could just be mail-in voters, right?

Not so fast. Let’s examine this.

In several states, mail-in ballots and provisional ballots are not counted on election night. They are counted and added to the total in the days and weeks post-election. The reason for this is that there is significant processing and verification involved, which can be very time-consuming for the already stretched-thin state election employees and volunteers.

Therefore, if mail-in ballots were the source of the exit poll discrepancies, we should observe an initial discrepancy in Sanders’ favor that slowly shrinks in the days and weeks following the election as the mailed-in ballots are added. But in fact, this is the opposite of what we actually observe.
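The direction of that argument can be checked with toy arithmetic (all numbers below are invented for illustration): if Biden-leaning mail-in ballots were counted after election night, the election-night totals would look better for Sanders than the final totals, and the gap would close as mail-in ballots were added.

```python
# Toy numbers, entirely made up, to illustrate the direction of the shift:
in_person = {"Sanders": 55_000, "Biden": 45_000}  # counted on election night
mail_in   = {"Sanders": 8_000,  "Biden": 12_000}  # Biden-leaning, counted later

night_share = in_person["Sanders"] / sum(in_person.values())
final_share = (in_person["Sanders"] + mail_in["Sanders"]) / (
    sum(in_person.values()) + sum(mail_in.values())
)
print(f"election night: {night_share:.1%}, final: {final_share:.1%}")
# Sanders' share falls from 55.0% to 52.5% as the mail-in ballots are added
```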

All this to say: the discrepancy cannot be reasonably explained by mail-in ballots.


Section Summary

  1. Mail-in voting is a frequently proposed explanation for the observed inaccuracy of exit polls, but this explanation has no merit.
  2. States with significant portions of mail-in voters are sampled and included in Edison’s exit polls.
  3. There is little evidence to suggest that mail-in voters have a partisan lean that diverges from the electorate as a whole. The one exception is that older voters are more likely to use mail-in ballots than younger voters.
  4. If the higher proportion of older voters using mail-in ballots was the root cause of the discrepancies, we’d expect to see an initial discrepancy in favor of Sanders that shrank in the days/weeks following the election. However, we see the opposite.

Part 6 - Young Voters and Enthusiasm

Ok fine. But this still assumes that exit polls are a good sample of the people that vote. Which they aren’t! Sanders supporters are just more enthusiastic and probably more likely to say yes to participating in an exit poll!

This argument is like Old Faithful, rolled out of the broom closet every time the polls don’t align with a left-leaning candidate, yet nowhere to be found the many times they line up just fine. But even so, youthful bias is a phenomenon that IS measured. And if a bias is found, it’s corrected for. Again from Joe Lenski:

Interviewer: Are you expecting a higher completion rate among younger people than older people?

Joe Lenski: Historically that has been true but that’s one thing we can adjust for. Our interviewers record the approximate age as well as gender and race of those who decline to take the survey. With regard to age, we don’t do this precisely. Interviewers assign respondents to one of three groups: 18 to 29, 30 to 59 and 60-plus. We find that interviewers can put people in the right age group about 95% of the time.

So with this in mind, by the time the exit polls are adjusted, Edison has already estimated the demographics of the voting population, INCLUDING NON-RESPONDENTS, and made age, race, and gender adjustments to their tally. Given that age correlates very strongly with Sanders support, there is no reason to believe that any tendency for Sanders supporters to be overrepresented in exit polls can’t be quite accurately estimated (95% of age guesses are correct!) and accounted for.
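The adjustment Lenski describes amounts to post-stratification reweighting by age group. Here is a hypothetical sketch (the counts are invented, and Edison’s actual weighting procedure is more involved):

```python
# Estimated age mix of everyone approached, including refusals (interviewers
# record the approximate age of non-respondents), vs. completed interviews.
approached  = {"18-29": 300, "30-59": 500, "60+": 400}
respondents = {"18-29": 240, "30-59": 350, "60+": 220}

# Weight each respondent so the weighted sample matches the approached mix.
weights = {age: approached[age] / respondents[age] for age in respondents}
print(weights)  # older respondents, who declined more often, count for more
```

Each completed interview from an under-responding group is simply counted proportionally more, undoing an age-skewed response rate.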

What about enthusiasm? How can you account for how “enthusiastic” exit poll respondents are? Maybe Sanders supporters of all ages just like participating in exit polls?

This is an interesting hypothesis and deserves delving into. But before we do, let’s first take a look at an interview Joe Lenski did in which he’s asked about the surprising accuracy of the 2016 GOP exit polls, despite that race also having a packed field and a popular “anti-establishment” candidate:

Interviewer: It seems like a tougher field on the GOP side with so many candidates, including not one but two anti-establishment candidacies. In Georgia, for instance, where the first wave missed on the Democratic side by 12.2%, it nailed the GOP race with deadly accuracy: 40% Trump (versus a 38.8% finish), 24% Cruz (versus a 24.4% finish) and 23% Rubio (versus a 23.6% finish). It looks like you’ve missed the margin of error just once for Republicans. In Texas you had an ~10.6% error on the gap between Cruz and Trump. This makes sense. We often hear the margin of error is +/-x.x 19 times out of twenty. In this case, Edison has gotten it right on the GOP side within the margin of error 20 times out of 21 (for the figures I can find). On the Dem side, you’ve gotten it right within the margin of error just 13 times out of 22. Is this information correct and if so, why has Edison polling been so much more accurate on the Republican side this cycle?

Joe Lenski: As I mentioned above the calculation of total error for an exit poll survey differs from the standard sampling margin of error calculation that I assume that you are using so I wouldn’t agree with your statement about how many of the exit poll surveys were within the margin of error.  However, if a differential non-response among younger voters is a cause for exit poll errors it would make sense that the errors would be larger on the Democratic side because the differences in vote between younger and older voters on the Democratic side in this primary season are much larger than on the Republican side.  Bernie Sanders has been typically receiving 70+% of the vote among 17-29 year olds in the 2016 primaries while Hillary Clinton has been receiving 70+% of the vote among voters 65+.  On the Republican side the Trump percentage among younger and older voters tends to only differ by ten points or less.  It would then make sense that if the exit poll were overstating the number of younger voters it would have much more effect on the Democratic side.

What stands out most about this response is the complete lack of intellectual curiosity. It’s striking, really. If you were the head of an organization whose job was to predict the outcomes of elections based on the actions of voters, would you not be a tiny bit embarrassed by results like this? And would this embarrassment not motivate you to investigate the source of the discrepancy, especially since uncovering new phenomena would improve the future predictive power of your polls?

But let’s look at Mr. Lenski’s proposed explanation: that the 2016 Democratic exit polls were way off because, unlike Trump’s, Sanders’ support was heavily skewed toward the young. If the age of the electorate were a valid explanation for the discrepancies, we would expect to see larger discrepancies in races with a higher proportion of young voters. However, we see no such correlation between youth vote and discrepancy. That Edison does not appear to have even investigated this is almost unbelievable.

But maybe we’re being too hard on Mr. Lenski. Perhaps the qualities Joe Lenski displays here are the reason he is in the position he is. Being able to propose such easily investigated explanations, while simultaneously resisting any urge to investigate, is certainly a desirable skillset if your job has the potential to uncover problematic discrepancies. Like a blind dealer at a blackjack table, Mr. Lenski is content to shuffle while the players inform him of their winning hands.


Section Summary

  1. Youthful voters are often proposed as the source of the exit poll discrepancy, due to their higher tendency to participate in exit polls, biasing the poll in favor of the youth candidate.
  2. Edison estimates the age of people who refuse to take part in their polls, called non-respondents, and uses this data to correct for any youth bias toward a particular candidate.
  3. When questioned on why exit polls showed far less discrepancy in the 2016 GOP primaries, Edison proposed the explanation that young voters caused a bias they couldn’t account for, despite there being no correlation between discrepancy and the youth vote proportion across multiple states.

Part 7 - The 2016 Primaries

But how can we really know if these exit poll discrepancies are abnormal? Do we have anything to compare them to?

We do in fact have a very good analogue to compare them to: the 2016 Republican Primary race.

Funnily enough, Trump’s support in 2016 was found to be highly correlated with less educated voters, a group that Edison has found to be more likely to refuse to participate in exit polls. Despite this, exit polls in the 2016 GOP races were miraculously bang on, with only a pair of significant outliers. If there were an exit poll respondent bias that wasn’t being accounted for, we’d expect to see it on both sides, especially since both featured prominent anti-establishment candidates. But we didn’t. The discrepancies were overwhelmingly on the Democratic side.

Let’s take a look at how the 2016 races compared.

Discrepancies between exit polls and results in 2016 Dem and GOP Primary races

The discrepancies in the GOP races were roughly balanced between those that helped or hurt Trump. And apart from a couple significant outliers (TX and WV), they were within the MoE.

On the other hand, the discrepancies in the Democratic races, were significant to the point of absurdity and, in almost all instances, to Clinton’s benefit.

Take Georgia and Massachusetts, for example. In both states the exit polls were taken at the same time, at the same precincts, by the same interviewers, using the same methodologies. Despite the votes on the GOP side being split amongst a crowded field of five candidates, exit polls almost perfectly predicted the results. And yet on the Democratic side, the discrepancies skewed in favor of Clinton so dramatically that they were 2 to 3 times larger than the MoE. The chance of this being due to statistical fluctuation is virtually nil.
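How unlikely is a miss of 2 to 3 times the MoE under sampling error alone? A back-of-envelope normal-tail calculation, ignoring design effects and treating the MoE as a 95% interval (1.96 standard deviations):

```python
import math

def tail_prob(k):
    """Two-sided probability that sampling error alone produces a miss
    of at least k times a 95% margin of error (MoE = 1.96 std devs)."""
    z = 1.96 * k
    return math.erfc(z / math.sqrt(2))  # equals 2 * (1 - Phi(z))

print(f"2x MoE: {tail_prob(2):.1e}")  # roughly 1 in 11,000
print(f"3x MoE: {tail_prob(3):.1e}")  # a few chances in a billion
```

And that is for a single race; repeated misses of this size across many states, nearly all in the same direction, compound to something astronomically unlikely.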

The Texas primary being off by a whopping 10.6% against Trump and in favor of Cruz certainly stands out on the GOP side. Let this be a lesson to never underestimate the power of a home field advantage.

From May 24, 2016 onward, no further exit polls were conducted. By that point in the race, the NEP had judged that the two nominees had been conclusively decided and any remaining demographic data was of little value to them.

*WI - Unadjusted exit poll values for the WI Democratic race were derived from a NBC News broadcast, on April 5, 2016 at 4:24 PM.

**CT - There were reports of unadjusted exit poll values that claimed a very significant discrepancy in favor of Clinton, however no definitive evidence endures apart from Edison’s raw data, which remains unreleased.


Section Summary

  1. When it comes to exit poll accuracy, the 2016 GOP primaries provide a useful case study for the 2020 Dem primary to compare to.
  2. Exit poll discrepancies in the 2016 GOP primaries were well within the margin of error, with two notable exceptions, and were balanced between helping and hurting Trump.
  3. The 2016 Dem primary results revealed extreme discrepancies with exit polls, even when compared to the GOP races held in the same states and polled by the same interviewers, and the discrepancies almost exclusively benefitted Clinton.

Part 8 - Caucus States

Is there anywhere we have more confidence in the official results?

Yes. Caucus states. Despite all the flaws of caucuses, every single “vote” cast in a caucus state is public. Instead of casting an anonymous ballot, caucus goers organize into supporter groups, after which the tallies are typically read aloud to those in attendance and can be visually verified by everyone present through simple counting. Larger campaigns typically have the resources to send a rep to every precinct to verify and record the reporting sheet, which can then be independently tabulated in parallel with the official count.

In fact, this is exactly what the Sanders campaign did this year; they even created their own app. Recording the results this way was essentially a real-time audit. Any inconsistency in the official tally would be very quickly flagged as it would disagree with the results collected by the precinct rep.

Thinking back to the Shadow app debacle in Iowa this year, it seems the auditing power wielded by Sanders’ team via their own app was unanticipated by the architects of the DNC’s 2020 strategy. After the disaster in Iowa, the DNC’s app was spiked before Nevada, and Sanders went on to win Nevada by a large margin, a win that was of course heavily downplayed in the media.

Oh come on. Caucuses are obviously biased toward people that care way too much about politics. They don’t represent the average primary voter, it’s a bunch of young people that have nothing better to do than spend a full day caucusing.

A bold claim! Let’s examine this for the inaugural and most anticipated caucus, Iowa.

Relative Proportions of Raw Iowa Caucus Votes by Age Group

We see that of those that caucused in Iowa, the largest age groups were 45-64 and 65+. And not only were these groups the largest, in comparison to census demographic data for Iowa, they were OVER-represented (65+ make up 21% of Iowans but made up 30% of Iowa Dem caucus goers in 2020).

Secondly, it’s clear that these older age groups were much more likely to vote for candidates other than Sanders. In other words, relative to Iowa’s demographics, the age groups that showed the lowest support for Sanders were over-represented among caucus goers, while the groups that strongly supported him were not.

There is no merit to the common refrain that Sanders benefitted from caucuses dominated by the young and impassioned in Iowa. But just for fun, let’s look at the results of the other early caucuses too:

Relative Share of Raw Vote Total in Early Caucus States

It’s readily apparent that Sanders received a plurality of votes in all three early caucus states, including a majority in ND. Or we can go back to 2016, when Sanders won caucus states in landslides (WA 80/20, AK and KS 70/30). With this context, it becomes apparent that the Iowa caucus app, framed as a bungled rollout in the media, was possibly a jettisoned attempt by the DNC to regain total control over contests that weren’t going their way. Or perhaps it served as a means of permanently delegitimizing the entire caucus process. It almost doesn’t matter, though, since both speculated possibilities lead to the same end result: the elimination of caucuses from future elections.

Nobody should deny that the caucus system is long overdue to be replaced with something better. But despite their tedious and archaic design, caucus states provide a reasonable representation of broad voter sentiment, and are conducted in such a way that they are easily the most transparent and secure election format that currently exists in this country. Is it a coincidence that the results of caucus states were relentlessly downplayed in the media when won by Sanders? The number of Dem primary caucus states was slashed from 18 in 2016 to just 6 in 2020. If the DNC has its way, 2024 will continue the trend.


Section Summary

  1. Voting in caucus states is effectively “public” and thus makes real-time parallel tabulation of the results possible.
  2. The Sanders campaign independently tabulated the caucuses in 2020, a tactic which may have thwarted an attempt by the DNC to rig the caucuses via their proprietary app.
  3. Sanders won the popular vote in all caucus states, by increasingly large margins.
  4. Older demographics were OVER-represented in Iowa, so a youth-skewed electorate cannot explain Sanders’ popular vote win.
  5. The DNC’s rapid push for elimination of caucus primaries is more likely motivated by the relative transparency of caucuses than their awkward design.

Part 9 - Electronic Voting

Ok fine. Let’s say Bernie won caucus states because they’re hard to rig without getting caught. What about non-caucus states like CA and CO? Why was he “allowed” to win there? Hell, let’s take this idea to its conclusion: why was Hillary “allowed” to lose to Trump?

This is a very good question and not one that we can claim to have a definitive answer for. But let’s start by examining California’s voting system and see what that tells us.

California’s Electronic Voting System

In 2018 California’s Secretary of State signed a $282 million contract with the election technology company Smartmatic to overhaul the state’s electronic voting system. The CA SOS effectively made the old voting tech illegal, requiring that all jurisdictions adopt the new tech from Smartmatic.

A December 2019 report commissioned by the Secretary of State’s office said the system did not meet several of the state’s cybersecurity standards, which were to be “woven directly into the DNA” of the new system, according to the development contract. The report found “the excessive root access and the ability to boot the system from a USB port give[s] access to the system by unauthorized individuals. Either scenario can result in undetected changes to files and data.”

Well that certainly doesn’t sound good. But fear not! According to the CA SOS:

“In California, post-election audits are routine. During audits, every county in the state manually tallies approximately one percent of the votes cast to verify the accuracy and integrity of its machine results. According to Padilla, those audits protect the system from fraud, and address critics’ often valid concerns about inadequately vetted manufacturers or potential manipulation of the electronic machines. In addition, California does not permit any voting machine or vote tabulator to be connected to the internet.”

Let’s put the audit claim on hold for now and examine the second claim first.

Many election experts correctly point out that the voting machines used in the US are not connected to the internet. But they then use this fact to conclude that any attempts to tamper with the machines would be prohibitively difficult, due to our patchwork of disparate systems. This is unfortunately not the case.

There is a term you won’t see in almost any media article about election rigging, but one which pops up all over papers by election security experts sounding the alarm over the current state of voting in the US. The term is election definition. The election definition is the packet of computer code which tells the voting machine the list of candidates and other parameters specific to that particular election, and it must be loaded onto the machine prior to every election. Since the voting machines aren’t connected to the internet (they are “air-gapped”), the election definition must be loaded onto the machines using a physical flash drive.

You may see where this is going, but I assure you, it’s even scarier than you think. One working theory proposes a rigging method that involves inserting malicious code when the election definition is loaded via flash drive, as these election security researchers at Princeton explained in their 2006 paper:

[We] have developed a voting machine virus that spreads the vote-stealing code automatically and silently from machine to machine. The virus propagates via the removable memory cards that are used to store the election definition files and election results, and for delivering firmware updates to the machines. […] As a result, an attacker could infect a large population of machines while only having temporary physical access to a single machine or memory card.

Understandably, this method carries with it a certain amount of guesswork. And with guesswork comes the occasional embarrassment. If the margin to “rig” needs to be pre-programmed ahead of time, it’s safe to assume those guesses are based on pre-election opinion polls. If true, we can speculate that the surprise upset of Clinton in Michigan may have been the result of pre-election polls that significantly underestimated Sanders’ actual support.

But even this surprise upset may highlight part of the method’s inner workings: self-imposed “caps”. It’s not difficult to see why such caps would be coded in, to avoid a situation in which a pre-programmed algorithm stubbornly refuses to lose, flipping an excessive number of votes in a bid to overcome an unanticipated surge by your opponent. Preventing such a scenario by accepting a few defeats seems prudent, lest you arouse suspicion or worse: trigger an audit.

How can we seriously believe it’s possible to coordinate some vast secret network of thousands of individuals all working together to rig the election against Sanders? Are you saying even the pollsters are in on it?

There is no reason to believe adjustment of the exit polls by pollsters is malicious, nor is the adjustment that they perform fraudulent. The polls are adjusted to fit the early reported vote fractions, and these adjustments are made with the assumption that the misalignment is exclusively due to some unaccounted for systematic bias. While there are criticisms that can be laid at Edison’s feet, Edison does not need to be complicit for rigging to be occurring.

The only way rigging is possible on the scale we’re seeing is through the centralization and opaqueness that an electronically tabulated vote total provides. That’s the beauty of our election system in 2020. Rigging the vote totals doesn’t require something as messy as the pollsters being in on the rigging. Nor the election volunteers, or the state governments, or the media. The only person you need is the one who programs the election definition that is loaded onto machines which do the counting. But as we’re about to see, this isn’t exactly a recent phenomenon.


Section Summary

  1. California mandated the switch to new electronic voting machines in 2018.
  2. A 2019 review of CA’s new voting system found several major vulnerabilities with the system.
  3. CA’s machines not being connected to the internet is not sufficient protection from tampering, as the election definition loaded onto the machines prior to the election is both proprietary and has been demonstrated as a potential vector for malicious code.
  4. The centralization and opaqueness created by proprietary electronic voting machines has for the first time made rigging on a mass scale possible.

Part 10 - History of Electronic Voting

“It’s not the people who vote that count, it’s the people who count the votes.” -Unknown

There’s no way you can convince me the Dems are involved in rigging and somehow the GOP aren’t.

You’d be right to assume the start of what we’re calling “rigging” predates Sanders’ presidential runs, but might be surprised to learn it likely predates even Bush v Gore. Let’s take a trip all the way back to the year 1996, to the Senate race in Nebraska.

The race was between Ben Nelson, the popular Democratic governor, and a virtual unknown millionaire named Chuck Hagel. The contest was very close, and even days before the election the candidates were polling in a dead heat. Once the results came in however, Hagel sailed through with a massive 15% margin of victory, a veritable trouncing of his opponent and huge upset for the Democrats, who lost a seat they had held for 18 years.

What didn’t emerge until all the dust had settled was the fact that, up until a few weeks before announcing his candidacy, Chuck Hagel was chairman of the company contracted to provide the voting machines for the very election he ran in. Not only that, but the company, Election Systems & Software (ES&S), was a subsidiary of the McCarthy Group, of which Hagel owned millions of dollars in stock.

When a 2002 challenge to Hagel resulted in an even more lopsided victory, the Democratic challenger called for the ballots to be recounted by hand. He quickly realized the futility of his demand when, due to Nebraska law, a judge ruled the ballots could only be recounted using the same method by which they were originally cast: the optical scanners supplied by Hagel’s company. For more on this story, this article documents several other examples of electronic rigging.

Those were early days though! Voting must have gotten more secure since then.

Well then let’s start by looking at what’s happened since.

Florida was one of the early states to roll out statewide proprietary electronic voting systems. In 2000, one machine in Volusia County, FL handed Bush Jr. the election by subtracting 16,000 votes from Gore in a precinct of only 585 registered voters. In 2004, Ohio rolled out Diebold machines prior to the election. What followed were large exit poll discrepancies and Ohio deciding the election for Bush. In retrospect, the commitment made by Diebold’s CEO a year earlier “to helping Ohio deliver its electoral votes to the President” seemed oddly prophetic.

Since those humble beginnings, America’s voting architecture has grown into an absolutely dizzying patchwork of different systems. But over the past couple decades, the overarching trend has been the near complete replacement of hand counted paper ballots with proprietary electronic voting terminals.

The old method of counting, involving hand-tallied paper ballots, was slow, clunky, and, most of all, frustratingly difficult to rig on a mass scale. With hand-counted paper ballots, the likelihood of getting caught is too great and the margin for victory too difficult to ascertain on the fly. In fact, this “un-rigability” seemed to be on display in 2016, when Stanford researchers uncovered a marked difference in results that correlated with voting method: Clinton performed markedly better in electronically balloted states while Sanders performed better where paper ballots were used. Strangely enough, they found that Clinton experienced no such performance disparity in 2008 when she ran against Obama.

Unsurprisingly, this push toward electronic methods has partisan origins. Passed in 2002, the GOP-authored Help America Vote Act was introduced under the guise of improving access for Americans with disabilities but was in reality a trojan horse for the full-scale demolition of election integrity measures, the most significant of which was the near elimination of auditable paper ballots.

A 2012 Harpers article summarizes the shift:

The use of computers in elections began around the time of the Voting Rights Act. Throughout the 1980s and 1990s, the use of optical scanners to process paper ballots became widespread, usurping local hand counting. The media, anxious to get on the air with vote totals, hailed the faster and more efficient computerized count. In the twenty-first century, a new technology became ubiquitous: Direct Recording Electronic (DRE) voting, which permits touchscreen machines and does not require a paper trail.

Old-school ballot-box fraud at its most egregious was localized and limited in scope. But new electronic voting systems allow insiders to rig elections on a statewide or even national scale. And whereas once you could catch the guilty parties in the act, and even dredge the ballot boxes out of the bayou, the virtual vote count can be manipulated in total secrecy. By means of proprietary, corporate-owned software, just one programmer could steal hundreds, thousands, potentially even millions of votes with the stroke of a key. It’s the electoral equivalent of a drone strike.

Currently, approximately 20 companies in the US have active contracts with state governments to facilitate some aspect of our elections. At least seven states use voting systems running proprietary software that don’t generate a voter-verified paper trail and can be programmed by their manufacturer to rig election outcomes with impunity. And anything generated by said proprietary software, whether ballot images or printouts, is not reliable proof of the tally, since the software can output whatever it was programmed to.

If you had originally presumed rigging required teams of hackers covertly inserting thumb drives into machines, you’ll have to rethink that. In most cases, the state sends the list of candidates to the machine vendor before the election; the vendor sends back a flash drive containing the election definition. This flash drive is encrypted, and since no one can view the code there is no way to tell if it is weighting the vote toward a specific candidate. Test mode provides no transparency either, because code running in test mode can differ from the code running in production mode. And so this impenetrable, unverifiable code is loaded by election officials onto the state’s fleet of machines as a matter of standard procedure.

Now, these machines blanket the country and smoke from the hidden fires of rigging is everywhere.


Section Summary

  1. Electronic voting machines first began to appear in the late 80s and were in widespread use by the 2000s.
  2. The passing of the Help America Vote Act in 2002 was a means of pushing for the widespread adoption of proprietary electronic voting systems.
  3. Electronic voting machines have now almost completely replaced hand counted ballots, with machines supplied to state governments across the country by 20 different companies, all with their own proprietary software.
  4. Evidence of electronic rigging is as old as the machines themselves, and appears to have almost exclusively benefitted Republicans historically.
  5. Before every election, manufacturer-provided proprietary election definition code is sent to state election agencies, who load the unverifiable code onto the state’s fleet of machines.

Part 11 - Audits

When I voted, the machine spit out a paper copy that I double checked. And this paper trail gets audited! Shouldn’t that confirm the vote was correct and the exit polls were wrong?

There are two main types of audits that are performed on our voting systems: procedural/reliability audits and risk-limiting audits.

If we were to make a clumsy analogy between auditing elections and catching drug use in sports, reliability audits would be like taking a peek into the locker room, while risk limiting audits would be drawing blood samples. Neither are perfect, but one is vastly more effective at catching cheats.
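For a sense of what the “blood sample” side of that analogy actually computes, here is a bare-bones two-candidate version of the BRAVO ballot-polling risk-limiting audit. This is a simplification for illustration; real RLAs handle multiple candidates, invalid ballots, and escalation rules:

```python
def bravo_audit(sampled_ballots, reported_winner_share, risk_limit=0.05):
    """Bare-bones two-candidate BRAVO ballot-polling audit.

    sampled_ballots: sequence of True (ballot for the reported winner)
    or False (ballot for the reported loser), in the order drawn.
    Returns True if the sample confirms the reported outcome at the
    given risk limit, False if the sample runs out first.
    """
    s = reported_winner_share          # reported share, must be > 0.5
    t = 1.0                            # likelihood ratio vs. a tied race
    for for_winner in sampled_ballots:
        t *= (s / 0.5) if for_winner else ((1 - s) / 0.5)
        if t >= 1 / risk_limit:
            return True                # outcome confirmed, audit can stop
    return False                       # not confirmed: expand the sample

# e.g. a reported 60/40 race and a 30-ballot random sample, 21 for the winner
sample = [True] * 21 + [False] * 9
print(bravo_audit(sample, 0.60))       # True: confirmed at the 5% risk limit
```

The key property: if the reported outcome is wrong, a properly drawn random sample confirms it with probability at most the risk limit, which is why the randomness and completeness of the sample matter so much in what follows.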

However, as we’ve seen so far, not all states’ voting systems are created equal, and thus not all are equally auditable. States in which each vote involves or generates an accompanying paper ballot can be audited. It is beyond naive to think states with fully electronic voting and no paper backup can be audited too. In the industry, these fully electronic voting machines are referred to as Direct Recording Electronic voting machines, aka “DRE”, or by election security experts as “not worth the paper they don’t print on”.

Even in the states where voting does generate an auditable paper trail, the degree to which audits are performed in party primaries is not always clear. Only CO and RI automatically perform risk-limiting audits of their primaries by law. Other states have their own practices. As of 2020 there are still no federal standard “triggers” for when and under what circumstances an audit is performed.

According to California’s SOS, for example, the results of an audit are only released if they overturn the election result. It’s not too difficult to see how this approach is somewhat benign at the state level but wholly inadequate when scaled up to a national level. If such a policy were universal, rigging to ensure your opponent’s victories were razor thin and their losses were blowouts would allow you not only to win, but to do so without a single audit being released. And while this policy may be satisfactory for California, a state Sanders won, discrepancies indicate that rigging did in fact flip the outcome in multiple other states from Sanders to Biden.

Have any states that we saw discrepancies in completed a risk-limiting audit? If electronic rigging was actually occurring, wouldn’t a paper audit detect it?

Michigan did. And they claim their audit of the 2020 primaries validates the official results:

The sample pulled mirrored the results almost exactly. […] In the Democratic Primary, out of 415 ballots pulled, Joe Biden and Bernie Sanders received 224 and 155 votes, respectively. This equates to 54 percent for Biden and 36 percent for Sanders. In the official results, Biden received 53 percent of votes and Sanders received 36 percent. In other words, for the three leading candidates in the two primaries, the randomly selected ballots were all within one percent of the official outcome.

If the audit indeed matches the results this certainly seems to shoot a pretty large hole in our rigging theory. After all, it’d be crazy to think that an audit could be gamed as well, right?

The Michigan Bureau of Elections was kind enough to provide the raw data used for the audit, which unsurprisingly confirms the numbers from the report. However, the report quickly glosses over the fact that a large chunk of the ballots selected for the audit was omitted, due to officials being “unable to retrieve the ballots under present circumstances”.

Of the 669 ballots selected for audit, 77 were omitted. Of those 77, 41 came from a single county: Kent County.

Ok. So what? What’s so special about Kent?

Kent is home to the highest concentration of colleges and universities in the state. As we’ve already seen, the youth vote overwhelmingly favors Sanders. In 2016 Sanders beat Clinton in Kent County 62.5% to 37.3%.

In the end, Kent contributed 19 of the 415, or 4.6%, of the audited ballots. However, Kent County voters comprised 6.3% of the Democratic votes in the election (99,132 of the 1,585,858). Is it possible that the Kent ballots omitted from the audit would have revealed a discrepancy between the official result and the audit? With no clear reason why such a large chunk of the votes in Kent was omitted, we’re left to speculate.
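The arithmetic behind that under-representation is easy to check. Assuming the audited Democratic ballots were a simple proportional random sample of the statewide Democratic vote (a simplification; the real audit sampled from the precincts it could retrieve ballots from), Kent’s 6.3% vote share would predict roughly 26 of the 415 audited ballots rather than the 19 observed:

```python
from math import comb

# Figures from the Michigan audit data discussed above.
kent_dem_votes, total_dem_votes = 99_132, 1_585_858
audited, kent_audited = 415, 19

p = kent_dem_votes / total_dem_votes                 # Kent's share of the Dem vote
expected = p * audited                               # expected Kent ballots in sample
print(f"Kent share of vote:  {p:.1%}")               # -> 6.3%
print(f"Kent share of audit: {kent_audited / audited:.1%}")  # -> 4.6%
print(f"Expected in sample:  {expected:.1f}")        # -> 25.9

# Probability of drawing 19 or fewer Kent ballots under the simple
# proportional-sampling assumption (exact binomial tail).
prob = sum(comb(audited, k) * p**k * (1 - p)**(audited - k)
           for k in range(kent_audited + 1))
print(f"P(<= {kent_audited} Kent ballots): {prob:.1%}")
```

A shortfall with roughly a one-in-ten probability under that assumption is suggestive rather than conclusive on its own, which is exactly why the unexplained omission of 41 Kent ballots matters.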

But Michigan has hand marked paper ballots. Didn’t you imply those are harder to rig?

Yes and yes. Voting in Michigan does use hand-marked paper ballots. It’s understandable that people then leap to assuming hand-marked paper ballots are also hand-counted. But ballots have not been counted by hand in much of the US for decades now.

What actually happens after you mark your choice by hand is that, depending on your county, your paper ballot is fed into one of three different brands of optical scanner, which records your vote electronically and transmits its tally to a central tabulator. The three brands of machine in use are Dominion, Hart InterCivic, and a third company you’ll no doubt recognize as Chuck Hagel’s company, ES&S. Small world!

Many of the tabulators are also set up to create and store a digital image of your ballot as it’s read to facilitate the auditing process, but for unknown reasons, Michigan election officials chose to disable this feature for the 2020 primaries.
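To make that data flow concrete, here is a deliberately oversimplified sketch (all names and data hypothetical): the scanner is the only component that ever sees paper, and everything downstream of it is software totals.

```python
# Hypothetical sketch of the tally pipeline described above: each precinct's
# optical scanner turns paper marks into electronic records, and only those
# electronic totals ever reach the central tabulator.
from collections import Counter

def scan_precinct(paper_ballots):
    """Stand-in for an optical scanner: reads paper once, emits a tally."""
    return Counter(paper_ballots)

def central_tabulator(precinct_tallies):
    """Sums the electronic tallies; it never sees the paper ballots."""
    total = Counter()
    for tally in precinct_tallies:
        total += tally
    return total

precincts = [
    ["Sanders", "Biden", "Sanders"],
    ["Biden", "Biden", "Sanders"],
]
print(central_tabulator(scan_precinct(p) for p in precincts))
# -> Counter({'Sanders': 3, 'Biden': 3})
```

The design point the section is making: once the paper is scanned, the count lives entirely in software, so without ballot images or a paper audit there is nothing independent to check the totals against.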

Taking this all together, we’ve learned that even in states like MI where paper ballots are used, the vote is still tabulated electronically, and the only assurance we’re provided that the results are correct (if that assurance is given at all) is via audits that can be easily manipulated.

What about Colorado? Didn’t you say they also performed an audit? And their exit polls were bang on!

They did indeed. In November 2017, Colorado election law began to require counties to conduct risk-limiting audits. In fact, CO is one of only 2 states in the entire country that automatically perform these audits, whether or not there are discrepancies. The other is Rhode Island.

Rhode Island, which passed legislation in 2017 to require risk-limiting audits by 2020, is technically not the first state to implement this security measure. That was Colorado, which conducted the first statewide risk-limiting audit in 2017. But Colorado votes entirely by mail, making it quite different from most other states. So that puts Rhode Island in the spotlight, and when it conducts its real audit in 2020, it will be the first state to do so using ballots cast in local precincts around the state. (Source)

So what’s different between CO and MI then?

The difference is in the way votes are tallied in each state.

Since CO votes almost entirely by mail-in ballot, the votes themselves ARE hand-counted. The combination of hand-counted votes with an automatic risk-limiting audit appears, at least for now, to be too complex a pretzel to rig. Jump back to find CO on the graph in Part 3 if your memory needs jogging.

As a result, Sanders won CO by 12 points, nearly exactly matching exit polls, and actually outperforming pre-election opinion polls. Go figure.


Section Summary

  1. There are two types of audits: procedural audits and risk-limiting audits. Only risk-limiting audits assess the validity of the result.
  2. Only 2 states automatically perform risk-limiting audits: Colorado and Rhode Island.
  3. While paper ballots can facilitate an audit, they provide no protection against rigging if the ballots are counted by proprietary voting machines.
  4. Michigan also performed a trial run of a risk-limiting audit this year and found that it confirmed the official results, despite significant exit poll discrepancies.
  5. Upon closer examination of Michigan’s audit, it appears that a large chunk of ballots from predominantly student-heavy precincts was omitted.
  6. Colorado’s lack of exit poll discrepancies is likely the result of hand-counted ballots combined with risk-limiting audit procedures, making rigging methods used elsewhere unworkable.

Part 12 - Bernie would have lost

“There’s an old saying in Tennessee — I know it’s in Texas, probably in Tennessee — that says, fool me once, shame on — shame on you. Fool me — you can’t get fooled again.”

In 2017 a federal lawsuit was filed against the DNC alleging rigging of the 2016 primary against Sanders. During the proceedings, lawyers for the DNC argued their client was under no legal obligation to run fair or enforceable elections, and thus reserved the right to pick the Democratic nominee irrespective of the primary result. It’s time to start taking them at their word.

Ok let’s assume that they did put their thumb on the scale to prevent Bernie from becoming the nominee. Are you implying that the DNC would have then rigged the general election against their own candidate?

There’s nothing being implied, that’s exactly what would have happened. The perceived threat Sanders posed to the party establishment could not be overstated, and party leaders were not shy about saying so. A Sanders presidency would have not only been a resounding rejection of the party’s default Third Way strategy pioneered by Bill Clinton, but a threat to every high-ranking party member’s position within the organization. Is it too hard to imagine that this party, which had already shown its repeated willingness to ignore the will of its voters, when faced with its own “destruction” would choose self-preservation over beating Trump?

Why does any of this matter? Bernie lost. And even he is saying we need to get behind Biden. Time to move on, we have Trump to beat!

Let’s recap.

The Democratic primary process has repeatedly shown strong evidence of widespread rigging and manipulation of the electronic vote.

The DNC has argued in court that it has the right to ignore voters and pick the nominee it prefers.

The results of these rigged elections have been widely used as justification for why the Democratic Party platform must be purged of broadly popular proposals like single-payer healthcare or a Green New Deal.

If we refuse to acknowledge the high likelihood that the DNC rigged their own primary to block the progressive wing, we are going to repeat the same mistakes. How do we move forward if we don’t know what surplus of support is needed to ensure an election can’t be stolen? How large a lead does a progressive candidate need to accumulate to overcome rigging not only by the opposition, but by their own party? Were we really naive enough to think Sanders, had he somehow made it through the primary, would have been allowed to win the presidency?

But this isn’t about Bernie anymore, if it ever was. This is about the next one. How much time, energy, money, and dare we say it, hope can we be convinced to funnel into the Democratic party under the premise that our votes matter? How long will we continue to see the Democratic party as a meaningful apparatus for change? How many times can Lucy convince us to kick the proverbial football?

After Sanders’ loss, pundits generated an endless stream of reasons why.

“Change happens slowly”

“The youth vote never materialized”

“The voters rejected Sanders’ brand of socialism”

“At the end of the day, Americans are conservative people”

And for the most part, we bought it. A frustrating tendency of many on the left is to recognize the ecosystem of corporate influence over our political sphere yet somehow stop short of extending this critique to the conclusions drawn from our rigged elections. We can feel the game stacked against us but still fall into the trap of internalizing the wrong lessons of defeat. It’s not that none of the criticisms of the Sanders campaign are valid (many are), it’s that they fall far short of a useful explanation for why he lost, again.

So your grand plan this whole time was to just convince people not to vote? What about down-ballot races? Or the Supreme Court?

To those with no sense of history or of the forces at work, Trump appeared as a unique aberration, a glitch in an otherwise redeemable political system that under normal circumstances is a force for progress. This belief was perfectly embodied in the obsessive focus on the role foreign interference played in the 2016 election, an obsession which ultimately served little more than to obscure a robbery hatched right at home.

When Barack Obama began his presidency by presiding over a 2008 financial crisis “fix” that facilitated one of the most dramatic upward transfers of wealth in history, many began to see the Democratic party for what it truly was: a mechanism for corralling all of our justified anger and frustrations into a controlled electoral arena. And when Michelle Obama now laments “the people who didn’t vote at all, the young people, the women, that’s when you think, man, people think this is a game” it reveals the growing frustration of a party brass that feels entitled to votes they aren’t receiving. But when wildly gesticulating toward an openly corrupt GOP is the sum total of your election strategy, it’s understandable that people have decided they have nothing to vote for.

In the midst of a historic nationwide uprising against decades of police violence, the Democratic establishment has sought to temper this energy, calling for protestors to funnel their dissatisfaction into voting in November. So vote if you still believe it can yield a temporary reprieve for the most vulnerable from the most brutal aspects of this trajectory we’re on. Ridiculing those who are motivated by a genuine desire to protect others is certainly not an effective strategy. But at the same time, do not berate those who refuse to participate either. These converging crises were set in motion long before 2016 and will continue to intensify, albeit at a slower rate, even if Biden is elected.

But even if the Dems aren’t great, the GOP are so much worse!

The GOP have undeniably been the pioneers of election rigging since the beginning of electronic voting methods, and of other methods of election tampering for much longer. Anyone reading whose takeaway is that the GOP’s hands are clean need look no further than the almost certainly rigged and stolen general elections in 2000 and 2004. We haven’t even touched on the huge irregularities seen in 2008 and 2012 from which they benefitted. It’s likely that if Obama’s actual tally had been counted, he would have won by a landslide not seen since Reagan vs. Mondale.

But this isn’t about the unconcealed corruption of the GOP. This is about the Democratic party; a party that professes to be working for everyday Americans, while also stonewalling any legislation that threatens the interests of their corporate donors, representing industries that profit off the same suffering the Democrats performatively claim to oppose.

So what then? What are we supposed to do?

I’d love to tell you that organizing a weekend march or calling your representative and demanding free and transparent elections would make an ounce of difference. But it won’t, not on the timescales we have to work with.

The situation is at the same time bleaker and more hopeful than many of us recognize. But a correct assessment begins by recognizing that on this road we’re on, careening toward a cliff, there is no off-ramp via the Democratic party. It is only through reforging the levers of power that private capital has spent decades dismantling that we can hope to counter the power that emanates from a political sphere that no longer answers to us. And as for what we can do, here’s a start: