AZCIR’s analysis of rejected ballots from the state’s two most recent presidential elections reflects our continued commitment to accurate, data-driven journalism. AZCIR used federal and state data to identify trends in the state’s rejected ballot rates from past general elections, with the goal of helping voters understand that information heading into the 2020 presidential election.

As part of that process, AZCIR conducted more than two dozen interviews with election experts and state officials, reviewed court filings relating to existing election policy and explored changes to Arizona laws that could impact the number of ballots rejected in November.

The core of AZCIR’s analysis relied on county-level data from Arizona’s 2012 and 2016 presidential elections. 

Each county is required to submit a report to the Arizona Secretary of State that contains a series of data points, such as the number of ballots sent to voters, how many were returned to election officials and the number of ballots submitted for counting – including the number of those rejected. The data is then sent to the U.S. Election Assistance Commission (EAC), a federal agency created by the Help America Vote Act of 2002 to monitor election performance across the country.

AZCIR has reviewed rejected ballots in previous elections, but this analysis represents a new approach to the data itself: a first-of-its-kind look at how the data was collected and reported. After reviewing the data internally and separately validating the discrepancies it found with county election officials, AZCIR determined that counties across the state had overreported 3,563 rejected ballots in 2016 and an additional 542 in 2012.

Understanding how AZCIR classified a rejected ballot is important to understanding why AZCIR uses the term “overreported” when discussing state reports, and why reporters chose to remove certain ballots from the analysis.

AZCIR considered a “rejected ballot” to be one that a voter submitted to election officials with the intention of having it counted, but that election officials ultimately rejected for a specific reason based on election policy, such as a voter forgetting to sign their name.

Mail-in ballots sent by election officials but never received by voters, which are labeled “returned as undeliverable” in EAC data, are classified by the state as rejected ballots. The same is true for ballots that were damaged and replaced by voters, labeled “spoiled ballots.”

Because neither category represents a ballot a voter actually submitted, both fall outside AZCIR’s strict definition of a rejected ballot and were removed from AZCIR’s data.
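In code terms, the filtering step looks roughly like the sketch below. The file, column and label names here are illustrative only – they are not the EAC’s actual field names.

```python
import pandas as pd

# Hypothetical file and column names, for illustration only
ballots = pd.read_csv("eac_rejected_ballots.csv")

# Categories that were never submitted by voters, per AZCIR's definition
NOT_SUBMITTED = ["returned as undeliverable", "spoiled ballots"]

# Keep only ballots a voter actually submitted for counting
rejected = ballots[~ballots["rejection_reason"].isin(NOT_SUBMITTED)]
```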

In addition to removing ballots that were never submitted by voters from the EAC’s data, AZCIR also removed the corresponding totals from the Secretary of State’s 2012 and 2016 canvass results, which included all ballots that were considered by the Secretary of State to be “rejected,” according to spokesperson Sophia Solis.

Other problems AZCIR fixed

Overall, specific sections of the data the state reported to the EAC didn’t add up. In some cases, counties reported counting and rejecting more ballots than they had received. In another instance, more than 15,000 ballots were missing from Yuma County’s data.
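A consistency check of that kind can be expressed in a few lines of code. The sketch below uses hypothetical column names for a county-level report; it is meant to show the logic, not AZCIR’s actual pipeline.

```python
import pandas as pd

# Hypothetical file and column names, for illustration only
counties = pd.read_csv("eac_county_report.csv")

# Flag counties that reported counting and rejecting more ballots
# than they reported receiving
flagged = counties[
    counties["ballots_counted"] + counties["ballots_rejected"]
    > counties["ballots_received"]
]
print(flagged[["county", "ballots_received", "ballots_counted", "ballots_rejected"]])
```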

AZCIR consulted with county officials and found that some of the data issues, such as Yuma’s 15,000 mystery ballots, were the result of reporting errors by the county. Reporters corrected the inconsistencies after confirming them with election officials in each affected county.

Problems were most prevalent in the 2016 data for “UOCAVA ballots,” which are cast by members of the military and civilians overseas under the Uniformed and Overseas Citizens Absentee Voting Act. UOCAVA ballots were included in AZCIR’s “early ballot” numbers for the story, merged with absentee and other early votes.

In an email, the EAC told AZCIR that discrepancies in UOCAVA ballot data were “not unusual” in reports submitted by multiple states in 2016. The agency wrote that a reorganization of how the data was collected and reported that year most likely led to the inconsistencies.

AZCIR corrected this by adding the total number of ballots rejected to the total number of ballots counted to calculate the total number of ballots cast, recreating the way the EAC typically produces those data points.
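The arithmetic behind that correction is a single sum, sketched below. Again, the file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical file and column names, for illustration only
counties = pd.read_csv("eac_county_report.csv")

# Recreate total ballots cast from the two components the EAC normally sums
counties["ballots_cast"] = (
    counties["ballots_counted"] + counties["ballots_rejected"]
)
```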

Rates and raw numbers

AZCIR used both rejection rates and raw numbers in the story, because either measure on its own can be misleading.

The varying population sizes of Arizona’s counties make comparing raw numbers of rejected ballots across counties unfair. In Maricopa County, for example, nearly 20,000 ballots were rejected in 2016, the largest share of rejected ballots of any county that year. But those rejected ballots represent just 1.2% of the 1.6 million ballots cast in the county – a rejection rate lower than those of seven other counties, including some that rejected only a few hundred ballots in total.
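The underlying calculation, using the approximate figures cited above rather than the story’s exact counts:

```python
# Approximate figures from the story, for illustration only
maricopa_rejected = 20_000    # "nearly 20,000" ballots rejected in 2016
maricopa_cast = 1_600_000     # roughly 1.6 million ballots cast

# Rejection rate: rejected ballots as a share of all ballots cast in the county
rejection_rate = maricopa_rejected / maricopa_cast
print(f"{rejection_rate:.1%}")  # roughly 1.2%, despite the largest raw total
```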

Raw numbers can quickly demonstrate how many ballots were rejected in a single area, such as a county. They also provided important context for Arizona’s provisional ballots in cases where rates alone could not.

Between 2012 and 2016, counties such as Maricopa updated their polling place technology from paper voter rosters to electronic poll books. This allowed poll workers to confirm that someone who had been issued an early ballot hadn’t already voted, meaning that voter could cast a regular ballot in person rather than a provisional one. Before the technology change, such voters would have been required to submit a provisional ballot, which election officials would later check to determine whether they had already voted.

This update caused the overall number of provisional ballots cast in Maricopa County to decrease. The provisionals that would have been rejected anyway, such as those cast by unregistered voters, remained. The data show this simultaneously drove down the total number of provisional ballots cast while driving up the provisional ballot rejection rate: a similar number of rejections divided by a smaller total.
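A quick numeric illustration of that effect – the figures here are invented, not Maricopa County’s actual totals:

```python
# Invented numbers, for illustration only
rejected = 1_000        # provisionals rejected stays roughly flat

before_cast = 100_000   # provisionals cast before electronic poll books
after_cast = 25_000     # provisionals cast after the change

print(f"before: {rejected / before_cast:.1%}")  # 1.0%
print(f"after:  {rejected / after_cast:.1%}")   # 4.0%
```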

Elections Performance Index

AZCIR used some findings from MIT’s Elections Performance Index, which uses a series of indicators, including rejected ballots, to rank the performance of state elections. Specifically, AZCIR used the Index’s overall ranking of Arizona because the measurements used to compare states are standardized, and because they don’t rely solely on one specific metric such as rejected ballots.

With respect to the rejected ballot comparisons, the Index calculates mail-in ballot rejection rates using the number of mail-in ballots rejected as a percentage of total ballots cast. That comparison can be misleading: mail-in ballots make up a larger share of all ballots cast – and therefore of all ballots rejected – in states such as Arizona, where voting by mail is more prevalent.

AZCIR’s analysis of rejection rates by ballot type instead used the total number of that specific type of ballot cast as its denominator. For example, to determine the rate of provisional ballots rejected, AZCIR divided by the total number of provisionals cast rather than by all ballots cast.

For early and provisional ballots, then, AZCIR showed rejection rates by ballot type – as percentages of the total number of that type of ballot cast, not as percentages of all ballots cast that year, as MIT calculated.
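The difference amounts to a single change of denominator, sketched below with hypothetical names:

```python
# Hypothetical helper, for illustration only
def rejection_rate(rejected_of_type: int, cast_of_type: int) -> float:
    """Rejection rate for one ballot type (e.g., provisional or early)."""
    return rejected_of_type / cast_of_type

# AZCIR's approach: provisionals rejected / provisionals cast,
# rather than provisionals rejected / all ballots cast (the Index's approach)
```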

Sam Kmack is an investigative reporter for AZCIR, covering a range of topics across Arizona for a yearlong fellowship supported in part by the Poynter-Koch Media and Journalism Fellowship.