Unified District Model

December 2021

To score new plans, we need a statistical model relating districts’ latent partisanship and candidates’ incumbency status to election outcomes. This model lets us estimate district-level vote shares for a new map and compute the corresponding partisan gerrymandering metrics. This page describes the details of our methodology and how we validate the model’s results.

Results for uncontested elections are imputed as described in The Impact of Partisan Gerrymandering on Political Parties and its appendix, by Nicholas Stephanopoulos and Christopher Warshaw.

Methodology

The Big Picture

We use the correlation between the presidential vote on the one hand, and state legislative and congressional votes on the other, to predict how new districts are likely to vote and, in turn, how biased a plan will be. Our correlations come from the last 10 years of elections and factor in both any extra advantage incumbents might have and how much each state’s results might differ from others. We also allow our predictions to be imperfect by quantifying how far our method missed the actual outcomes of past elections, including the degree to which partisan tides have shifted party performance from one election to the next. This enables us to generate the most accurate, data-driven, and transparent prediction we can.

The Details

We use a Bayesian hierarchical model of district-level election returns, fit to elections for all state legislatures and congressional delegations from 2012 through 2020. Formally, the model is:

y_i = (β0 + β0s[i] + β0c[i]) + (β1 + β1s[i] + β1c[i]) · presvote_i + (β2 + β2s[i] + β2c[i]) · incumbency_i + ε_i

where y_i is the two-party Democratic vote share in district election i; presvote_i and incumbency_i are the two covariates described below; s[i] and c[i] index the state and election cycle of district election i; the state-level offsets (β0s, β1s, β2s) and cycle-level offsets (β0c, β1c, β2c) are each drawn from multivariate normal distributions with mean zero and the standard deviations and correlations reported in Table 1; and ε_i is a normally distributed error term.

The model allows the slopes for both covariates, as well as the intercept, to vary across states and across election cycles. Chambers accounted for minimal variation in an ANOVA test, so state legislative and congressional results are modeled together as emerging from a common distribution. The model includes two covariates: 1) the two-party district-level Democratic presidential vote share, centered around its global mean (0.494); and 2) the incumbency status in district election i, coded −1 for a Republican incumbent, 0 for an open seat, and 1 for a Democratic incumbent.

We do not have the 2020 presidential vote for estimating new plans in two states, Kentucky and South Dakota, so we used the 2016 presidential vote in the model for those states. In the small number of state-cycle combinations that were missing the presidential vote, we used the presidential vote for the same district in the next presidential election (or the previous presidential election where the next one was not available).
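
For readers who want a concrete starting point, the sketch below shows how a model of this form could be specified in brms. It is a minimal illustration, not PlanScore's actual code: the data frame and column names (districts, dem_share, pres_dem_share, pres_c, incumbency, state, cycle) are placeholders.

# Minimal sketch of the hierarchical specification in brms (R).
# Data frame and column names are illustrative placeholders.
library(brms)

# Center the district-level presidential vote at its global mean and
# code incumbency as -1 (Republican), 0 (open seat), or 1 (Democrat).
districts$pres_c <- districts$pres_dem_share - mean(districts$pres_dem_share)

fit <- brm(
  dem_share ~ pres_c + incumbency +
    (1 + pres_c + incumbency | state) +   # state-level intercept and slopes
    (1 + pres_c + incumbency | cycle),    # cycle-level intercept and slopes
  data   = districts,
  family = gaussian(),
  chains = 4,
  iter   = 4000,
  warmup = 2000
)

The two grouping terms correspond to the state-level and cycle-level standard deviations and correlations reported in Table 1 below.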

When generating predictions, PlanScore draws 1,000 samples from the posterior distribution of model parameters and uses them to calculate means and probabilities. We add in the offsets for the 2020 presidential election cycle, and then also add samples from the covariance matrix of the cycle random effects, allowing the uncertainty of predicting for an unknown election cycle to propagate into our predictions. This has the effect of predicting for an election like 2020 in most respects, but with error bounds that encompass the full range of partisan tides that occurred over the last decade.
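
The sketch below illustrates what this prediction step might look like using the hypothetical fit from the earlier example. The parameter names (b_Intercept, r_cycle[2020,Intercept], sd_cycle__Intercept, and so on) follow brms's naming conventions applied to the placeholder variables; for brevity, the extra cycle draw here ignores the correlations among cycle effects, omits the state-level offsets that would also be added for the plan's state, and does not add residual district-level error. It is a simplified sketch, not PlanScore's production code.

# Rough sketch of scoring a new plan from the hypothetical fit above.
# new_districts is a placeholder data frame with pres_c and incumbency
# columns for each district in the plan being scored.
library(posterior)

draws <- as_draws_df(fit)            # all posterior draws
idx   <- sample(nrow(draws), 1000)   # 1,000 samples, as described above

pred <- sapply(idx, function(j) {
  d <- draws[j, ]
  # Population-level coefficients plus the 2020 cycle offsets.
  b0 <- d$b_Intercept  + d$`r_cycle[2020,Intercept]`
  b1 <- d$b_pres_c     + d$`r_cycle[2020,pres_c]`
  b2 <- d$b_incumbency + d$`r_cycle[2020,incumbency]`
  # Extra draws from the cycle-level standard deviations, standing in for
  # an unknown future cycle (correlations and state offsets omitted here).
  b0 <- b0 + rnorm(1, 0, d$sd_cycle__Intercept)
  b1 <- b1 + rnorm(1, 0, d$sd_cycle__pres_c)
  b2 <- b2 + rnorm(1, 0, d$sd_cycle__incumbency)
  b0 + b1 * new_districts$pres_c + b2 * new_districts$incumbency
})

dem_share_hat <- rowMeans(pred)        # expected Democratic vote share
dem_win_prob  <- rowMeans(pred > 0.5)  # share of samples with a Democratic win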

Table 1: PlanScore prediction model results
Parameter                                      Estimate   95% Credible Interval
POPULATION-LEVEL
  Intercept (β0)                                  0.50     [0.46, 0.53]
  Presidential vote (β1)                          0.78     [0.62, 0.93]
  Incumbency (β2)                                 0.05     [0.03, 0.07]
STATE-LEVEL
  Standard Deviations
    Intercept (σβ0s)                              0.02     [0.02, 0.02]
    Presidential vote (σβ1s)                      0.11     [0.09, 0.14]
    Incumbency (σβ2s)                             0.02     [0.01, 0.02]
  Correlations
    Intercept - Pres. vote (ρσβ0s,σβ1s)          −0.53     [−0.71, −0.29]
    Intercept - Incumbency (ρσβ0s,σβ2s)           0.29     [0.00, 0.54]
    Pres. vote - Incumbency (ρσβ1s,σβ2s)         −0.73     [−0.85, −0.56]
CYCLE-LEVEL
  Standard Deviations
    Intercept (σβ0c)                              0.03     [0.01, 0.09]
    Presidential vote (σβ1c)                      0.15     [0.07, 0.37]
    Incumbency (σβ2c)                             0.02     [0.01, 0.06]
  Correlations
    Intercept - Pres. vote (ρσβ0c,σβ1c)          −0.11     [−0.81, 0.66]
    Intercept - Incumbency (ρσβ0c,σβ2c)          −0.20     [−0.85, 0.60]
    Pres. vote - Incumbency (ρσβ1c,σβ2c)         −0.57     [−0.96, 0.39]
Note: Model estimated in brms for R, using 4 MCMC chains run for 4000 iterations each with a 2000-iteration warm-up. All model parameters converged well, with R̂ values close to 1.0.

Predictions

The charts below show comparisons between this model’s in-sample predictions and observed historical scores for plans with at least 7 districts. The results were broadly similar for cross-validated predictions with 10 percent of the sample set aside for testing. The predictions were also quite strong for 2020 in states where we were able to obtain election results for comparison.

[Figure: model vs. historical scores, by chamber (model_v_historical_chambers_7plus_pres_in_year.png)]

[Figure: model vs. historical scores, by state (model_v_historical_states_7plus_pres_in_year.png)]

[Figure: model vs. historical scores, by cycle (model_v_historical_cycles_7plus_pres_in_year.png)]
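
As a rough illustration of the 10 percent holdout check mentioned above, a cross-validation run could be sketched as follows, again assuming the placeholder data frame and hypothetical brms fit from the earlier examples.

# Minimal sketch of a 10 percent holdout check on the hypothetical fit.
set.seed(1)
test_idx <- sample(nrow(districts), size = floor(0.10 * nrow(districts)))
train    <- districts[-test_idx, ]
test     <- districts[test_idx, ]

cv_fit <- update(fit, newdata = train)   # refit the same model on the training set

# Compare out-of-sample expected vote shares with the held-out results.
pred_test <- colMeans(posterior_epred(cv_fit, newdata = test,
                                      allow_new_levels = TRUE))
rmse <- sqrt(mean((pred_test - test$dem_share)^2))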

Data Sources

Precinct-level presidential vote data used by this model is mostly sourced from the Voting and Election Science Team at the University of Florida and Wichita State University.

Files