Gerrymandering

Tags: datashader, geoviews
Published: January 1, 2016 · Updated: October 1, 2024


Racial data vs. Congressional districts

We are now awash with data from different sources, but pulling it all together to gain insights can be difficult for many reasons. In this notebook we show how to combine data of very different types to show previously hidden relationships:

  • “Big data”: 300 million points indicating the location and racial or ethnic category of each resident of the USA in the 2010 census. See the datashader census notebook for a detailed analysis. Most tools would need to massively downsample this data before it could be displayed.

  • Map data: Image tiles from Carto showing natural geographic boundaries. Requires alignment and overlaying to match the census data.

  • Geographic shapes: 2015 Congressional districts for the USA, downloaded from census.gov. Requires reprojection to match the coordinate system of the image tiles.

Few if any tools can handle all of these data sources on their own, but here we’ll show how freely available Python packages can easily be combined to explore even large, complex datasets interactively in a web browser. The resulting plots make it simple to explore how the racial distribution of the US population corresponds to the geographic features of each region, and how both are reflected in the shapes of US Congressional districts. For instance, here’s an example of using this notebook to zoom in on Houston, revealing a very precisely gerrymandered Hispanic district:

[Figure: Houston district 29]

Here the US population is rendered by racial/ethnic category using the color key shown, with more intense colors indicating higher population density in that pixel, and with the geographic background dimly visible where population density is low. Racially integrated neighborhoods show up as intermediate or locally mixed colors, but most neighborhoods are quite segregated, and in this case the Congressional district boundary shown clearly follows the borders of that segregation.

If you run this notebook and zoom in on any urban region of interest, you can click on an area with a concentration of one racial or ethnic group and see for yourself whether the surrounding district follows geographic features, state boundaries, the racial distribution, or some combination thereof.

Numerous Python packages are required for this type of analysis to work, all coordinated using conda:

  • Numba: Compiles low-level numerical code written in Python into very fast machine code

  • Dask: Distributes these Numba-based workloads across the multiple processing cores in your machine

  • Datashader: Uses Numba and Dask to aggregate big datasets into fixed-size arrays suitable for display in the browser

  • GeoViews: Projects and visualizes points on a geographic map

  • GeoPandas: Creates a GeoDataFrame from an online shapefile of US Congressional districts

  • HoloViews: Flexibly combines each of the data sources into a single just-in-time displayable, interactive plot

  • hvPlot: Quickly creates interactive visualizations from Dask and GeoPandas objects

  • Bokeh: Generates JavaScript-based interactive plots from HoloViews declarative specifications

Each package is maintained independently and focuses on doing one job really well, but they all combine seamlessly, so complex problems can be solved with very little code.

import dask

# Opt out of PyArrow string conversion and the newer query-planning
# (dask-expr) backend, for compatibility with the workflow below
dask.config.set({"dataframe.convert-string": False})
dask.config.set({"dataframe.query-planning": False})
import holoviews as hv
import hvplot.dask  # noqa
import hvplot.pandas  # noqa
import datashader as ds
import dask.dataframe as dd
import geopandas as gpd
import geoviews as gv
import cartopy.crs as ccrs

hv.extension('bokeh')

In this notebook, we’ll load data from each of these sources and show it all overlaid together. First, let’s define a color key for the racial/ethnic categories, along with a dummy point per category whose only role is to generate the color legend for the final plot:

color_key = {'w':'blue',  'b':'green', 'a':'red',   'h':'orange',   'o':'saddlebrown'}
races     = {'w':'White', 'b':'Black', 'a':'Asian', 'h':'Hispanic', 'o':'Other'}

color_points = hv.NdOverlay(
    {races[k]: gv.Points([0,0], crs=ccrs.PlateCarree()).opts(color=v) for k, v in color_key.items()})

Next, we’ll load the 2010 US Census data, recording the location and race or ethnicity of every US resident as of that year (300 million data points), and define a plot using Datashader to show this data with the given color key. While we would normally use Pandas to load such data, here we use Dask instead for speed, since it can exploit all the available cores on your machine. We also “persist” the data into memory, which makes interaction much faster as long as you have enough memory; otherwise, every zoom or pan would have to re-read the data from disk.

df = dd.read_parquet('./data/census.snappy.parq', engine='pyarrow')  # one row per US resident
df = df.persist()  # materialize the data in memory, in parallel across cores
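
Before plotting, you can optionally sanity-check what was loaded; this quick check (a hypothetical step, not part of the original workflow) should report the three columns used below and roughly 300 million rows, though counting the rows requires a full pass over the data:

print(df.columns.tolist())  # expect ['easting', 'northing', 'race']
print(len(df))              # roughly 300 million rows, one per resident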

Now we will use hvPlot with datashade=True to render these points efficiently using Datashader. We also set dynspread=True, which dynamically increases the point size once you have zoomed in far enough that it makes sense to focus on individual points rather than the overall distribution. Finally, we add a tile-based map in the background for context, using the tiles option.

x_range, y_range = ((-13884029.0, -7453303.5), (2818291.5, 6335972.0)) # Continental USA, in Web Mercator coordinates

shaded = df.hvplot.points(
    'easting', 'northing',
    datashade=True,                   # aggregate the points with Datashader
    aggregator=ds.count_cat('race'),  # per-pixel counts, grouped by racial category
    cmap=color_key,
    xlim=x_range,
    ylim=y_range,
    dynspread=True,                   # enlarge isolated points when zoomed in
    height=800,
    width=1000,
    data_aspect=1,                    # preserve the data's aspect ratio
    tiles='CartoLight',               # map-tile background for context
)

shaded
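
The plot above overlays the census points on the map tiles, with color_points supplying the legend, but it does not yet include the Congressional district outlines discussed earlier. As a minimal sketch of that final step (assuming the 2015 district shapefile has been downloaded from census.gov to a hypothetical local path like ./data/districts.shp), you could load the shapes with GeoPandas, reproject them to the Web Mercator coordinate system used by the tiles and the census points, and overlay everything:

# Load the district shapes and reproject them to Web Mercator (EPSG:3857),
# matching the tile server and the census easting/northing coordinates
districts = gpd.read_file('./data/districts.shp').to_crs(epsg=3857)

# Draw unfilled district outlines over the shaded population plot,
# with the dummy color_points providing the racial-category legend
outlines = gv.Polygons(districts, crs=ccrs.GOOGLE_MERCATOR).opts(
    fill_alpha=0, line_color='black')

shaded * outlines * color_points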