Years ago, as part of my graduate study, I made *isochronic maps* - maps that are scaled by travel time instead of physical distance - for Paris, France and Singapore. At the time, I used Processing to build these maps. With travel times between each pair of nodes pre-calculated, the animation was reasonably responsive with a few hundred nodes.

This past June, during our Visualization team hackathon, I decided to revisit the old idea with some new shiny tools.

Uber shares its historical records of street speeds via its Movement program. This freely available dataset represents a graph:

- The nodes are waypoints (intersections) in a city, each associated with a geo location (longitude and latitude);
- The edges are street segments that connect the waypoints, each associated with metadata and a time series of median speeds.
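In memory, this graph can be sketched as two plain arrays (a minimal illustration - the field names here are mine, not the actual Movement schema):

```javascript
// A minimal in-memory representation of the street-speed graph.
// Field names are illustrative; the real Movement data differs.
const nodes = [
  { id: 0, lon: -122.4194, lat: 37.7749 }, // waypoint with geo location
  { id: 1, lon: -122.4180, lat: 37.7755 },
];

const edges = [
  {
    source: 0,
    target: 1,
    lengthMeters: 150,
    // median speed (km/h) keyed by hour of day; only a few hours shown
    speedByHour: { 8: 18, 12: 25, 17: 12 },
  },
];

// Travel time along an edge at a given hour, in seconds
function travelTime(edge, hour) {
  const speedMs = (edge.speedByHour[hour] * 1000) / 3600; // km/h -> m/s
  return edge.lengthMeters / speedMs;
}
```

These per-edge travel times become the edge weights for the shortest path computation below.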

I wanted to build an application that renders isochronic maps from this dataset. Users should be able to select a point of origin and a time of day, and see a visualization of travel times to everywhere else in the city. Due to the scale of these graphs (a major metropolitan area contains 100K+ waypoints and 80K+ street segments), my old approach of pre-calculating travel times for every node pair was no longer feasible.
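A quick back-of-the-envelope check shows why: all-pairs storage grows quadratically with the number of waypoints.

```javascript
// All-pairs precomputation stops scaling at city size.
const numNodes = 100_000;            // order of magnitude for a major metro
const pairs = numNodes * numNodes;   // 1e10 origin/destination pairs
const bytes = pairs * 4;             // one float32 travel time per pair
const gigabytes = bytes / 1024 ** 3; // roughly 37 GB - far too much to precompute
```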

To calculate travel times dynamically, I implemented a shortest path finder on the GPU. It follows the same idea as Dijkstra's algorithm, except that all edges are relaxed on every pass (in effect, a parallel Bellman-Ford) - an overhead made affordable by parallelization.

The currently known lowest values for each node are stored in a float texture. At the start, all pixels hold an "infinity" value except the user-selected origin, which holds 0. On every pass, a transform is run to traverse all edges. For each edge, I sample the texture for the value of the source node, add the value of the edge, and paint the sum onto the pixel of the target node. With *MIN* blending, a pixel is only updated when a smaller value presents itself.
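What each pass does can be emulated on the CPU with a simplified sketch - a plain array stands in for the float texture, and `Math.min` stands in for *MIN* blending:

```javascript
// "dist" stands in for the float texture: one slot per node,
// initialized to Infinity except the origin.
function shortestTravelTimes(numNodes, edges, origin, passes) {
  const dist = new Float32Array(numNodes).fill(Infinity);
  dist[origin] = 0;
  for (let p = 0; p < passes; p++) {
    // One pass: traverse ALL edges. On the GPU, each edge paints
    // dist[source] + weight onto the target pixel; MIN blending
    // keeps the smaller of the stored and incoming values.
    for (const { source, target, weight } of edges) {
      dist[target] = Math.min(dist[target], dist[source] + weight);
    }
  }
  return dist;
}
```

Note that, unlike Dijkstra's algorithm, there is no priority queue and no per-node branching - every pass is the same uniform sweep over all edges, which is exactly what maps well to the GPU.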

The following video highlights visited nodes with each pass, as the traversal progresses. The dot sizes show the number of nodes on the shortest path from the origin:

With the 4 color channels in the texture, I can in fact compute multiple shortest paths at the same time! And that is what I did: the shortest path finder yields the shortest distance in terms of travel time (R), geographical distance (G) and the number of waypoints (B), along with a visited flag (A).
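The same relaxation extends naturally to all four channels. A sketch (how the visited flag is actually encoded in the app is my assumption here):

```javascript
// Each node's "pixel" holds [R, G, B, A] =
// [travel time, geographic distance, waypoint count, visited].
// MIN blending minimizes each channel independently, so R, G and B
// each converge to their own shortest-path metric in the same passes.
function relaxChannels(numNodes, edges, origin, passes) {
  const tex = new Float32Array(numNodes * 4).fill(Infinity);
  tex.set([0, 0, 0, 0], origin * 4); // assumed encoding: 0 = visited
  for (let p = 0; p < passes; p++) {
    for (const e of edges) {
      const s = e.source * 4;
      const t = e.target * 4;
      tex[t]     = Math.min(tex[t],     tex[s]     + e.time);     // R
      tex[t + 1] = Math.min(tex[t + 1], tex[s + 1] + e.distance); // G
      tex[t + 2] = Math.min(tex[t + 2], tex[s + 2] + 1);          // B: hops
      tex[t + 3] = Math.min(tex[t + 3], tex[s + 3]);              // A: visited spreads
    }
  }
  return tex;
}
```

Because each channel is minimized independently, R and G may follow different routes: the time-optimal path and the distance-optimal path to the same node need not coincide.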

We can now color and distort the city's road network to create isochronic maps. Node colors, positions and radii are all computed from the shortest distances in a transform feedback, so that everything stays on the GPU. The result is rendered with deck.gl's ScatterplotLayer and LineLayer.
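The distortion itself can be sketched as follows (an illustration of the idea in plain JavaScript, not the app's actual transform feedback shader): each node keeps its bearing from the origin, but its distance from the origin becomes proportional to travel time.

```javascript
// Re-position a node for the isochronic view: same direction from the
// origin, but radius = travelTime * scale instead of geographic distance.
function isochronicPosition(origin, node, travelTime, scale) {
  const dx = node.x - origin.x;
  const dy = node.y - origin.y;
  const geoDist = Math.hypot(dx, dy);
  if (geoDist === 0) return { x: origin.x, y: origin.y };
  const r = travelTime * scale; // new radius, in projected units
  return {
    x: origin.x + (dx / geoDist) * r,
    y: origin.y + (dy / geoDist) * r,
  };
}
```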

#### Performance Benchmarks

In my benchmark tests, each pass takes 0.45ms to run, insensitive to the size of the graph. The number of passes needed is theoretically upper-bounded by the number of edges `E`. Considering the highly connected nature of street networks, I set it to `sqrt(E)` and see satisfactory results.

Dataset | # of Nodes | # of Edges | Shortest path (CPU) | Shortest path (GPU)
---|---|---|---|---
San Francisco | 25898 | 19015 | 19.0ms | 72.4ms
New York City | 103746 | 84769 | 84.7ms | 144.9ms

The CPU is still faster. However, consider the immediate responsiveness of the UI - that is, the delay from selecting the source node to the first render:

Dataset | # of Nodes | # of Edges | First render (CPU) | First render (GPU)
---|---|---|---|---
San Francisco | 25898 | 19015 | 46.4ms | 6.48ms
New York City | 103746 | 84769 | 150.4ms | 39.3ms

If we want to render the traversal animation (i.e. update node/edge attributes on every animation frame):

Dataset | # of Nodes | # of Edges | Traversal animation (CPU) | Traversal animation (GPU)
---|---|---|---|---
San Francisco | 25898 | 19015 | 29.8fps | 55.7fps
New York City | 103746 | 84769 | 7.20fps | 15.5fps

There are definitely more optimizations we could make. We are also discussing generalizing this code into a deck.gl graph layer. The code is hosted on GitHub. Head to the demo and try it yourself.

Collaborated with the fabulous Jérôme Cukier.