After months of analysis, reporting and delays, Melissa Nann Burke and I finally saw our analysis of the most dangerous intersections in Delaware grace A1 of The News Journal.
Our analysis focused on the 185 intersections that averaged at least 15 crashes per year between 2010 and 2012. I’ll defer to the story for a discussion of the findings, though. Here, I want to focus on how the analysis was done.
We obtained a geodatabase of all reported crashes in the state (which might be the biggest perk of covering a small state) and used it to expand an analysis by WILMAPCO, a local transportation-planning agency that does something similar for New Castle County alone. To do this, we needed a database (or shapefile) of polygons for every intersection in the state. We also needed traffic volumes so we could normalize the data and compare the smaller rural intersections with the big-city and suburban ones.
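To give a sense of what that normalization looks like, here’s a quick sketch in Python. The function name and all the numbers are mine, made up for illustration; the idea is just crashes divided by total entering traffic over the study period.

```python
def crashes_per_million_entering(crashes, years, daily_entering_volume):
    """Crash rate per 1 million vehicles entering an intersection.

    daily_entering_volume is the combined average daily traffic
    entering from all approaches (hypothetical numbers below).
    """
    total_entering = daily_entering_volume * 365 * years
    return crashes / total_entering * 1_000_000

# A made-up intersection: 60 crashes over 3 years, 40,000 vehicles a day
rate = crashes_per_million_entering(60, 3, 40_000)
print(round(rate, 2))  # crashes per million entering vehicles
```

Dividing by entering volume is what lets a quiet rural crossroads be compared fairly against a busy suburban interchange.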
For the intersection polygons, we started with a shapefile of intersection points we obtained from DelDOT. Instead of just creating buffers, though, we had to painstakingly go through each intersection and draw its “sphere of influence” (basically the area where a driver’s decisions are affected by the intersection: the acceleration and turning lanes plus the heart of the intersection itself) to keep our analysis in line with WILMAPCO’s. Fortunately, WILMAPCO gave us its New Castle County shapefile, so Melissa and I each took one of the remaining counties and started drawing.
As annoying as drawing hundreds of intersections sounds, dealing with traffic volumes was even worse. While the volumes were easy to find for the major intersections, the crossroads at many small and some mid-size intersections didn’t have traffic counts, so we were forced to use estimates based on a formula the state also uses that takes land use into account (Housing development A has 200 occupied homes; multiply that by the average number of trips per occupant, and so on). Those didn’t give us absolute numbers, but they gave us estimates strong enough to say there were at least X crashes per 1 million vehicles entering the intersection.
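The land-use estimates follow standard trip-generation logic. A toy version looks like this; the trip rate here is a placeholder I invented, and the state’s actual formula is more involved:

```python
def estimated_daily_trips(occupied_homes, trips_per_home=9.5):
    """Rough daily trip estimate for a housing development.

    trips_per_home is an illustrative placeholder; real
    trip-generation rates vary by land use and region.
    """
    return occupied_homes * trips_per_home

# Housing development A: 200 occupied homes
print(estimated_daily_trips(200))
```

Summing estimates like this across the land uses feeding a crossroads gives a floor for its entering volume, which is all we needed for an "at least X crashes per million vehicles" claim.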
With the polygons complete and traffic numbers in hand, I threw everything into a PostGIS database and started running spatial joins and calculating a few fields. It was really nice to be able to write SQL for the analysis instead of wrestling with the QGIS GUI. It was much faster and made it a breeze to export snapshots to Excel.
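The core spatial join can be sketched in SQL like this. The table and column names are hypothetical, not our actual schema, but the shape of the query is the standard PostGIS point-in-polygon count:

```sql
-- Count crashes falling inside each intersection polygon.
-- Table and column names here are illustrative.
SELECT i.id,
       i.name,
       COUNT(c.gid) AS crash_count
FROM intersections i
LEFT JOIN crashes c
       ON ST_Contains(i.geom, c.geom)
      AND c.crash_date BETWEEN '2010-01-01' AND '2012-12-31'
GROUP BY i.id, i.name
ORDER BY crash_count DESC;
```

Putting the date filter in the join condition (rather than a WHERE clause) keeps intersections with zero crashes in the results, which matters when you’re computing rates for every polygon.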
The last bit of tinkering was scoring each of the 185 intersections on how it compared with the others on total crash rate and injury crash rate. An intersection with a high injury crash rate relative to its peers scored higher, and the same went for the total crash rate. This meant an intersection with an average crash rate but a higher rate of injury crashes could outscore an intersection with a higher crash rate but a lower injury crash rate.
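One way to implement that kind of comparative scoring is to rank every intersection on both measures and combine the ranks. This is a sketch of the idea with invented data, not necessarily our exact weighting:

```python
def score_intersections(rows):
    """rows: list of (name, crash_rate, injury_crash_rate) tuples.

    Each intersection gets the sum of its ranks on the two measures;
    a higher combined score means worse relative to its peers.
    Illustrative scheme only.
    """
    by_total = sorted(rows, key=lambda r: r[1])
    by_injury = sorted(rows, key=lambda r: r[2])
    total_rank = {r[0]: i for i, r in enumerate(by_total)}
    injury_rank = {r[0]: i for i, r in enumerate(by_injury)}
    return sorted(
        ((name, total_rank[name] + injury_rank[name]) for name, _, _ in rows),
        key=lambda t: t[1],
        reverse=True,
    )

sample = [
    ("A", 2.1, 0.3),  # highest crash rate, lowest injury rate
    ("B", 1.6, 0.9),  # middling crash rate, highest injury rate
    ("C", 1.4, 0.4),
]
print(score_intersections(sample))  # B outscores A despite A's higher crash rate
```

The rank-sum approach is what produces the effect described above: a middling crash rate plus a bad injury rate can beat a high crash rate paired with a mild one.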
Because our analysis lived in PostGIS, it was a breeze to export our intersection data as GeoJSON. To make it much more accessible, though, I converted our polygons back to points by calculating each one’s centroid. That left me with a nice, small GeoJSON file I could display on a Leaflet map.
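In PostGIS, that polygon-to-centroid conversion is essentially a one-liner (again with placeholder table and column names):

```sql
-- Collapse each polygon to its centroid and emit a GeoJSON geometry
SELECT id,
       name,
       ST_AsGeoJSON(ST_Centroid(geom)) AS geojson
FROM intersections;
```

Points keep the payload tiny compared with shipping every hand-drawn polygon to the browser, and a point per intersection is all a citywide dot map needs.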
For those who didn’t want to just look at the worst intersections near them, though, I wanted to put together a data table that folks could sort and search. That was easy with DataTables.js. The fun part was tying the interactivity of the data table to the Leaflet map. Here’s the code I used: