Rounding Out the Social Distancing Scoreboard


As we discussed in our previous blog post, social distancing has never been implemented on a mass scale in modern times. Not only is there a lack of data from which to learn, there is also a lack of robust metrics for evaluating its success. We understand that social distancing is multi-faceted and can only be understood through multiple lenses.

If you recall, our first two metrics measure social distancing using proxies:

  1. Change in average distance traveled compared to a pre-COVID-19 period
  2. Change in visitation to non-essential venues compared to a pre-COVID-19 period

We found that they accurately tracked people’s behavior in response to either media events about mounting COVID-19 cases or government restriction measures like shelter-in-place orders.

With just two metrics, however, the picture was incomplete. That is why our Scoreboard now incorporates three metrics, each describing a different facet of social distancing.

Introducing the Third Metric: Human Encounters

Several of our Scoreboard users observed something that was already on our radar: in many rural and other less-populated areas, the baseline for “social distancing” is naturally much lower, so it is inaccurate to hold places with drastically less potential to decrease to the same standard. We heartily agree.

Moreover, since the virus itself is spread via person-to-person contact, we needed to incorporate some notion of that into our score. Since our data cannot detect whether two humans have actually met, we instead use it to simulate potential encounters and derive the probability that two devices were in the same place at the same time (details are in the Methodology section below).

With this new metric added to our Scoreboard, not only are the scores more fairly and accurately balanced, but local leaders and public health officials also get a more useful picture of the social distancing behavior happening in their communities. Our hope is that this greater nuance will strengthen their ability to make strategic decisions and craft targeted responses.

Human Encounters Methodology

To account for the likelihood that people in a given community will contract COVID-19, we created the following metric:

M = (number of encounters / area in km²) / baseline − 1

In order to understand the formula, we need to define: an encounter, normalization, the baseline, and a scoring range.

Inspired by the metric used by Pepe et al., we define an encounter as "proximity between any two users of the same province who were seen within a circle of radius R = 50m over a 1 hour period." In other words: two devices observed within 50 meters of each other during the same one-hour window.
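To make that definition concrete, here is a minimal sketch of how encounters could be counted from raw location pings. The (device_id, timestamp, lat, lon) schema, the haversine helper, and the brute-force pairing are all illustrative assumptions, not our production pipeline:

```python
from itertools import combinations
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def count_encounters(pings, radius_m=50):
    """Count unique device pairs seen within radius_m of each other
    during the same one-hour window. `pings` is an iterable of
    (device_id, unix_timestamp, lat, lon) tuples (hypothetical schema).
    """
    by_hour = {}
    for device_id, ts, lat, lon in pings:
        by_hour.setdefault(ts // 3600, []).append((device_id, lat, lon))

    encounters = set()
    for hour, records in by_hour.items():
        # Brute-force pairing for clarity; a real pipeline would use spatial indexing.
        for (id1, lat1, lon1), (id2, lat2, lon2) in combinations(records, 2):
            if id1 != id2 and haversine_m(lat1, lon1, lat2, lon2) <= radius_m:
                encounters.add((hour, min(id1, id2), max(id1, id2)))
    return len(encounters)
```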

Since cities naturally have higher counts than rural places, measuring absolute values of human encounters doesn’t solve the problem by itself. To hold every county and state to the same standard of measurement, we needed to normalize our metric. We looked at two strategies: encounters per capita and encounters per square kilometer of land area. When we evaluated both, we found that normalizing per square kilometer produced the expected behavior for densely populated areas as well as rural ones:

[Chart: encounters per area]
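With an encounter count in hand, the normalization and the metric itself reduce to a few lines. The sketch below is a direct translation of the formula above and assumes the baseline density has been precomputed (as described in the next section):

```python
def encounter_metric(n_encounters, area_km2, baseline_density):
    """M = (encounters per km²) / baseline − 1.

    M == 0 means encounter density matches the pre-COVID-19 baseline;
    M == -0.7 means a 70% reduction from 'business as usual'.
    """
    density = n_encounters / area_km2
    return density / baseline_density - 1
```

For example, a county logging 1,200 encounters across 400 km² against a baseline of 10 encounters per km² scores M = (3 / 10) − 1 = −0.7, i.e. a 70% reduction (the numbers here are made up for illustration).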

A Different Baseline

Unlike the previous two metrics, Human Encounters doesn’t grade for rate of change, but instead for absolute values. Here’s why: dense communities with large populations in a small land area have much higher levels of risk in terms of potential infection. Since these communities were dense prior to the outbreak, rate of change isn’t a particularly valuable measurement. What matters is how many people were in the same place at the same time, regardless of how much it changed from the past.

Our baseline, intended to represent “business as usual”, is calculated as the national average encounter density during the 4 weeks immediately preceding the COVID-19 outbreak (February 10th - March 8th). We use this baseline to define fixed ranges of encounter density for our grades, which are then expressed as a reduction from that fixed baseline. Leaning on the human-encounter-reduction goals recommended by a variety of experts and studies (such as this one), we created this scoring range:

  • A: >94%
  • B: 82-94%
  • C: 74-82%
  • D: 40-74%
  • F: <40%

These ranges are universally applicable, as it is fair to assume that the spread of disease is related only to the absolute encounter density (number of encounters per km²) and does not depend on the administrative level (county, state, or nation).
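Putting the fixed baseline and the ranges together, the grading step might look like the sketch below. The cutoffs come straight from the list above; treating F as anything under a 40% reduction, and the exact handling of boundary values, are our reading of the ranges rather than documented behavior:

```python
def encounter_grade(density, baseline_density):
    """Map an absolute encounter density (encounters per km²) to a letter
    grade, expressed as a reduction from the fixed national baseline."""
    reduction = 1 - density / baseline_density  # 0.95 means a 95% reduction
    if reduction > 0.94:
        return "A"
    elif reduction >= 0.82:
        return "B"
    elif reduction >= 0.74:
        return "C"
    elif reduction >= 0.40:
        return "D"
    return "F"
```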

As of this update, the total grade is the average numerical score of our three metrics. We settled on this method because while the previous two metrics use counts from the pre-COVID-19 weeks as their baseline, the third metric uses an altogether different kind of baseline. Additionally, some counties don’t have enough data to reliably calculate scores for all three metrics; for those counties with only two, the overall grade is the average of those two.
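As a sketch of that combination step, assuming each metric’s letter grade is first mapped to a number (the 0-4 scale here is an illustrative assumption, not a documented detail):

```python
GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}
POINT_GRADES = {v: k for k, v in GRADE_POINTS.items()}

def total_grade(metric_grades):
    """Average whichever metric grades are available; counties with only
    two reliable metrics simply pass two entries."""
    scores = [GRADE_POINTS[g] for g in metric_grades]
    avg = sum(scores) / len(scores)
    # Round half up to the nearest whole grade point (an illustrative choice).
    return POINT_GRADES[int(avg + 0.5)]

# e.g. total_grade(["A", "C", "B"]) -> "B"
```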

What does the Human Encounters metric mean in the real world?

Some locations, such as Manhattan, are more densely populated than other areas and should be extra vigilant about reducing their number of encounters as much as possible.

[Chart: Change in Number of Human Encounters in Manhattan from February 26th to March 24th]

In contrast, less-populous states like Wyoming consistently score high on this metric even under non-pandemic circumstances, and their score remains high as of this publication.

Probably the most heartening finding in our Human Encounters data is that it didn’t take long after the outbreak for many other states to follow Wyoming’s lead: 

[Chart: grade distribution shift]

In real time, we’re seeing our “data for good” mandate come to life.

When we first conceptualized the Social Distancing Scoreboard as a way to help in the fight against COVID-19, we never imagined the sort of positive response we’ve received. Suffice it to say, the launch of the Scoreboard created more than just a ripple effect; it was a massive wave.

We are proud and humbled that our data is helping governments from local municipalities to national agencies enact life-saving policies. We have seen our data enable the best and brightest academics to test and model different ways to fight the spread of the virus, ensuring that relevant bodies receive critical information. Our data has been adopted not only by international organizations such as the IMF and the World Bank, but also by ordinary citizens who know that by following social distancing guidelines, they are doing their part.

While there is certainly room for improvement, as a country, we are thankfully trending in the right direction.

What’s Next?

As we read the news and interpret our data, it is obvious that the fight is far from over. Unacast will continue to do our part by developing additional tools that will complement our Social Distancing Scoreboard. Our hope is that these contributions will help us understand the new world in which we now live.

Stay tuned — and stay safe!

Special thanks go out to our data science team who made this possible: Jan, Kate, and Mathias.

Jan Benetka
Kate Kuzmina
Mathias Schläffer

