Introducing a Critical COVID Metric: Daily New Cases Per 100K Population

Today, we are adding an important fifth metric to our COVID warning system: “daily new cases per 100K population” (also referred to by epidemiologists as “incidence”). The addition of this metric rounds out our warning system by incorporating a measure of how much COVID there is in each community today. Our previous metrics focused on direction of change and preparedness, but incidence corresponds to a person’s actual chances of being infected and suggests how many people will likely be infected in the near future. Learn more here.

Nine state risk scores have changed.

This is what the risk level map of the United States looked like on July 21 with the original four metrics:

This is what it looked like on July 22, with the case incidence metric added:

350 county risk scores have changed. 1800 new counties now have risk scores.

Adding the incidence metric has changed the scores of 350 counties. It has also allowed us to expand our coverage to more counties. Previously, many counties did not have enough data for us to calculate a risk score. CAN has always prioritized providing actionable data, and, since a lot of COVID decision-making by policy makers and residents will happen at the local—rather than state or federal—level, we believe it is critical to provide a county-level view of COVID. 

We will now grade every county with a green case incidence score (less than one new case per day per 100K people) as green overall. If case incidence is not green, our normal grading system applies, whereby a county’s overall color reflects the highest-risk color of any one of its metrics. Counties that have not reported case counts will show up as grey. 
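As a minimal sketch, the grading rule above can be expressed as a small function. The function name and the metric names in the dictionary are hypothetical; only the logic (green incidence overrides, otherwise take the highest-risk metric, grey when cases are unreported) comes from the post.

```python
def overall_risk(metrics: dict) -> str:
    """Combine per-metric risk colors into an overall county score.

    `metrics` maps metric names to "green", "yellow", "orange", "red",
    or None when the metric is unavailable. Names are illustrative.
    """
    severity = {"green": 0, "yellow": 1, "orange": 2, "red": 3}

    incidence = metrics.get("case_incidence")
    if incidence is None:
        return "grey"      # county has not reported case counts
    if incidence == "green":
        return "green"     # green case incidence makes the county green overall

    # Otherwise the overall color is the highest-risk color of any metric.
    reported = [color for color in metrics.values() if color is not None]
    return max(reported, key=severity.__getitem__)
```

For example, a county with yellow incidence but red ICU capacity would be scored red overall.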

Here is what the Covid Act Now map of counties looked like before: 

Here is what it looks like now: 

What does this change mean for my community?

We understand that taking into account Daily New Cases per 100K Population has increased the risk score for many states and counties. This change may be disheartening, but we believe it is important for our COVID risk score to reflect risk as accurately as possible and adding this metric improves our ability to do so. 

In addition to helping our overall risk scoring be comprehensive, this new metric also has an important intuitive interpretation. As incidence increases, so does the risk that you’ll run into an infected individual on your trip to the grocery store or at a barbeque.

We also made a minor change to our contact tracing metric. Read more here.

Changes to How We Assess Contact Tracing

Based on discussions with public health experts and policy makers, we are responding to feedback on the way we assess contact tracing. We have removed the critical (red) range for contact tracing; it will no longer be possible to be marked critical (red) due to low numbers of contact tracing staff alone. While low contact tracing staffing should be acknowledged, it does not by itself indicate an active or imminent outbreak. In addition, contact tracing is one of our least refined datasets, largely because states vary widely in how they report contact tracing data. 

Furthermore, the best metric for measuring the success of a contact tracing program is the percentage of new cases not traced to a known case. However, to our knowledge, the only state releasing this data is Oregon. Therefore, as a proxy, we currently look at the number of contact tracers on staff in each state to estimate the number of contacts that could be traced within 48 hours if the contact tracing program were otherwise working perfectly. Staffing numbers are a very unreliable proxy: even a fully staffed program may be ineffective, and the reported numbers themselves can be incorrect. For example, we often do not know whether a reported figure counts planned hires or actual hires, whether contact tracers have been given sufficient training, or whether they follow up in person with people who do not pick up their calls.

For all these reasons, we are updating the way we categorize contact tracing by removing the critical range: a state will no longer be marked red for low numbers of contact tracing staff alone.

The 0-10% range will now be High Risk.

What is Incidence?

What is incidence? 

Incidence is a measure of new confirmed COVID cases per day. To ensure incidence can be compared across geographies, we calculate it as a proportion of the population — specifically, new daily cases for every 100,000 people. Adding a case incidence metric depicts risk more accurately, since it takes the overall number of cases into account.

We calculate incidence as follows:

Daily New Cases per 100K Population = (New Daily Cases, averaged over the last 7 days) ÷ (Population / 100,000)
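The calculation above can be sketched in a few lines of code. The function name and the sample numbers are illustrative, not from the post.

```python
def daily_new_cases_per_100k(daily_cases_last_7_days, population):
    """Average daily new cases over the last 7 days, per 100,000 residents."""
    avg_new_daily = sum(daily_cases_last_7_days) / len(daily_cases_last_7_days)
    return avg_new_daily / (population / 100_000)

# A hypothetical county of 500,000 people averaging 40 new cases/day:
daily_new_cases_per_100k([35, 42, 38, 45, 40, 41, 39], 500_000)  # 8.0
```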

Why is incidence important?

Incidence measures how many new people become infected with the virus per day per unit of population. It answers the question: “how many new COVID infections are in my area?” If the infection rate is the acceleration, incidence is the velocity. Infection rate reflects how quickly incidence is increasing or decreasing.

Incidence is important because it provides a more complete picture of the state of COVID in a given community. For instance, a community recovering from a major outbreak may have driven its infection growth rate (also known as R(t)) down to 0.5, but still have a very high incidence of 50 daily new cases per 100,000 people. On the other hand, a community heading towards an outbreak may have a very high infection growth rate (for example, 2) but a low incidence (for example, 2 daily new cases per 100,000 people). 

How do we grade incidence?

As with our previous four metrics — infection growth rate, test positivity rate, ICU capacity, and contact tracing — we separate case incidence into four categories: critical, high, medium, and low. To be classified as critical, a state or county must have over 25 daily new cases per 100,000 people. To be classified as low, a state or county must have less than one new daily case per 100,000 people.

[Table: color code, COVID risk level, and daily new cases per 100,000 people for each risk category]
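A minimal classifier for this grading might look like the sketch below. Only the low (< 1) and critical (> 25) boundaries are stated in this post; the boundary of 10 between medium and high is an illustrative assumption, not a documented threshold.

```python
def incidence_risk_level(cases_per_100k: float) -> str:
    """Map daily new cases per 100K to a risk category.

    The < 1 (low) and > 25 (critical) cut-offs come from the post;
    the medium/high boundary of 10 is an assumed placeholder.
    """
    if cases_per_100k < 1:
        return "low"        # green
    if cases_per_100k <= 10:    # assumed boundary
        return "medium"     # yellow
    if cases_per_100k <= 25:
        return "high"       # orange
    return "critical"       # red
```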
What is the difference between incidence and prevalence?

Infection prevalence is another term you may have read in the news or online. It is not the same as case incidence. 

Incidence refers to the number of new confirmed cases within a given time, typically one day. Imagine if on day one three people test positive, and on day two two more people test positive. The incidence on day one would be 3, and the incidence on day two would be 2. Incidence is reported by most states and counties, but it does not account for new infections that are not caught by testing. It also does not reflect the duration of each infection (how long each infected person exhibits symptoms and/or remains contagious).

Prevalence refers to the actual disease prevalence (total number of active infections) at any given time. This is what we’d really like to know, because it indicates the actual risk of encountering an infected individual in a community. But to know the true prevalence, you would need to test the entire population (or a widespread randomized sample) every day, using a highly accurate test. Just counting confirmed cases isn’t enough.

Per our earlier example, let’s say that on day one we have three new confirmed cases, and on day two we have two more. Those numbers suggest a total of five active infections, but perhaps an additional five new infections each day weren’t caught via testing. (Some cases are asymptomatic, and some symptomatic people never get tested.) That puts us at 15 active infections. Plus, over the past couple of weeks, there may have been an additional 5+ undetected infections per day, and most of those people may still be sick (indicating another 70 active infections). So the actual disease prevalence (number of infected people) may be 85 or more.

The bottom line is that prevalence paints a more complete picture of COVID in a community, but it can only be estimated based on the current data available (incidence).
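The back-of-the-envelope arithmetic in the example above can be made explicit. The undercount figures (5 missed infections per day, two weeks of earlier undetected cases) are the same assumptions the example makes, not measured values.

```python
# Confirmed incidence from the example above: 3 cases on day one, 2 on day two.
confirmed_new_cases = [3, 2]
known_active = sum(confirmed_new_cases)            # 5 confirmed active infections

# Assume (as the example does) that 5 additional infections per day are
# missed by testing: asymptomatic cases and symptomatic people never tested.
missed_per_day = 5
recent_missed = missed_per_day * len(confirmed_new_cases)   # 10 more

# Plus roughly two weeks of earlier undetected infections that may
# still be active.
earlier_still_active = missed_per_day * 14                  # 70 more

estimated_prevalence = known_active + recent_missed + earlier_still_active  # 85
```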

Calculating better Infection Growth Rates (Rt) for more communities

Today we are changing the way we calculate Rt to better serve lower-population regions and regions with lower case counts, and to improve the timeliness of our Rt metric. We want to let people know sooner when COVID is growing or shrinking in their communities, and we hope this helps people understand how different policies and actions lead to different outcomes.

Launching COVID Alerts

Covid Act Now is a nonprofit organization with a mission to help save lives. To further that mission, we’re excited to launch COVID Alerts.

COVID Alerts let you know when the COVID threat level — based on metrics critical to monitoring COVID, such as infection growth rate and ICU capacity — changes in your county or state.

It’s easy to sign up! Just visit:

Then enter your email, along with the names of the states and/or counties you want to keep track of. We will let you know if/when the COVID threat level in your area changes.

Our goal is to help keep you, your family, and your community out of harm’s way. Knowing the threat level in your area can help you make decisions about what precautions to take to keep yourself and your loved ones safe and healthy. It can also help keep local government decision makers informed about the risk of an active or imminent outbreak and how their testing and tracing compare to international standards.

Our alerting system determines threat level based on the four metrics we use in each state and county: infection rate, positive test rate, ICU headroom used, and contacts traced. 

For each metric, we score the COVID risk as either low, medium, high, or critical. We also provide an overall COVID threat score for the state or county, based on the same four metrics. When you sign up for COVID Alerts, we will send you an alert if the risk assessment for your state or county increases or decreases in severity.

For example, if you live in Hennepin County, Minnesota, but your parents live in Miami-Dade County, Florida, you might fill out the following counties and states:

If you sign up for alerts for Travis County, Texas, and the risk level moves from low to medium, we might send you the following email:

Sign up here to stay up-to-date on the COVID risk level in your area.

Model Changes: Adding a Fourth Color

Previously, our model depicted risk in three colors: red, yellow, and green. We have now added a fourth color: orange.

Using four colors allows the Covid Act Now threat scores to be more nuanced, reflecting the messier, more complicated reality of COVID in America. The addition of a fourth color (orange) allows red to be reserved for truly critical situations.

Here are the four colors and what they mean:

Red: Active or imminent outbreak. Red is reserved for the most severe situations, in which data indicate that COVID is spreading rapidly and might foreshadow a new wave of infection.

Orange: Risk of second spike or major gap in at least one of the metrics. A state categorized as orange and “at risk” can quickly devolve into an active or imminent outbreak, barring intervention.

Yellow: Does not meet standards for containment. This means that the disease is still spreading, but in a slow and controlled manner that is unlikely to overwhelm the healthcare system.

Green: On track for containment. This means that the rate of disease growth is negative, government testing and tracing efforts are robust, and that overall COVID is in retreat.

To make room for the fourth color, we’ve shifted the thresholds for some of the metrics. Here are the new thresholds for each metric.

Infection Growth Rate:

Red: >1.4

Orange: 1.1 – 1.4

Yellow: 0.9 – 1.1

Green: < 0.9

Test Positivity Rate:

Red: >20%

Orange: 10% – 20%

Yellow: 3% – 10%

Green: < 3%

ICU Headroom:

Red: >70%

Orange: 60% – 70% 

Yellow: 50% – 60%

Green: < 50%

Percent of Necessary Contact Tracers Hired:

Red: < 7%

Orange: 7% – 20%

Yellow: 20% – 90%

Green: > 90%


Social Sharing

If you see a graph on Covid Act Now’s website and want to share it, now you can!

We recently launched social sharing, so you can share data about your state or county on Facebook, Twitter, LinkedIn, or other social media. You can also embed Covid Act Now graphs in presentations, in other documents, or on your website.

Here’s a quick guide:

First, find the graph you want to share. In the upper right hand corner are the buttons “Save” and “Share.” Pressing “Save” will take you to an export image of the graph that you can save to your computer. 

Pressing “Share” will allow you to share that particular graph to Facebook, Twitter, or LinkedIn. You can also copy the link to share to another form of social media.

If you want to share a whole state or county page rather than a single graph, just scroll down.

You can then choose to share your state or county page on social media or embed it on your website.

People seem to like it! As of May 28, we’re generating 8,500 images per day.

We think that the more people know about COVID risk in their area, the more their decisions will be driven by data. So keep sharing away!

Where Does Our Data Come From?

Want to know where Covid Act Now’s data comes from?

We’ve created an easy-to-follow slide deck to answer all your questions about where we get the data that feeds our modeling and metrics. 

Our guiding principles for selecting data sources are availability, authoritativeness, and timeliness. The slides explain the inputs for each of our metrics (e.g., the number of new daily COVID cases or the typical ICU capacity in a county) and where we get the data source for that input.

For each source, we also explain why we chose that source, the known limitations of that source, and where discrepancies in data may come from.

We hope this deck is helpful in making sense of the many COVID datasets out there, and how we use them to build our data analysis tools.

Our Newest Metric: Contact Tracing

Today, we’re excited to announce a fourth metric added to our COVID warning system: contact tracing. We will lay out how we calculate whether states have sufficient contact tracing capacity, and why we think it is an important metric to assess reopening.

Why Does Contact Tracing Matter?

When people contract the virus, they do not show symptoms right away. Even as states begin to reopen, people will need to quarantine themselves if they have been silently exposed to someone with COVID.

How will they know? That’s where contact tracing comes in.

Because of this problem, it is critical that enough tracing capacity exists to rapidly trace the contacts of individuals who test positive for COVID. Those contacts can be tested, quarantined (if necessary), and asked whom else they have come into contact with. Because exposed individuals quickly begin infecting others, it is critical that this process be completed in less than 48 hours. If this routine of testing and tracing is done quickly and completely, it can contain COVID without the need for costly lockdowns, as we have seen in South Korea and Taiwan.

The White House Coronavirus Task Force’s Guidelines say that contact tracing is a “core responsibility” of states in order to be prepared to reopen. As of May 21, 27 states (CA, CT, DE, FL, HI, IL, IN, KS, KY, ME, MD, MI, MO, MT, NE, NV, NM, NY, NC, ND, OH, PA, RI, SD, WA, WV, and WI) call for contact tracing in their reopening plans. The American Enterprise Institute’s roadmap to reopening says states must massively scale up contact tracing and isolation/quarantine of traced contacts.

How Should We Measure Contact Tracing?

So how do we calculate contact tracing capacity? Experts recommend tracing the contacts of someone who tests positive for COVID within 24 hours, to limit the potential for onward transmission. Based on conversations with practitioners and public health experts, our metric assumes that tracing all contacts of each new positive COVID case requires an average of five full-time contact tracers. 

Therefore, our contact tracing metric measures the percentage of new cases whose contacts can be traced within 48 hours, given the contact tracing staff available in the state (assuming five contact tracers per new positive COVID case).

We use green if greater than 90% of the contacts can be traced within 48 hours, yellow if between 20% and 90% of the contacts can be traced within 48 hours, orange if between 7% and 20% of the contacts can be traced within 48 hours, and red if fewer than 7% of the contacts can be traced within 48 hours.

Here is an example from Wyoming:

As of June 13, Wyoming has an average of 12 new cases per day. At 5 contact tracers per case, Wyoming would need 60 contact tracers to trace all cases within 48 hours. Since Wyoming has 50 contact tracers, it can trace about 83% of cases.
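The Wyoming example, together with the color thresholds above (green above 90%, yellow 20–90%, orange 7–20%, red below 7%), can be sketched as follows. Function names are illustrative.

```python
def contacts_traceable_fraction(new_cases_per_day, tracers_on_staff,
                                tracers_per_case=5):
    """Fraction of new cases whose contacts can be traced within 48 hours."""
    tracers_needed = new_cases_per_day * tracers_per_case
    return min(1.0, tracers_on_staff / tracers_needed)

def contact_tracing_color(fraction):
    """Color thresholds: green > 90%, yellow 20-90%, orange 7-20%, red < 7%."""
    if fraction > 0.90:
        return "green"
    if fraction >= 0.20:
        return "yellow"
    if fraction >= 0.07:
        return "orange"
    return "red"

# Wyoming, June 13: 12 new cases/day and 50 tracers on staff.
fraction = contacts_traceable_fraction(12, 50)   # 50 / 60, roughly 83%
contact_tracing_color(fraction)                  # "yellow"
```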

What Should The Goals Be?

How did we choose our targets? Research estimates that the infection rate can be driven below 1.0 if 70-90% of cases are identified and 70-90% of those contacts are traced and isolated within 48 hours or less. Therefore, we chose 90% as the cut-off between green and yellow.

The boundaries between yellow, orange, and red are trickier. When fewer than 90% of positive cases have their contacts traced within 48 hours, contact tracing will likely be insufficient to contain COVID. In the absence of expert consensus, we have set inclusive lower thresholds for yellow and orange: we peg the cut-off between yellow and orange at 20% (the staffing level at which there is one contact tracer per new case, rather than the recommended five) and the cut-off between orange and red at 7%. Every state currently coded red is either experiencing a new outbreak or effectively has no tracing capacity.

A state can become green either by increasing the number of contact tracers, or by decreasing the number of new daily COVID infections. We hope that this new metric will help states factor contact tracing capacity into their reopening decisions.

Changes to Model Parameters

We recently changed a few of our model parameters. We wanted to share what changed and why.

The epidemiological model we use to track COVID is an SEIR model. An SEIR epidemiology model tracks the flow of a population between four states in relation to a disease (in this case, COVID). Those four states are susceptible (S), exposed (E), infected (I), and recovered (R). Mash those four together and, voila, “SEIR.”

Within the “infected” (I) category, our SEIR model projects how many patients will need to be hospitalized, how many will require ICU treatment, and how many will die. When we first created the model, we chose parameters based on the best available COVID information at the time.

(You might ask, what are “parameters”? Parameters are wonky modeling lingo for a model’s fundamental assumptions. Examples of parameters in an SEIR model: what percentage of people progress into each state of “SEIR”? How long does the average person spend in each of those four states?)
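To make the S, E, I, R flow concrete, here is a minimal discrete-time SEIR step. The parameter values (transmission rate, incubation period, infectious period) are illustrative placeholders, not the model’s actual parameters, and this sketch omits the hospitalization and death compartments our model layers on top.

```python
def seir_step(s, e, i, r, beta=0.3, sigma=1 / 5, gamma=1 / 10):
    """Advance a discrete-time SEIR model by one day.

    beta: transmission rate; sigma: 1 / incubation period (days);
    gamma: 1 / infectious period (days). All values are illustrative.
    """
    n = s + e + i + r
    new_exposed = beta * s * i / n      # S -> E: susceptibles who are exposed
    new_infectious = sigma * e          # E -> I: exposed who become infectious
    new_recovered = gamma * i           # I -> R: infectious who recover
    return (s - new_exposed,
            e + new_exposed - new_infectious,
            i + new_infectious - new_recovered,
            r + new_recovered)

# Start with 1% of a normalized population infected and run 100 days.
state = (0.99, 0.0, 0.01, 0.0)
for _ in range(100):
    state = seir_step(*state)
```

Each step conserves the total population; only the split across the four compartments changes.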

Our knowledge of COVID is always improving; as new data comes in, we update our parameters to fit what we know about the disease.

Given new available information and data, we are revising three different parameters in our SEIR model. The cumulative effect of these three parameter changes is that projected death rates remain about the same and that projected hospitalization rates have become more optimistic (i.e., fewer projected hospitalizations).

Here are the nitty-gritty specifics on all three parameter changes:

1. We have revised our estimate of the COVID hospitalization rate from 4% to 2%. The model’s initial assumed hospitalization rate (based on data from Italy and elsewhere) is approximately double the observed, empirical hospitalization rate in the United States. We have revised the assumed hospitalization rate to better reflect the newer, more accurate data.

2. While the model’s initial assumed hospitalization rate was too high, the model’s assumed death rate has accurately reflected available data. Therefore, as we lower the assumed rate that infected people require hospitalization, we commensurately increase the assumed death rates of patients in the ICU and on ventilators. (Our assumed rate of patients with disease severe enough to be hospitalized but not severe enough to go to the ICU has accurately reflected available data.)

3. We shortened the assumed average amount of time that people spend in the hospital, in the ICU, and on the ventilator — all to reflect new data. This is good news in the sense that COVID patients spending less time in the hospital reduces strain on hospitals. Unfortunately, though, one of the reasons COVID patients are spending less time in the hospital is that they are dying more quickly than initial data suggested.

So what is the take away? Taken together, these three parameter changes result in reduced projected COVID hospitalizations and no significant change in projected deaths. Our job as modelers is to continuously edit our model to reflect reality. We hope that a more accurate, up-to-date model will empower decision makers to make better decisions.
