
We need to bring error and uncertainty analysis forward in the public discourse. That’s what Painting By Numbers is for.

Consider these two statements.

  • “Cuomo says 21% of those tested in NYC had virus antibodies”
  • “Every 1% increase in unemployment results in 37,000 deaths”

The first is a headline in the New York Times on April 23, 2020. The second is taken from a meme in my social media feed the same day.

As numbers widely propagated and magnified in the public sphere, both suffer from a common deficiency: quantified error and uncertainty bounds around the result are not reported. So the public has no idea what the value of the numerical result really is.

Without a sober, quantified explanation of accuracy, validity, relevance, repeatability, bias, and certainty, both numerical results come across as sensationalist in their own way.

This gap is a constant source of misunderstanding for smoldering crises like climate disruption and social inequities, but it becomes dangerous, frankly, during immediate crises like the COVID-19 pandemic and the 2007-2008 financial crisis, which was fueled in part by a widespread lack of understanding of financial engineering models. Information, including the results of countless numerical analyses, forecasts, and predictions, is disseminated fast and furiously, and people’s heads spin.

Memes propagated through social media aren’t going to improve in quality anytime soon. I understand that. I’m not even going to try deconstructing the unemployment meme.

But scientists, academics, political leaders, and journalists should be more careful.

It’s one thing to report, or state from the podium, “these are early results and must be validated with more testing,” “preliminary results,” or “the testing method is still under development and is not 100% accurate.” It’s quite another to quantify the error.

In fairness, the New York Times article about the 21% number does include many disclaimers: https://www.nytimes.com/2020/04/23/nyregion/coronavirus-new-york-update.html

The account does acknowledge that the accuracy of the test has been called into question. But what does that tell us? Not much. The article also takes the percentage and propagates it through another calculation, stating that “if the state’s numbers indicated the true incidence of the virus, they would mean that more than 1.7 million people in New York City…had already been infected” and “That is far greater than the 250,000 confirmed cases of the virus itself that the state has already recorded.”

So a numerical result with unquantified accuracy now implies that infections are almost seven times the number of confirmed cases. The error in the 21% number is now embedded in the next numbers!
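To see what reporting an interval rather than a bare point estimate could look like, here is a minimal Python sketch. The sample size and the NYC population figure are my own illustrative assumptions; the article reports neither.

```python
import math

# A rough sketch only: the article reports neither the survey's sample size
# nor its design, so n below is a made-up assumption, not a real figure.
n = 3000              # hypothetical number of people tested (assumption)
p_hat = 0.21          # reported fraction testing positive for antibodies
nyc_pop = 8_400_000   # approximate NYC population (rounded)

# 95% confidence interval for a proportion (normal approximation)
se = math.sqrt(p_hat * (1 - p_hat) / n)
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se

# Whatever error is in the prevalence estimate propagates directly
# into the headline count of people "already infected."
print(f"prevalence: {p_hat:.1%} (95% CI {lo:.1%} to {hi:.1%})")
print(f"implied infections: {p_hat * nyc_pop:,.0f} "
      f"(range {lo * nyc_pop:,.0f} to {hi * nyc_pop:,.0f})")
```

Even under these assumptions, sampling error alone moves the implied count by a couple hundred thousand people, and that is before accounting for test inaccuracy or for the non-random way people were recruited for testing, either of which would widen the true uncertainty considerably.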

Just because a bit of information is a number doesn’t necessarily mean it is telling us something meaningful, relevant, or useful at this time.

Determining error or uncertainty is a rigorous, quantitative, analytical exercise, and it should be conducted for every numerical result, especially during times of manic public concern.
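As an example of what that exercise can look like for the 21% figure, the standard Rogan-Gladen estimator adjusts an apparent positive rate for a test’s imperfect sensitivity and specificity. The sensitivity/specificity pairs below are hypothetical, since the actual characteristics of the test used in the New York survey were not reported.

```python
def corrected_prevalence(apparent, sensitivity, specificity):
    """Rogan-Gladen estimator: adjust an apparent test-positive rate
    for a test's imperfect sensitivity and specificity."""
    return (apparent + specificity - 1) / (sensitivity + specificity - 1)

apparent = 0.21  # the reported raw positive rate

# The test's actual operating characteristics were not reported,
# so these sensitivity/specificity pairs are hypothetical.
for sens, spec in [(0.95, 0.99), (0.90, 0.95), (0.85, 0.90)]:
    true_prev = corrected_prevalence(apparent, sens, spec)
    print(f"sensitivity {sens:.0%}, specificity {spec:.0%} "
          f"-> corrected prevalence {true_prev:.1%}")
```

Under the weakest hypothetical test here, the raw 21% overstates the corrected prevalence by roughly six percentage points. That is exactly the kind of quantified caveat the coverage lacked.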

Many, though not enough, scientific journal papers will at least include a qualitative discussion of error and uncertainty in the measurements, the models, the statistics, the assumptions, etc., especially around statistical analysis. Rarely do you see a media report include a thorough answer to the question “how confident are we in the numerical result we just reported?”

We need to bring error and uncertainty analysis forward in the public discourse. That’s what Painting By Numbers is for.
