We need to bring error and uncertainty analysis forward in the public discourse. That’s what Painting By Numbers is for.
Consider these two statements.
- “Cuomo says 21% of those tested in NYC had virus antibodies”
- “Every 1% increase in unemployment results in 37,000 deaths”
The first is a headline in the New York Times on April 23, 2020. The second is taken from a meme in my social media feed the same day.
As numbers widely propagated and magnified in the public sphere, both suffer from a common deficiency: quantified error and uncertainty bounds around the result are not reported. So the public has no idea how much trust to place in the numerical result.
Without a sober, quantified explanation of accuracy, validity, relevance, repeatability, bias, and certainty, both numerical results come across as sensationalist in their own way.
This gap is a constant source of misunderstanding for smoldering crises like climate disruption and social inequities, but it becomes dangerous, frankly, during immediate crises like the COVID-19 pandemic and the 2007-2008 financial crisis, which was fueled in part by a widespread lack of understanding of financial engineering models. Information, including the results of countless numerical analyses, forecasts, and predictions, is disseminated fast and furiously, and people’s heads spin.
Memes propagated through social media aren’t going to improve in quality anytime soon. I understand that. I’m not going to even try deconstructing the unemployment meme.
But scientists, academics, political leaders, and journalists should be more careful.
It’s one thing to report, or state from the podium, “these are early results and must be validated with more testing,” “preliminary results,” or “the testing method is still under development and is not 100 percent accurate.” It’s quite another to quantify that uncertainty and report it alongside the number.
In fairness, a New York Times article about the 21% number does include many disclaimers. https://www.nytimes.com/2020/04/23/nyregion/coronavirus-new-york-update.html
The account does acknowledge that the accuracy of the test has been called into question. But what does that tell us? Not much. The article also takes the percentage and propagates it through another calculation, stating that “if the state’s numbers indicated the true incidence of the virus, they would mean that more than 1.7 million people in New York City…had already been infected” and “That is far greater than the 250,000 confirmed cases of the virus itself that the state has already recorded.”
So a numerical result with unquantified accuracy now implies that the infection rate is almost seven times higher than the confirmed cases. The error in the 21% number is now embedded in the next numbers!
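To see what that embedding looks like, here’s a minimal sketch in Python of how a band on the test result becomes a band on the implied infection count. The ±5-point band is purely hypothetical, since the article reports none, and NYC’s population of roughly 8.4 million is approximate:

```python
# Sketch: propagate an assumed uncertainty band on a prevalence estimate.
# The +/-5-point band is hypothetical -- no band was actually reported,
# which is exactly the problem.
NYC_POPULATION = 8_400_000   # approximate
point = 0.21                 # "21% of those tested had antibodies"
band = 0.05                  # assumed, for illustration only

low, high = point - band, point + band
print(f"Point estimate: {point * NYC_POPULATION:,.0f} infected")
print(f"Range:          {low * NYC_POPULATION:,.0f} "
      f"to {high * NYC_POPULATION:,.0f} infected")
# Point estimate: 1,764,000 infected
# Range:          1,344,000 to 2,184,000 infected
```

A spread of more than 800,000 people, and that’s before accounting for who got tested in the first place.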
Just because a bit of information is a number doesn’t necessarily mean it is telling us something meaningful, relevant, or useful at this time.
Determining an error or uncertainty bound is a rigorous, quantitative, analytical exercise, and it should be conducted for every numerical result, especially during times of manic public concern.
Many scientific journal papers, though not enough, will at least include a qualitative discussion of error and uncertainty in the measurements, the models, the assumptions, and so on, especially around the statistical analysis. Rarely do you see media reports include a thorough answer to the question “how confident are we in the numerical result we just reported?”
We need to bring error and uncertainty analysis forward in the public discourse. That’s what Painting By Numbers is for.
Now here’s an excellent example of the importance of data frequency resolution! This New York Times article informs us about some ‘weird’ characteristics of the planet Uranus (apart from the juvenile fun you can have with the name).
But what’s even more fascinating, if you are a data geek, is that the notion that Uranus ejects “plasmoids” (blobs of plasma and magnetic fields, responsible for a planet’s atmosphere leaking away) was formulated only recently, after space scientists went back into thirty-year-old data taken during Voyager 2’s 1986 flyby and increased the resolution of the data from 8-minute averages to roughly 2-second intervals. In doing so, they detected what’s known as an anomaly in the planet’s magnetic field. You have to click on the NASA blog post referenced in the article to find the graph below. The red is the averaged line; the black is the higher-frequency data.
The plasmoid release occupies only 60 seconds of Voyager 2’s 45-hour flyby of Uranus, but it has led to all kinds of interesting, informed speculation about Uranus’ characteristics, especially compared to the other planets in our solar system. This “60 seconds” reminds me of what I vaguely recall learning in a college anthropology class about constructing an entire hominid from a single tooth. (I thought it was Australopithecus, but I wasn’t able to quickly confirm that.) Obviously, scientists will have to further validate their findings, either with a follow-on trip to the outer planets or by other means.
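If you want to see for yourself why resolution matters, here’s a minimal sketch in Python using entirely synthetic data as a stand-in for the magnetometer record (the noise level and blip size are invented for illustration):

```python
import numpy as np

# Synthetic stand-in for a 45-hour record sampled every 2 seconds.
dt = 2                                 # seconds per sample
n = 45 * 3600 // dt                    # 45 hours of samples
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.1, n)       # background noise

# Inject a 60-second "plasmoid" blip (30 samples at 2 s each).
start = n // 2
signal[start:start + 30] += 1.0

# 8-minute averaging: 480 s / 2 s = 240 samples per window.
window = 240
averaged = signal[: n - n % window].reshape(-1, window).mean(axis=1)

print(f"Peak at 2-s resolution:     {signal.max():.2f}")
print(f"Peak after 8-min averaging: {averaged.max():.2f}")
# The blip stands out at full resolution but is diluted roughly 8x by
# averaging, since only 30 of each window's 240 samples contain it.
```

Presumably the averaging was a sensible way to manage data volume thirty years ago; it just happened to wash out a 60-second event.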
But the story certainly is an interesting lesson in data science. And I bet the scientists were itching to say Uranus burps, or even better, farts.
So much “painting by numbers” is done with numerical models. And the government is probably the largest consumer of such models. All models require assumptions, and as Commandment 2 in “Painting By Numbers” counsels, you must identify these assumptions to understand the results.
The need for assumptions gives policy-makers wide latitude to drive towards answers which support their policies. For example, the EPA under the Obama administration calculated the “social cost of carbon” as a value around $50/ton of carbon emitted. The EPA under the Trump administration managed to tweak the model so that the social cost of carbon (SCC) was more like $7/ton.
I wrote about this a while back in this space. Apparently, one thing you can do is select a different value for the internal rate of return (a financial parameter) in the model, according to a few references I read at the time.
Now here’s some fun: A paper I found surfing the web entitled “The Social Cost of Carbon Made Simple” shows one methodology for calculating it. By the way, this has got to be the most wrongly titled paper of 2010, the year it was published. There is nothing simple about it! Go on – click on it and read the first few pages. I dare you.
But the paper does acknowledge that a “…meta-analysis…found that the distribution of published SCC estimates spans several orders of magnitude and is heavily right-skewed: for the full sample, the median was $12, the mean was $43, and the 95th percentile was $150…” Moreover, estimates ranged as low as $1/ton.
See what I mean? If you want to de-emphasize carbon in your economic policies, you pick a methodology that minimizes SCC. If you want to build your policies around climate change, you pick a method that maximizes it. To the credit of the Obama administration, they settled on something close to the mean.
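You can reproduce the flavor of that skew with a few lines of Python. This is my own illustration, not the meta-analysis’s method: fit a lognormal distribution to two of the reported quantiles and see what mean it implies:

```python
import numpy as np

# Illustration only: fit a lognormal to the meta-analysis's reported
# median ($12) and 95th percentile ($150) and check the implied mean.
median, p95 = 12.0, 150.0
z95 = 1.645                       # standard normal 95th-percentile z-score

mu = np.log(median)               # lognormal median = exp(mu)
sigma = (np.log(p95) - mu) / z95
implied_mean = np.exp(mu + sigma**2 / 2)

print(f"Implied mean: ${implied_mean:.0f}")   # ~$39, near the reported $43
# With this much right skew, the "representative" social cost of carbon
# can honestly be quoted anywhere from a few dollars to $150 or more,
# depending on which statistic you choose to report.
```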
The paper is provisional work and nine years old, so don’t take it as any kind of gospel. I use it simply to illustrate points that require of the paper neither absolute accuracy nor timeliness.
In an article (New York Times, March 27, 2020) titled “Trump’s Environmental Rollbacks Find Opposition From Within: Staff Scientists,” I read this: “In 2018, when the Environmental Protection Agency proposed reversing an Obama-era rule to limit climate-warming coal pollution, civil servants included analysis showing that by allowing more emissions, the new version of the rule would contribute to 1,400 premature deaths a year.”
I’m not going to dig deep and determine how they arrived at the number 1,400; anyway, the key to the sentence isn’t the number, it’s the word “contribute.” How many other factors “contribute” to those premature deaths?
The article argues that Trump administration officials are not even trying to “tweak” the models, but instead have come in with a “repeal and replace” attitude “without relying on data, and science and facts.” It was reported that Obama’s head of the EPA, before she departed, had encouraged staffers to remain and make sure the “truth” got put into EPA’s analyses.
Unfortunately, numerical models don’t cough up the truth, just someone’s version of it. Those who don’t take the time to understand all of this become victims reduced to parroting others’ versions of the truth. On the other hand, not even being willing to consider data and science and facts is completely wrong-headed. That is ignorance, as any model of human behavior will tell you.
When I was a kid, I sometimes would write down lots of really huge numbers and add them up, subtract one from the other, or multiply them. Just for the fun of it. You might think, wow, a budding math genius (not even close), but then I’d have to add, sometimes I did this to keep myself awake so I could sneak out of my room at night and watch TV with my sister well past our bedtimes.
Now, just for kicks, I read through technical papers with complex numerical analysis and see if I can find the Achilles Heel in the analysis, a questionable assumption, or a variable with a high degree of error associated with it.
After reading an article about the total costs of bicycle injuries (I am an avid cyclist), I went to the original source, linked below. Calculating the total cost of something is always fraught with uncertainty. Let me reiterate that I’m not impugning the credibility of the authors; I’m pointing out common uncertainties in numerical analyses which should be more visible.
Well, it didn’t take long to find at least one Achilles Heel, and it’s a good one because I see it frequently. The “heel” is evident from the graph on page three of the paper. Without getting down into the weeds, the total cost has three principal components: medical costs, work loss costs, and lost quality of life costs.
It’s easy to see that the lost quality of life costs represent the largest of the three cost components. In fact, just eyeballing the bar chart, that component is two to three times the size of the other two components. So it makes the “total cost” of bicycle injuries appear much higher. What isn’t so easy to discern is that the lost quality of life costs are probably subject to a far greater error factor than the other two.
Estimating “quality of life” is more difficult because it’s a more subjective variable. This is what I mean in Commandment 7 of Painting By Numbers: “Don’t confuse feelings with measurements.” Medical costs of an injury are less squishy (someone had to pay the bills, after all), as is work loss: just multiply the wages or salaries by the time lost to the injury.
To their credit, the authors point this out in the Discussion section: “Costs due to loss of life are challenging to estimate.” What would have been far more helpful in understanding the validity of this quant exercise is if the authors had added error bands around the three variables in the figure I referenced above, or had run the results with and without the highly error-prone variable and compared them. Because, as stipulated by Commandment 3 in Painting By Numbers, “Find the Weakest Link,” the results are only as good as the most error-prone variable.
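Here’s a rough Monte Carlo sketch in Python of what I mean. The dollar figures and error bands are made up to echo the paper’s proportions, not taken from its data:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000  # Monte Carlo draws

# Hypothetical cost components (in billions). Quality-of-life costs dwarf
# the other two AND carry a far larger relative error -- illustrative only.
medical   = rng.normal(2.0, 0.2, N)    # ~10% error: bills were actually paid
work_loss = rng.normal(2.5, 0.25, N)   # ~10% error: wages x lost time
quality   = rng.normal(6.0, 2.4, N)    # ~40% error: subjective valuation

total = medical + work_loss + quality
print(f"Total cost: {total.mean():.1f} +/- {total.std():.1f} billion")

# The weakest link dominates: combined in quadrature,
# sqrt(0.2**2 + 0.25**2 + 2.4**2) ~= 2.42, so the quality-of-life term
# alone accounts for nearly all the uncertainty in the total.
```

Strip out the squishy component and the total looks far more precise; leave it in and the error band on the headline number should be wide enough to mention.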
At the end of this paper is a tantalizing Best Practice, however. There are two sidebar text boxes: (1) “What is already known on the subject?” and (2) “What this study adds.” Imagine if every article, every paper with numerical analysis or results, had a third section: (3) “What are the uncertainties around our results?”
http://injuryprevention.bmj.com/…/injuryprev-2016-042281.fu…
This is another entry at my Facebook Author Page on error, bias, numerical analysis, and all the topics in Painting By Numbers: How to Sharpen Your BS Detector and Smoke Out the Experts.
I’ve spent many hours in my career listening to technical papers, reviewing them for engineering associations and conferences, and editing them or extracting from them for publications and client reports. Over close to four decades, I’ve witnessed a deterioration in the quality of these papers and presentations. Many of them today are thinly veiled marketing pieces for the authors’ companies.
So my eyes perked up when I read this headline at Retraction Watch: “Could bogus scientific research be considered false advertising?” The opening sentence is, “Could a scientific paper ever be considered an advertisement?” Retraction Watch is a website I discovered recently and now follow through its regular notices.
The questions were stimulated by a court case in Japan, where a researcher for a top global pharmaceutical company was being tried not for manipulating data and scientific fraud (that had already been acknowledged) but for criminal violation of Japan’s advertising laws. The article goes on to probe whether a similar court case in the US might find the researcher and/or the company guilty of false advertising when research shown to include falsified data is circulated with promotional material about the drug.
There’s a difference between a technical paper so weak it comes across as company marketing collateral and corrupted research data used to support pharmaceutical advertising. But my larger point here is that the general deterioration in technical information disseminated by “experts” to professionals and consumers creates a huge credibility gap.
It’s high time we call out data-driven BS for what it is in many cases – advertising, false or legitimate, for a product, company, specialist, researcher, author, or government policy maker disguised as legitimate information.
Retraction Watch is a fascinating site to follow (even if somewhat depressing). Someone has to do the dirty work of accentuating the negative. I’m glad I’m not alone!
http://retractionwatch.com/…/bogus-results-considered-fals…/
From a Painting By Numbers perspective, the article below is probably one of the most important you’ll read this month, maybe the next few months.
It does a great job expanding on my Commandment No. 10, “Respect the Human Condition,” probably the most sweeping of the twelve commandments in my book. It means that the foibles of us mere mortals – such as accentuating the positive, stretching for success, seeking reward and avoiding punishment – are almost always baked into every numerical result we see in the public sphere. And when they aren’t, you can bet it took lots of experts with plenty of patience for the foibles, or biases, to be extracted out.
Unless you are looking at primary research documents, every numerical result you see has two major components: the work of the analysts or researchers themselves and the work of those (journalists, communications professionals, policy aides, etc.) who report it. The headline of the article below focuses on making the scientific method better account for less-than-positive results. But the authors also take to task reporters, who generally ignore critical research that doesn’t lead to a positive result.
The headline “Dope a Trope shows modest cancer fighting ability in latest research” is going to draw more readers than “Scientists find Dope a Trope has no effect on cancer patients.” The problem is, in the realm of research, there could be half a dozen experiments of the latter variety and only one of the former. And the half dozen that found no effect probably aren’t going to impress those who fund research.
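The arithmetic of that imbalance is easy to simulate. Below is a sketch in Python, with everything hypothetical: a drug with zero true effect, tested in many independent trials, where only the “positive and significant” results get reported:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_trials, n_patients = 200, 50

published = []
for _ in range(n_trials):
    # Zero true effect: treatment and control come from the same distribution.
    treatment = rng.normal(0.0, 1.0, n_patients)
    control = rng.normal(0.0, 1.0, n_patients)
    t, p = stats.ttest_ind(treatment, control)
    if p < 0.05 and t > 0:   # only "positive, significant" gets the headline
        published.append(treatment.mean() - control.mean())

print(f"Trials run: {n_trials}; trials 'published': {len(published)}")
if published:
    print(f"Average published effect: {np.mean(published):.2f}")
# A handful of trials clear the bar by chance alone, and averaging only
# those yields a healthy-looking effect for a drug that does nothing.
```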
The author, Aaron E. Carroll of the Indiana University School of Medicine, notes, rightly I believe, that the whole culture of professional scientific research has to change to address this endemic challenge. Thankfully, the author has a great blog site, The Incidental Economist, where he regularly expands on this broad but critical subject. For those interested in diving in even deeper, The Center for Open Science has tools and info for making research methods more transparent and results more reproducible. Only after many experts arrive at the same results should the rest of us even begin to take them seriously.
https://www.nytimes.com/…/science-needs-a-solution-for-the-…
So this happened! Painting By Numbers won a GOLD “IPPY” from Independent Publisher magazine. Think Oscar, Emmy, or Tony for Indie, small press, and academic publishers. Awards are presented in conjunction with Book Expo America, this year in NYC. Of course, I wouldn’t pass up a chance to return to my old stomping grounds. Get yours here!
Always feels good to support your local independent bookseller!
Or from the big dog here:
This morning on C-Span, the editor of a prominent politics and culture magazine stated that there were seven health care lobbyists for every member of Congress! That’s right – 7. So, of course, I went to validate this number.
The figures I turned up, based on a cursory scan of Google results with different sets of key words, ranged from six to thirteen (!) health care lobbyists per member of Congress between 2002 and 2013. I couldn’t find more recent figures. I presume the number fluctuates depending on how “hot” certain legislation and bills are before our elected officials.
BUT… even one health care lobbyist for every Senator and Representative would be a frightening number. With 535 members of Congress, seven apiece works out to roughly 3,700 health care lobbyists; in other words, the figure could be off by a factor of seven and still make me ill. Imagine what other industries, like energy, financial services, and defense contractors, have for lobbyists.
When you think about the numbers shaping our lives (the tag line for my latest book, Painting By Numbers: How to Sharpen Your BS Detector and Smoke Out the Experts), this is one that looms larger than many others.
As well-intentioned as it may be, here’s the kind of article Painting By Numbers was written for. The author describes how risk valuation techniques from financial engineering, techniques already used by the federal government, can be applied to assess the future risks of global climate change. I won’t get into the analysis, just point out that, according to the author, the Obama administration pegged the “social cost of carbon” at $40/ton (of CO2) and the Trump administration is on a path to calculate it as $5/ton.
That is one hell of a change! Here’s my question: How valid is the methodology regardless of which number you subscribe to? This is the essential stumbling block when models are used to provide a numerical framework in the present for things that might happen well into the future. In this case, as the author explains, everything depends on the “discount rate” plugged into the model.
Writes the author: “A concept known as the discount rate makes it possible to translate future damages into their present value. In 2009, President Obama convened an interagency working group, of which I was a co-leader, to come up with a uniform method for estimating the social cost of carbon: the resulting number to be used across all federal agencies. Our group chose to emphasize estimates based on a discount rate of 3 percent.”
So now we have the key assumption, based on Commandment No. 2 in my book.
But why did they choose to emphasize estimates with a discount rate of 3 percent? No explanation is given. Without an explanation, this can’t be a “uniform method” but instead the “preferred” method for this group, a group given lots of power and influence in the last administration. And now we have another group in charge with different preferences, and their assumption apparently is a discount rate of 7 percent.
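The leverage of that one assumption is easy to demonstrate. Here’s a minimal sketch in Python using a hypothetical $100 of climate damage occurring 100 years from now (not the working group’s actual model or inputs):

```python
# Present value of a future damage: PV = damage / (1 + r)**years.
# The $100 figure and 100-year horizon are illustrative only.
damage, years = 100.0, 100

for rate in (0.03, 0.07):
    pv = damage / (1 + rate) ** years
    print(f"Discount rate {rate:.0%}: present value = ${pv:.2f}")

# Discount rate 3%: present value = $5.20
# Discount rate 7%: present value = $0.12
# Same future harm, two defensible-sounding rates, and a 40-plus-fold
# difference in what it's "worth" spending today to avoid it.
```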
This isn’t an argument one way or another about the impact of global climate change and what we should be doing about it. I’m simply illuminating how those in charge are able to wield the results of their math models with impunity, unless we all become more engaged in assessing the validity of their methods.
https://www.nytimes.com/…/what-financial-markets-can-teach-…
The latest of my irregular posts elaborating on my new book, Painting By Numbers… It’s only fair to recognize when numbers are reported appropriately. I’d started hearing about the mysteriously accepted “10,000 steps daily” number several years ago, when wearable fitness devices started getting attention. Seemingly overnight, 10,000 became to the exercise […]