Painting By Numbers: How to Sharpen Your BS Detector and Smoke Out the Experts earned Silver in the Foreword Reviews INDIES Awards for 2016 in the category Social Sciences.

Recognition is always welcome, especially as an independent author through an independent publisher, and this latest award for Painting By Numbers sits alongside the GOLD IPPY won in the 2017 Independent Publisher Book Awards in the category Current Events: Social Issues/Humanitarian. 

To everyone fighting numerical illiteracy, we salute you!

 


 

When I was a kid, I sometimes would write down lots of really huge numbers and add them up, subtract one from the other, or multiply them. Just for the fun of it. You might think, wow, a budding math genius (not even close), but then I’d have to add, sometimes I did this to keep myself awake so I could sneak out of my room at night and watch TV with my sister well past our bedtimes.

Now, just for kicks, I read through technical papers with complex numerical analysis and see if I can find the Achilles’ heel in the analysis, a questionable assumption, or a variable with a high degree of error associated with it.

After reading an article about the total costs of bicycle injuries (I am an avid cyclist), I went to the original source, linked below. Calculating the total cost of something is always fraught with uncertainty. Let me reiterate that I’m not impugning the credibility of the authors; I’m pointing out common uncertainties in numerical analyses which should be more visible.

Well, it didn’t take long to find at least one Achilles’ heel, and it’s a good one because I see it frequently. The “heel” is evident from the graph on page three of the paper. Without getting down into the weeds, the total cost has three principal components – medical costs, work loss costs, and lost quality of life costs.

It’s easy to see that the lost quality of life costs represent the largest of the three cost components. In fact, just eyeballing the bar chart, that component is two to three times the size of the other two components. So it makes the “total cost” of bicycle injuries appear much higher. What isn’t so easy to discern is that the lost quality of life costs are probably subject to a far greater error factor than the other two.

Estimating “quality of life” is more difficult, because it’s a more subjective variable. This is what I mean in Commandment Seven of Painting by Numbers: “Don’t confuse feelings with measurements.” Medical costs of an injury are less squishy – someone had to pay the bills after all – as is work loss. Just multiply the wages or salaries by the lost time due to the injury.

To their credit, the authors point this out in the Discussion section: “Costs due to loss of life are challenging to estimate.” What would have been far more helpful in understanding the validity of this quant exercise is if the authors had added error bands around the three variables in the figure I referenced above, or had run the results with and without the most error-prone variable and compared them. Because, as stipulated by Commandment Three in Painting By Numbers, “Find the Weakest Link,” the results are only as good as the most error-prone variable.
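To make the point concrete, here is a minimal sketch of that with-and-without comparison. The dollar figures and error bands below are hypothetical, not taken from the paper; the point is how much the total swings when one squishy component carries a wide band.

```python
# Hypothetical point estimates for the three cost components ($ billions).
# These numbers are illustrative only, NOT from the study.
medical_costs = 2.4          # relatively firm: someone paid the bills
work_loss_costs = 1.8        # wages x lost time, also fairly firm
quality_of_life_costs = 6.0  # subjective, so give it a wide error band

# Assumed relative uncertainty for each component
components = {
    "medical": (medical_costs, 0.10),                   # +/- 10%
    "work loss": (work_loss_costs, 0.15),               # +/- 15%
    "quality of life": (quality_of_life_costs, 0.50),   # +/- 50%
}

total = sum(value for value, _ in components.values())
low = sum(value * (1 - err) for value, err in components.values())
high = sum(value * (1 + err) for value, err in components.values())
without_qol = total - quality_of_life_costs

print(f"Total cost: {total:.2f} (error band {low:.2f} to {high:.2f})")
print(f"Total without quality-of-life component: {without_qol:.2f}")
```

With these made-up inputs, the headline total more than doubles across its error band, and dropping the subjective component cuts it by more than half. That spread is exactly what error bands in the figure would have made visible.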

At the end of this paper is a tantalizing Best Practice, however. There are two sidebar text boxes: (1) “What is already known on the subject?” and (2) “What this study adds.” Imagine if every article, every paper with numerical analysis or results, had a third section: (3) “What are the uncertainties around our results?”

http://injuryprevention.bmj.com/…/injuryprev-2016-042281.fu…

This is another entry at my Facebook Author Page on error, bias, numerical analysis, and all the topics in Painting By Numbers: How to Sharpen Your BS Detector and Smoke Out the Experts.

I’ve spent many hours in my career listening to technical papers, reviewing them for engineering associations and conferences, and editing them or extracting from them for publications and client reports. Over close to four decades, I’ve witnessed a deterioration in quality of these papers and presentations. Many of them today are thinly veiled marketing pieces for the authors’ companies.

So my eyeballs perked up when I read this headline at Retraction Watch: “Could bogus scientific research be considered false advertising?” The opening sentence is, “Could a scientific paper ever be considered an advertisement?” Retraction Watch is a website I recently discovered and now follow through regular notices.

The questions were stimulated by a court case in Japan where a researcher for a top global pharmaceutical company was being tried, not for manipulating data and scientific fraud (that had already been acknowledged), but for criminal violation of Japan’s advertising laws. The article goes further to probe whether a similar court case in the US might find the researcher and/or his/her company guilty of false advertising when research shown to include falsified data is circulated with promotional material about the drug.

There’s a difference between a technical paper so weak it comes across as company marketing collateral and corrupted research data used to support pharmaceutical advertising. But my larger point here is that the general deterioration in technical information disseminated by “experts” to professionals and consumers creates a huge credibility gap.

It’s high time we call out data-driven BS for what it is in many cases – advertising, false or otherwise, for a product, company, specialist, researcher, author, or government policy maker, disguised as legitimate information.

Retraction Watch is a fascinating site to follow (even if somewhat depressing). Someone has to do the dirty work of accentuating the negative. I’m glad I’m not alone!

http://retractionwatch.com/…/bogus-results-considered-fals…/

 

 

From a Painting By Numbers perspective, the article below is probably one of the most important you’ll read this month, maybe the next few months.

It does a great job expanding on my Commandment No. 10, “Respect the Human Condition,” probably the most sweeping of the twelve commandments in my book. It means that the foibles of us mere mortals – such as accentuating the positive, stretching for success, seeking reward and avoiding punishment – are almost always baked into every numerical result we see in the public sphere. And when they aren’t, you can bet it took lots of experts with plenty of patience for the foibles, or biases, to be extracted.

Unless you are looking at primary research documents, every numerical result you see has two major components: the work of the analysts or researchers themselves and the work of those (journalists, communications professionals, policy aides, etc.) who report them. The headline of the article below focuses on making the scientific method better account for less than positive results. But the authors also take to task reporters, who generally ignore critical research that doesn’t lead to a positive result.

The headline, “Dope a Trope shows modest cancer fighting ability in latest research,” is going to have higher readability than “Scientists find Dope a Trope has no effect on cancer patients.” The problem with this is, in the realm of research, there could be half a dozen experiments of the latter variety and only one of the former. And the half dozen who found no effect probably aren’t going to impress those who fund research.

The author, Aaron E. Carroll of the Indiana University School of Medicine, notes, rightly I believe, that the whole culture of professional scientific research has to change to address this endemic challenge. Thankfully, the author has a great blog, The Incidental Economist, where he regularly expands on this broad but critical subject. For those interested in diving in even deeper, The Center for Open Science has tools and info for making research methods more transparent and results more reproducible. Only after many experts arrive at the same results should the rest of us even begin to take them seriously.

https://www.nytimes.com/…/science-needs-a-solution-for-the-…

So this happened! Painting By Numbers won a GOLD “IPPY” from Independent Publisher magazine. Think Oscar, Emmy, or Tony for Indie, small press, and academic publishers. Awards are presented in conjunction with Book Expo America, this year in NYC. Of course, I wouldn’t pass up a chance to return to my old stomping grounds. Get yours here!

Always feels good to support your local independent bookseller!

Or from the big dog here:

 


This morning on C-Span, the editor of a prominent politics and culture magazine stated that there were seven health care lobbyists for every member of Congress! That’s right – 7. So, of course, I went to validate this number.

The figures I turned up, based on a cursory scan of Google entries with different sets of key words, ranged from six to thirteen (!) health care lobbyists per member of Congress between 2002 and 2013. I couldn’t find more recent ones. I presume the number fluctuates depending on how “hot” certain legislation and bills are before our elected officials.

BUT... even one health care lobbyist for every senator and representative would be a frightening number. In other words, the figure could be off by a factor of seven and still make me ill. Imagine what other industries, like energy, financial services, and defense contracting, have for lobbyists.
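The claim scales into totals easily enough. A quick back-of-envelope sketch, using the standard 535 members of Congress and the low, quoted, and high per-member figures above:

```python
# 535 = 100 senators + 435 representatives
members_of_congress = 100 + 435

# Low estimate, the quoted "7", and the high estimate from the scan above
for per_member in (1, 7, 13):
    total = per_member * members_of_congress
    print(f"{per_member} lobbyist(s) per member -> {total} health care lobbyists")
```

Even the "frightening" floor of one per member works out to 535 people working Capitol Hill on health care alone.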

When you think about the numbers shaping our lives (the tag line for my latest book, Painting By Numbers: How to Sharpen Your BS Detector and Smoke Out the Experts), this is one that looms larger than many others.

As well-intentioned as it may be, here’s the kind of article Painting By Numbers was written for. The author describes how risk valuation techniques from financial engineering, techniques used by the federal government, can be applied to assess the future risks of global climate change. I won’t get into the analysis, just point out that, according to the author, the Obama administration pegged the “social cost of carbon” at $40/ton (of CO2) and the Trump administration is on a path to calculate it as $5/ton.

That is one hell of a change! Here’s my question: How valid is the methodology regardless of which number you subscribe to? This is the essential stumbling block when models are used to provide a numerical framework in the present for things that might happen well into the future. In this case, as the author explains, everything depends on the “discount rate” plugged into the model.

Writes the author: “A concept known as the discount rate makes it possible to translate future damages into their present value. In 2009, President Obama convened an interagency working group, of which I was a co-leader, to come up with a uniform method for estimating the social cost of carbon: the resulting number to be used across all federal agencies. Our group chose to emphasize estimates based on a discount rate of 3 percent.”

So now we have the key assumption, based on Commandment No. 2 in my book.

But why did they choose to emphasize estimates with a discount rate of 3 percent? No explanation is given. Without an explanation, this can’t be a “uniform method” but instead the “preferred” method for this group, a group given lots of power and influence in the last administration. And now we have another group in charge with different preferences, and their assumption apparently is a discount rate of 7 percent.

This isn’t an argument one way or another about the impact of global climate change and what we should be doing about it. I’m simply illuminating how those in charge are able to wield the results of their math models with impunity, unless we all become more engaged in assessing the validity of their methods.

https://www.nytimes.com/…/what-financial-markets-can-teach-…

Policy-makers are probably the worst offenders when it comes to using and abusing mathematical modeling and numerical analysis, the subject of my latest book, Painting By Numbers: How to Sharpen Your BS Detector and Smoke Out the Experts. When it comes to the administration’s rollback of the global climate change regulations and specifically the Clean Power Plan, however, one number which really matters is 3. That’s the number of times the Supreme Court has ruled in favor of the Environmental Protection Agency’s authority to regulate carbon dioxide under the Clean Air Act Amendments.

This means that the Administration cannot just end the Clean Power Plan, central to the EPA’s carbon regulation strategy, but must come up with an alternative regulatory framework. The EPA concluded, and the Supreme Court upheld, an endangerment finding for carbon pollutants, and therefore the agency is legally required to regulate carbon emissions.

Ironically, this is similar to the repeal, replace, repair problem with the Affordable Care Act. You can’t just “repeal” EPA’s carbon regulations, and it will be difficult to replace them. So, repair is probably going to be the sensible option. 

Nothing is easy when it comes to federal regulations and that’s the way the framers of our Constitution intended. 

The latest of my irregular posts elaborating on my new book, Painting By Numbers…
 
It’s only fair to recognize when numbers are reported appropriately. I’d started hearing about the mysteriously accepted “10,000 steps daily” number several years ago, when wearable fitness devices started getting attention. Seemingly overnight, 10,000 became to the exercise world what pi is to the world of math.
 
This morning, I read an article about 15,000 steps a day! The article reviews a study in which Scottish postal workers, those walking to deliver the mail and those in sedentary back office positions, were studied to determine any association with risk of heart and other diseases.
 
The first thing I learned is that the 10,000 number has never been scientifically validated as a means of reducing health risk, though I’m certain it has been for selling fitness products. The study did reveal, for this small sample size, that a high level of activity does indeed show an association with reduced health risk and that the more activity, the lower the risk. The reason I am pointing out the article, though, is that it makes it clear that there is merely an ASSOCIATION between activity and health, adhering to Commandment Eight in Painting By Numbers, Suspect the (Co)mpany They Keep (i.e., making a bright line distinction among co-location, coincidence, correlation, causation, convergence, and consensus). The article further underscores the study’s other limitations. I like the conclusion: “…the findings imply there are good reasons to get up and move…” Rather than push towards the realm of certainty, the article makes clear that there is only an “implication.”
 
Exercise is kind of the reverse of smoking. Anyone who smoked cigarettes or still does probably knows just by what is happening to their body that the habit is terrible for you. By the same token, I’ve never heard anyone, unless they injure themselves, come back from vigorous exercise and say, “Well, damn, I feel worse!”
 
https://www.nytimes.com/2017/03/22/well/move/should-15000-steps-a-day-be-our-new-exercise-target.html
 

Another in my continuing series on applying the “commandments” from my most recent book, Painting by Numbers: How to Sharpen Your BS Detector and Smoke Out the Experts

“Americans ate 19% less beef from ’05 to ’14,” according to a recent headline. The article goes on to focus on the carbon footprint reduction associated with eating less beef and avoiding the methane emissions from the back ends of cattle and all the carbon-intensive stuff that has to happen to get that steak to your dinner table. The original study the article is based on (hot linked in the article) comes from the Natural Resources Defense Council (NRDC). It wasn’t just beef eating that fell during this period; chicken and pork consumption fell as well.

The article makes a valiant case for how Americans are “gradually changing their diets, driven by health concerns and other factors.” The original NRDC study’s aim is to show that the answer to the question, “Where’s the beef?,” is a victory in the war on climate change. It may indeed have something to do with that.

But the larger explanation for this quantitative result probably has as much to do with general economic forces. 2007 marked the beginning of the “Great Recession,” and the economy has been on an anemic (relative) growth cycle since we came out of it. The beef ranch-to-table production and delivery cycle is not only carbon intensive, it’s expensive. Beef, for most people, is one of the most expensive foods they buy, and it would be natural to cut back on its consumption during bad economic times.

The article is a little fairer about all this than the original study. The author cites a survey in which 37% of Americans say price is the number one reason why they ate less beef. That’s somewhat out of step with the conclusion NRDC is trying to draw. The study avoids mention of ANY economic factors. It would have been more credible if there was even an attempt to “iron out” the effect of general economic conditions. Consumer economic activity generally has been substantially cut back since 2007.

Of course, the study also doesn’t explicitly CLAIM a correlation between consumers’ desire to shrink their carbon footprint and the decline in beef eating, but you can bet it wishes to suggest one, if we invoke Commandment Six in Painting By Numbers, “Understand the Business Model.” NRDC is an environmental policy organization (and an effective one at that).

What’s critical here is the aura around that 19% number (and I’m not even going to start in on the methodology to calculate it). In the study, it was all about an associated (not correlated) improvement in carbon footprint. In the article, it was about all the reasons except economic forces why Americans are reducing beef consumption (price being more of a footnote). In two iterations, there’s a great deal of political, cultural, and social “stuff” hanging off of that number. Imagine how laden it might be once it’s being discussed around the dinner table!

https://www.nytimes.com/…/…/beef-consumption-emissions.html…

 
