
…and when is data mining and analysis just a sophisticated, math-laden opinion?

I like to draw insight from juxtapositions. Yesterday, I listened to half a dozen academic presentations on modeling and data mining aimed at understanding the impact of extreme weather on global communities. As you might imagine, these exercises require large data sets, bold assumptions, and extrapolations, some reaching as far out as the year 2100.

Later in the day, I sat at the piano with a blog post about Debussy’s Arabesque No. 1 for solo piano, a popular piece known for its “impressionistic” qualities. The author analyzed the melody, harmony, and rhythm in an attempt, essentially, to get inside Debussy’s head as he composed the piece.

The blog author teased out a melody buried in some arpeggios and then attempted to show how it becomes a motif throughout the piece. She admitted she couldn’t really know whether this melody was Debussy’s intent, but assumed it could well have been what was going through his head.

The academic data mining and modeling would probably be scary to those who aren’t comfortable with numerical modeling and methods; the analysis of Arabesque No. 1 would probably be scary to those not familiar with musical notation and compositional methods. The assumptions and extrapolations made in both cases could make anyone familiar with both nervous.

In both cases, a “specialist” is trying to gain insight into something that, for all practical purposes, is unknowable – Debussy’s thought process (even if subconscious) as he composed Arabesque No. 1 and economic and community impacts as the planet warms over the coming decades – and then convince an audience that they’ve indeed shed some light into a dark cave. And if we are to take either analysis as useful, others would have to validate the findings, or otherwise agree on the methodology, results, and conclusions.

Moral of this tale: Analysis isn’t “new knowledge,” regardless of what kind of notation accompanies it, until many other experts weigh in and many analyses converge on similar conclusions. And just because someone has credentials that brand him or her a specialist doesn’t mean that person’s analysis is more than a sophisticated opinion.

What really astounds me about listening to academic presentations these days (which I have been doing my entire career) is how few people, usually experts with as much background and experience on the topic as the presenter, actually question the results or methodology. This to me is dangerous at its core. Academia is where data and findings should be vigorously interrogated and debated. These days, technical presentations in general seem to be more of an advertising opportunity than a spark for debate toward achieving consensus and contributing to the knowledge base.


How I Arrived at the Carbon IRA (Individual Retirement Account)

I’m thinking maybe a little more context is necessary for people to “get” the Carbon IRA (individual retirement account) concept and why my book, Carbon IRA + YouTility: How to Address Climate Change and Reduce Carbon Footprint Before It’s Too Late, has such salient messages.

A chemical engineering education and electricity industry career pretty much makes you a systems person. You think in terms of systems, boundaries, and surroundings. After several decades in the electricity and energy sectors, and wrestling with the macro-carbon footprint challenge, I finally realized a few things weren’t going to change anytime soon:

  • Economic growth is predicated on buying and consuming stuff
  • Low cost energy supports economic growth
  • Carbon-laden fossil fuels are the backbone of low-cost energy, especially electricity
  • Therefore, carbon is the ultimate externality – the environmental impact that isn’t properly reflected on the accounting ledger
  • Thus, until carbon-free energy replaces fossil fuels, we are doomed to aggravate carbon-induced climate disruption.

These aren’t immutable “truths” like the laws of thermodynamics. But they sure have been immutable in my lifetime. At least globally. So the “problem statement” is pretty straightforward.

When my two daughters were in high school and college, we had many dinner conversations about dad’s job (electricity industry) and climate change (what my daughters were having nightmares about). I kept saying, until everyone on the planet learns to consume less and we accelerate the transition to renewable energy sources, it will get worse before it gets better. And it has.

So we came up with this slogan. Think:Less! Note the double meaning. Think less, or simplify how you think about global climate change. Think about consuming less stuff. Period. End of Story. We even made a stack of bumper stickers!

Of course, no one really knew what the hell we were talking about. But it led me to the more important challenge. Environmentalists have been preaching the three Rs – reduce, reuse, recycle – for as long as I can remember, ever since the first Earth Day in 1970. By the new millennium, that still wasn’t working.

After more dinner conversations, I had an epiphany.

  • You have to reverse the growth-at-any-cost economic mantra
  • One approach is to reward behavior and consumer choices which result in less carbon
  • If you make the behaviors permanent, you can make progress on carbon and climate
  • You can’t depend on volatile energy price signals to sustain these behaviors
  • Converting the avoided carbon into money is a bona fide incentive
  • Putting the money towards a retirement account encourages a lifetime of better behaviors and choices.

Hence, the Carbon IRA concept was born. It turns traditional economics inside out. Since nothing else seems to be working fast enough, maybe it’s time for a radical departure?
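For the spreadsheet-minded, here’s a minimal sketch of the arithmetic in Python (all numbers are hypothetical placeholders, not values proposed in the book): value the avoided carbon at a carbon price, deposit the proceeds into a retirement account each year, and let compounding do the rest.

```python
# Minimal sketch of the Carbon IRA arithmetic. All figures are hypothetical
# placeholders, not values proposed in the book.

def carbon_ira_balance(tons_avoided_per_year=5.0,  # assumed household reduction
                       carbon_price=50.0,          # assumed $/ton credited
                       annual_return=0.05,         # assumed market return
                       years=40):                  # a working lifetime
    """Compound annual deposits of (avoided tons x carbon price)."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + annual_return) + tons_avoided_per_year * carbon_price
    return balance

print(f"${carbon_ira_balance():,.0f}")  # ~$30,000 from $250/year of avoided carbon
```

The point of a retirement account, versus a rebate check, is the time horizon: the reward grows only if the behavior persists.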

Now here’s an excellent example of the importance of data frequency resolution! This New York Times article informs us about some ‘weird’ characteristics of the planet Uranus (apart from the juvenile fun you can have with the name).

But what’s even more fascinating, if you are a data geek, is that the notion that Uranus ejects “plasmoids” (blobs of plasma and magnetic fields, responsible for a planet’s atmosphere leaking away) was formulated only recently, after space scientists went back into thirty-year-old data taken during Voyager 2’s 1986 flyby and increased the resolution from 8-minute averages to ~2 seconds. They detected what’s known as an anomaly in the planet’s magnetic field. You have to click on the NASA blog post referenced in the article to find the graph, in which the red is the average line and the black is the higher time frequency.
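If you want to feel the difference resolution makes, here’s a toy sketch in Python (illustrative numbers only, not the actual Voyager 2 data) showing how a brief transient that stands out at ~2-second sampling all but vanishes under 8-minute averaging:

```python
import numpy as np

# Toy illustration, not the Voyager 2 data: a ~60-second transient riding on
# background noise survives at 2 s resolution but washes out in 8-minute averages.
rng = np.random.default_rng(0)
t = np.arange(0, 45 * 3600, 2)              # 45 hours sampled every 2 seconds
signal = rng.normal(0.0, 0.1, t.size)       # background noise
signal[(t > 80_000) & (t < 80_060)] += 0.5  # the brief anomaly

window = 240                                # 240 samples x 2 s = 8 minutes
trimmed = signal[: signal.size // window * window]
averaged = trimmed.reshape(-1, window).mean(axis=1)

print(f"peak at 2 s resolution:     {signal.max():.2f}")   # spike stands clear of the noise
print(f"peak at 8-minute averages:  {averaged.max():.2f}") # spike nearly averaged away
```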

The plasmoid release occupies only 60 seconds of Voyager’s 45-hour flyby of Uranus, but has led to all kinds of interesting informed speculation about Uranus’ characteristics, especially compared to the other planets in our solar system. This “60 seconds” reminds me of what I vaguely recall learning in an anthropology class in college about constructing an entire hominid from a single tooth. (I thought it was Australopithecus but I wasn’t able to quickly confirm that.) Obviously, scientists will have to further validate their findings, either with a follow-on trip to the outer planets or by other means.

But the story certainly is an interesting lesson in data science. And I bet the scientists were itching to say Uranus burps, or even better, farts.


In recent news reports on COVID19 pandemic, I’m glad to hear pundits and politicians refer to the electric utility model for emergency response. Utilities assist each other during major outages, sharing and moving personnel and resources to hard-hit areas, to keep the lights on and save lives. This has been a traditional part of utility operations for decades, and thankfully has survived the deregulation/competitive era which nominally began in the late 1970s. It’s part of utility culture.

At that time, many industries were eventually transformed by what is commonly known as neo-liberal/conservative economic and cultural philosophy, which argued, in effect, that everything is better with competition and markets. The list includes trucking, airlines, natural gas, electricity, water, education, and health care. Given today’s disparities in wealth, dislocations in resources, and the environmental issue of our time, global climate disruption, it’s easy to blame this philosophy for the ills we seem to be facing as a nation and society.

Maybe the better way to look at it is that this “strain of economic thought” has run its course and it is time to work within a new framework.

In my mind, that framework is the traditional regulated utility. The basic business model is this: the utility invests to expand and maintain its infrastructure to serve everyone in its “service territory,” and a government entity, the public utility commission, sets a regulated rate of return on that investment. Approved operating costs are passed along to the ratepayer. This way, investors earn a fair return, the system is equitable to all, and rates are kept reasonable. While electricity prices vary around the country, no one pays a rate that is excessive relative to the value of the service.
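For the numerically inclined, here’s a minimal sketch of that cost-of-service arithmetic (the figures are made up for illustration, not any actual utility’s): the allowed return on the invested infrastructure plus approved operating costs sets the revenue requirement, which is then spread over the energy sold.

```python
# Sketch of traditional cost-of-service ratemaking. Illustrative figures only.

rate_base = 1_000_000_000       # $ of infrastructure investment (the "rate base")
allowed_return = 0.09           # rate of return set by the utility commission
operating_costs = 150_000_000   # $/year of approved costs, passed to ratepayers
kwh_sold = 10_000_000_000       # kWh delivered per year

revenue_requirement = rate_base * allowed_return + operating_costs
print(f"revenue requirement: ${revenue_requirement:,.0f}/year")
print(f"average rate: {100 * revenue_requirement / kwh_sold:.1f} cents/kWh")
```

Because the commission fixes the return, the investor’s payoff depends on prudent investment rather than price volatility – which is exactly what makes multi-decade planning possible.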

Most importantly, this approach keeps things predictable enough so that utilities can plan on a multi-decade basis. This is critical for infrastructure businesses.

As I argue in Carbon IRA & YouTility: How to Address Climate Change & Reward Carbon Reduction Before It’s Too Late, we could solve at least one half of our carbon discharge problem by quickly returning to the traditional utility business model, this time allowing utilities to own customer infrastructure – such as smart thermostats, efficient AC systems, storage devices, and rooftop photovoltaic systems – which help optimize the grid for everyone.

Now, I’m not a health care industry expert, but I don’t see why this business model can’t be applied in the same way. Public and private hospitals compete for resources, segregating care based on who can pay; insurance companies fight adversarially to keep costs in check; and state government agencies struggle to create standards, fund innovation, and oversee the whole mess. Rather than debate the merits of socialism or capitalism applied to health care, why not a third way?

Consider each large health care organization (hospitals, doctor networks, accepted insurance providers, etc.) a “public utility” and regulate the business’s financials and performance through a public commission. It’s worked well for other “essential services.” Why shouldn’t it work for health care?

I don’t think anyone reflects on a personal health care event with the words, “Gee, that worked well!” And I don’t think anyone witnessing this morass called the COVID19 pandemic response is saying, “Damn: This is working really great!”

Market-based economics works best for new industries and innovation. Markets don’t work well for critical or essential services or commodities (that’s why commodity businesses tend to shrink to three major suppliers and become an oligopoly). Markets work worst during crises, which is why you see price-gouging and hoarding.

Anyway, I’m glad the pundits and politicians are invoking utilities and their ability to collaborate for crisis response, and perhaps they will apply some of those processes over the long haul. Because what we are witnessing during COVID19 – the lack of coordinated analysis, communication, and response – could very well embarrass this country forever.

So much “painting by numbers” is done with numerical models. And the government is probably the largest consumer of such models. All models require assumptions, and as Commandment 2 in “Painting By Numbers” counsels, you must identify these assumptions to understand the results.

The need for assumptions gives policy-makers wide latitude to drive towards answers which support their policies. For example, the EPA under the Obama administration calculated the “social cost of carbon” as a value around $50/ton of carbon emitted. The EPA under the Trump administration managed to tweak the model so that the social cost of carbon (SCC) was more like $7/ton.

I wrote about this a while back in this space. Apparently, one thing you can do is select a different value for the discount rate (a financial parameter) in the model, according to a few references I read at the time.

Now here’s some fun: A paper I found surfing the web entitled “The Social Cost of Carbon Made Simple” shows one methodology for calculating it. By the way, this has got to be the most wrongly titled paper of 2010, the year it was published. There is nothing simple about it! Go on – click on it and read the first few pages. I dare you.

https://www.epa.gov/sites/production/files/2014-12/documents/the_social_cost_of_carbon_made_simple.pdf

But the paper does acknowledge that a “…meta-analysis…found that the distribution of published SCC estimates spans several orders of magnitude and is heavily right-skewed: for the full sample, the median was $12, the mean was $43, and the 95th percentile was $150…” Moreover, estimates ranged as low as $1/ton.

See what I mean? If you want to de-emphasize carbon in your economic policies, you pick a methodology that minimizes SCC. If you want to build your policies around climate change, you pick a method that maximizes it. To the credit of the Obama administration, they settled on something close to the mean.

The paper is provisional work and nine years old, so don’t take it for any kind of gospel. I use it simply to illustrate points that require neither absolute accuracy nor timeliness of the paper.

In an article (New York Times, March 27, 2020)  titled “Trump’s Environmental Rollbacks Find Opposition From Within: Staff Scientists,” I read this: “In 2018, when the Environmental Protection Agency proposed reversing an Obama-era rule to limit climate-warming coal pollution, civil servants included analysis showing that by allowing more emissions, the new version of the rule would contribute to 1,400 premature deaths a year.”

I’m not going to dig deep and determine how they arrived at the number 1,400; anyway, the key to the sentence isn’t the number, it’s the word “contribute.” How many other factors “contribute” to those premature deaths?

The article argues that Trump administration officials are not even trying to “tweak” the models, but instead have come in with a “repeal and replace” attitude “without relying on data, and science and facts.” It was reported that Obama’s head of the EPA, before she departed, encouraged staffers to remain and make sure the “truth” stayed in EPA’s analyses.

Unfortunately, numerical models don’t cough up the truth, just someone’s version of it. Those who don’t take the time to understand all of this become victims, reduced to parroting others’ versions of the truth. On the other hand, refusing even to consider data and science and facts is completely wrong-headed. That is ignorance, as any model of human behavior will tell you.

So this happened! Painting By Numbers won a GOLD “IPPY” from Independent Publisher magazine. Think Oscar, Emmy, or Tony for Indie, small press, and academic publishers. Awards are presented in conjunction with Book Expo America, this year in NYC. Of course, I wouldn’t pass up a chance to return to my old stomping grounds. Get yours here!

Always feels good to support your local independent bookseller!

Or from the big dog here:

 


This morning on C-Span, the editor of a prominent politics and culture magazine stated that there were seven health care lobbyists for every member of Congress! That’s right – 7. So, of course, I went to validate this number.

The figures I turned up, based on a cursory scan of Google results with different sets of key words, ranged from six to thirteen (!) health care lobbyists per member of Congress between 2002 and 2013. I couldn’t find more recent figures. I presume the number fluctuates depending on how “hot” certain legislation and bills are before our elected officials.

BUT… even one health care lobbyist for every Senator and Representative would be a frightening number. In other words, the figure could be off by a factor of seven and still make me ill. Imagine what other industries, like energy, financial services, and defense contracting, have for lobbyists.

When you think about the numbers shaping our lives (the tag line for my latest book, Painting By Numbers: How to Sharpen Your BS Detector and Smoke Out the Experts), this is one that looms larger than many others.

As well-intentioned as it may be, here’s the kind of article Painting By Numbers was written for. The author describes how risk valuation techniques from financial engineering can be applied to assess the future risks of global climate change, techniques used by the federal government. I won’t get into the analysis, just point out that, according to the author, the Obama administration pegged the “social cost of carbon” at $40/ton (of CO2) and the Trump administration is on a path to calculate it as $5/ton.

That is one hell of a change! Here’s my question: How valid is the methodology regardless of which number you subscribe to? This is the essential stumbling block when models are used to provide a numerical framework in the present for things that might happen well into the future. In this case, as the author explains, everything depends on the “discount rate” plugged into the model.

Writes the author: “A concept known as the discount rate makes it possible to translate future damages into their present value. In 2009, President Obama convened an interagency working group, of which I was a co-leader, to come up with a uniform method for estimating the social cost of carbon: the resulting number to be used across all federal agencies. Our group chose to emphasize estimates based on a discount rate of 3 percent.”

So now we have the key assumption, based on Commandment No. 2 in my book.

But why did they choose to emphasize estimates with a discount rate of 3 percent? No explanation is given. Without an explanation, this can’t be a “uniform method” but instead the “preferred” method for this group, a group given lots of power and influence in the last administration. And now we have another group in charge with different preferences, and their assumption apparently is a discount rate of 7 percent.
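Here’s how much work that single assumption does. A minimal sketch of present-value discounting (illustrative only; real SCC models aggregate damages year by year over long horizons):

```python
# Present value of future climate damage under different discount rates.
# Illustrative only; actual SCC models sum discounted damages across decades.

def present_value(future_damage, rate, years):
    return future_damage / (1 + rate) ** years

for rate in (0.03, 0.07):
    pv = present_value(100, rate, 80)  # $100 of damage 80 years from now
    print(f"discount rate {rate:.0%}: ${pv:.2f} today")
```

At 3 percent, $100 of damage 80 years out is worth about $9 today; at 7 percent, about 45 cents. A twenty-fold swing in the social cost of carbon falls out of that one parameter.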

This isn’t an argument one way or another about the impact of global climate change and what we should be doing about it. I’m simply illuminating how those in charge are able to wield the results of their math models with impunity, unless we all become more engaged in assessing the validity of their methods.

https://www.nytimes.com/…/what-financial-markets-can-teach-…

The latest of my irregular posts elaborating on my new book, Painting By Numbers…
 
It’s only fair to recognize when numbers are reported appropriately. I’d started hearing about the mysteriously accepted “10,000 steps daily” number several years ago, when wearable fitness devices started getting attention. Seemingly overnight, 10,000 became to the exercise world what pi is to the world of math.
 
This morning, I read an article about 15,000 steps a day! The article reviews a study of Scottish postal workers – those walking to deliver the mail and those in sedentary back-office positions – conducted to determine any association with risk of heart and other diseases.
 
The first thing I learned is that the 10,000 number has never been scientifically validated as a means of reducing health risk, though I’m certain it has been for selling fitness products. The study did reveal, for this small sample size, that a high level of activity does indeed show an association with reduced health risk, and that the more activity, the lower the risk. The reason I am pointing out the article, though, is that it makes clear there is merely an ASSOCIATION between activity and health, adhering to Commandment Eight in Painting By Numbers, Suspect the (Co)mpany They Keep (i.e., making a bright-line distinction among co-location, coincidence, correlation, causation, convergence, and consensus). The article further underscores the study’s other limitations. I like the conclusion: “…the findings imply there are good reasons to get up and move…” Rather than push toward the realm of certainty, the article makes clear that there is only an “implication.”
 
Exercise is kind of the reverse of smoking. Anyone who smoked cigarettes or still does probably knows just by what is happening to their body that the habit is terrible for you. By the same token, I’ve never heard anyone, unless they injure themselves, come back from vigorous exercise and say, “Well, damn, I feel worse!”
 
https://www.nytimes.com/2017/03/22/well/move/should-15000-steps-a-day-be-our-new-exercise-target.html?rref=collection%2Fbyline%2Fgretchen-reynolds&action=click&contentCollection=undefined&region=stream&module=stream_unit&version=latest&contentPlacement=2&pgtype=collection

Painting By Numbers is a book I’ve wanted to write for a long time. BUT, as I begin the roll-out of its promotional campaign, I first want to acknowledge the books which should serve, with Painting By Numbers, as a syllabus of sorts. These are books which inspired me to write my own, expand on the topics I raise, and address the issues of numerical uncertainty in specific industries and sectors. All of them should be on your radar if you are passionate about this topic. My hope is that, after readers get acquainted with the concepts at the elementary, anecdotal level I present them, they will move on to the deeper and broader treatments available from these experts. Links to their Amazon pages are provided for convenience.

 

The Signal and the Noise, Nate Silver, Penguin Group, New York, New York, 2012.

This book should be considered a modern bible on the limitations of forecasting and prediction, but also on how prediction can be improved. I’ve recommended it to many friends and several have taken me up on it. If I ever teach a class on this subject, I will warm up the students with Painting by Numbers and then use The Signal and the Noise as the main text. The breadth of Silver’s topics and discussion points is, well, breathtaking. He tackles numerical analysis in baseball, election polling, climate change, gambling, weather forecasting (different from climate change), epidemics, financial markets, chess, and much more. If my work is known for one thing, I hope it will be that it achieved more with respect to brevity and simplification. The Signal and the Noise is an investment of time and brain cells but well worth the sacrifice of both.

 

Mindware: Tools for Smart Thinking, Richard E. Nisbett, Farrar, Straus and Giroux, New York, New York, 2015.

I reference Mindware in the text, because of the author’s unabashed warnings regarding the limitations of multiple regression analysis (MRA), perhaps the most prevalent numerical analysis conducted in research (especially the social sciences). Nisbett also observes that “our approach to hypothesis testing is flawed in that we’re inclined to search only for evidence that would tend to confirm a theory while failing to search for evidence that would tend to disconfirm it.” Nisbett’s book is very readable. While his focus is on reasoning in general, experiments, and the philosophy of knowledge, his central question is very similar to mine: How well do we know what we know?


The Laws of Medicine: Field Notes from an Uncertain Science, Siddhartha Mukherjee, TED Books/Simon & Schuster, New York, 2015.

This slim volume, by the Pulitzer Prize-winning author of The Emperor of All Maladies, reveals why “the laws of medicine are really laws of uncertainty, imprecision, and incompleteness.” They are, in fact, the “laws of imperfection.” Probably the greatest piece of wisdom I got from this book is that even a perfect experiment is not necessarily generalizable. In other words, even if all of your statistics prove that your experiment ran perfectly, that doesn’t mean your results can be extrapolated to larger or different populations or even repeated for an identical sample.

In my view, the medical profession is particularly rife with arrogance and an inability to face the limits of certainty. Mukherjee courteously holds the collective profession up in front of a mirror, pointing out the flaws in what he concedes is a relatively young area of science.


Willful Ignorance: The Mismeasure of Uncertainty, Herbert Weisberg, John Wiley & Sons Inc, Hoboken, NJ, 2014.

Weisberg tackles the subject of uncertainty from the perspective of the general process of scientific discovery and uses engaging stories about scientists and “thinkers” throughout history to illustrate his points. Like Nisbett, he also thinks statistical analysis has approached “a crisis” (paraphrasing the back flap copy). One of his central tenets is that “this technology for interpreting evidence and generating conclusions has come to replace expert judgment to a large extent. Scientists no longer trust their own intuition and judgment enough to risk modest failure in the quest for great success.” And this corollary: “Instead of serving as an adjunct to scientific reasoning, statistical methods today are widely perceived as a corrective to the many cognitive biases that often lead us astray.” It isn’t the role of science to provide answers; it’s to refine the questions. It’s a readable text but falls squarely between an academic textbook and one attempting to popularize science concepts.

 

Automate This, Christopher Steiner, Portfolio/Penguin, New York, New York, 2012.

The book’s subtitle, “How Algorithms Came to Rule Our World,” suggests that Steiner’s focus is how human activities are being automated through bots governed by algorithms. “Algorithms,” he writes, 

“operate much like decision trees, wherein the resolution to a complex problem, requiring consideration of a large set of variables, can be broken down to a long string of binary choices.” 

Binary choices are ones computers can make. But this statement also shows that an algorithm is just another form of numerical analysis. Of all the books I recommend, Steiner’s scares me the most. Consider this: 

“Of the nearly one billion users in Facebook’s system, the company stores up to a thousand pages of data, including the type of computer you use, your political views, your love relationships, your religion, last location, credit cards…” (Remember, it was published in 2012). Think about that with respect to the privacy and national security debate.

At one time, the federal government forced AT&T to cooperate for national security in ways no one wants to remember. Now, imagine when the social media sites of our modern world have your information wrong, when they have drawn the wrong conclusions from your digital footprints! Steiner also describes a company which has developed a bot that “sucks in box scores from sporting events, identifies the most relevant aspects, and writes a story built around those aspects of the game.” Is this the end of sports journalism as we know it?

 

Models.Behaving.Badly.: Why Confusing Illusion with Reality Can Lead to Disaster, on Wall Street and in Life, Emanuel Derman, Free Press/Simon & Schuster, New York, New York, 2011.

Derman is a physicist turned Wall Street “quant” and was one of a plethora of authors weighing in on the financial crisis and great recession of 2007/2008. Derman brings into the discussion the idea of models and metaphors: 

“Models stand on someone else’s feet. They are metaphors that compare the object of their attention to something else that it resembles. Resemblance is always partial, and so models necessarily simplify things and reduce the dimensions of the world.” 

But this later quote is priceless in its utility for understanding: “Once you understand that a model isn’t the thing but rather an exaggeration of one aspect of the thing, you will be less surprised at its limitations.” 

This is similar to what Nisbett is trying to convey about MRA, which limits the researcher to one aspect of the thing, and thus loses the context of all the other influences on that one thing (e.g., a measured, independent variable). Although Derman focuses (mostly) on financial models, he explains very well the limitations of models for economics, global climate, and other broad situations compared to those used in physics.


An Engine, Not a Camera, Donald MacKenzie, The MIT Press, Cambridge, Mass., 2006.

If more people read and understood MacKenzie’s account of his deep research into valuation models for financial derivatives and the inner workings of financial markets, the world of investment would probably be very different. MacKenzie shines a bright light on the purpose of most models – to create a version of reality and then capitalize on that reality. In this case, MacKenzie argues persuasively that the Black-Scholes model for options pricing, which did indeed by most accounts change the field of finance, was developed to drive a market (engine) rather than reflect a market (camera). His analysis lends evidence to a broader contention: that the “invisible hand” of the market is anything but, that markets are deliberately constructed for the entities which will participate in that market.

To my way of thinking, An Engine, Not a Camera is about uncertainty at its highest level, as it casts doubt on the entire notion of a “free market.” “Markets are not forces of nature, they are human creations,” he writes. To which I would add (as I suggest in the chapter on business models): models today are primarily used to create new markets and new realities, not to expand our understanding of the human condition.

 

Useless Arithmetic, Orrin Pilkey and Linda Pilkey-Jarvis, Columbia University Press, New York, 2007. 

This is an example of a book that focuses on a specific field of applications identified in the subtitle, “Why Environmental Scientists Can’t Predict the Future.” This quote sums up what you are going to learn from the Pilkeys: “The reliance on mathematical models has done tangible damage to our society in many ways. Bureaucrats who don’t understand the limitations of modeled predictions often use them.” Even if you consider yourself an environmentalist, Useless Arithmetic is very useful for understanding how math models are used and abused.

 

Merchants of Doubt, Naomi Oreskes and Erik M Conway, Bloomsbury Press, New York, 2010.

As I note, uncertainty is something used to create doubt. In particular, the authors take aim at scientists and researchers pressed into service (and well paid) to blow up what is left of scientific uncertainty on highly charged political and cultural issues to impede progress on the issues of the day. They go as far as to accuse such experts of turning doubt into a “product.” The issues they tackle include smoking and cancer, the ozone hole, global warming, acid rain, and other ecological issues. Unlike many of the other books listed, the authors assess the public and political debates around these issues, not the scientific method. The health effects of smoking were turned into a great debate, funded by “big tobacco,” after the scientific evidence had all but settled the question, the authors assert. Among the important tenets of wisdom imparted is that balance in reporting means giving not equal weight to both sides but accurate weight to both sides. Some “sides” represent deliberate disinformation spread by well-organized and well-funded vested interests, or ideological denial of the facts.

 

Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Cathy O’Neil, Crown, New York, 2016.

You’ve probably inferred from the title that this book aims to be provocative and incendiary first. It certainly accomplishes that. O’Neil tackles data and modeling through the prism of social justice and power structures. But her metaphor is precious because it reveals how models evolve into WMDs. One of her best examples is the US News & World Report college ranking system. Over several decades, it became the standard for college rank and therefore the object of intense manipulation so that colleges could improve on the rank. She observes (correctly, in my mind) that all of the emphasis on the rating and its following among parents doesn’t do a damn thing for the quality of education. When a school’s objective becomes figuring out how to “game the ranking,” it’s no different than my attempt to game my rankings of colleges to favor the school I had already selected, as I illustrated in the opening chapter. O’Neil applies her analysis to getting insurance, landing a job, obtaining credit, on-line advertising, and other aspects of unfairness in modern life.

 

An Introduction to Mathematical Modeling, Edward A Bender, Dover Publications, Mineola, NY,  1978. 

Here, the term “introduction” refers to very mathematics-intensive theory and applications, optimization routines, and probabilities. The first chapter, “What is Modeling?” does a good job of laying the groundwork for those who wish to skip the math. 

 

Measurements and Their Uncertainties, Ifan G. Hughes and Thomas P.A. Hase, Oxford University Press, Oxford, England, 2010.

This book, focused on error in the physical sciences, also gets complicated in a hurry, but again, the first chapter is well structured and offers good foundational material. It starts with the overriding point that “there will always be error associated with that value due to experimental uncertainties.” It goes on to classify uncertainties as random errors, systematic errors, and mistakes. While most discussions of uncertainty and error (mine included) focus on extrapolation – extending a curve fit past the original measured data, or making inferences into the future using data from the past – this book reminds us that interpolation can be just as insidious. Interpolation refers to assuming the shape of the curve or line or graph between the measured data points. While this is a textbook, it is graphically rich rather than mathematically intensive (the authors assume that computers will be doing most of the math).
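A toy illustration of that last point (mine, not the authors’): sample a sine wave exactly at its zero crossings, and linear interpolation will report roughly zero everywhere, while the true value between measurements swings to ±1. The shape of the curve between data points is an assumption, not a measurement.

```python
import numpy as np

# Toy example: measurements taken at a sine wave's zero crossings interpolate
# to roughly zero everywhere, missing the +/-1 swings between the points.
x_measured = np.arange(4) * np.pi   # 0, pi, 2pi, 3pi: all zero crossings
y_measured = np.sin(x_measured)     # ~0 at every measured point

x_query = np.pi / 2                 # midway between the first two measurements
print(f"interpolated: {np.interp(x_query, x_measured, y_measured):+.3f}")  # ~0
print(f"true value:   {np.sin(x_query):+.3f}")                             # +1
```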

 

Interpreting Data, Peter M Nardi, Pearson Education Inc, Boston, 2006.

This book keeps to the straight and narrow of how data analysis is applied in experiments. It notes in the introduction that “it is written in non-technical everyday language…” With passages like “Pearson r correlations are for interval or ratio levels of measurement…Many researchers, however, use these correlations for dichotomies and for ordinal measures, especially if there are equal-appearing intervals,” I’m not convinced of the everyday language. Nevertheless, I found it useful as a refresher on how experiments are designed, data taken, results analyzed, and conclusions drawn.

 

A Demon of Our Own Design, Richard Bookstaber, John Wiley & Sons Inc, Hoboken, NJ, 2007; and Lecturing Birds on Flying, Pablo Triana, John Wiley & Sons, Hoboken, NJ, 2009.

Both of these books are focused on financial engineering and were blessed in being well-timed with the collapse of financial markets and the world economy. They cover similar territory and both insinuate that financial markets are imperiled by the way modeling is applied. The subtitle for Demon is “Markets, Hedge Funds, and the Perils of Financial Innovation,” and the subtitle for Lecturing Birds is “Can Mathematical Theories Destroy the Financial Markets?” However, everything I read tells me that things have only gotten worse, so unless you are seeking recent historical perspective, I’d supplement these two books with some more recent titles.

 

20% Chance of Rain, Richard B Jones, Amity Works, Connecticut, 1999.

This book wants to be “Your Personal Guide to Risk,” as its subtitle urges. Written by an industry colleague in my consulting work, who spent decades in the machinery insurance business, it’s not really about modeling or uncertainties, but about risk and how we measure risk through probabilistic assessment. Jones stresses the uncertainty boundaries around any risk assessment and that “perception creates risk reality.” He also offers this bit of timeless wisdom: “Statistics do not, and cannot, prove anything. The field of statistics is incapable of this. Statistics can provide information to help us make decisions, but the decisions are still ours to make.” Today, statistics and numerical analysis in general are being used so decisions can be made for us (automation, digital algorithms, market construction, even on-line dating and hookups). We’d better all have a thorough understanding of their limitations.

 

 
