On the Moral and Intellectual Bankruptcy of Risk Analysis: Garbage In, Garbage Out

Don Howard

For decades, risk analysis has been the main tool guiding policy in many domains, from environment and public health to workplace and transportation safety, and even to nuclear weapons. One estimates the costs and benefits of various courses of action and their conceivable outcomes, typically in the form of human suffering and well-being, though other goods, like tactical or strategic advantage, might dominate in some contexts. One also estimates the likelihoods of those outcomes. One multiplies probability by cost or benefit and then sums over all outcomes to produce an expected utility for each course of action. The course of action that maximizes benefit and minimizes harm is recommended. In its most general form, we call this “cost-benefit analysis.” We call it “risk analysis” when our concern is mainly with the downside consequences, such as the risk of a core meltdown at a nuclear power facility.
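To make the bookkeeping concrete, here is a minimal sketch, in Python, of the expected-utility calculation just described; the courses of action, outcomes, probabilities, and utility scores are invented purely for illustration.

```python
# Minimal sketch of the expected-utility calculation described above.
# The actions, outcomes, probabilities, and utility scores (negative = harm)
# are hypothetical numbers chosen only to illustrate the arithmetic.

def expected_utility(outcomes):
    """Sum probability-weighted utilities over all cataloged outcomes."""
    return sum(p * u for p, u in outcomes)

# Each course of action is a list of (probability, utility) pairs.
actions = {
    "build the facility": [(0.949, 100.0), (0.050, -20.0), (0.001, -50_000.0)],
    "do nothing":         [(1.000, 0.0)],
}

for name, outcomes in actions.items():
    print(f"{name}: expected utility = {expected_utility(outcomes):.1f}")
```

On this toy accounting, “build the facility” wins or loses depending entirely on the numbers one feeds in, which is the point pursued below.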

I have long been uneasy about the pseudo-rationalism of such analyses. An elaborate formal apparatus conveys the impression of rigor and theoretical sophistication, whereas the widely varying conclusions – one can “justify” almost any policy one chooses – suggest a high degree of subjectivity, if not outright agenda-driven bias. But my recent entanglement in the debate about the risks and benefits of gain-of-function (GOF) experiments involving pathogens with pandemic potential (PPP) moves me to think and say more about why I am troubled by the risk analysis paradigm (Casadevall, Howard, and Imperiale 2014).

The H1N1 Influenza Virus

The essential point is suggested by my subtitle: “Garbage In, Garbage Out.” Let’s think about each piece of a cost-benefit analysis in turn. Start with cost and benefit in the form of human suffering and well-being. The question is: “How does one measure pain and pleasure?” Is a full belly more pleasurable than warmth on a cold winter’s night? Is chronic pain worse than fear of torture? There are no objective hedonic and lupenic metrics. And whose pain or pleasure counts for more? Does the well-being of my immediate community or nation trump that of other peoples? Does the suffering of the living count more heavily than the suffering of future generations? Most would say that the unborn get some kind of vote. But, then, how can we estimate the numbers of affected individuals even just twenty years from now, let alone fifty or one hundred years in the future? And if we include the welfare of too many generations, then our own, contemporary concerns disappear in the calculation.

Next, think about cataloging possible outcomes. Some consequences of our actions are obvious. Punch someone in anger and you are likely to cause pain and injury both to your victim and yourself. We could not function as moral agents were there not some predictability in nature, including human nature and society. But the obvious and near-term consequences form a subset of measure zero within the set of all possible consequences of our actions. How can we even begin to guess the distant and long-term consequences of our actions? Forget chaos theory and the butterfly effect, though those are real worries. Who could have predicted in 1905, for example, that Einstein’s discovery that E = mc² contained within it the potential for annihilating all higher organic life on earth? Who could have foreseen in the 1930s that the discovery of penicillin, while saving millions of lives in the near term, carried with it the threat of producing super-bacteria, resistant to all standard antibiotics, risking many more deaths than penicillin itself ever prevented? A risk analysis is only as good as one’s catalog of possible outcomes, and history teaches us that we do a very poor job of anticipating many of the most important.

Then think about estimating the probabilities of outcomes. Some of these estimates, such as estimating the probability of injury or death from driving a car five miles on a sunny day with light traffic, are robust because they are data driven. We have lots of data on accident rates with passenger vehicles. But when we turn to the exceptional and the unusual, there is little or no data to guide us, precisely because the events in question are so rare. We cannot even reliably estimate the risk of injury accidents from transporting oil by rail: such transport used to be uncommon but is now very common, and the scant evidence from past experience does not scale in any clear-cut way to the new oil transportation economy. Would pipeline transportation be better or worse? Who knows? When real data are lacking, one tries reasoning by analogy to other, relevantly similar practices. But who can define “relevantly similar”?

It is especially when one comes to extremely rare events, such as a global pandemic, that the whole business of making probability estimates collapses in confusion and disarray. By definition, there is no data on which to base estimates of the probabilities of one-of-a-kind events. Doing it by theory, instead of basing the estimate on data, is a non-starter, for in a vacuum of data there is also a vacuum of theory, theories requiring data for their validation. We are left with nothing but blind guessing.

Put the pieces of the calculation back together again. There are no objective ways of measuring human suffering and well-being. We cannot survey all of the possible outcomes of our actions. Our probability estimates are, in the really important cases, pure fiction. The result is that one can easily manipulate all three factors – measures of pain and pleasure, outcome catalogs, and probability estimates – to produce any result one wishes.

And therein lies both the moral and the intellectual bankruptcy of risk and cost-benefit analysis.

But it’s worse than that. It’s not just that such analyses can be manipulated to serve any end. There is also the problem of deliberate deception. The formal apparatus of risk and cost-benefit analysis – all those graphs and tables and numbers and formulas – creates a pretense of scientific rigor where there is none. Too often that is the point, to use the facade of mathematical precision in order to quash dissent and silence the skeptic.

Back to rare and catastrophic events, like a possible global pandemic produced by a GOF/PPP experiment gone awry. What number to assign to the suffering? However low one’s probability estimate – and, yes, the chances of such a pandemic are low – the catastrophic character of a pandemic gets a suffering score that sends the final risk estimate off the charts. But wait a minute. Didn’t we just say that we cannot anticipate all of the consequences of our actions? Isn’t it possible that any course of action, any innovation, any discovery could lead to a yet unforeseen catastrophe? Unlikely perhaps, but that doesn’t matter, because the consequences would be so dire as to overwhelm even the lowest of probability estimates. Best not to do anything. Which is, of course, an absurd conclusion.
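To see how that arithmetic plays out, here is a small sketch, with numbers invented solely for illustration, of how an assigned catastrophic harm can swamp any benefit no matter how small the probability estimate is made.

```python
# Sketch of how a catastrophic harm score overwhelms the calculation.
# The probability, benefit, and harm figures are invented for illustration.

p_catastrophe = 1e-9          # a deliberately tiny probability estimate
expected_benefit = 1_000.0    # a modest, well-understood benefit

for assigned_harm in (1e6, 1e9, 1e12, 1e15):
    net = expected_benefit - p_catastrophe * assigned_harm
    print(f"assigned harm {assigned_harm:.0e}: net expected value = {net:,.1f}")

# However small p_catastrophe is made, choosing a large enough harm score
# drives the net expected value negative, so the conclusion is fixed by the
# numbers one chooses rather than by anything the analysis discovers.
```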

This “apocalypse fallacy,” the invocation of possible catastrophic consequences that overwhelm the cost-benefit calculation, is an all-too-common trope in policy debates. Should nuclear weapons have been eliminated immediately upon the discovery that a “nuclear winter” was possible? There are good arguments for what’s now termed the “nuclear zero” option, but this is not one of them. Should the remote possibility of creating a mini-black hole that would swallow the earth have stopped the search for the Higgs boson at CERN? Well, sure, do some more calculations and theoretical modeling to fix limits on the probability, but don’t stop just because the probability remains nonzero.

So when the pundits tell you not to invest in new nuclear power generation technologies as the surest and quickest route to a green energy economy because there is a chance of a super-Fukushima nightmare, ignore them. They literally don’t know what they’re talking about. When a do-gooder urges the immediate, widespread use of an Ebola vaccine that has not undergone clinical trials, arguing that the chance of saving thousands, perhaps even millions of lives, outweighs any imaginable untoward consequences, ignore him. He literally does not know what he’s talking about. Does this mean that we should rush into nuclear power generation or that we should refuse the cry for an Ebola vaccine from Africa? Of course not, in both cases. What it means is that we should choose to do or not do those things for good reasons, not bad ones. And risk or cost-benefit arguments, especially in the case of rare eventualities, are always bad arguments.

It would be wrong to conclude from what I’ve just argued that I’m counseling that we throw caution to the wind and do whatever we damned well please. No. The humble, prudential advice is still good, the advice that one think before acting, that one consider the possible consequences of one’s actions, and that one weigh the odds as best one can. It’s just that one must be aware and wary of the agenda-driven abuse of such moral reflection in the pseudo-rational form of risk and cost-benefit analysis.

That said, there is still value in the intellectual regimen of risk and cost-benefit analysis, at least in the form of the obligation entailed by that regimen to be as thorough and as objective as one can in assaying the consequences of one’s actions, even if the exercise cannot be reduced to an algorithm. But that is just another way of saying that to the moral virtue of prudence must be added the intellectual virtues (which are also moral virtues) of honesty and perseverance.

Acknowledgment

Sincere thanks to Arturo Casadevall and Michael Imperiale for conversations that sharpened and enriched my thinking about this issue.

Reference

Casadevall, Arturo, Don Howard, and Michael J. Imperiale (2014). “An Epistemological Perspective on the Value of Gain-of-Function Experiments Involving Pathogens with Pandemic Potential.” mBio 5(5): e01875-14. doi:10.1128/mBio.01875-14. http://mbio.asm.org/content/5/5/e01875-14.full

The Scientist qua Scientist Has a Duty to Advocate and Act

Don Howard

The new AAAS web site on climate change, “What We Know,” asserts: “As scientists, it is not our role to tell people what they should do or must believe about the rising threat of climate change. But we consider it to be our responsibility as professionals to ensure, to the best of our ability, that people understand what we know.” Am I the only one dismayed by this strong disavowal of any responsibility on the part of climate scientists beyond informing the public? Of course I understand the complicated politics of climate change and the complicated political environment in which an organization like AAAS operates. Still, I think that this is an evasion of responsibility.

Contrast the AAAS stance with the so-called “Franck Report,” a remarkable document drawn up by refugee German physicist James Franck and colleagues at the University of Chicago’s “Metallurgical Laboratory” (part of the Manhattan Project) in the spring of 1945 in a vain effort to dissuade the US government from using the atomic bomb in a surprise attack on a civilian target. They started from the premise that the scientist qua scientist has a responsibility to advise and advocate, not just inform, arguing that their technical expertise entailed an obligation to act:

“The scientists on this project do not presume to speak authoritatively on problems of national and international policy. However, we found ourselves, by the force of events, during the last five years, in the position of a small group of citizens cognizant of a grave danger for the safety of this country as well as for the future of all the other nations, of which the rest of mankind is unaware. We therefore feel it is our duty to urge that the political problems, arising from the mastering of nuclear power, be recognized in all their gravity, and that appropriate steps be taken for their study and the preparation of necessary decisions.”

James Franck. Director of the Manhattan Project’s Metallurgical Laboratory at the University of Chicago and primary author of the “Franck Report.”

I have long thought that the Franck Report is a model for how the scientist’s citizen responsibility should be understood. At the time, the view of the signatories to the Franck Report stood in stark contrast to J. Robert Oppenheimer’s view that the scientist’s responsibility is only to provide technical answers to technical questions. Oppenheimer wrote: “We didn’t think that being scientists especially qualified us as to how to answer this question of how the bombs should be used” (Jungk 1958, 186).


J. Robert Oppenheimer. Director of the Manhattan Project.

The key argument advanced by Franck and colleagues was, again, that it was precisely their distinctive technical expertise that entailed a moral “duty . . . to urge that the political problems . . . be recognized in all their gravity.” Of course they also urged their colleagues to inform the public so as to enable broader citizen participation in the debate about atomic weapons, a sentiment that eventuated in the creation of the Federation of American Scientists and the Bulletin of the Atomic Scientists. The key point, however, was the link between distinctive expertise and the obligation to act. Obvious institutional and professional pressures rightly enforce a boundary between science and advocacy in the scientist’s day-to-day work. Even the cause of political advocacy requires a solid empirical and logical foundation. But that there might be extraordinary circumstances in which the boundary between science and advocacy must be crossed seems equally obvious. And one is hard-pressed to find principled reasons for objecting to that conclusion. Surely there is no easy argument leading from scientific objectivity to a disavowal of any such obligations.

Much of the Franck Report was written by Eugene Rabinowitch, who went on to become a major figure in the Pugwash movement, the leader of which, Joseph Rotblat, was awarded the 1995 Nobel Peace Prize for his exemplary efforts in promoting international communication and understanding among nuclear scientists from around the world during the worst of the Cold War. The seemingly omnipresent Leo Szilard also played a significant role in drafting the report, and since 1974 the American Physical Society has given an annual Leo Szilard Lectureship Award to honor physicists who “promote the use of physics to benefit society.” Is it ironic that the 2007 winner was NASA atmospheric physicist James E. Hansen, who has become controversial in the climate science community precisely because he decided to urge action on climate change?

That distinctive expertise entails an obligation to act is, in other settings, a principle to which we all assent. An EMT, even when off duty, is expected to help a heart attack victim precisely because he or she has knowledge, skills, and experience not common among the general public. Why should we not think about scientists and engineers as intellectual first responders?

Physicists, at least, seem to have assimilated within their professional culture a clear understanding that specialist expertise sometimes entails an obligation to take political action. That fact will, no doubt, surprise many who stereotype physics as the paradigm of a morally and politically disengaged discipline. There are many examples from other disciplines of scientists who have gone so far as to risk their careers to speak out in service to a higher good, including climate scientists like Michael Mann, who recently defended the scientist’s obligation to speak up in a blunt op-ed in the New York Times, “If You See Something, Say Something.” The question remains why, nonetheless, the technical community has, for the most part, followed the lead of Oppenheimer, not Franck, when, in fact, our very identity as scientists does, sometimes, entail a moral obligation “to tell people what they should do” about the most compelling problems confronting our nation and our world.

Reference

Jungk, Robert (1958). Brighter than a Thousand Suns: A Personal History of the Atomic Scientists. New York: Harcourt, Brace and Company.

How to Talk about Science to the Public – 2. Speak Honestly about Uncertainty

Don Howard

We are all Humeans, all of us who are trained in science, at least. We know that empirical evidence confers at most high probability, never certainty, on a scientific claim, and this no matter how sophisticated the inductive logic that we preach. Enumerative induction doesn’t do it. That the sun rose every day in recorded history and before does not imply that it will, of necessity, rise tomorrow. Inference to the best explanation doesn’t do it, for such inferences depend on a changing explanandum (that which is to be explained), an obscure quality metric (what makes one explanation “better” than another), and a never-complete reference class of competing explanations. Bayes’s theorem can’t do it either.

No. All of us who are trained in science know that every theory, principle, law, and observation is open to challenge and that many once thought secure now populate the museum of dead theories. Sophisticated philosophers of science have invented the intimidating name “pessimistic meta-induction” for the thesis that, just as all theories in the past have turned out to be false or significantly limited in scope, so, too, most likely, will our current best and future science.

No. We all know that science is a matter of tentative hypotheses and best guesses. Some principles that have proven their mettle over the long haul, such as the conservation of energy, rightly earn our confidence that they can be reliable guides in the future. But more than one scientist has been willing to sacrifice even the conservation of energy if that were the price to solve another intractable riddle, as when Niels Bohr twice proposed theories that assumed violations of energy conservation.

That science does not deal in certainty is a major part of what makes it such a precious cultural achievement. Science is not dogma. Science admits its failings and learns from its mistakes. That it does so is key to how it achieved the dramatic expansion of scientific understanding that we have witnessed at least since the Renaissance.

Why, then, do we have so much trouble speaking honestly to the public about uncertainty? Why, when debating on the campaign trail, do we give in to the temptation to describe anthropogenic climate change as “proven fact”? Why, when on the witness stand, do we feel the need to assert that a Darwinian story about human origins is established “beyond all reasonable doubt”? We have lots of good reasons for believing in human-caused climate change and Darwinian evolution. Few scientific claims are as well established as these. But about both we might be wrong in some as yet unforeseen or unforeseeable way. Why lie? Why not speak honestly?

There are at least two reasons why, when speaking to the public, we so often seek refuge in the rhetoric of proof and truth. The first is that we wrongly think that the scientific laity cannot understand uncertainty and probability. This is one of the most worrisome ways in which we insult the intelligence of our audience.

That lots of us – scientists and non-scientists alike – make lots of inductive and probabilistic mistakes is obvious. Casinos, state lotteries, and race tracks are all the evidence one needs. They profit only thanks to those mistakes. Nor are any of us rational utility maximizers, soundly weighing expected gains and losses against the probabilities of various outcomes. The stock market provides the relevant evidence here.

But the fact that lots of people make inductive errors doesn’t imply that the educated public can’t deal with uncertainty. We all deal with uncertainty all the time, and, in the main, we do a good job with it. Do I take I-294 or the Skyway, the Dan Ryan, and the Kennedy to O’Hare? What are the odds of congestion on each at this time of day? How much of a time cushion do I have? What are the consequences of being early or late? How likely am I to miss my flight if there is a ten-minute delay, a twenty-minute delay, or an hour-long delay? Chance of rain? Do I take the umbrella or also my overcoat? Much of life is like this. We make mistakes, but we get by, don’t we?

Naomi Oreskes and Erik Conway. Merchants of Doubt. Bloomsbury Press, 2010.

The second major reason why we retreat to the rhetoric of proof and truth is that we allow ourselves to be intimidated by the merchants of doubt.* The political exploitation of uncertainty to create the illusion of scientific dissensus and thereby stymie policy making on global warming, public health, energy, and other issues is now, itself, big business. There are lobbying firms, fictitious “think tanks,” corporate public relations offices, sham public interest groups, and members of Congress who might as well be paid spokespersons. Much of the same kind of apparatus is encountered in the “debates” over evolution and intelligent design. Acknowledge uncertainty, and that becomes the wedge by means of which the illusion of scientific controversy can be created where there is, in fact, no controversy. What is to be done?

What is not to be done is misrepresenting the contingency of science. It is a mistake to confront the merchants of doubt with the pretense of certainty and proof. The right response is to trust the public to understand the weighing of evidence and the adjustment of policy to the strength of the evidence. The right response is, simply and clearly, to present the evidence. To be sure, climate modeling and population genetics involve sophisticated statistical tools that cannot be explained in detail in a few sentences. But with only a bit of effort one can usually explain the general issue in an accessible manner.

A good example of making probabilities accessible is the recent reporting on the hunt for the Higgs boson with the Large Hadron Collider at CERN. Any reader of the New York Times or the Wall Street Journal now knows the expressions “three-sigma” and “five-sigma.” A tutorial on calculating standard deviations was not needed to communicate the point that, when sorting through oceans of data, looking for truly exceptional events, one wants to be sure that what one is seeing is more than what would be expected from random fluctuations. People understand this. If the roulette ball lands on 36 twice in a row, one is mildly surprised but doesn’t accuse the croupier of cheating. If it lands on 36 five times in a row, then it’s time to ask to see the manager.
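For readers who want the numbers, here is a small sketch of the probabilities behind these examples; it assumes a single-zero European roulette wheel with 37 pockets, since the wheel is not specified above, and uses the standard one-sided normal tail for the sigma thresholds.

```python
# Probabilities behind the sigma and roulette examples above.
# Assumes a single-zero European wheel (37 pockets); illustrative only.
from math import erf, sqrt

def one_sided_tail(sigma):
    """Probability of a normal fluctuation at least `sigma` standard
    deviations above the mean (one-sided tail)."""
    return 0.5 * (1.0 - erf(sigma / sqrt(2.0)))

for s in (3, 5):
    print(f"{s}-sigma one-sided tail probability: {one_sided_tail(s):.2e}")

p = 1 / 37  # chance the ball lands on 36 on a single spin
print(f"36 twice in a row:      {p**2:.2e}")  # mildly surprising
print(f"36 five times in a row: {p**5:.2e}")  # time to see the manager
```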

No contentious policy questions turn on the results from CERN, so perhaps it is easier for us to speak about uncertainty in this context. But if we can educate the public about statistics in particle physics, surely we can do it as well when the topic is flu epidemics or vehicle safety or climate change. Here is the evidence for increased global temperatures over the last century. Here is what the models predict for increased sea levels. Here is our degree of confidence in these predictions. Now let’s talk about the costs and benefits of different courses of action. Be firm. Be clear. Don’t be afraid to call a lie a “lie” when others misrepresent the evidence or misdescribe the models. But trust the public to follow the logic of the science if we do a good enough job of explaining that logic.

There might be one final reason why we too often retreat to the rhetoric of proof and truth, a reason that I’ll just mention here, saving a fuller discussion for another occasion. It is that too many of us were, ourselves, badly trained in science. Too many textbooks, too many courses, and far, far too many popular science writers still teach the science in ways that encourage the illusion of settled fact where there is none. Thomas Kuhn taught us that science teaching often looks more like indoctrination than we might be comfortable acknowledging. There are remedies for this, foremost among them a more thorough and sophisticated incorporation of history and philosophy of science into science pedagogy. But, again, that is a topic for another post.

*See the excellent book by this title: Naomi Oreskes and Erik Conway, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming (Bloomsbury Press, 2010).