On the Moral and Intellectual Bankruptcy of Risk Analysis: Garbage In, Garbage Out

Don Howard

For decades, risk analysis has been the main tool guiding policy in many domains, from environment and public health to workplace and transportation safety, and even to nuclear weapons. One estimates the costs and benefits of various courses of action and their conceivable outcomes, typically in the form of human suffering and well-being, though other goods, like tactical or strategic advantage, might dominate in some contexts. One also estimates the likelihoods of those outcomes. One multiplies probability by cost or benefit and then sums over all outcomes to produce an expected utility for each course of action. The course of action that maximizes benefit and minimizes harm is recommended. In its most general form, we call this “cost-benefit analysis.” We call it “risk analysis” when our concern is mainly with the downside consequences, such as the risk of a core meltdown at a nuclear power facility.
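For readers who want the recipe spelled out, here is a minimal sketch of the calculation just described. The action names, probabilities, and utilities are invented for illustration and come from no actual policy analysis:

```python
# A minimal sketch of the expected-utility recipe described above.
# All actions, probabilities, and utilities are invented for
# illustration; none comes from any actual policy analysis.

def expected_utility(outcomes):
    """Sum probability-weighted utilities over all cataloged outcomes."""
    return sum(p * u for p, u in outcomes)

# Each action maps to (probability, utility) pairs for its outcomes.
actions = {
    "act":     [(0.9, 10.0), (0.1, -50.0)],  # likely benefit, small chance of harm
    "abstain": [(1.0, 0.0)],                 # the status quo
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # "act": 0.9 * 10.0 + 0.1 * (-50.0) = 4.0 > 0.0
```

Everything in what follows turns on how the three inputs to that sum are obtained.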

I have long been uneasy about the pseudo-rationalism of such analyses. An elaborate formal apparatus conveys the impression of rigor and theoretical sophistication, whereas the widely varying conclusions – one can “justify” almost any policy one chooses – suggest a high degree of subjectivity, if not outright agenda-driven bias. But my recent entanglement in the debate about the risks and benefits of gain-of-function (GOF) experiments involving pathogens with pandemic potential (PPP) moves me to think and say more about why I am troubled by the risk analysis paradigm (Casadevall, Howard, and Imperiale 2014).

The H1N1 influenza virus

The essential point is suggested by my subtitle: “Garbage In, Garbage Out.” Let’s think about each piece of a cost-benefit analysis in turn. Start with cost and benefit in the form of human suffering and well-being. The question is: “How does one measure pain and pleasure?” Is a full belly more pleasurable than warmth on a cold winter’s night? Is chronic pain worse than fear of torture? There are no objective hedonic and lupenic metrics. And whose pain or pleasure counts for more? Does the well-being of my immediate community or nation trump that of other peoples? Does the suffering of the living count more heavily than the suffering of future generations? Most would say that the unborn get some kind of vote. But, then, how can we estimate the numbers of affected individuals even just twenty years from now, let alone fifty or one hundred years in the future? And if we include the welfare of too many generations, then our own, contemporary concerns disappear in the calculation.

Next, think about cataloging possible outcomes. Some consequences of our actions are obvious. Punch someone in anger and you are likely to cause pain and injury both to your victim and to yourself. We could not function as moral agents were there not some predictability in nature, including human nature and society. But the obvious and near-term consequences form a subset of measure zero within the set of all possible consequences of our actions. How can we even begin to guess the distant and long-term consequences of our actions? Forget chaos theory and the butterfly effect, though those are real worries. Who could have predicted in 1905, for example, that Einstein’s discovery that E = mc² contained within it the potential for annihilating all higher organic life on earth? Who could have foreseen in the 1930s that the discovery of penicillin, while saving millions of lives in the near term, carried with it the threat of producing super-bacteria, resistant to all standard antibiotics, risking many more deaths than penicillin itself ever prevented? A risk analysis is only as good as one’s catalog of possible outcomes, and history teaches us that we do a very poor job of anticipating many of the most important.

Then think about estimating the probabilities of outcomes. Some of these estimates, such as the probability of injury or death from driving a car five miles on a sunny day with light traffic, are robust because they are data-driven. We have lots of data on accident rates with passenger vehicles. But when we turn to the exceptional and the unusual, there is little or no data to guide us, precisely because the events in question are so rare. We cannot even reliably estimate the risk of injury accidents from transporting oil by rail: the practice used to be uncommon but is now very common, and the scant evidence from past experience does not scale in any clear-cut way to the new oil transportation economy. Would pipeline transportation be better or worse? Who knows? When real data are lacking, one tries reasoning by analogy to other, relevantly similar practices. But who can define “relevantly similar”?

It is especially when one comes to extremely rare events, such as a global pandemic, that the whole business of making probability estimates collapses in confusion and disarray. By definition, there are no data on which to base estimates of the probabilities of one-of-a-kind events. Estimating from theory instead of data is a non-starter, for in a vacuum of data there is also a vacuum of theory, since theories require data for their validation. We are left with nothing but blind guessing.

Put the pieces of the calculation back together again. There are no objective ways of measuring human suffering and well-being. We cannot survey all of the possible outcomes of our actions. Our probability estimates are, in the really important cases, pure fiction. The result is that one can easily manipulate all three factors – measures of pain and pleasure, outcome catalogues, and probability estimates – to produce any result one wishes.

And therein lies both the moral and the intellectual bankruptcy of risk and cost-benefit analysis.

But it’s worse than that. It’s not just that such analyses can be manipulated to serve any end. There is also the problem of deliberate deception. The formal apparatus of risk and cost-benefit analysis – all those graphs and tables and numbers and formulas – creates a pretense of scientific rigor where there is none. Too often that is the point, to use the facade of mathematical precision in order to quash dissent and silence the skeptic.

Back to rare and catastrophic events, like a possible global pandemic produced by a GOF/PPP experiment gone awry. What number to assign to the suffering? However low one’s probability estimate – and, yes, the chances of such a pandemic are low – the catastrophic character of a pandemic gets a suffering score that sends the final risk estimate off the charts. But wait a minute. Didn’t we just say that we cannot anticipate all of the consequences of our actions? Isn’t it possible that any course of action, any innovation, any discovery could lead to a yet unforeseen catastrophe? Unlikely perhaps, but that doesn’t matter, because the consequences would be so dire as to overwhelm even the lowest of probability estimates. Best not to do anything. Which is, of course, an absurd conclusion.
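The arithmetic behind that absurd conclusion is easy to exhibit. In this toy sketch (all numbers invented; nothing here estimates any real risk), an arbitrarily small probability is overwhelmed by a sufficiently catastrophic suffering score, so the calculation condemns any action whatsoever:

```python
# Toy illustration of how a catastrophic outcome swamps the calculation.
# All numbers are invented; none estimates any real risk.
p_catastrophe = 1e-12       # a vanishingly small guessed probability
loss_catastrophe = 1e15     # an "off the charts" suffering score (arbitrary units)
benefit_of_acting = 100.0   # the good the action would otherwise do

expected_loss = p_catastrophe * loss_catastrophe  # = 1000.0
# However small p_catastrophe is made, loss_catastrophe can always be
# imagined large enough that the expected loss swamps the benefit:
print(expected_loss > benefit_of_acting)  # True -> "best not to do anything"
```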

This “apocalypse fallacy,” the invocation of possible catastrophic consequences that overwhelm the cost-benefit calculation, is an all-too-common trope in policy debates. Should nuclear weapons have been eliminated immediately upon the discovery that a “nuclear winter” was possible? There are good arguments for what’s now termed the “nuclear zero” option, but this is not one of them. Should the remote possibility of creating a mini-black hole that would swallow the earth have stopped the search for the Higgs boson at CERN? Well, sure, do some more calculations and theoretical modeling to fix limits on the probability, but don’t stop just because the probability remains finite.

So when the pundits tell you not to invest in new nuclear power generation technologies – the surest and quickest route to a green energy economy – because there is a chance of a super-Fukushima nightmare, ignore them. They literally don’t know what they’re talking about. When a do-gooder urges the immediate, widespread use of an Ebola vaccine that has not undergone clinical trials, arguing that the chance of saving thousands, perhaps even millions of lives, outweighs any imaginable untoward consequences, ignore him. He literally does not know what he’s talking about. Does this mean that we should rush into nuclear power generation or that we should refuse the cry for an Ebola vaccine from Africa? Of course not, in both cases. What it means is that we should choose to do or not do those things for good reasons, not bad ones. And risk or cost-benefit arguments, especially in the case of rare eventualities, are always bad arguments.

It would be wrong to conclude from what I’ve just argued that I’m counseling that we throw caution to the wind and do whatever we damned well please. No. The humble, prudential advice is still good, the advice that one think before acting, that one consider the possible consequences of one’s actions, and that one weigh the odds as best one can. It’s just that one must be aware and wary of the agenda-driven abuse of such moral reflection in the pseudo-rational form of risk and cost-benefit analysis.

That said, there is still value in the intellectual regimen of risk and cost-benefit analysis, at least in the form of the obligation it entails to be as thorough and as objective as one can in assaying the consequences of one’s actions, even if the exercise cannot be reduced to an algorithm. But that is just another way of saying that to the moral virtue of prudence must be added the intellectual virtues (which are also moral virtues) of honesty and perseverance.

Acknowledgment

Sincere thanks to Arturo Casadevall and Michael Imperiale for conversations that sharpened and enriched my thinking about this issue.

Reference

Casadevall, Arturo, Don Howard, and Michael J. Imperiale. 2014. “An Epistemological Perspective on the Value of Gain-of-Function Experiments Involving Pathogens with Pandemic Potential.” mBio 5(5): e01875-14. doi:10.1128/mBio.01875-14. (http://mbio.asm.org/content/5/5/e01875-14.full)

The Scientist qua Scientist Has a Duty to Advocate and Act

Don Howard

The new AAAS web site on climate change, “What We Know,” asserts: “As scientists, it is not our role to tell people what they should do or must believe about the rising threat of climate change. But we consider it to be our responsibility as professionals to ensure, to the best of our ability, that people understand what we know.” Am I the only one dismayed by this strong disavowal of any responsibility on the part of climate scientists beyond informing the public? Of course I understand the complicated politics of climate change and the complicated political environment in which an organization like AAAS operates. Still, I think that this is an evasion of responsibility.

Contrast the AAAS stance with the so-called “Franck Report,” a remarkable document drawn up by refugee German physicist James Franck and colleagues at the University of Chicago’s “Metallurgical Laboratory” (part of the Manhattan Project) in the spring of 1945 in a vain effort to dissuade the US government from using the atomic bomb in a surprise attack on a civilian target. They started from the premise that the scientist qua scientist has a responsibility to advise and advocate, not just inform, arguing that their technical expertise entailed an obligation to act:

“The scientists on this project do not presume to speak authoritatively on problems of national and international policy. However, we found ourselves, by the force of events, during the last five years, in the position of a small group of citizens cognizant of a grave danger for the safety of this country as well as for the future of all the other nations, of which the rest of mankind is unaware. We therefore feel it is our duty to urge that the political problems, arising from the mastering of nuclear power, be recognized in all their gravity, and that appropriate steps be taken for their study and the preparation of necessary decisions.”

James Franck, director of the Chemistry Division of the Manhattan Project’s Metallurgical Laboratory at the University of Chicago and primary author of the “Franck Report.”

I have long thought that the Franck Report is a model for how the scientist’s citizen responsibility should be understood. At the time, the view among the signatories to the Franck Report stood in stark contrast to J. Robert Oppenheimer’s view that the scientist’s responsibility is only to provide technical answers to technical questions. As Oppenheimer put it: “We didn’t think that being scientists especially qualified us as to how to answer this question of how the bombs should be used” (Jungk 1958, 186).


J. Robert Oppenheimer, scientific director of the Manhattan Project’s Los Alamos Laboratory.

The key argument advanced by Franck and colleagues was, again, that it was precisely their distinctive technical expertise that entailed a moral “duty . . . to urge that the political problems . . . be recognized in all their gravity.” Of course they also urged their colleagues to inform the public so as to enable broader citizen participation in the debate about atomic weapons, a sentiment that eventuated in the creation of the Federation of American Scientists and the Bulletin of the Atomic Scientists. The key point, however, was the link between distinctive expertise and the obligation to act. Obvious institutional and professional pressures rightly enforce a boundary between science and advocacy in the scientist’s day-to-day work. Even the cause of political advocacy requires a solid empirical and logical foundation. But that there might be extraordinary circumstances in which the boundary between science and advocacy must be crossed seems equally obvious. And one is hard pressed to find principled reasons for objecting to that conclusion. Surely there is no easy argument leading from scientific objectivity to a disavowal of any such obligations.

Much of the Franck Report was written by Eugene Rabinowitch, who went on to become a major figure in the Pugwash movement, the leader of which, Joseph Rotblat, was awarded the 1995 Nobel Peace Prize for his exemplary efforts in promoting international communication and understanding among nuclear scientists from around the world during the worst of the Cold War. The seemingly omnipresent Leo Szilard also played a significant role in drafting the report, and since 1974 the American Physical Society has given an annual Leo Szilard Lectureship Award to honor physicists who “promote the use of physics to benefit society.” Is it ironic that the 2007 winner was NASA atmospheric physicist James E. Hansen, who has become controversial in the climate science community precisely because he decided to urge action on climate change?

That distinctive expertise entails an obligation to act is, in other settings, a principle to which we all assent. An EMT, even when off duty, is expected to help a heart attack victim precisely because he or she has knowledge, skills, and experience not common among the general public. Why should we not think about scientists and engineers as intellectual first responders?

Physicists, at least, seem to have assimilated within their professional culture a clear understanding that specialist expertise sometimes entails an obligation to take political action. That fact will, no doubt, surprise many who stereotype physics as the paradigm of a morally and politically disengaged discipline. There are many examples from other disciplines of scientists who have gone so far as to risk their careers to speak out in service to a higher good, including climate scientists like Michael Mann, who recently defended the scientist’s obligation to speak up in a blunt op-ed in the New York Times, “If You See Something, Say Something.” The question remains why the technical community has, for the most part, followed the lead of Oppenheimer rather than Franck, when our very identity as scientists does sometimes entail a moral obligation “to tell people what they should do” about the most compelling problems confronting our nation and our world.

Reference

Jungk, Robert. 1958. Brighter than a Thousand Suns: A Personal History of the Atomic Scientists. New York: Harcourt, Brace and Company.