On the Moral and Intellectual Bankruptcy of Risk Analysis: Garbage In, Garbage Out

Don Howard

For decades, risk analysis has been the main tool guiding policy in many domains, from environment and public health to workplace and transportation safety, and even to nuclear weapons. One estimates the costs and benefits of various courses of action and their conceivable outcomes, typically measured in terms of human suffering and well-being, though other goods, like tactical or strategic advantage, might dominate in some contexts. One also estimates the likelihoods of those outcomes. One multiplies probability times cost or benefit and then sums over all outcomes to produce an expected utility for each course of action. The course of action that maximizes benefit and minimizes harm is recommended. In its most general form, we call this “cost-benefit analysis.” We call it “risk analysis” when our concern is mainly with the downside consequences, such as the risk of a core meltdown at a nuclear power facility.
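For concreteness, here is a minimal sketch of that arithmetic. It is only an illustration: the actions, outcomes, utilities, and probabilities are hypothetical numbers invented for the example, not drawn from any real analysis.

```python
# Minimal sketch of a cost-benefit / expected-utility calculation.
# All actions, outcomes, probabilities, and utilities are hypothetical,
# invented purely to illustrate the arithmetic described above.

# Each course of action maps to a list of (probability, utility) pairs,
# one pair per cataloged outcome. Utilities are signed: benefits are
# positive, harms (suffering) are negative.
actions = {
    "build the facility": [(0.90, 100.0), (0.099, -50.0), (0.001, -10_000.0)],
    "do nothing":         [(1.00, 0.0)],
}

def expected_utility(outcomes):
    """Multiply probability times utility and sum over all outcomes."""
    return sum(p * u for p, u in outcomes)

for action, outcomes in actions.items():
    print(f"{action}: expected utility = {expected_utility(outcomes):.2f}")

# The "recommended" course of action is simply the one that maximizes
# expected utility.
recommended = max(actions, key=lambda a: expected_utility(actions[a]))
print("recommended:", recommended)
```

Everything the rest of this essay calls into question (the utilities, the outcome catalog, and the probabilities) enters as an input to this little calculation; change any of them and the recommendation changes with them.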

I have long been uneasy about the pseudo-rationalism of such analyses. An elaborate formal apparatus conveys the impression of rigor and theoretical sophistication, whereas the widely varying conclusions – one can “justify” almost any policy one chooses – suggest a high degree of subjectivity, if not outright agenda-driven bias. But my recent entanglement in the debate about the risks and benefits of gain-of-function (GOF) experiments involving pathogens with pandemic potential (PPP) moves me to think and say more about why I am troubled by the risk analysis paradigm (Casadevall, Howard, and Imperiale 2014).

[Image: The H1N1 influenza virus]

The essential point is suggested by my subtitle: “Garbage In, Garbage Out.” Let’s think about each piece of a cost-benefit analysis in turn. Start with cost and benefit in the form of human suffering and well-being. The question is: “How does one measure pain and pleasure?” Is a full belly more pleasurable than warmth on a cold winter’s night? Is chronic pain worse than fear of torture? There are no objective hedonic and lupenic metrics. And whose pain or pleasure counts for more? Does the well-being of my immediate community or nation trump that of other peoples? Does the suffering of the living count more heavily than the suffering of future generations? Most would say that the unborn get some kind of vote. But, then, how can we estimate the numbers of affected individuals even just twenty years from now, let alone fifty or one hundred years in the future? And if we include the welfare of too many generations, then our own, contemporary concerns disappear in the calculation.

Next, think about cataloging possible outcomes. Some consequences of our actions are obvious. Punch someone in anger and you are likely to cause pain and injury both to your victim and yourself. We could not function as moral agents were there not some predictability in nature, including human nature and society. But the obvious and near-term consequences form a subset of measure zero within the set of all possible consequences of our actions. How can we even begin to guess the distant and long-term consequences of our actions? Forget chaos theory and the butterfly effect, though those are real worries. Who could have predicted in 1905, for example, that Einstein’s discovery that E = mc² contained within it the potential for annihilating all higher organic life on earth? Who could have foreseen in the 1930s that the discovery of penicillin, while saving millions of lives in the near term, carried with it the threat of producing super-bacteria, resistant to all standard antibiotics, risking many more deaths than penicillin, itself, ever prevented? A risk analysis is only as good as one’s catalog of possible outcomes, and history teaches us that we do a very poor job of anticipating many of the most important.

Then think about estimating the probabilities of outcomes. Some of these estimates, such as the probability of injury or death from driving a car five miles on a sunny day with light traffic, are robust because they are data driven. We have lots of data on accident rates with passenger vehicles. But when we turn to the exceptional and the unusual, there are few or no data to guide us, precisely because the events in question are so rare. We cannot even reliably estimate the risk of injury accidents from transporting oil by rail: the practice used to be uncommon but is now very common, and the scant evidence from past experience does not scale in any clear-cut way to the new oil transportation economy. Would pipeline transportation be better or worse? Who knows? When real data are lacking, one tries reasoning by analogy to other, relevantly similar practices. But who can define “relevantly similar”?

It is especially when one comes to extremely rare events, such as a global pandemic, that the whole business of making probability estimates collapses in confusion and disarray. By definition, there is no data on which to base estimates of the probabilities of one-of-a-kind events. Doing it by theory, instead of basing the estimate on data, is a non-starter, for in a vacuum of data there is also a vacuum of theory, theories requiring data for their validation. We are left with nothing but blind guessing.

Put the pieces of the calculation back together again. There are no objective ways of measuring human suffering and well-being. We cannot survey all of the possible outcomes of our actions. Our probability estimates are, in the really important cases, pure fiction. The result is that one can easily manipulate all three factors – measures of pain and pleasure, outcome catalogs, and probability estimates – to produce any result one wishes.

And therein lies both the moral and the intellectual bankruptcy of risk and cost-benefit analysis.

But it’s worse than that. It’s not just that such analyses can be manipulated to serve any end. There is also the problem of deliberate deception. The formal apparatus of risk and cost-benefit analysis – all those graphs and tables and numbers and formulas – creates a pretense of scientific rigor where there is none. Too often that is the point, to use the facade of mathematical precision in order to quash dissent and silence the skeptic.

Back to rare and catastrophic events, like a possible global pandemic produced by a GOF/PPP experiment gone awry. What number to assign to the suffering? However low one’s probability estimate – and, yes, the chances of such a pandemic are low – the catastrophic character of a pandemic gets a suffering score that sends the final risk estimate off the charts. But wait a minute. Didn’t we just say that we cannot anticipate all of the consequences of our actions? Isn’t it possible that any course of action, any innovation, any discovery could lead to a yet unforeseen catastrophe? Unlikely perhaps, but that doesn’t matter, because the consequences would be so dire as to overwhelm even the lowest of probability estimates. Best not to do anything. Which is, of course, an absurd conclusion.
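To make the arithmetic behind that reductio explicit, here is a hedged toy calculation; all of the figures are invented for illustration and are not estimates of anything real. Once the harm term is assigned a catastrophic magnitude, even a vanishingly small probability leaves the expected value dominated by the catastrophe.

```python
# Toy illustration of how a catastrophic outcome swamps the calculation.
# All figures are hypothetical, chosen only to show the arithmetic.

expected_benefit = 0.99 * 1_000          # near-certain, modest benefit
catastrophic_harm = -1e12                # an "off the charts" suffering score

for p_catastrophe in (1e-4, 1e-6, 1e-9):
    total = expected_benefit + p_catastrophe * catastrophic_harm
    print(f"p = {p_catastrophe:.0e}: expected utility = {total:,.1f}")

# Even at p = 1e-9 the expected harm (-1,000) outweighs the expected
# benefit (990), so the calculation counsels doing nothing at all.
```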

This “apocalypse fallacy,” the invocation of possible catastrophic consequences that overwhelm the cost-benefit calculation, is an all-too-common trope in policy debates. Should nuclear weapons have been eliminated immediately upon the discovery that a “nuclear winter” was possible? There are good arguments for what’s now termed the “nuclear zero” option, but this is not one of them. Should the remote possibility of creating a mini-black hole that would swallow the earth have stopped the search for the Higgs boson at CERN? Well, sure, do some more calculations and theoretical modeling to fix limits on the probability, but don’t stop just because the probability remains nonzero.

So when the pundits tell you not to invest in new nuclear power generation technologies, arguably the surest and quickest route to a green energy economy, because there is a chance of a super-Fukushima nightmare, ignore them. They literally don’t know what they’re talking about. When a do-gooder urges the immediate, widespread use of an Ebola vaccine that has not undergone clinical trials, arguing that the chance of saving thousands, perhaps even millions, of lives outweighs any imaginable untoward consequences, ignore him. He literally does not know what he’s talking about. Does this mean that we should rush into nuclear power generation or that we should refuse the cry for an Ebola vaccine from Africa? Of course not, in both cases. What it means is that we should choose to do or not do those things for good reasons, not bad ones. And risk or cost-benefit arguments, especially in the case of rare eventualities, are always bad arguments.

It would be wrong to conclude from what I’ve just argued that I’m counseling that we throw caution to the wind and do whatever we damned well please. No. The humble, prudential advice is still good, the advice that one think before acting, that one consider the possible consequences of one’s actions, and that one weigh the odds as best one can. It’s just that one must be aware and wary of the agenda-driven abuse of such moral reflection in the pseudo-rational form of risk and cost-benefit analysis.

That said, there is still value even in the intellectual regimen of risk and cost-benefit analysis, at least in the form of the obligation entailed by that regimen to be as thorough and as objective as one can in assaying the consequences of one’s actions, even if the exercise cannot be reduced to an algorithm. But that is just another way of saying that, to the moral virtue of prudence, must be added the intellectual virtues (which are also moral virtues) of honesty and perseverance.

Acknowledgment

Sincere thanks to Arturo Casadevall and Michael Imperiale for conversations that sharpened and enriched my thinking about this issue.

Reference

Casadevall, Arturo, Don Howard, and Michael J. Imperiale (2014). “An Epistemological Perspective on the Value of Gain-of-Function Experiments Involving Pathogens with Pandemic Potential.” mBio 5(5): e01875-14. doi:10.1128/mBio.01875-14. http://mbio.asm.org/content/5/5/e01875-14.full

“I Sing the Body Electric”

Don Howard

(Originally written for presentation as part of a panel discussion on “Machine/Human Interface” at the 2013 Fall conference, “Fearfully and Wonderfully Made: The Body and Human Identity,” Notre Dame Center for Ethics and Culture, 8 November 2013.)

Our topic today is supposed to be the “machine/human interface.” But I’m not going to talk about that, at least not under that description. Why not? The main reason, to be elaborated in a moment, is that the metaphor of the “interface” entails assumptions about the technology of biomechanical and bioelectric engineering that are already surprisingly obsolete. And therein lies a lesson of paramount importance for those of us interested in technoethics, namely, that the pace of technological change is such as often to leave us plodding humanists arguing about the problems of yesterday, not the problems of tomorrow. Some see here a tragic irony of modernity, that moral reflection cannot, perhaps as a matter of principle, keep pace with technical change. We plead the excuse that careful and thorough philosophical and theological reflection takes time. But I don’t buy that. Engineering problems are just as hard as philosophical ones. The difference is that the engineers hunker down and do the work, whereas we humanists are a lazy bunch. And we definitely don’t spend enough time reading the technical literature if our goal is to see over the horizon.

[Image: Biocompatible nanoscale wiring embedded in synthetic tissue]

Back to the issue of the moment. What’s wrong with the “interface” metaphor? It’s that it assumes a spatially localized mechanism and a spatially localized part of a human that meet or join in a topologically simple way, in a plane or a plug and socket, perhaps like a USB port in one’s temple. We all remember Commander Data’s data port, which looked suspiciously like a 1990s-vintage avionics connector. There are machine/human or machine/animal interfaces of that kind already. They are known, collectively, as “brain-computer interfaces” or BCIs, and they have already made possible some remarkable feats, such as partial restoration of hearing in the deaf, direct brain control of a prosthesis, implanting false memories in a rat, and downloading a rat’s memory of how to press a lever to get food and then uploading the memory after the original has been chemically destroyed. And there will be more such feats.

The problem for us, today, is that plugs, and ports, and all such interfaces are already an inelegant technology that represents no more than a transitional form, one that will soon seem as quaint as a crank starter for an automobile, a dial on a telephone, or broadcast television. What the future holds could be glimpsed in an announcement from just over a year ago. A joint MIT, Harvard, and Boston Children’s Hospital research team led by Robert Langer, Charles Lieber, and Daniel Kohane developed a technique for growing synthetic biological tissue on a substrate containing biocompatible, nanoscale wires, the wiring eventually becoming a permanent part of the fully grown tissue (Tian et al. 2012). This announcement came a little over a year after the announcement in London of the first ever successful implantation of a synthetic organ, a fully functional trachea grown from the patient’s own stem cells, work led by the pioneering researcher Paolo Macchiarini (Melnick 2011). Taken together, these two announcements opened a window on a world that will be remarkably different from the one we inhabit today.

The near-term professed aim of the work on nanoscale wiring implanted in synthetic tissue is to provide sensing and remote adjustment capabilities with implants. But the mind quickly runs to far more exotic scenarios. Wouldn’t you like full-color, video tattoos, ones that you can turn off for a day in the office and turn on for a night of clubbing, all thanks to grafted, synthetic nanowired skin? Or what about vastly enhanced control capabilities for a synthetic heart whose pumping rate and capacity could be fine-tuned to changing demands and environmental circumstances, with actuators in the heart responding to data from sensors in the lungs and limbs? And if we can implant wiring, then, in principle, we can turn the body or any part of it into a computer.

With that the boundary between human and machine dissolves. The human is a synthetic machine, all the way down to the sub-cellular level. And the synthetic machine is, itself, literally, a living organism. No plugs, ports, and sockets. No interfaces, except in the most abstract, conceptual sense. The natural and the artificial merge in a seamlessly integrated whole. I am Watson; Deep Blue is me.

Here lies the really important challenge from the AI and robotics side to received notions of the body and human identity, namely, the deep integration of computing and electronics as a functional part of the human body, essential in ever more cases and ways to the maintenance of life and the healthy functioning of the person.

Such extreme, deep integration of computing and electronics with the human body surely elicits in most people a sense that we have crossed a boundary that shouldn’t be crossed. But explaining how and why is not easy. After all, most of us have no problem with prosthetic limbs, even those directly actuated by the brain, nor with pacemakers, cochlear implants, or any of the other now long domesticated, implantable, artificial, electronic devices that we use to enhance and prolong life. Should we think differently about merely shrinking the scale of the implants and increasing the computing power? “Proceed with caution” is good advice with almost all technical innovations. But “do not enter” seems more the sentiment of many when first confronted by the prospect of such enhanced human-electronic integration. Why?

One guess is that boundaries are important for defining personhood, the skin being the first and most salient. Self is what lies within; other is without. The topologically simple “interface” allows us still to preserve a notion of boundedness, even if some of the boundaries are wholly under the skin, as with a pacemaker. But the boundedness of the person is at risk with integrated nanoscale electronics.

Control is surely another important issue implicated by enhanced human-electronic integration. One of the main points of the new research is precisely to afford greater capabilities for control from the outside. The aim, at present, is therapeutic, as with our current abilities to recharge and reprogram a pacemaker via RF signals. But anxieties about loss of control already arise with such devices, as witness Dick Cheney’s having the wireless capability of his implanted defibrillator disabled. Integrated nanoscale electronics brings with it the technical possibility of much more extensive and intrusive interventions, running the gamut from malicious hacking to sinister social and psychological manipulation.

Integrity might name another aspect of personhood put at risk by the dissolution of the machine-human distinction. But it is harder to explain in non-metaphorical terms wherein this integrity consists – “oneness” and “wholeness” are just synonyms, not explicanda – and, perhaps for that reason, it is harder to say exactly how integrated nanoscale electronics threatens the integrity of the human person. After all, the reason why such technology is novel and important is, precisely, that it is so deeply and thoroughly integrated with the body. A machine-human hybrid wouldn’t be less integrated; it would just be differently integrated. And it can’t be that bodily and personal integrity are threatened by the mere incorporation of something alien within the body, for then a hip replacement or an organ transplant would equally threaten human integrity, as would a cheese sandwich.

A blurring or transgressing of bodily boundaries and a loss of personal control are both very definitely threatened by one of the more noteworthy technical possibilities deriving from integrated nanoscale electronics, which is that wired bodies can be put into direct communication with one another all the way down at the cellular level and below. If my doctor can get real-time data about the performance of an implanted, wired organ and can reprogram some of its functions, then it’s only a short step to my becoming part of a network of linked human computers. The technical infrastructure for creating the Borg Collective has arrived. You will be assimilated. Resistance is futile. Were this our future, it would entail a radical transformation in the concept of human personhood, one dense with implications for psychology, philosophy, theology, and even the law.

Or would it? We are already, in a sense, spatially extended and socially entangled persons. I am who I am thanks in no small measure to the pattern of my relationships with others. Today those relationships are mediated by words and pheromones. Should adding Bluetooth make a big difference? This might be one of those situations in which a difference of degree becomes a difference in kind, for RF networking down to the nanoscale would bring with it dramatically enhanced capabilities for extensive, real-time coordination.

On the other hand, science in an entirely different domain has recently forced us to think about the possibility that the human person really is and always has been socially networked, not an atomic individual, and this at a very basic, biological level. Study of what is termed the “human microbiome,” the microbial ecosystem that each of us hosts, has yielded many surprising discoveries. For one thing, we now understand that there are vastly more microbial genes contained within and upon our bodies than somatic genes. In that sense, I am, from a genetic point of view, much more than just my “own” DNA, so much so that some thinkers now argue that the human person should be viewed not as an individual, but as a collective. Moreover, we are learning that our microbes are crucial to much more than just digestion. They play a vital role in things like mood regulation, recent work pointing to connections between, say, depression and our gut bacterial colonies, with microbial purges and transplants now being suggested as therapies for psychological disorders. This is interesting because we tend to think of mood and state of mind as being much more intimately related to personhood than the accident of the foodstuffs passing through our bodies. There is new evidence that our microbes play an essential role in immune response. One study released just a couple of days ago suggested a role for gut bacteria in cases of severe rheumatoid arthritis, for example (Scher et al. 2013). This is interesting because the immune system is deeply implicated in any discussion of the self-other distinction.

Most relevant to the foregoing discussion, however, is new evidence that our regularly exchanging microbes when we sneeze, shake hands, and share work surfaces does much more than communicate disease. It establishes enduring, shared, microbial communities among people who regularly group together, from families, friends, and office mates to church groups and neighborhoods. And some researchers think that this sharing of microbial communities plays a crucial role in our subtle, only half-conscious sense of wellness and belonging when we are with our family and friends rather than total strangers. Indeed, the definition of “stranger” might now have to be “one with whom I share comparatively few microbial types.” In other words, my being, as part of my essence, a socially networked individual might already be realized down at the microbial level. If so, that is important, because it means that purely natural, as opposed to artificial, circumstances already put serious pressure on the notion of the self as something wholly contained within one’s skin.

We started with my challenging the notion of the “interface” as the most helpful metaphor for understanding the ever more sophisticated interminglings of computers and biological humans that are now within our technical reach. We talked about new technologies for growing artificial human tissue with embedded, nanoscale, biocompatible wiring, which implies a deep integration of electronics and computing of a kind that annihilates the distinction between the human and the machine, perhaps also the distinction between the natural and the artificial. And we ended with a vision of such wired persons becoming thereby members of highly interconnected social networks in which the bandwidth available for those interconnections is such as perhaps to make obsolete the notion of the atomic individual.

We face a new world. It simply won’t do to stamp our feet and just say “no.” The technology will move forward, at best only a little slowed by fretting and harangue from the humanists. The important question is not “whether?”, but “how?” Philosophers, theologians, and thoughtful people of every kind, including scientists and engineers, must be part of that conversation.

References

Melnick, Meredith (2011). “Cancer Patient Gets World’s First Artificial Trachea.” Time, July 8, 2011. http://healthland.time.com/2011/07/08/cancer-patient-gets-worlds-first-artificial-trachea/

Scher, J. U., et al. (2013). “Expansion of Intestinal Prevotella copri Correlates with Enhanced Susceptibility to Arthritis.” eLife 2: e01202. doi:10.7554/eLife.01202

Tian, Bozhi, et al. (2012). “Macroporous Nanowire Nanoelectronic Scaffolds for Synthetic Tissues.” Nature Materials 11: 986–994.