On the Moral and Intellectual Bankruptcy of Risk Analysis: Garbage In, Garbage Out

Don Howard

For decades, risk analysis has been the main tool guiding policy in many domains, from environment and public health to workplace and transportation safety, and even to nuclear weapons. One estimates the costs and benefits of various courses of action and their conceivable outcomes, typically in the form of human suffering and well-being, though other goods, like tactical or strategic advantage, might dominate in some contexts. One also estimates the likelihoods of those outcomes. One multiplies probability times cost or benefit and then sums over all outcomes to produce an expected utility for each course of action. The course of action that maximizes benefit and minimizes harm is recommended. In its most general form, we call this “cost-benefit analysis.” We call it “risk analysis” when our concern is mainly with the downside consequences, such as the risk of a core meltdown at a nuclear power facility.
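In schematic form – and this is only a notational sketch of the textbook recipe, with symbols of my own choosing, not any agency’s official formula – the calculation looks like this:

```latex
% Expected utility of a course of action a with possible outcomes o_1, ..., o_n:
%   P(o_i | a) -- estimated probability of outcome o_i, given that one does a
%   U(o_i)     -- estimated utility of o_i (benefits positive, harms negative)
\mathrm{EU}(a) = \sum_{i=1}^{n} P(o_i \mid a)\, U(o_i)
```

One computes EU for each candidate action and recommends the action with the greatest value. Notice that every ingredient on the right-hand side – the catalog of outcomes, the probabilities, and the utilities – must itself be estimated; that is where the trouble begins.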

I have long been uneasy about the pseudo-rationalism of such analyses. An elaborate formal apparatus conveys the impression of rigor and theoretical sophistication, whereas the widely varying conclusions – one can “justify” almost any policy one chooses – suggest a high degree of subjectivity, if not outright agenda-driven bias. But my recent entanglement in the debate about the risks and benefits of gain-of-function (GOF) experiments involving pathogens with pandemic potential (PPP) moves me to think and say more about why I am troubled by the risk analysis paradigm (Casadevall, Howard, and Imperiale 2014).

The H1N1 Influenza Virus

The essential point is suggested by my subtitle: “Garbage In, Garbage Out.” Let’s think about each piece of a cost-benefit analysis in turn. Start with cost and benefit in the form of human suffering and well-being. The question is: “How does one measure pain and pleasure?” Is a full belly more pleasurable than warmth on a cold winter’s night? Is chronic pain worse than fear of torture? There are no objective hedonic and lupenic metrics. And whose pain or pleasure counts for more? Does the well-being of my immediate community or nation trump that of other peoples? Does the suffering of the living count more heavily than the suffering of future generations? Most would say that the unborn get some kind of vote. But, then, how can we estimate the numbers of affected individuals even just twenty years from now, let alone fifty or one hundred years in the future? And if we include the welfare of too many generations, then our own, contemporary concerns disappear in the calculation.

Next, think about cataloging possible outcomes. Some consequences of our actions are obvious. Punch someone in anger and you are likely to cause pain and injury both to your victim and yourself. We could not function as moral agents were there not some predictability in nature, including human nature and society. But the obvious and near-term consequences form a subset of measure zero within the set of all possible consequences of our actions. How can we even begin to guess the distant and long-term consequences of our actions? Forget chaos theory and the butterfly effect, though those are real worries. Who could have predicted in 1905, for example, that Einstein’s discovery that E = mc² contained within it the potential for annihilating all higher organic life on earth? Who could have foreseen in 1928 that the discovery of penicillin, while saving millions of lives in the near term, carried with it the threat of producing super-bacteria, resistant to all standard antibiotics, risking many more deaths than penicillin itself ever prevented? A risk analysis is only as good as one’s catalog of possible outcomes, and history teaches us that we do a very poor job of anticipating many of the most important.

Then think about estimating the probabilities of outcomes. Some of these estimates, such as the probability of injury or death from driving a car five miles on a sunny day with light traffic, are robust because they are data driven. We have lots of data on accident rates with passenger vehicles. But when we turn to the exceptional and the unusual, there is little or no data to guide us, precisely because the events in question are so rare. We cannot even reliably estimate the risk of injury accidents from transporting oil by rail: rail transport of oil used to be uncommon, and now that it is very common, the scant evidence from past experience does not scale in any clear-cut way to the new oil-transportation economy. Would pipeline transportation be better or worse? Who knows? When real data are lacking, one tries reasoning by analogy to other, relevantly similar practices. But who can define “relevantly similar”?
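To see how thin the evidential basis can be, consider a minimal sketch, with an invented shipment count (these are not real accident statistics). When zero accidents have been observed in n independent shipments, the standard “rule of three” gives an approximate 95% upper confidence bound of about 3/n on the per-shipment accident rate – a bound, not an estimate, and one that is silent about what happens when traffic multiplies:

```python
def zero_event_upper_bound(n_trials: int, confidence: float = 0.95) -> float:
    """Upper confidence bound on an event rate when zero events have been
    observed in n_trials independent trials. Exact binomial form; at the
    95% level it is close to the familiar 'rule of three' value, 3/n."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_trials)

# Hypothetical: 1,000 oil-train shipments on record, no serious accidents.
n = 1_000
print(f"95% upper bound on per-shipment accident rate: {zero_event_upper_bound(n):.4%}")
# Prints ~0.2991%: the data are consistent with anything from zero to roughly
# three serious accidents per thousand shipments, and the bound shrinks only
# as 1/n -- far too slowly to settle a policy question about a practice whose
# scale has just multiplied.
```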

It is especially when one comes to extremely rare events, such as a global pandemic, that the whole business of making probability estimates collapses in confusion and disarray. By definition, there is no data on which to base estimates of the probabilities of one-of-a-kind events. Doing it by theory, instead of basing the estimate on data, is a non-starter, for in a vacuum of data there is also a vacuum of theory, theories requiring data for their validation. We are left with nothing but blind guessing.

Put the pieces of the calculation back together again. There are no objective ways of measuring human suffering and well-being. We cannot survey all of the possible outcomes of our actions. Our probability estimates are, in the really important cases, pure fiction. The result is that one can easily manipulate all three factors – measures of pain and pleasure, outcome catalogs, and probability estimates – to produce any result one wishes.

And therein lies both the moral and the intellectual bankruptcy of risk and cost-benefit analysis.

But it’s worse than that. It’s not just that such analyses can be manipulated to serve any end. There is also the problem of deliberate deception. The formal apparatus of risk and cost-benefit analysis – all those graphs and tables and numbers and formulas – creates a pretense of scientific rigor where there is none. Too often that is the point, to use the facade of mathematical precision in order to quash dissent and silence the skeptic.

Back to rare and catastrophic events, like a possible global pandemic produced by a GOF/PPP experiment gone awry. What number to assign to the suffering? However low one’s probability estimate – and, yes, the chances of such a pandemic are low – the catastrophic character of a pandemic earns a suffering score that sends the final risk estimate off the charts. But wait a minute. Didn’t we just say that we cannot anticipate all of the consequences of our actions? Isn’t it possible that any course of action, any innovation, any discovery could lead to a yet unforeseen catastrophe? Unlikely perhaps, but that doesn’t matter, because the consequences would be so dire as to overwhelm even the lowest of probability estimates. Best not to do anything. Which is, of course, an absurd conclusion.
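The arithmetic driving this move is easy to display, with numbers that are purely illustrative (nobody’s actual estimates):

```latex
% p: a conceded-to-be-tiny probability of catastrophe
% H: the harm score assigned to a global pandemic, in lives lost
p \cdot H = 10^{-9} \times 10^{10} = 10 \quad \text{(expected lives lost)}
```

Because H can always be posited large enough – extinction-level harms are effectively unbounded – the product p·H can be made to swamp any finite expected benefit, no matter how small p is claimed to be. And since nothing in the data disciplines either factor for one-of-a-kind events, the calculation can be tuned to defeat any proposal whatsoever.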

This “apocalypse fallacy,” the invocation of possible catastrophic consequences that overwhelm the cost-benefit calculation, is an all-too-common trope in policy debates. Should nuclear weapons have been eliminated immediately upon the discovery that a “nuclear winter” was possible? There are good arguments for what’s now termed the “nuclear zero” option, but this is not one of them. Should the remote possibility of creating a mini-black hole that would swallow the earth have stopped the search for the Higgs boson at CERN? Well, sure, do some more calculations and theoretical modeling to fix limits on the probability, but don’t stop just because the probability remains nonzero.

So when the pundits tell you not to invest in new nuclear power generation technologies – the surest and quickest route to a green energy economy – because there is a chance of a super-Fukushima nightmare, ignore them. They literally don’t know what they’re talking about. When a do-gooder urges the immediate, widespread use of an Ebola vaccine that has not undergone clinical trials, arguing that the chance of saving thousands, perhaps even millions of lives, outweighs any imaginable untoward consequences, ignore him. He literally does not know what he’s talking about. Does this mean that we should rush into nuclear power generation or that we should refuse the cry for an Ebola vaccine from Africa? Of course not, in both cases. What it means is that we should choose to do or not do those things for good reasons, not bad ones. And risk or cost-benefit arguments, especially in the case of rare eventualities, are always bad arguments.

It would be wrong to conclude from what I’ve just argued that I’m counseling our throwing caution to the wind as we do whatever we damned well please. No. The humble, prudential advice is still good, the advice that one think before acting, that one consider the possible consequences of one’s actions, and that one weigh the odds as best one can. It’s just that one must be aware and wary of the agenda-driven abuse of such moral reflection in the pseudo-rational form of risk and cost-benefit analysis.

That said, there is still value in the intellectual regimen of risk and cost-benefit analysis, at least in the form of the obligation entailed by that regimen to be as thorough and as objective as one can in assaying the consequences of one’s actions, even if the exercise cannot be reduced to an algorithm. But that is just another way of saying that, to the moral virtue of prudence must be added the intellectual virtues (which are also moral virtues) of honesty and perseverance.

Acknowledgment

Sincere thanks to Arturo Casadevall and Michael Imperiale for conversations that sharpened and enriched my thinking about this issue.

Reference

Casadevall, Arturo, Don Howard, and Michael J. Imperiale. 2014. “An Epistemological Perspective on the Value of Gain-of-Function Experiments Involving Pathogens with Pandemic Potential.” mBio 5(5): e01875-14. doi:10.1128/mBio.01875-14. http://mbio.asm.org/content/5/5/e01875-14.full

“I Sing the Body Electric”

Don Howard

(Originally written for presentation as part of a panel discussion on “Machine/Human Interface” at the 2013 Fall conference, “Fearfully and Wonderfully Made: The Body and Human Identity,” Notre Dame Center for Ethics and Culture, 8 November 2013.)

Our topic today is supposed to be the “machine/human interface.” But I’m not going to talk about that, at least not under that description. Why not? The main reason, to be elaborated in a moment, is that the metaphor of the “interface” entails assumptions about the technology of biomechanical and bioelectric engineering that are already surprisingly obsolete. And therein lies a lesson of paramount importance for those of us interested in technoethics, namely, that the pace of technological change is such as often to leave us plodding humanists arguing about the problems of yesterday, not the problems of tomorrow. Some see here a tragic irony of modernity, that moral reflection cannot, perhaps as a matter of principle, keep pace with technical change. We plead the excuse that careful and thorough philosophical and theological reflection takes time. But I don’t buy that. Engineering problems are just as hard as philosophical ones. The difference is that the engineers hunker down and do the work, whereas we humanists are a lazy bunch. And we definitely don’t spend enough time reading the technical literature if our goal is to see over the horizon.

Biocompatible nanoscale wiring embedded in synthetic tissue.

Back to the issue of the moment. What’s wrong with the “interface” metaphor? It’s that it assumes a spatially localized mechanism and a spatially localized part of a human that meet or join in a topologically simple way, in a plane or a plug and socket, perhaps like a USB port in one’s temple. We all remember Commander Data’s data port, which looked suspiciously like a 1990s-vintage avionics connector. There are machine/human or machine/animal interfaces of that kind already. They are known, collectively, as “brain-computer interfaces,” or BCIs, and they have already made possible some remarkable feats, such as partial restoration of hearing in the deaf, direct brain control of a prosthesis, implanting false memories in a rat, and downloading a rat’s memory of how to press a lever to get food and then uploading the memory after the original has been chemically destroyed. And there will be more such.

The problem for us, today, is that plugs, and ports, and all such interfaces are already an inelegant technology that represents no more than a transitional form, one that will soon seem as quaint as a crank starter for an automobile, a dial on a telephone, or broadcast television. What the future will be could be glimpsed in an announcement from just over a year ago. A joint MIT, Harvard, and Boston Children’s Hospital research team led by Robert Langer, Charles Lieber, and Daniel Kohane developed a technique for growing synthetic biological tissue on a substrate containing biocompatible, nanoscale wires, the wiring eventually becoming a permanent part of the fully grown tissue (Tian et al. 2012). This announcement came a little over a year after the announcement in London of the first-ever successful implantation of a synthetic organ, a fully functional trachea grown from the patient’s own stem cells, work led by the pioneering researcher Paolo Macchiarini (Melnick 2011). Taken together, these two announcements opened a window on a world that will be remarkably different from the one we inhabit today.

The near-term professed aim of the work on nanoscale wiring implanted in synthetic tissue is to provide sensing and remote adjustment capabilities with implants. But the mind quickly runs to far more exotic scenarios. Wouldn’t you like full-color video tattoos, ones that you can turn off for a day in the office and turn on for a night of clubbing, all thanks to grafted, synthetic, nanowired skin? Or what about vastly enhanced control capabilities for a synthetic heart, the pumping rate and capacity of which could be fine-tuned to changing demands and environmental circumstances, with actuators in the heart responding to data from sensors in the lungs and limbs? And if we can implant wiring, then, in principle, we can turn the body or any part of it into a computer.

With that the boundary between human and machine dissolves. The human is a synthetic machine, all the way down to the sub-cellular level. And the synthetic machine is, itself, literally, a living organism. No plugs, ports, and sockets. No interfaces, except in the most abstract, conceptual sense. The natural and the artificial merge in a seamlessly integrated whole. I am Watson; Deep Blue is me.

Here lies the really important challenge from the AI and robotics side to received notions of the body and human identity, namely, the deep integration of computing and electronics as a functional part of the human body, essential in ever more cases and ways to the maintenance of life and the healthy functioning of the person.

Such extreme, deep integration of computing and electronics with the human body surely elicits in most people a sense that we have crossed a boundary that shouldn’t be crossed. But explaining how and why is not easy. After all, most of us have no problem with prosthetic limbs, even those directly actuated by the brain, nor with pacemakers, cochlear implants, or any of the other now long-domesticated, implantable, artificial, electronic devices that we use to enhance and prolong life. Should we think differently about merely shrinking the scale of the implants and increasing the computing power? “Proceed with caution” is good advice with almost all technical innovations. But “do not enter” seems more the sentiment of many when first confronted by the prospect of such enhanced human-electronic integration. Why?

One guess is that boundaries are important for defining personhood, the skin being the first and most salient. Self is what lies within; other is without. The topologically simple “interface” allows us still to preserve a notion of boundedness, even if some of the boundaries are wholly under the skin, as with a pacemaker. But the boundedness of the person is at risk with integrated nanoscale electronics.

Control is surely another important issue implicated by enhanced human-electronic integration. One of the main points of the new research is precisely to afford greater capabilities for control from the outside. The aim, at present, is therapeutic, as with our current abilities to recharge and reprogram a pacemaker via RF signals. But anxieties about loss of control already arise with such devices, as witness Dick Cheney’s turning off the wireless capability of his implanted defibrillator. Integrated nanoscale electronics brings with it the technical possibility of much more extensive and intrusive interventions, running the gamut from malicious hacking to sinister social and psychological manipulation.

Integrity might name another aspect of personhood put at risk by the dissolution of the machine-human distinction. But it is harder to explain in non-metaphorical terms wherein this integrity consists – “oneness” and “wholeness” are just synonyms, not explicanda – and, perhaps for that reason, it is harder to say exactly how integrated nanoscale electronics threatens the integrity of the human person. After all, the reason why such technology is novel and important is, precisely, that it is so deeply and thoroughly integrated with the body. A machine-human hybrid wouldn’t be less integrated, it would just be differently integrated. And it can’t be that bodily and personal integrity are threatened by the mere incorporation of something alien within the body, for then a hip replacement or an organ transplant would equally threaten human integrity, as would a cheese sandwich.

A blurring or transgressing of bodily boundaries and a loss of personal control are both very definitely threatened by one of the more noteworthy technical possibilities deriving from integrated nanoscale electronics, which is that wired bodies can be put into direct communication with one another all the way down at the cellular level and below. If my doctor can get real-time data about the performance of an implanted, wired organ and can reprogram some of its functions, then it’s only a short step to my becoming part of a network of linked human computers. The technical infrastructure for creating the Borg Collective has arrived. You will be assimilated. Resistance is futile. Were this our future, it would entail a radical transformation in the concept of human personhood, one dense with implications for psychology, philosophy, theology, and even the law.

Or would it? We are already, in a sense, spatially extended and socially entangled persons. I am who I am thanks in no small measure to the pattern of my relationships with others. Today those relationships are mediated by words and pheromones. Should adding Bluetooth make a big difference? This might be one of those situations in which a difference of degree becomes a difference in kind, for RF networking down to the nanoscale would bring with it dramatically enhanced capabilities for extensive, real-time coordination.

On the other hand, science in an entirely different domain has recently forced us to think about the possibility that the human person really is and always has been socially networked, not an atomic individual, and this at a very basic, biological level. Study of what is termed the “human microbiome,” the microbial ecosystem that each of us hosts, has yielded many surprising discoveries. For one thing, we now understand that there are vastly more microbial genes contained within and upon our bodies than somatic genes. In that sense, I am, from a genetic point of view, much more than just my “own” DNA, so much so that some thinkers now argue that the human person should be viewed not as an individual but as a collective. Moreover, we are learning that our microbes are crucial to much more than just digestion. They play a vital role in things like mood regulation, with recent work pointing to connections between, say, depression and our gut bacterial colonies, and with microbial purges and transplants now being suggested as therapies for psychological disorders. This is interesting because we tend to think of mood and state of mind as being much more intimately related to personhood than the accident of the foodstuffs passing through our bodies. There is new evidence that our microbes play an essential role in immune response. One study released just a couple of days ago suggested a role for gut bacteria in cases of severe rheumatoid arthritis, for example (Scher et al. 2013). This is interesting because the immune system is deeply implicated in any discussion of the self-other distinction.

Most relevant to the foregoing discussion, however, is new evidence that our regularly exchanging microbes when we sneeze, shake hands, and share work surfaces does much more than communicate disease. It establishes enduring, shared microbial communities among people who regularly group together, from families, friends, and office mates to church groups and neighborhoods. And some researchers think that this sharing of microbial communities plays a crucial role in our subtle, only half-conscious sense of wellness and belonging when we are with our family and friends rather than total strangers. Indeed, the definition of “stranger” might now have to be “one with whom I share comparatively few microbial types.” In other words, my being, as part of my essence, a socially networked individual might already occur down at the microbial level. If so, that is important, because it means that purely natural, as opposed to artificial, circumstances already put serious pressure on the notion of the self as something wholly contained within one’s skin.

We started with my challenging the notion of the “interface” as the most helpful metaphor for understanding the ever more sophisticated interminglings of computers and biological humans that are now within our technical reach. We talked about new technologies for growing artificial human tissue with embedded, nanoscale, biocompatible wiring, which implies a deep integration of electronics and computing of a kind that annihilates the distinction between the human and the machine, perhaps also the distinction between the natural and the artificial. And we ended with a vision of such wired persons becoming thereby members of highly interconnected social networks in which the bandwidth available for those interconnections is such as perhaps to make obsolete the notion of the atomic individual.

We face a new world. It simply won’t do to stamp our feet and just say “no.” The technology will move forward at best only a little slowed down by fretting and harangue from the humanists. The important question is not “whether?”, but “how?” Philosophers, theologians, and thoughtful people of every kind, including scientists and engineers, must be part of that conversation.

References

Melnick, Meredith (2011). “Cancer Patient Gets World’s First Artificial Trachea.” Time Magazine, July 8, 2011. http://healthland.time.com/2011/07/08/cancer-patient-gets-worlds-first-artificial-trachea/

Scher, J. U., et al. (2013). “Expansion of Intestinal Prevotella copri Correlates with Enhanced Susceptibility to Arthritis.” eLife 2: e01202. doi:10.7554/eLife.01202

Tian, Bozhi et al. (2012). “Macroporous Nanowire Nanoelectronic Scaffolds for Synthetic Tissues.” Nature Materials 11, 986-994.

Where’s the Intelligence in Intelligent Design?

Don Howard

(Originally published in 2008 as part of a Reilly Center Reports issue on “Evolution and Intelligent Design” that contains excellent pieces by George Coyne, S.J., the former director of the Vatican Observatory, William E. Carroll, the Thomas Aquinas Fellow in Science and Religion at Blackfriars Hall, Oxford, Noah Efron, from Bar Ilan University, Israel, and Reilly Center Fellows Matthew Ashley, Christopher Hamlin, Gerald McKenny, and Phillip Sloan.)

Intelligent design is an idea with a history going back at least to the late seventeenth and early eighteenth centuries, when Deists, especially, were moved by the seeming clockwork precision of the universe as described by Newton to infer the existence of a clockmaker God with an intelligence equal to the cosmic task of creation and design. Just as old are critical philosophical commentaries on design arguments, the most famous from the eighteenth century being David Hume’s mocking attack in his posthumously-published Dialogues Concerning Natural Religion (London, 1779).

Two features of design arguments impressed Hume. The first was that, since design arguments are arguments by analogy, they are, like all analogical reasoning, inductive arguments. That means that, at best, they confer on their conclusions only a high probability and not the necessity that one finds in the rigorous deductive proofs of Euclid’s geometry. Does induction suffice as a demonstration of God’s existence through His works? The second feature that impressed Hume was the arbitrary and persuasive choice of analogies upon which design arguments are grounded. See the universe as being like a watch and the inference to an intelligent designer God is inviting. But why that analogy rather than another? In the Dialogues, Hume’s spokesperson, Philo, replies to Cleanthes’ defense of the design argument by suggesting that one could just as well focus on features of the universe that make it like an animal body or a vegetable, from which one could then infer that, like an organism, the universe must be the product of generation or vegetation, rather than reason and design. Absent a prior and independent commitment to the existence of a designer God, one could thus, with equal reason, infer that the universe was the product of sexual union between a cosmic mother and father or of the kind of budding whereby various plants, yeasts, or viruses reproduce.

Other questions loom larger when considering the kinds of design arguments popular today. Consider first that while design inferences are perfectly sensible, indeed, essential in various mundane settings, as in ordinary detective work, their employment in a cosmological setting or in the context of discussions of human origins is a riskier business. The main reason is that, in these extramundane settings, the major premises of a design argument are drawn not from unvarnished observation of the world, as when Holmes noted the hound that did not bark, but from what are typically theoretically sophisticated scientific descriptions of the world, as in the cosmological fine tuning argument.

Why is this problematic? It’s because of the contingency of those theoretical accounts. According to what philosophers of science call the “pessimistic meta-induction,” any current theory is likely to turn out, in future, to be false or at least seriously limited in scope. There is no reason to think that inflationary cosmology will be any exception to this rule, in spite of impressive and growing evidence in its favor. I’m old enough to remember a day when it had not occurred to most people to think of the universe as having its origins in a cosmic explosion followed by expansion. When I was young, the steady-state model was the accepted wisdom. For two hundred and fifty years, Newtonian mechanics could claim evidential warrant just as impressive as that attaching to the inflationary model. But we now know that Newton was wrong. We don’t know, now, how inflationary cosmology will turn out to be wrong or of limited scope, but that it will seems to be the lesson of history. One might well be puzzled by a theology that dares to rest conclusions about fundamental aspects of religious doctrine on such a fragile, contingent, scientific foundation.

Even were it not for the contingent character of our theories, another question arises. If one is to take the major premises of a design argument from our current best science, is it not incumbent upon us to accept the whole of what that science tells us about such things as the place of intelligent human life in the cosmos? It is surely a striking fact about our current best cosmological models – if it is a fact about them – that intelligent life would have been impossible had the values of various cosmological parameters differed from their current values by even a few parts in a thousand. But some of those same cosmological models also imply that the universe will develop in such a way as to become, in future, radically inhospitable to intelligent human life. If the fine tuning needed to make our corner of the universe home to intelligent life now is part of a cosmic design, then so too are all aspects of the cosmology in question. So was it the designer’s intention to create a universe in which for just the briefest tick of the cosmic clock intelligent human life could appear, only to be followed by cosmic aeons of emptiness? From such a more comprehensive point of view, the emergence of intelligent human life could hardly appear to have been the main goal of the enterprise.

Design arguments in the context of theories of human origins raise a similar question. First, as an aside, note the irony in the fact that the Darwinian story of human origins, a story introduced in part precisely to show how random variation with selection can imitate design, is now, instead, itself invoked as a premise in a design argument. It is no longer the human species, as a product of evolution, that is held up as evidence of design, but the very evolutionary process that produced the human species. The natural process whose discovery Darwin thought obviated the need for assumptions of design is now said by the proponents of intelligent design itself to require the assumption of design.

But, as in the cosmological context, so too in the context of evolutionary stories of human origins, one has to buy all of the science, not just some of it. Evolution has worked so as to produce intelligent human life. But the Darwinian story tells us that species fitness is always relative to an environment. When the environment changes, species adapted to it – if they cannot accommodate the changes – either evolve into new species or go extinct. From the Darwinian point of view, environmental change is largely a matter of external contingencies, not something implicated by the theory itself. Darwinian evolution does not predict mass extinctions consequent upon a giant meteoroid’s striking the earth at the end of the Cretaceous period, because it knows nothing of solar system dynamics. But it does predict that, if environmental change is drastic enough, extinction is possible or even likely. So, what if the environment to which the human species is adapted changes drastically, say as a result of another meteoroid impact, human-induced global climate change, or all-out nuclear war? Poof! No more human beings. The point is that, if one accepts the whole package, then, in this context too, it no longer appears as though the emergence of intelligent human life was a designer’s main goal.

I can see only one way around objections to design arguments based on the contingency of the theories providing the major premises. It would be to argue that, though theories come and theories go, any theoretical description of the universe that can claim the status of science must describe the universe in terms of some principles of order. What specific order is ascribed to nature might change as theories change, but order will be part of any scientific description of the universe, and so the conclusion still holds that from the order thereby described, design should be inferred. But am I alone in thinking that this maneuver trivializes the design argument, making it true by definition? Moreover, one would think that the specifics of the order described could make a difference to the conclusion one draws about causes. As Hume pointed out, if the order one discerns is like that of an artificial contrivance like a watch, then an intelligent designer as cause is suggested. But if the order is like that found in the plant and animal worlds, then sexual congress or vegetative reproduction is the cause suggested. And today one might add that, if the order described is like that of crystalline structure, then self-assembly in accord with fundamental structural principles (bonding angles, etc.) suggests itself as the cause.

The believer may rightly be enjoined to seek and find in nature the traces of a divine intelligence’s creative activity. If there is a designer God, then at least the main features of his blueprint should be inferable from the nature built according to that plan. By his fruits ye shall know him. But design arguments wrongly turn the arrow of implication in the opposite direction, holding that, if there is order in nature, then a designer God must be responsible for that order. Such might well be the origin of order, but it is a plain fact that order arises in other ways too. Some order is the product of other order, as in crystal formation. Some order is biological in origin, as with the magnetite in Mars rocks that some think was produced by magnetotactic bacteria. And some order is, like it or not, the product of chance, as when, on average, one roll of the dice in every thirty-six yields a perfect pair of snake eyes.