The Scientist qua Scientist Has a Duty to Advocate and Act

Don Howard

The new AAAS web site on climate change, “What We Know,” asserts: “As scientists, it is not our role to tell people what they should do or must believe about the rising threat of climate change. But we consider it to be our responsibility as professionals to ensure, to the best of our ability, that people understand what we know.” Am I the only one dismayed by this strong disavowal of any responsibility on the part of climate scientists beyond informing the public? Of course I understand the complicated politics of climate change and the complicated political environment in which an organization like AAAS operates. Still, I think that this is an evasion of responsibility.

Contrast the AAAS stance with the so-called “Franck Report,” a remarkable document drawn up by refugee German physicist James Franck and colleagues at the University of Chicago’s “Metallurgical Laboratory” (part of the Manhattan Project) in the spring of 1945 in a vain effort to dissuade the US government from using the atomic bomb in a surprise attack on a civilian target. They started from the premise that the scientist qua scientist has a responsibility to advise and advocate, not just inform, arguing that their technical expertise entailed an obligation to act:

“The scientists on this project do not presume to speak authoritatively on problems of national and international policy. However, we found ourselves, by the force of events, during the last five years, in the position of a small group of citizens cognizant of a grave danger for the safety of this country as well as for the future of all the other nations, of which the rest of mankind is unaware. We therefore feel it is our duty to urge that the political problems, arising from the mastering of nuclear power, be recognized in all their gravity, and that appropriate steps be taken for their study and the preparation of necessary decisions.”

James Franck. Director of the Manhattan Project’s Metallurgical Laboratory at the University of Chicago and primary author of the “Franck Report.”

I have long thought that the Franck Report is a model for how the scientist’s citizen responsibility should be understood. At the time, the view of the signatories to the Franck Report stood in stark contrast to J. Robert Oppenheimer’s view that the scientist’s responsibility was only to provide technical answers to technical questions. Oppenheimer wrote: “We didn’t think that being scientists especially qualified us as to how to answer this question of how the bombs should be used” (Jungk 1958, 186).

 

J. Robert Oppenheimer
Director of the Manhattan Project

The key argument advanced by Franck and colleagues was, again, that it was precisely their distinctive technical expertise that entailed a moral “duty . . . to urge that the political problems . . . be recognized in all their gravity.” Of course they also urged their colleagues to inform the public so as to enable broader citizen participation in the debate about atomic weapons, a sentiment that eventuated in the creation of the Federation of American Scientists and the Bulletin of the Atomic Scientists. The key point, however, was the link between distinctive expertise and the obligation to act. Obvious institutional and professional pressures rightly enforce a boundary between science and advocacy in the scientist’s day-to-day work. Even the cause of political advocacy requires a solid empirical and logical foundation for action. But that there might be extraordinary circumstances in which the boundary between science and advocacy must be crossed seems equally obvious. And one is hard pressed to find principled reasons for objecting to that conclusion. Surely there is no easy argument leading from scientific objectivity to a disavowal of any such obligations.

Much of the Franck Report was written by Eugene Rabinowitch, who went on to become a major figure in the Pugwash movement, whose leader, Joseph Rotblat, was awarded the 1995 Nobel Peace Prize for his exemplary efforts in promoting international communication and understanding among nuclear scientists from around the world during the worst of the Cold War. The seemingly omnipresent Leo Szilard also played a significant role in drafting the report, and since 1974 the American Physical Society has given an annual Leo Szilard Lectureship Award to honor physicists who “promote the use of physics to benefit society.” Is it ironic that the 2007 winner was NASA atmospheric physicist James E. Hansen, who has become controversial in the climate science community precisely because he decided to urge action on climate change?

That distinctive expertise entails an obligation to act is, in other settings, a principle to which we all assent. An EMT, even when off duty, is expected to help a heart attack victim precisely because he or she has knowledge, skills, and experience not common among the general public. Why should we not think about scientists and engineers as intellectual first responders?

Physicists, at least, seem to have assimilated within their professional culture a clear understanding that specialist expertise sometimes entails an obligation to take political action. That fact will, no doubt, surprise many who stereotype physics as the paradigm of a morally and politically disengaged discipline. There are many examples from other disciplines of scientists who have gone so far as to risk their careers to speak out in service to a higher good, including climate scientists like Michael Mann, who recently defended the scientist’s obligation to speak up in a blunt op-ed in the New York Times, “If You See Something, Say Something.” The question remains why, nonetheless, the technical community has, for the most part, followed the lead of Oppenheimer, not Franck, when, in fact, our very identity as scientists does, sometimes, entail a moral obligation “to tell people what they should do” about the most compelling problems confronting our nation and our world.

Reference

Jungk, Robert (1958). Brighter than a Thousand Suns: A Personal History of the Atomic Scientists. New York: Harcourt, Brace and Company.

“I Sing the Body Electric”

Don Howard

(Originally written for presentation as part of a panel discussion on “Machine/Human Interface” at the 2013 Fall conference, “Fearfully and Wonderfully Made: The Body and Human Identity,” Notre Dame Center for Ethics and Culture, 8 November 2013.)

Our topic today is supposed to be the “machine/human interface.” But I’m not going to talk about that, at least not under that description. Why not? The main reason, to be elaborated in a moment, is that the metaphor of the “interface” entails assumptions about the technology of biomechanical and bioelectric engineering that are already surprisingly obsolete. And therein lies a lesson of paramount importance for those of us interested in technoethics, namely, that the pace of technological change is such as often to leave us plodding humanists arguing about the problems of yesterday, not the problems of tomorrow. Some see here a tragic irony of modernity, that moral reflection cannot, perhaps as a matter of principle, keep pace with technical change. We plead the excuse that careful and thorough philosophical and theological reflection takes time. But I don’t buy that. Engineering problems are just as hard as philosophical ones. The difference is that the engineers hunker down and do the work, whereas we humanists are a lazy bunch. And we definitely don’t spend enough time reading the technical literature if our goal is to see over the horizon.

Biocompatible nanoscale wiring embedded in synthetic tissue.

Back to the issue of the moment. What’s wrong with the “interface” metaphor? It’s that it assumes a spatially localized mechanism and a spatially localized part of a human that meet or join in a topologically simple way, in a plane or a plug and socket, perhaps like a USB port in one’s temple. We all remember Commander Data’s data port, which looked suspiciously like a 1990s-vintage avionics connector. There are machine/human or machine/animal interfaces of that kind already. They are known, collectively, as “brain-computer-interfaces” or BCIs, and they have already made possible some remarkable feats, such as partial restoration of hearing in the deaf, direct brain control of a prosthesis, implanting false memories in a rat, and downloading a rat’s memory of how to press a lever to get food and then uploading the memory after the original memory has been chemically destroyed. And there will be more such.

The problem for us, today, is that plugs, and ports, and all such interfaces are already an inelegant technology that represents no more than a transitional form, one that will soon seem as quaint as a crank starter for an automobile, a dial on a telephone, or broadcast television. What the future will be could have been glimpsed in an announcement from just over a year ago. A joint MIT, Harvard, and Boston Children’s Hospital research team led by Robert Langer, Charles Lieber, and Daniel Kohane developed a technique for growing synthetic biological tissue on a substrate containing biocompatible, nanoscale wires, the wiring eventually becoming a permanent part of the fully-grown tissue (Tian et al. 2012). This announcement came seven weeks after the announcement in London of the first ever successful implantation of a synthetic organ, a fully-functional trachea grown from the patient’s own stem cells, work led by the pioneering researcher, Paolo Macchiarini (Melnick 2012). Taken together, these two announcements opened a window on a world that will be remarkably different from the one we inhabit today.

The near-term professed aim of the work on nanoscale wiring implanted in synthetic tissue is to provide sensing and remote adjustment capabilities with implants. But the mind quickly runs to far more exotic scenarios. Wouldn’t you like full-color, video tattoos, ones that you can turn off for a day in the office and turn on for a night of clubbing, all thanks to grafted, synthetic nanowired skin? Or what about vastly enhanced control capabilities for a synthetic heart the pumping rate and capacity of which could be fine-tuned to changing demands and environmental circumstances, with actuators in the heart responding to data from sensors in the lung and limbs? And if we can implant wiring, then, in principle, we can turn the body or any part of it into a computer.

With that the boundary between human and machine dissolves. The human is a synthetic machine, all the way down to the sub-cellular level. And the synthetic machine is, itself, literally, a living organism. No plugs, ports, and sockets. No interfaces, except in the most abstract, conceptual sense. The natural and the artificial merge in a seamlessly integrated whole. I am Watson; Deep Blue is me.

Here lies the really important challenge from the AI and robotics side to received notions of the body and human identity, namely, the deep integration of computing and electronics as a functional part of the human body, essential in ever more cases and ways to the maintenance of life and the healthy functioning of the person.

Such extreme, deep integration of computing and electronics with the human body surely elicits in most people a sense that we have crossed a boundary that shouldn’t be crossed. But explaining how and why is not easy. After all, most of us have no problem with prosthetic limbs, even those directly actuated by the brain, nor with pacemakers, cochlear implants, or any of the other now long domesticated, implantable, artificial, electronic devices that we use to enhance and prolong life. Should we think differently about merely shrinking the scale of the implants and increasing the computing power? “Proceed with caution” is good advice with almost all technical innovations. But “do not enter” seems more the sentiment of many when first confronted by the prospect of such enhanced human-electronic integration. Why?

One guess is that boundaries are important for defining personhood, the skin being the first and most salient. Self is what lies within; other is without. The topologically simple “interface” allows us still to preserve a notion of boundedness, even if some of the boundaries are wholly under the skin, as with a pacemaker. But the boundedness of the person is at risk with integrated nanoscale electronics.

Control is surely another important issue implicated by enhanced human-electronic integration. One of the main points of the new research is precisely to afford greater capabilities for control from the outside. The aim, at present, is therapeutic, as with our current abilities to recharge and reprogram a pacemaker via RF signals. But anxieties about loss of control already arise with such devices, as witness Dick Cheney’s turning off the wi-fi capability in his implanted defibrillator. Integrated nanoscale electronics brings with it the technical possibility of much more extensive and intrusive interventions running the gamut from malicious hacking to sinister social and psychological manipulation.

Integrity might name another aspect of personhood put at risk by the dissolution of the machine-human distinction. But it is harder to explain in non-metaphorical terms wherein this integrity consists – “oneness” and “wholeness” are just synonyms, not explicanda – and, perhaps for that reason, it is harder to say exactly how integrated nanoscale electronics threatens the integrity of the human person. After all, the reason why such technology is novel and important is, precisely, that it is so deeply and thoroughly integrated with the body. A machine-human hybrid wouldn’t be less integrated, it would just be differently integrated. And it can’t be that bodily and personal integrity are threatened by the mere incorporation of something alien within the body, for then a hip replacement or an organ transplant would equally threaten human integrity, as would a cheese sandwich.

A blurring or transgressing of bodily boundaries and a loss of personal control are both very definitely threatened by one of the more noteworthy technical possibilities deriving from integrated nanoscale electronics, which is that wired bodies can be put into direct communication with one another all the way down at the cellular level and below. If my doctor can get real-time data about the performance of an implanted, wired organ and can reprogram some of its functions, then it’s only a short step to my becoming part of a network of linked human computers. The technical infrastructure for creating the Borg Collective has arrived. You will be assimilated. Resistance is futile. Were this our future, it would entail a radical transformation in the concept of human personhood, one dense with implications for psychology, philosophy, theology, and even the law.

Or would it? We are already, in a sense, spatially extended and socially entangled persons. I am who I am thanks in no small measure to the pattern of my relationships with others. Today those relationships are mediated by words and pheromones. Should adding Bluetooth make a big difference? This might be one of those situations in which a difference of degree becomes a difference in kind, for RF networking down to the nanoscale would bring with it dramatically enhanced capabilities for extensive, real-time, coordination.

On the other hand, science in an entirely different domain has recently forced us to think about the possibility that the human person really is and always has been socially networked, not an atomic individual, and this at a very basic, biological level. Study of what is termed the “human microbiome,” the microbial ecosystem that each of us hosts, has yielded many surprising discoveries. For one thing, we now understand that there are vastly more microbial genes contained within and upon our bodies than somatic genes. In that sense, I am, from a genetic point of view, much more than just my “own” DNA, so much so that some thinkers now argue that the human person should be viewed not as an individual, but as a collective. Moreover, we are learning that our microbes are crucial to much more than just digestion. They play a vital role in things like mood regulation, with recent work pointing to connections between, say, depression and our gut bacterial colonies, and with microbial purges and transplants now being suggested as therapies for psychological disorders. This is interesting because we tend to think of mood and state of mind as being much more intimately related to personhood than the accident of the foodstuffs passing through our bodies. There is new evidence that our microbes play an essential role in immune response. One study released just a couple of days ago suggested a role for gut bacteria in cases of severe rheumatoid arthritis, for example (Scher et al. 2013). This is interesting because the immune system is deeply implicated in any discussion of the self-other distinction.

Most relevant to the foregoing discussion, however, is new evidence that our regularly exchanging microbes when we sneeze, shake hands, and share work surfaces does much more than communicate disease. It establishes enduring, shared, microbial communities among people who more regularly group together, from families, friends, and office mates to church groups and neighborhoods. And some researchers think that this sharing of microbial communities plays a crucial role in our subtle, only half-conscious sense of wellness and belonging when we are with our family and friends rather than total strangers. Indeed, the definition of “stranger” might now have to be “one with whom I share comparatively few microbial types.” In other words, my being as part of my essence a socially networked individual might already occur down at the microbial level. If so, that is important, because it means that purely natural, as opposed to artificial, circumstances already put serious pressure on the notion of the self as something wholly contained within one’s skin.

We started with my challenging the notion of the “interface” as the most helpful metaphor for understanding the ever more sophisticated interminglings of computers and biological humans that are now within our technical reach. We talked about new technologies for growing artificial human tissue with embedded, nanoscale, biocompatible wiring, which implies a deep integration of electronics and computing of a kind that annihilates the distinction between human and the machine, perhaps also the distinction between the natural and the artificial. And we ended with a vision of such wired persons becoming thereby members of highly interconnected social networks in which the bandwidth available for those interconnections is such as perhaps to make obsolete the notion of the atomic individual.

We face a new world. It simply won’t do to stamp our feet and just say “no.” The technology will move forward at best only a little slowed down by fretting and harangue from the humanists. The important question is not “whether?”, but “how?” Philosophers, theologians, and thoughtful people of every kind, including scientists and engineers, must be part of that conversation.

References

Melnick, Meredith (2012). “Cancer Patient Gets World’s First Artificial Trachea.” Time Magazine, July 8, 2012. http://healthland.time.com/2011/07/08/cancer-patient-gets-worlds-first-artificial-trachea/

Scher, J. U. et al. (2013). “Expansion of Intestinal Prevotella copri Correlates with Enhanced Susceptibility to Arthritis.” eLife 2: e01202. DOI: 10.7554/eLife.01202

Tian, Bozhi et al. (2012). “Macroporous Nanowire Nanoelectronic Scaffolds for Synthetic Tissues.” Nature Materials 11, 986-994.

Nuclear Options: What Is Not in the Interim Agreement with Iran

Don Howard

No one wants war with Iran over its nuclear ambitions. But the euphoria over the EU3+3 interim agreement with Iran, as well as many of the political attacks on the agreement, obscure core technical issues that should be fundamental to any assessment of what has really been achieved. There is no denying that much has been gained by way of Iran’s agreeing temporarily to cease uranium enrichment beyond the 5% level necessary for energy production and its agreeing to on-site inspections at its Fordow and Natanz facilities. But important questions remain about what is not included in the interim agreement. Here are four issues that should be more prominent in the debate:

1. The Interim Agreement Mandates No Reduction in Iran’s Capability for Uranium Enrichment. Iran agrees to cease uranium enrichment beyond the 5% level necessary for energy production and not to expand or enhance its uranium enrichment capabilities, for the duration of the interim agreement. Moreover, Iran agrees to dilute half of its 20%-enriched uranium hexafluoride (UF6) to a 5% level and to convert the remaining half to uranium oxide (UO2) for use in making fuel for its Tehran research reactor. But Iran has not agreed to any permanent reduction of its capability for uranium enrichment, a capability that significantly exceeds what is necessary for energy production. It is hoped that a yet-to-be-negotiated, long-term agreement will include a reduction in that capability. But the interim agreement requires no such reduction. At any moment, Iran could resume enrichment to bomb-grade levels. Moreover, the UF6 that is to be converted to UO2 can be reconverted to UF6 and then further enriched.
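The worry about retained enrichment capability can be made quantitative with standard separative-work arithmetic. The sketch below is an illustration, not a calculation from the agreement itself: the 0.3% tails assay and the ideal-cascade assumptions are mine. The well-known result it reproduces is that enriching uranium from its natural 0.711% to the 5% reactor-grade level already accounts for roughly three quarters of the separative work needed to reach 90% weapons grade, which is why a cap at 5% leaves most of the path to bomb-grade material behind a would-be proliferator, not ahead.

```python
from math import log

def V(x):
    # Standard value function for separative work:
    # V(x) = (1 - 2x) * ln((1 - x) / x), where x is the U-235 mass fraction.
    return (1 - 2 * x) * log((1 - x) / x)

def swu_per_kg_product(xp, xf, xw=0.003):
    # Separative work units (kg SWU) needed per kg of product at assay xp,
    # from feed at assay xf, with tails (depleted stream) at assay xw.
    F = (xp - xw) / (xf - xw)   # kg of feed per kg of product (mass balance)
    W = F - 1                   # kg of tails per kg of product
    return V(xp) + W * V(xw) - F * V(xf)

natural = 0.00711  # U-235 fraction in natural uranium

# Work to enrich natural uranium to 5% (reactor grade), per kg of 5% product:
low = swu_per_kg_product(0.05, natural)

# Kilograms of 5% feed needed per kg of 90% (weapons-grade) product:
feed_5 = (0.90 - 0.003) / (0.05 - 0.003)

# Total work for 1 kg of 90% product starting from natural uranium:
total = swu_per_kg_product(0.90, natural)

# Work already sunk into producing the required stock of 5% material:
sunk = feed_5 * low

print(f"fraction of separative work already done at 5%: {sunk / total:.0%}")
```

With these assumptions the script reports that roughly 70–75% of the total separative work toward weapons grade is complete once a stockpile of 5%-enriched uranium exists, so "low-enriched" stockpiles plus intact centrifuge capacity are most of a breakout capability.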

Arak Heavy Water Reactor

2. The Interim Agreement Requires No Inspections at the Arak (IR-40) Heavy Water Reactor. As explained in a helpful recent article by Jeremy Bernstein, the Arak reactor is central to any evaluation of Iran’s nuclear ambitions. It is not designed as a reactor for power generation. Though Iran says that the reactor will be used to produce medical isotopes, its most plausible purpose is to be a breeder reactor for the production of plutonium, which is the other standard fuel for atomic weapons that rely upon the process of nuclear fission (as with the North Korean bomb). It was Iran’s refusal to allow on-site inspections at the Arak reactor that stalled the talks a couple of weeks ago when France demanded more access to Arak. The new interim agreement does require Iran to provide to the International Atomic Energy Agency (IAEA) an updated “Design Information Questionnaire” regarding the Arak reactor, it stipulates that there will be no “further advances of [Iran’s] activities at Arak,” it obligates Iran to take “steps to agree with the IAEA on conclusion of the Safeguards Approach for the reactor at Arak” (whatever that means), and Iran agrees to do no reprocessing of spent fuel (the main purpose of which would be to extract plutonium) and not to construct reprocessing facilities. But the interim agreement does not obligate Iran to allow on-site inspections at Arak. Inspections are stipulated for the Fordow and Natanz uranium enrichment facilities, but not at Arak. Iran’s intransigence on this point should give us pause as we try to determine the real purpose of that reactor. If plutonium production is the goal, then our obsession with Iran’s uranium enrichment capability could be distracting us from a more serious threat. A quick route to an Iranian atomic bomb could well be via plutonium produced at Arak. And, at present, Iran has agreed to no degradation of this potential plutonium production capability.

3. The Interim Agreement Does Not Address the Question of Weapons Delivery Systems. Iran is a technically sophisticated nation that has made impressive advances in missile technology in recent years. Much of this missile technology was borrowed from earlier Russian and Korean models. But the new, solid-fuel, Sejil-2 rocket, which was first tested five years ago, is an original Iranian design. It has an impressive, 2,000-km range with a 750 kg payload capacity and anti-radar coatings. The Sejil-2 could put a nuclear warhead on a target as far away as Cairo, Athens, or Kiev. Moreover, Iran has been making gains in its guidance technology.

That we should be paying attention to Iranian weapons delivery capabilities was made clear when, two days after the announcement of the interim agreement, Brigadier General Hossein Salami, the Lieutenant Commander of the Iranian Revolutionary Guard Corps (IRGC), announced that Iran’s indigenous ballistic missile capability had recently achieved a “near zero” margin of error in targeting accuracy.

That it was General Salami who made the announcement about advances in Iranian ballistic missile technology reminds us of a political, not technical, issue that has also received insufficient attention in the public debate about the interim agreement. The question is, “Who is really in control?” The interim agreement was negotiated by Iranian Foreign Minister Mohammed Javad Zarif on behalf of the government of President Hassan Rouhani. But the Revolutionary Guard functions as almost a shadow government, with considerable independent authority. And much of the most impressive Iranian ballistic missile research and development has been conducted in facilities under IRGC control, such as the IRGC missile base at Bid Kaneh, where a mysterious explosion during a missile test in November 2011 killed General Hassan Tehrani Moqaddam, who was the head of the IRGC’s “Arms and Military Equipment Self-Sufficiency Program.”

4. The Interim Agreement Does Not Address Aspects of Nuclear Weapons Technology Aside from the Production of Fissile Materials. Nothing in the interim agreement restricts Iran’s ability to continue developing other technologies essential to nuclear weapons production, such as timing circuitry, detonators, and refined conventional explosives techniques involved in the assembly of a critical mass of fissile material. It is perhaps not well and widely enough understood that some of the bigger technical challenges for a nation seeking nuclear weapons lie not in the production of fissile material but in areas such as these. Consider the basic design of a plutonium bomb of the kind dropped on Nagasaki. A critical mass of plutonium is achieved by compressing the plutonium with a spherical blast wave from a spherical shell of conventional explosives. The precise shaping of those conventional explosive charges and their precise, simultaneous detonation are among the most difficult technical challenges in bomb design and manufacture. By contrast, while enriching uranium and breeding plutonium require a major technical infrastructure, the physical, chemical, and engineering processes involved are widely understood and, in principle, not all that difficult to achieve. But the interim agreement places no obstacles in the way of research and development on these other aspects of nuclear weapons design. Iran is free to pursue such research as vigorously as it will and to produce a fully functional nuclear weapon awaiting only the insertion of the fissile material.

An assessment of what has been achieved with the interim agreement depends crucially upon a prior assessment of Iran’s goals with respect to nuclear weapons capability. If Iran’s aim had been to produce nuclear weapons as soon as possible, then the interim agreement at least slows down progress toward that goal. But another view is that Iran’s aim all along has been to develop the basic technical infrastructure for the rapid production of bomb-grade fissile material for use if and when it chooses. If that is Iran’s aim, then the interim agreement achieves much less by way of delaying progress to the goal.

We have to wait and see how the interim agreement works. But the celebration of seeming progress on the diplomatic front must be tempered by a clear understanding of the technical issues that are not addressed in the interim agreement, issues that must be the focus of any longer-term, follow-on agreement. Should there be no progress on enrichment capabilities, the Arak reactor, delivery systems, and the fundamentals of bomb design, then options other than diplomacy might have to be explored, starting with the re-imposition of sanctions.

Robots on the Road: The Moral Imperative of the Driverless Car

Don Howard

Driverless cars are a reality, not just in California and not just as test vehicles being run by Google. They are now legal in three states: California, Florida, and Nevada. Semi-autonomous vehicles are already the norm, incorporating capabilities like adaptive cruise control and braking, rear-collision prevention, and self-parking. All of the basic technical problems have been solved, although work is still to be done on problems like sensor reliability, robust GPS connections, and security against hacking. Current design concepts enable easy integration with existing driver-controlled vehicles, which will make possible a smooth transition as the percentage of driverless cars on the road rises steadily. Every major manufacturer has announced plans to market fully autonomous vehicles within the next few years; Volvo, for example, promises to have them in the showroom by 2018. The question is not “whether?”, but “when?”

And the answer to that question is, “as soon as humanly possible,” this rapid transition in transportation technology being among the foremost moral imperatives of the day. We must do this, and we must do it now, for moral reasons. Here are three such reasons.

1. We will save over one million lives per year.

Approximately 1.24 million people die every year, world-wide, from automobile accidents, with somewhere between 20 million and 50 million people suffering non-fatal injuries (WHO 2013a). The Campaign for Global Road Safety labels this an “epidemic” of “crisis proportions” (Watkins 2012). Can you name any other single technology or technological system that kills and injures at such a rate? Can you think of any even remotely comparable example of our having compromised human health and safety for the sake of convenience and economic gain?

But as driverless cars replace driver-controlled cars, we will reduce the rate of death and injury to near zero. This is because the single largest cause of death and injury from automobile accidents is driver impairment, whether through drunkenness, stupidity, sleep deprivation, road rage, inattention, or poor driver training. All of that goes away with the driverless car, as will contributing causes like limited human sensing capabilities. There will still be equipment failures, and so there will still be accidents, but equipment failure represents only a tiny fraction of the causes of automobile accidents. There are new risks, such as hacking, but there are technical ways to reduce such risks.

Thus, the most rapid possible transition to a transportation system built around autonomous vehicles will save one million lives and prevent as many as fifty million non-fatal injuries annually. And this transition entails only the most minimal economic cost, with no serious negative impact of any other kind. To my mind, then, a rapid transition to a transportation system designed around the driverless car is a moral imperative. Any delay whatsoever, whether on the part of designers, manufacturers, regulators, or consumers, will be a moral failing on a monumental scale. If you have the technical capability to prevent so much death and suffering but do nothing or even drag your feet, then you have blood on your hands. I’m sorry to be so blunt, but I see no way around that conclusion.

2. The lives of the disabled will be enriched.

Consider first the blind. The World Health Organization estimates that there are 39 million blind people around the world (WHO 2013b). Since 90% of those people live in the developing world, not all of them have access even to adequate roads, nor can they afford a vehicle of any kind. But many of them do and can. The driverless car restores to the blind more or less total mobility under individual, independent control. Can you think of any other technical innovation that will, by itself, so dramatically empower the disabled and enhance the quality of their lives? I cannot. Add to the list the amputee just returned from Afghanistan, the brilliant mind trapped in a body crippled by cerebral palsy, your octogenarian grandparents, and your teenaged son on his way home from a party. Get the picture?

If you have the means to help so many people lead more fulfilling and more independent lives and you do nothing, then you have done a serious wrong.

3. Our failing cities will be revitalized.

Think now mainly of the United States. After the devolution of our manufacturing economy and the export of so many manufacturing jobs overseas, the single largest cause of the decline of American cities, especially mid-size cities in the industrial heartland, has been the exodus of the white middle class to the suburbs. And that exodus was driven, if you will, by the rapid rise in private automobile ownership, which made possible one’s working and living in widely separated locations. Once that transition was complete, with most of us dependent upon the private automobile for transportation, the commercial cores of our cities were destroyed as congestion and lack of access to parking pushed shops and restaurants out to the suburbs. Many people still drive to work in our cities, but the department stores, even the supermarkets and the pharmacies are gone. Once that commercial infrastructure goes, then even those who might otherwise want to live in town find it hard to do so.

The solution is at hand. Combine the driverless car with the zip car. As an alternative to the private ownership of autonomous cars, let people buy membership in a driverless zip car program. Pay a modest annual fee and a modest per-mile charge, perhaps also carry your own insurance. Then, whenever you need a ride, click the app on your mobile phone, the zip car takes you wherever you need to go, then hurries off to ferry the next passenger to another destination. When you are done with your shopping or your night on the town, click again and the driverless zip car shows up at the restaurant door. You don’t have to worry about parking. With that, the single largest impediment to the return of commercial business to our city centers is gone.

The impact will be differential. Megacities like New York, with good public transportation, will benefit less, though a big disincentive to my driving into Manhattan or midtown Chicago is, again, the problem of parking. But the impact on cities like South Bend could be enormous.

I happen to think that restoring our failing cities is a moral imperative, because more than just a flourishing business economy is at stake, adequate funding for public schools among other things, but about that we might disagree. Surely, though, you agree that, even if it doesn’t rise to the level of a moral imperative, it would be at least a social good were we to make our cities thrive again.

So there you have three reasons why the most rapid possible transition to a transportation system based on the driverless car is a moral imperative. Indeed, it is one of the most compelling moral challenges of our day. If we have the means to save one million lives a year, and we do, then we must do all that we can as quickly as we can to bring about that change.

Yet many people resist the idea. To me, that’s a great puzzle. We are all now perfectly comfortable with air travel in totally autonomous aircraft that can and often do fly gate to gate entirely on autopilot. Yes, the human pilot is in the cabin to monitor the controls and deal with any problems that might arise, as will be the case with the “driver” in driverless cars, at least for the near term. But many of the most serious airplane accidents these days are due to human error. The recent Asiana Airlines crash upon landing at San Francisco in July was evidently due to pilot error. One of the most edifying recent examples is the crash of Air France flight 447 from Rio to Paris in June of 2009. A sensor malfunction due to ice crystals caused the autopilot to disengage as per its design specifications, but then the human crew reacted wrongly to turbulence, putting the aircraft into a stall. In this case, the aircraft probably would have performed better had the switch to manual not been designed into the system (BEA 2012). If we can safely fly thousands of aircraft and tens of thousands of passengers around the world every day on totally automated aircraft, we can surely do the same with automobiles.

And if we can do it, then we must.

Acknowledgement:

Many thanks to Mark P. Mills (http://www.forbes.com/sites/markpmills/)
for helpful and stimulating conversation about the issues addressed here.

References:

BEA 2012. Final Report On the Accident on 1st June 2009 to the Airbus A330-203 Registered F-GZCP Operated by Air France Flight AF 447 Rio de Janeiro – Paris. Bureau d’Enquêtes et d’Analyses pour la sécurité de l’aviation civile. Paris.

Watkins, Kevin 2012. Safe and Sustainable Roads: An Agenda for Rio+20. The Campaign for Global Road Safety. http://www.makeroadssafe.org/publications/Documents/Rio_20_Report_lr.pdf

WHO 2013a. Global Status Report on Road Safety 2013: Supporting a Decade of Action. World Health Organization. http://www.who.int/iris/bitstream/10665/78256/1/9789241564564_eng.pdf

WHO 2013b. “Visual Impairment and Blindness.” Fact Sheet No. 282, updated October 2013. World Health Organization. http://www.who.int/mediacentre/factsheets/fs282/en/index.html

Science in the Crosshairs

Don Howard

Sometime over the weekend of September 28-29, Mojtaba Ahmadi, a specialist in cyber-defense and the Commander of Iran’s Cyber War Headquarters, was found dead with two bullets to the heart. Nothing has been said officially, but it is widely suspected that Ahmadi was targeted for assassination, some pointing the finger of blame at Israel. The method of the attack, reportedly assassins on motorbikes, is reminiscent of earlier assassinations or attempted assassinations of five Iranian nuclear scientists going back to 2007, those attacks also widely assumed to have been the work of Israeli operatives.

Noteworthy is the fact that, as with those earlier assassinations, this latest attack is receiving scant attention in the mainstream press. Nor has it occasioned the kind of protest that one might have expected from the international scientific community. This silence is worrisome for several reasons.

Were Iran in a state of armed conflict with an adversary, as defined by the international law of armed conflict (ILOAC), and if one of its technical personnel were directly involved in weapons development, then that individual would be a legitimate target, as when the OSS targeted Werner Heisenberg for assassination in WWII owing to his role at the head of the German atomic bomb project. But such is not the case. Iran is not in a state of armed conflict with any potential adversary. That being so, the silence on the part of other governments and the lack of protest from NGOs, professional associations, and other stakeholders means that we are allowing a precedent to be set that could have the effect of legitimating such assassinations as part of customary law.

Were this to become accepted practice, then the consequences would be profound. It would then be perfectly legal for a targeted nation, such as Iran, to retaliate in kind with attacks targeted against technical personnel within countries reasonably deemed responsible for sponsoring the original attack. Thus, were it to emerge that the US had a hand in these events, even if only by way of logistical or intelligence support, then any US cyberwarfare specialist would become a legitimate target, as would be any US nuclear weapons technical personnel. Quite frankly, I worry that it is only a matter of time before Iran attempts precisely that, and the US being a softer target than Israel, I worry that it may happen here first.

Technical professional associations such as IEEE or the American Physical Society have, I think, a major responsibility to make this a public issue and to take a stand calling for a cessation of such attacks.

The alternative is to condone the globalization and domestication of the permanent state of undeclared conflict in which we seem to find ourselves today. Critics of US foreign and military policy might applaud this as just deserts for unwarranted meddling in the affairs of other nations. That is most definitely not my view, for I believe that bad actors have to be dealt with firmly by all legal means. My concern is that these targeted assassinations, while currently illegal, may become accepted practice. And I don’t want our children to grow up in the kind of world that would result.

How to Talk about Science to the Public – 2. Speak Honestly about Uncertainty

Don Howard

We are all Humeans, all of us who are trained in science, at least. We know that empirical evidence confers at most high probability, never certainty, on a scientific claim, and this no matter how sophisticated the inductive logic that we preach. Enumerative induction doesn’t do it. That the sun rose every day in recorded history and before does not imply that it will, of necessity, rise tomorrow. Inference to the best explanation doesn’t do it, for such inferences depend on a changing explanandum (that which is to be explained) and upon both an obscure quality metric (what determines that one explanation is “better than” another?) and a never-complete reference class of competing explanations. Bayes’s theorem can’t do it either.

No. All of us who are trained in science know that every theory, principle, law, and observation is open to challenge and that many once thought secure now populate the museum of dead theories. Sophisticated philosophers of science have invented the intimidating name, “the pessimistic meta-induction” for the thesis that, just as all theories in the past have turned out to be false or significantly limited in scope, so, too, most likely, will our current best and future science.

No. We all know that science is a matter of tentative hypotheses and best guesses. Some principles that have proven their mettle over the long haul, such as the conservation of energy, rightly earn our confidence that they can be reliable guides in the future. But more than one scientist has been willing to sacrifice even the conservation of energy if that were the price to solve another intractable riddle, as when Niels Bohr twice proposed theories that assumed violations of energy conservation.

That science does not deal in certainty is a major part of what makes it such a precious cultural achievement. Science is not dogma. Science admits its failings and learns from its mistakes. That it does so is key to how it achieved the dramatic expansion of scientific understanding that we have witnessed at least since the Renaissance.

Why, then, do we have so much trouble speaking honestly to the public about uncertainty? Why, when debating on the campaign trail, do we give in to the temptation to describe anthropogenic climate change as “proven fact”? Why, when on the witness stand, do we feel the need to assert that a Darwinian story about human origins is established “beyond all reasonable doubt”? We have lots of good reasons for believing in human-caused climate change and Darwinian evolution. Few scientific claims are as well established as these. But about both we might be wrong in some as yet unforeseen or unforeseeable way. Why lie? Why not speak honestly?

There are at least two reasons why, when speaking to the public, we so often seek refuge in the rhetoric of proof and truth. The first is that we wrongly think that the scientific laity cannot understand uncertainty and probability. This is one of the most worrisome ways in which we insult the intelligence of our audience.

That lots of us – scientists and non-scientists alike – make lots of inductive and probabilistic mistakes is obvious. Casinos, state lotteries, and race tracks are all the evidence one needs. They profit only thanks to those mistakes. Nor are any of us rational utility maximizers, soundly weighing expected gains and losses against the probabilities of various outcomes. The stock market provides the relevant evidence here.
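The point about casinos profiting from our inductive mistakes can be made concrete with a little arithmetic. Here is a minimal sketch, not from the original text, using American double-zero roulette, where a winning straight-up bet pays 35 to 1:

```python
# House edge in American roulette: 38 pockets (1-36, 0, 00), so a
# straight-up bet wins with probability 1/38 but pays only 35 to 1.
p_win = 1 / 38
payout = 35                      # profit on a winning $1 bet
expected_value = p_win * payout + (1 - p_win) * (-1)
print(f"Expected value per $1 bet: ${expected_value:.4f}")
# A negative expected value on every bet: the casino profits in the long
# run precisely because bettors misjudge these odds.
```

The expected loss is about 5.3 cents per dollar wagered, which is exactly the margin the casino lives on.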

But the fact that lots of people make inductive errors doesn’t imply that the educated public can’t deal with uncertainty. We all deal with uncertainty all the time, and, in the main, we do a good job with it. Do I take I-294 or the Skyway, the Dan Ryan, and the Kennedy to O’Hare? What are the odds of congestion on each at this time of day? How much of a time cushion do I have? What are the consequences of being early or late? How likely am I to miss my flight if there is a ten-minute delay, a twenty-minute delay, or an hour-long delay? Chance of rain? Do I take the umbrella or also my overcoat? Much of life is like this. We make mistakes, but we get by, don’t we?
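The route-to-the-airport deliberation above is, implicitly, an expected-value calculation. A toy sketch, with invented probabilities and travel times (none of these numbers come from the original text), shows the structure of the reasoning:

```python
# Everyday decision under uncertainty: weigh each route's possible travel
# times by the odds of congestion. All numbers are hypothetical.
routes = {
    "I-294": [(0.7, 45), (0.3, 90)],                    # (probability, minutes)
    "Skyway / Dan Ryan / Kennedy": [(0.5, 40), (0.5, 100)],
}

for name, outcomes in routes.items():
    expected_minutes = sum(p * minutes for p, minutes in outcomes)
    print(f"{name}: expected {expected_minutes:.0f} minutes")
```

Of course, no one consciously runs these numbers on the way to O’Hare, which is the point: we carry out this kind of weighing intuitively, and mostly well enough.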

The second major reason why we retreat to the rhetoric of proof and truth is that we allow ourselves to be intimidated by the merchants of doubt.* The political exploitation of uncertainty to create the illusion of scientific dissensus and thereby stymie policy making on global warming, public health, energy, and other issues is now, itself, big business. There are lobbying firms, fictitious “think tanks,” corporate public relations offices, sham public interest groups, and members of congress who might as well be paid spokespersons. Much of the same kind of apparatus is encountered in the “debates” over evolution and intelligent design. Acknowledge uncertainty, and that becomes the wedge by means of which the illusion of scientific controversy can be created where there is, in fact, no controversy. What is to be done?

What is not to be done is misrepresenting the contingency of science. It is a mistake to confront the merchants of doubt with the pretense of certainty and proof. The right response is to trust the public to understand the weighing of evidence and the adjustment of policy to the strength of the evidence. The right response is, simply and clearly, to present the evidence. To be sure, climate modeling and population genetics involve sophisticated statistical tools that cannot be explained in detail in a few sentences. But with only a bit of effort one can usually explain the general issue in an accessible manner.

A good example of making probabilities accessible is the recent reporting on the hunt for the Higgs boson with the Large Hadron Collider at CERN. Any reader of the New York Times or the Wall Street Journal now knows the expressions “three-sigma” and “five-sigma.” A tutorial on calculating standard deviations was not needed to communicate the point that, when sorting through oceans of data, looking for truly exceptional events, one wants to be sure that what one is seeing is more than what would be expected from random fluctuations. People understand this. If the roulette ball lands on 36 twice in a row one is mildly surprised but doesn’t accuse the croupier of cheating. If it lands on 36 five times in a row, then it’s time to ask to see the manager.
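The two intuitions in that paragraph, the rarity of a "three-sigma" or "five-sigma" fluctuation and the growing suspiciousness of repeated roulette wins, can both be put in numbers. A hedged sketch (single-zero European roulette with 37 pockets is assumed here):

```python
from math import erf, sqrt

def one_sided_tail(n_sigma: float) -> float:
    """Probability that a normally distributed fluctuation lands at least
    n_sigma above the mean (one-sided Gaussian tail)."""
    return 0.5 * (1 - erf(n_sigma / sqrt(2)))

# How rare are the fluctuations physicists call "evidence" and "discovery"?
for n in (3, 5):
    print(f"{n}-sigma one-sided tail probability: {one_sided_tail(n):.2e}")

# How quickly do repeated roulette wins become suspicious?
for runs in (2, 5):
    p = (1 / 37) ** runs
    print(f"Ball lands on 36 {runs} times in a row: about 1 in {round(1 / p):,}")
```

Two wins in a row is roughly a 1-in-1,400 event, mild surprise; five in a row is about 1 in 69 million, comparable in spirit to the five-sigma threshold (about 3 in 10 million), and time to see the manager.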

No contentious policy questions turn on the results from CERN, so perhaps it is easier for us to speak about uncertainty in this context. But if we can educate the public about statistics in particle physics, surely we can do it as well when the topic is flu epidemics or vehicle safety or climate change. Here is the evidence for increased global temperatures over the last century. Here is what the models predict for increased sea levels. Here is our degree of confidence in these predictions. Now let’s talk about the costs and benefits of different courses of action. Be firm. Be clear. Don’t be afraid to call a lie a “lie” when others misrepresent the evidence or misdescribe the models. But trust the public to follow the logic of the science if we do a good enough job of explaining that logic.

There might be one final reason why we too often retreat to the rhetoric of proof and truth, a reason that I’ll just mention here, saving a fuller discussion for another occasion. It is that too many of us were, ourselves, badly trained in science. Too many textbooks, too many courses, and far, far too many popular science writers still teach the science in ways that encourage the illusion of settled fact where there is none. Thomas Kuhn taught us that science teaching often looks more like indoctrination than we might be comfortable acknowledging. There are remedies for this, foremost among them a more thorough and sophisticated incorporation of history and philosophy of science into science pedagogy. But, again, that is a topic for another post.

*See the excellent book by this title: Naomi Oreskes and Erik Conway, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming (Bloomsbury Press, 2010).

Science and Values – 1. The Challenge for the Philosopher

Don Howard

Science, by which I mean also the technologies that flow from and inform it, is a form of social practice. It has evolved distinct institutions and a distinct sociology. It has accumulated and refined an array of formal techniques and instrumental means for knowledge production and certification. That it is also socially embedded, affected by and affecting every aspect of human life, is a trivial truth. The only question, albeit a large one, is, “How?” By what means, in which respects, and to what extent does science change our world and does the world change science? Some changes are obvious, as with the accelerating transformation of material culture effected by science, and changes in our understanding of self, the worlds our selves inhabit, their relation to one another, and the relation of both to nature and spirit. Other changes, and the manner of the change, are less so, as with the content and modes of production of scientific knowledge. Does it make a difference when science is done in a democracy? Does it make a difference when research is funded by the private sector rather than the state? Is science neutral, objective, and above the fray? Understanding how science affects and is affected by its surround is necessary if we wish to effect intelligent control over science and the part of human life that it touches, which is well nigh the whole of the human experience.

Philosophers of science are supposed to understand the structure, methods, and interpretation of science. But apart from modest progress on the formal side and a few helpful insights in the foundations of some individual sciences, philosophy’s record from the early twentieth century has been, until recently, rather spotty. In the main, when it comes to all but the more formal questions, philosophers of science have handed the task to their colleagues in history and sociology. History has given good service. Fans of technical history of science have been a tad disappointed in recent decades, but otherwise the history of science is a thriving field, with an expanding scope and a healthy plurality of approaches. Historians have taught us much about how science works and how it lives in its many contexts. But history remains, for the most part, a descriptive, narrative, or hermeneutic enterprise, deliberately eschewing critique and normativity. We may argue about how good a job the sociologists have done since the advent of the “strong programme” (“strong” = context shapes the content of science, not just its aims and institutions) some thirty plus years ago. Instead let’s thank them for forcing everyone to take the question of context seriously and for unsettling our lazy assumptions about the distinctive superiority of science among other social practices, its objectivity, and its social detachment. Subversion of prejudice is a form of critique, but sociology of science, like history of science, remains a largely descriptive, not critical enterprise.

Which brings us back to the challenge for the philosophers of science, my native tribe. Until recently, we have struggled to say much that is helpful about the embedding of science in society because we were in thrall to an ideology of value neutrality and the social detachment of science, wrongly thinking these to be necessary conditions for scientific objectivity. We used to credit logical positivism for this deep insight into objectivity, citing Hans Reichenbach’s distinction between the “context of justification” and the “context of discovery,” the latter being the dustbin into which history, sociology, and all interesting questions about context were cast, the former being the sandbox in which elite philosophers of science alone were allowed to play. Now we regard such dogma not as insight, but as blindness, and the newer historiography explains it as not just the conclusion of a bad argument but as the discipline’s defensive response to political persecution before (Hitler) and after (Joe McCarthy) the Second World War. Weak inductive evidence for the new historiography is afforded by the fact that, curiously, philosophers of science began to overcome their fear of values talk at about the same time that the Berlin Wall came down.

Today, one is happy to report, everyone is eager to get on the science and values bandwagon. There are conferences, books, anthologies, special issues of journals, albeit, as yet, no prizes. Philosophers of science are eager to learn about science policy. They now invite historians and sociologists to their meetings, and they try hard to be respectful, even as they struggle to figure out exactly how empirical evidence bearing on the actual practice of science is supposed to inform their philosophers’ questions. But that is precisely the problem, for what the philosophy of science still lacks are tools for theorizing the manner and consequences of the social embedding of science.

This is not for want of trying. Our feminist colleagues have been at it for thirty or more years. They have taught us a lot about episodes where science has been more deeply affected by its social embedding – read now its “gendering” – than many of us had or wanted to think. Among them there is a proliferation of analytical frameworks, from feminist empiricism to standpoint theory, difference feminism, and postmodernist feminism, each of which has taught us new ways to query once-settled pieties. Phil Kitcher is probably the most prominent philosopher of science otherwise to have taken the plunge, borrowing ideas from John Rawls to think about the place of science in democracy while holding onto what some think are rather shopworn notions of truth and realism (perhaps also a shopworn notion of democracy). Most interesting to me are those projects that mine the past for fresh insights on science, values, and social embedding, as with Heather Douglas’s re-reading of Richard Rudner, Tom Uebel’s rehabilitation of Otto Neurath, and Matt Brown’s resuscitation of John Dewey (more on all of which anon). New theoretical ideas emerge, thus, from attentive history that is more than mere antiquarianism and rational reconstruction.

Lots of commotion. Still we lack, by my lights, the kinds of theoretical tools needed to answer the “How?” question posed above: “By what means, in which respects, and to what extent does science change our world and does the world change science?” We need a theory of science that integrates the history, philosophy, anthropology, psychology, sociology, and even biology of science and scientists into a comprehensive project. In its critical and reformist aspects this theory of science must learn to be normative not just after the fashion of the inductive logician but also in the way of the political theorist and the moral theorist. Promotion of the common good should be the guiding principle. And it would be fun if it could even be a bit utopian.

The next post will set us on our way with a more specific list of necessary conditions for the possibility of such a theory of science.

Physics as Theodicy

Don Howard

A few years ago I had the good fortune to participate in a great conference at the Vatican Observatory on “Scientific Perspectives on the Problem of Natural Evil.” The conference was organized by the Center for Theology and the Natural Sciences, at Berkeley, and co-convened by CTNS and the Vatican Observatory. The Observatory shares Castel Gandolfo with the Papal summer residence, and Pope Benedict was in residence during the entirety of the conference. Many fond memories, among them a state visit by Queen Noor of Jordan, and our being serenaded by Benedict one afternoon as he practiced a Beethoven sonata on the piano. But the really cool thing was being saluted by members of the Swiss Guard every morning as we entered and every evening as we left, snapping to attention with the greeting, “Buongiorno” or “Buonasera.”

Nancey Murphy, Robert John Russell, and William Stoeger, S.J., eds. Physics and Cosmology: Scientific Perspectives on Natural Evil. Vatican City: Vatican Observatory, 2007.

There were many fine presentations by a first-rate group of scholars. I measure the quality of a conference by how much I learn that is new and interesting to me. By those metrics, this meeting is among the very best I’ve ever attended. Take a look at the contents of the published volume, which came to fruition largely through the efforts of Nancey Murphy and her colleagues at Fuller Theological Seminary in Pasadena, and was co-published by CTNS and the Vatican Observatory:

Physics and Cosmology: Scientific Perspectives on the Problem of Natural Evil

My own presentation was entitled “Physics as Theodicy.” A “theodicy” is a solution to the problem of natural evil. Traditionally we distinguish “natural evil” from “moral evil.” Natural evil is suffering that is a consequence of the operation of natural law. Death and destruction wrought by earthquakes, hurricanes, and disease are classic examples. Moral evil concerns suffering in consequence of the moral failings of human beings. Murder, slavery, and too many other sins afford examples. The classic problem of natural evil, famously discussed by Leibniz in his Théodicée (1710) and Voltaire in Candide (1759), is how there can be natural evil in a world governed by an omniscient, omnipotent, and benevolent God. Leibniz argued that ours is the best of all possible worlds, a view echoed by Alexander Pope in his “Essay on Man” (1734) and viciously mocked by Voltaire.

The traditional problem of evil interests me less than the question of where and how we draw the line between natural and moral evil. The main point of my talk was a simple one: With the progress of science, physics leading the way, we learn more about the laws of nature and so acquire an ever greater capacity to prevent or ameliorate the suffering caused by disease or natural catastrophes. We still cannot prevent an earthquake or a tsunami, but we can predict them, and we can build office towers and bridges that can survive an earthquake, sea walls that can control storm surges, and warning systems that can give people time to take refuge. But do we choose to exercise this power? If we could have prevented a catastrophe or lessened the suffering, but chose not to do so, then the evil is moral, not natural. Thus, with the progress of science, the boundary between natural and moral evil shifts. As science teaches us more about our world, we must accept the moral responsibility for making the world a better place. Even without global climate change, Hurricane Katrina would have been a terrible storm. But at the very least, we could have built stronger dikes. We could not have prevented the earthquake that caused the horrific Indian Ocean tsunami of 2004, but we could have put in place a tsunami warning system that would have saved many tens of thousands of lives. Those deaths are our fault, not nature’s or God’s.

Want to know more? You can download the full paper here:

“Physics as Theodicy”
(Made available here with the permission of the Vatican Observatory.)

And you can buy the book through the University of Notre Dame Press:

http://undpress.nd.edu/book/P01260

How to Talk about Science to the Public – 1. Don’t Insult the Intelligence of Your Audience

Don Howard

About ten years ago I wrote the Einstein article for the new edition of a major encyclopedia. It shall remain unnamed, but you would most definitely recognize it. I enjoyed the challenge and am proud of the product, both because such writing is important and because it is hard work. One must be engaging, intelligible, and concise. Academics must resist the urge to splurge on words.

Writing this article was, however, harder than it should have been, because my editor kept repeating the old journalist’s mantra about writing to the level of the typical fourteen-year-old. We fought. I resisted. He won. He demanded plainer language. He insisted on tediously pedantic explanations of what I thought the reader would see as simple, even if slightly technical concepts. He struck whole paragraphs that I thought were wonderful and he thought were too arcane. Time and again I said that the real fourteen-year-olds I knew could easily understand points that he thought beyond the reach of his imaginary, teen reader. I don’t think that I made a friend. I taunted him by noting that the reader confused about concept X could simply look up the article on X elsewhere in the same encyclopedia. Impolitic, yes, but I couldn’t stop myself. Naughty Don.

A few years later I was asked to do a series of lectures on Einstein for the company then called The Teaching Company and now re-branded as The Great Courses. This was a totally different and far more enjoyable experience, largely because the smart folks in charge at The Great Courses start with a very different assumption about the audience. They asked me to imagine an audience of college-educated professionals, people who loved their student experiences and were hungry for more. Of course, one still had to adjust one’s writing to the level and background of the audience, as one must do with any class one teaches. That is a trivial truth. But what I knew about those kinds of students in my classes was that they wanted to be pushed and challenged. They wanted to be taught new things. They didn’t run in fear of difficult concepts and ideas. Like athletes striving for a personal best, they enjoyed the hard work. The muscles ache, the brain needs a rest, but the achievement makes it worthwhile. Most important is that such students appreciate one’s flattering them with the assumption that they have brains, that they are smart, well-educated, and able to rise to the moment.

I am really proud of the lectures: Albert Einstein: Physicist, Philosopher, Humanitarian. The uniformly positive feedback confirms the point that the intelligent student, reader, and listener can and wants to understand more than journalistic mythology asserts.

Don Howard. Albert Einstein: Physicist, Philosopher, Humanitarian. The Great Courses.

My old encyclopedia editor friend will object, I’m sure: “What about all of the others, the ones who didn’t have a college education or weren’t even ‘B+’ students?” Well, yes indeed, what of them? They are a numerous lot. And if one has the crime “beat” at the local newspaper or writes the “Friends and Neighbors” column, then, yes, ok, I suppose that one must write down to the level of a poorly educated fourteen-year-old.

But is that the audience for those of us who write about science for a general public? I hope not. Is it elitist of me to say that I don’t want “Joe the Plumber” making science policy for the 21st century?

I like to think of the main target audience for good science writing as the educated, scientific laity or those (such as smart high school students) who are soon to become part of it. These are the neighbors and fellow citizens who must be involved in the national and global conversation about science and technology for the future. These are the people whose voices should count in debates about climate change, biotechnology, space exploration, and cyberconflict. These are the people for whom we must learn to write and speak.

They deserve our respect.

(Subsequent posts in this series will address more specific challenges in writing about science and technology for the general public.)

Where’s the Intelligence in Intelligent Design?

Don Howard

(Originally published in 2008 as part of a Reilly Center Reports issue on “Evolution and Intelligent Design” that contains excellent pieces by George Coyne, S.J., the former director of the Vatican Observatory, William E. Carroll, the Thomas Aquinas Fellow in Science and Religion at Blackfriars Hall, Oxford, Noah Efron, from Bar Ilan University, Israel, and Reilly Center Fellows Matthew Ashley, Christopher Hamlin, Gerald McKenny, and Phillip Sloan.)

Intelligent design is an idea with a history going back at least to the late seventeenth and early eighteenth centuries, when Deists, especially, were moved by the seeming clockwork precision of the universe as described by Newton to infer the existence of a clockmaker God with an intelligence equal to the cosmic task of creation and design. Just as old are critical philosophical commentaries on design arguments, the most famous from the eighteenth century being David Hume’s mocking attack in his posthumously published Dialogues Concerning Natural Religion (London, 1779).

Two features of design arguments impressed Hume. The first was that, since design arguments are arguments by analogy, they are, like all analogical reasoning, inductive arguments. That means that, at best, they confer on their conclusions only a high probability, not the necessity that one finds in the rigorous deductive proofs of Euclid’s geometry. Does induction suffice as a demonstration of God’s existence through His works? The second feature that impressed Hume was the arbitrary, though persuasive, choice of analogies upon which design arguments are grounded. See the universe as being like a watch, and the inference to an intelligent designer God is inviting. But why that analogy rather than another? In the Dialogues, Hume’s spokesperson, Philo, replies to Cleanthes’ defense of the design argument by suggesting that one could just as well focus on features of the universe that make it like an animal body or a vegetable, from which one could then infer that, like an organism, the universe must be the product of generation or vegetation, rather than reason and design. Absent a prior and independent commitment to the existence of a designer God, one could thus, with equal reason, infer that the universe was the product of sexual union between a cosmic mother and father or of the kind of budding whereby various plants, yeasts, or viruses reproduce.

Other questions loom larger when considering the kinds of design arguments popular today. Consider first that while design inferences are perfectly sensible, indeed essential, in various mundane settings, as in ordinary detective work, their employment in a cosmological setting or in the context of discussions of human origins is a riskier business. The main reason is that, in these extramundane settings, the major premises of a design argument are drawn not from unvarnished observation of the world, as when Holmes noted the dog that did not bark, but from what are typically theoretically sophisticated scientific descriptions of the world, as in the cosmological fine-tuning argument.

Why is this problematic? It’s because of the contingency of those theoretical accounts. According to what philosophers of science call the “pessimistic meta-induction,” any current theory is likely to turn out, in the future, to be false or at least seriously limited in scope. There is no reason to think that inflationary cosmology will be any exception to this rule, in spite of impressive and growing evidence in its favor. I’m old enough to remember a day when it had not occurred to anyone to think of the universe as having its origins in a cosmic explosion followed by expansion. When I was young, the steady-state model was the accepted wisdom. For two hundred and fifty years, Newtonian mechanics could claim evidential warrant just as impressive as that now attaching to the inflationary model. But we now know that Newton was wrong. We don’t know, now, how inflationary cosmology will turn out to be wrong or of limited scope, but that it will seems to be the lesson of history. One might well be puzzled by a theology that dares to rest conclusions about fundamental aspects of religious doctrine on such a fragile, contingent, scientific foundation.

Even were it not for the contingent character of our theories, another question arises. If one is to take the major premises of a design argument from our current best science, is it not incumbent upon us to accept the whole of what that science tells us about such things as the place of intelligent human life in the cosmos? It is surely a striking fact about our current best cosmological models, if it is a fact about them, that intelligent life would have been impossible had the values of various cosmological parameters differed from their current values by even a few parts in a thousand. But some of those same cosmological models also imply that the universe will develop in such a way as to become, in the future, radically inhospitable to intelligent human life. If the fine-tuning needed to make our corner of the universe home to intelligent life now is part of a cosmic design, then so too are all other aspects of the cosmology in question. Was it, then, the designer’s intention to create a universe in which intelligent human life could appear for just the briefest tick of the cosmic clock, only to be followed by cosmic aeons of emptiness? From such a more comprehensive point of view, the emergence of intelligent human life could hardly appear to have been the main goal of the enterprise.

Design arguments in the context of theories of human origins raise a similar question. First, as an aside, note the irony in the fact that the Darwinian story of human origins, a story introduced in part precisely to show how random variation with selection can imitate design, is now itself invoked as a premise in a design argument. It is no longer the human species, as a product of evolution, that is held up as evidence of design, but the very evolutionary process that produced it. The natural process whose discovery Darwin thought obviated the need for assumptions of design is now said by the proponents of intelligent design itself to require the assumption of design.

But, as in the cosmological context, so too in the context of evolutionary stories of human origins, one has to buy all of the science, not just some of it. Evolution has worked so as to produce intelligent human life. But the Darwinian story tells us that species fitness is always relative to an environment. When the environment changes, species adapted to it, if they cannot accommodate the changes, either evolve into new species or go extinct. From the Darwinian point of view, environmental change is largely a matter of external contingencies, not something the theory itself predicts. Darwinian evolution does not predict mass extinctions consequent upon a giant asteroid’s striking the earth at the end of the Cretaceous period, because it knows nothing of solar system dynamics. But it does predict that, if environmental change is drastic enough, extinction is possible or even likely. So, what if the environment to which the human species is adapted changes drastically, say as a result of another asteroid impact, human-induced global climate change, or all-out nuclear war? Poof! No more human beings. The point, again, is that, if one accepts the whole package, then in this context too it no longer appears as though the emergence of intelligent human life was a designer’s main goal.

I can see only one way around objections to design arguments based on the contingency of the theories providing the major premises. It would be to argue that, though theories come and theories go, any theoretical description of the universe that can claim the status of science must describe the universe in terms of some principles of order. What specific order is ascribed to nature might change as theories change, but order will be part of any scientific description of the universe, and so the conclusion still holds that from the order thereby described, design should be inferred. But am I alone in thinking that this maneuver trivializes the design argument, making it true by definition? Moreover, one would think that the specifics of the order described could make a difference to the conclusion one draws about causes. As Hume pointed out, if the order one discerns is like that of an artificial contrivance like a watch, then an intelligent designer as cause is suggested. But if the order is like that found in the plant and animal worlds, then sexual congress or vegetative reproduction is the cause suggested. And today one might add that, if the order described is like that of crystalline structure, then self-assembly in accord with fundamental structural principles (bonding angles, etc.) suggests itself as the cause.

The believer may rightly be enjoined to seek and find in nature the traces of a divine intelligence’s creative activity. If there is a designer God, then at least the main features of his blueprint should be inferable from the nature built according to that plan. By his fruits ye shall know him. But design arguments wrongly turn the arrow of implication in the opposite direction, holding that, if there is order in nature, then a designer God must be responsible for that order. Such might well be the origin of order, but it is a plain fact that order arises in other ways too. Some order is the product of other order, as in crystal formation. Some order is biological in origin, as with the magnetite in Mars rocks that some think was produced by magnetotactic bacteria. And some order is, like it or not, the product of chance, as when, on average, one roll of the dice in every thirty-six yields a perfect pair of snake eyes.
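For readers who want to check the closing bit of arithmetic, here is a minimal Python sketch (my illustration, not part of the original essay) that simply enumerates the thirty-six equally likely outcomes of two fair dice and confirms that exactly one of them, the pair of ones, is snake eyes:

```python
from fractions import Fraction

# Enumerate all 36 equally likely outcomes of rolling two fair dice.
outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]

# "Snake eyes" is the single outcome (1, 1).
snake_eyes = sum(1 for roll in outcomes if roll == (1, 1))

# Probability as an exact fraction: 1 favorable outcome out of 36.
prob = Fraction(snake_eyes, len(outcomes))
print(prob)  # 1/36
```

Exact enumeration, rather than simulation, is used here because the sample space is tiny; the point is only that the "one roll in thirty-six, on average" figure is the straightforward product of two independent one-in-six chances.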