On July 20, 2019, we will celebrate the fiftieth anniversary of the Apollo 11 moon landing. People of my generation thought that the Apollo project was just the first step in what would be steady progress toward the human exploration of the solar system and beyond. In that expectation, we were cruelly disappointed. Partisan politics, a collective failure of will and ambition, and, yes, US victory in the Cold War all played a part, because the US space program of the 1960s through the 1980s was more about perfecting technologies for defeating the Soviet Union than about pursuing knowledge for its own sake. Still, my generation and I saw the abandonment of the dream of human space exploration as a great betrayal. And, yes, it was a betrayal, not just of the dreams of my science-loving sisters and brothers of that era, but of all of humankind.
Fifty years have passed and, happily, we have once again recommitted to putting humans on the Moon, Mars, and beyond. It is important that we detach ourselves from the momentary, political motivations of figures like the current NASA administrator and Trump appointee, Jim Bridenstine, and the commercial motivations of entrepreneurs like Elon Musk and Jeff Bezos, however much one appreciates the impetus such figures have provided for a renewed focus on human space exploration. We should concentrate, instead, on the more fundamental reasons for putting humans in space.
If it were just a matter of gathering new knowledge of distant planets and moons, human space travel would not be needed. Robots can do that as well as or better than humans, at far lower cost, and with zero risk of harm to human astronauts. But one of the most compelling reasons for human space exploration is precisely that it is difficult, dangerous, and expensive. This was the reason so eloquently voiced by John Kennedy at Rice University in Houston in September of 1962:
“We choose to go to the Moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills.”
We embrace the big challenges because, in doing so, we make ourselves better, smarter, stronger, and more courageous, and our doing that is worth the cost and the risk. Of course there are other difficult challenges. A colleague said to me recently that the internet, robotics, and AI revolutions are this era’s Apollo program. We have certainly made ourselves smarter in this way. What is missing in these initiatives, however, is a moral challenge. There are plenty of ethical questions posed by these new, technical achievements. But mastering these merely technical and intellectual challenges stands little chance of promoting the moral growth of individuals and communities. In this respect, a more appropriate analogue of the challenge of human space exploration would be the challenge of the climate emergency. Mastering that will make us smarter and it will require both individual and collective courage along with other virtues, such as compassion, mercy, self-sacrifice, and generosity. It is also like the challenge of putting humans in deep space in that it is a necessary response to an existential threat to humankind.
That last point brings us to the most fundamental, moral argument for redoubling our efforts to establish a human presence elsewhere than on Earth. The ultimate problem is that the Earth cannot be a permanent home for the human species. It’s only a matter of time. That time will be short, perhaps no more than a few hundred years, if we don’t put a halt to climate change and remediate the harm done by global warming. But if we do solve that problem, it’s still only a matter of time. The time we have left on Earth might be measured on a scale of centuries or millennia were the planet to be rendered uninhabitable by another meteor impact with a magnitude like that which caused the Cretaceous extinction. Should we escape that fate by luck or clever preventive measures, it’s still the case that, on a yet longer time scale, Earth will become uninhabitable for reasons of simple physics. It might be only a few hundred million years before solar evolution, with steadily increasing solar radiation, does to the planet what anthropogenic climate change is doing now.
Clever humans can and probably will find ways to adapt to living on a planet much warmer than the Earth is now. But, in the end, it will all be for naught if we fail to find a way to establish new homes for human civilization elsewhere in the universe.
Some will say that this argument is an instance of speciesism. It isn’t, for we can choose to take along as many non-human species as we wish, and some, of course, will be necessary for our own survival.
Some will say that the heat death of Earth due to solar evolution is so far in the future as to be of no practical concern for a long, long time to come and so does not require our mastering off-Earth human habitation now. That’s true. There is no immediate necessity. But why wait? It’s a really hard problem, so the sooner we get started, the better.
Some will say that a concerted effort to establish a human presence off-Earth will distract attention and divert resources from the more urgent challenge of global warming. That could happen, if we were stupid. But it’s not a zero-sum game. We can tackle both problems at the same time, just as we first went to the Moon while fighting an illegal and immoral war in Vietnam and, perhaps unwisely, building a massive nuclear arsenal and delivery capability. There will also be spill-over effects between the two projects, as with the first really effective solar cells, which were developed by Bell Labs for use in outer space, not for green electricity generation on Earth.
If we can do it, if we can find a way to transplant human life and civilization to new homes in space, then our descendants millions of years from now will look back to us with the same gratitude that we should feel for our hominid ancestors who first mastered the use of fire some two million years ago. The human project already has a history measured on a scale of millions of years. We need to think about the future of the human project on a similar scale, and we must take seriously our moral obligations to those who will come after us.
Elon Musk is wrong. In a speech to the 2017 annual meeting of the National Governors Association, Musk warned that artificial intelligence (AI) constitutes an “existential risk” to humankind. Musk has sounded this alarm before, as have other figures, including Stephen Hawking and Bill Gates. The argument is rarely spelled out in detail, but the basic idea is that, as futurist Ray Kurzweil has long been predicting, there will soon come a time, the “singularity,” when AI will outstrip human intelligence, and that, when that happens, super-smart AI will decide that humans are to be treated as pets, or that humans are expendable, or that, in the worst case scenario, humans represent a threat that must be exterminated. Musk argues that, given the magnitude of the risk, we must begin now to regulate the development of AI in ways that will guarantee human control.
That we should be thoughtful and prudent about how we develop and deploy AI is not controversial. But, ironically, Musk’s alarmist pronouncements make that task harder rather than easier. Let me explain why.
Start with this. Technology forecasting is a wicked hard problem. Our track record in predicting how technology will change human life is wretched. No one foresaw, in 1985, how the internet would radically transform our culture, our economy, or our political system. A famous 1937 report on “Technology Trends and National Policy,” commissioned by President Roosevelt, failed to anticipate nuclear energy, radar, antibiotics, jet aircraft, rocketry, space exploration, computers, microelectronics, and genetic engineering, even though the scientific and technical bases for nearly all of these developments were already in place in 1937. Kurzweil’s predictions about a coming AI singularity are based on the highly questionable assumption that the exponential growth with time in the density of transistors that could be crammed into an integrated circuit would be replicated in all other areas of computing, robotics, and AI. Even granting that memory capacity and computing speed have followed somewhat similar growth curves, can we extrapolate from those trends to the growing sophistication of AI? Is there even a measure of “sophistication” that we can plot on a graph?
If, instead, we look for guidance to the way AI is currently being developed by folks like Musk himself, or rather his company, Tesla, what stands out is that, while rapid progress is being made, it is all in the form of domain-specific AI. Tesla and Waymo are engineering ever better AI to control self-driving cars. Google is developing AI for machine translation. IBM is designing AI for medical diagnostics. And Google’s DeepMind has built AI that can beat the world’s best players at the game of Go. No one is trying to build an all-encompassing, universal AI, however much the fear mongers fantasize about such a future. Why not? For the simple reason that domain-specific AI is what the market demands and what the prudent investment of research dollars dictates. Isn’t it more reasonable to extrapolate this trendline? Of course the AI will get better and better, but I see no reason to think that future AI won’t also be tailored to specific tasks. There is no efficiency or cost advantage to developing universal AI. God or evolution might have engineered human intelligence in a general form. But the fact that most humans are really bad at performing most tasks – like driving cars, or playing the piano, or slam-dunking a basketball – proves that specialized intelligence is almost always the better way to go.
Finally, an obsession with a fictional AI apocalypse frustrates rational thinking about our technological future, for two reasons. First, if we assign infinite negative value – existential risk, the extermination of all human life – to an imagined future, however slight the probability of that future, then that infinite risk swamps all other considerations in a rational assessment of risk and benefit. No matter what the promised benefits if things turn out well, we should not move forward if the risk, however unlikely, is the total annihilation of humankind. But that’s an absurd way to think about the future, because every innovation carries with it a tiny, tiny risk of some as yet unimagined cataclysmic consequences. [I have explored the errors of this kind of reasoning about apocalypse in another blog post on risk analysis and in an editorial on the influenza virus gain-of-function debate.]
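The structure of that reasoning can be displayed as a schematic expected-utility calculation (the symbols here are mine, introduced only to make the logic visible, not anything Musk or Kurzweil has written down):

$$
\mathbb{E}[U] \;=\; p \cdot U_{\text{catastrophe}} \;+\; (1 - p) \cdot U_{\text{benefit}}
$$

If $U_{\text{catastrophe}}$ is set to negative infinity (the extermination of all human life), then $\mathbb{E}[U] = -\infty$ for every $p > 0$, however tiny $p$ may be and however large $U_{\text{benefit}}$ may be. The verdict “do not proceed” is thus built into the assignment of infinite negative value; it is not derived from any evidence about AI.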
Second, as we obsess about an AI apocalypse, we are distracted from the much more important, near-term, ethical and policy challenges of more sophisticated AI, whether those be in the domain of autonomous weapons, predictive policing, technological unemployment, or intrusive and pervasive technologies of surveillance. Our intelligence and moral energies are far better spent in grappling with such real, present-day problems with AI.
One is reminded of the fable of Chicken Little, who, when an acorn falls on his head, immediately concludes that the sky is falling. Chicken Little persuades Henny Penny and Ducky Lucky that apocalypse is near. Along comes Foxey Loxey, who offers them all shelter from impending doom in his lair. And then he eats them all alive.
This essay is dedicated to two extraordinary individuals whose leadership made possible the growth of institutions fostering interdisciplinarity, institutions crucial to my career:
Frederick B. Dutton (1906-1995), chemist, science educator, and founding dean of Lyman Briggs College at Michigan State University, 1967. John D. (“Jack”) Reilly (1942-2014), engineer, businessman, and founding donor of the Reilly Center for Science, Technology, and Values at the University of Notre Dame, 1985.
From the beginning of my life in the academy, back in the 1960s, I have heard again, and again, and again the complaint that the modern university and other institutions of research and intellection erect too many barriers to inter-, trans-, and cross-disciplinary interaction. Specialization and fragmentation are portrayed as the cause of a great cultural crisis. It is said that they encourage the development of science and technology bereft of value and of philosophy and theology ignorant of the way the world really works. We are warned that they have engendered a deep spiritual crisis of modernity as the human soul, itself, is fractured. It is argued that breaching disciplinary walls is necessary for solving many of the problems that humankind faces, like anthropogenic climate change, the threat of artificial intelligence run amok, and endemic poverty and disease in less developed parts of the world, but that the “silo” structure of the modern academy stands in the way.
On the other hand, from the beginning of my life in the academy, I have been deeply puzzled by all of this wailing and gnashing of teeth. Those in this chorus of lament seem to inhabit an intellectual and institutional landscape remarkably different from the one in which I learned and now live. Of course I’ve encountered obstacles to collaboration across boundaries, but the world as I see it is one in which those obstacles are usually little more than annoyances, impediments easily overcome with a bit of effort. The world as I see it is one in which transgressing boundaries is commonplace and often richly rewarded. I’m left wondering how my experience can have been so different from that of the complainers.
Let me begin by acknowledging that mine might be an unusual perspective. My home discipline of the history and philosophy of science is radically interdisciplinary in construction and function, and has been so since its inception more than one hundred and fifty years ago. My still more local niches of the philosophical foundations of physics and technology ethics are, likewise, radically interdisciplinary, and have been so from the very beginning. My first degree was in physics, pursued within the designedly interdisciplinary, undergraduate, residential science studies college, Lyman Briggs College, at Michigan State University. My postgraduate degrees are both in philosophy, from a philosophy department at Boston University where, in the 1970s, advanced course work in the sciences was strongly encouraged and where three of my philosophy faculty had cross appointments in physics, one of them, Robert Cohen, having chaired both departments. I live today between the worlds of physics and philosophy. My tenure is in philosophy, but I am a Fellow of the American Physical Society, where I have held, and continue to hold, important leadership responsibilities. And I have directed, at Notre Dame, both the History and Philosophy of Science Graduate Program and the Reilly Center for Science, Technology, and Values, the name of which bespeaks the interdisciplinary ambitions with which it was built thirty years ago and which it has achieved, many times over.
Well and good, you say, but surely yours is an exceptional case. To which I respond: No, it is not. Remember that I am, among other things, a historian of science. When I survey the history of the map of the disciplines from the founding of the modern university in the nineteenth century to the present, what I see is not a static but a highly dynamic landscape, with lots of seismic and tectonic activity. Disciplines come and disciplines go. Some disciplines bifurcate or trifurcate. Philosophy, psychology, and pedagogy were commonly one department in the late-nineteenth century. Some disciplines merge or birth hybrid offspring. The great revolution in the biosciences in the twentieth century came about through the creation of wholly new fields, like biophysics, biochemistry, and molecular biology. Especially at the allegedly impermeable boundaries of the disciplines, lots of smart, creative, entrepreneurial types crafted and today still craft exciting, new, intellectual formations, such as digital humanities, network analysis, bioinformatics, and big data analytics, the last of which is reshaping everything from genomics to national security and medical discovery. Just last fall, I learned of a new field of “biomechatronics” – a synthesis of biomechanics, robotics, prosthetics, and artificial intelligence – with its own new center at MIT. Here at my own university, I have watched a civil engineering department become a Department of Civil and Environmental Engineering and Earth Science. I have witnessed the creation of remarkable new, purposely interdisciplinary centers, such as the Wireless Institute, the Environmental Change Initiative, the Energy Center, the Center for Nano Science and Technology, and the Advanced Diagnostics and Therapeutics Initiative. Nor is this a uniquely Notre Dame phenomenon, some special fruit of our being a Catholic institution. No, it is the norm at all of the better institutions. Thus, at the University of South Carolina, two of my philosophy friends have served as assistant director of USC’s world-class Nano Center. And, more than a few years ago, Arizona State University simply blew up the old departmental structure, replacing it with topically focused “Schools” of this and that, which explains how a sociologist can be the director of ASU’s Center for Nanotechnology.
Within each of these new formations a new disciplinarity emerges, of course. But that is right and good, for the word, “discipline,” denotes both an institutional structure and standards of rigor and quality within a field. It’s a good thing that we don’t give the amateurs a vote. There are better and worse ways of knowledge making – we philosophers of science have spent decades articulating that point. While most opinions deserve our respect, and while “outsiders” can sometimes reshape a whole field (think of Luis Alvarez, iridium, and the Cretaceous extinction), that is the exception, not the norm. Those willing to do the hard work of mastering techniques and knowledge bases should be and are welcome, as when my Doktorvater, Abner Shimony, added to his Yale philosophy Ph.D. a Princeton physics Ph.D. under the direction of Eugene Wigner and went on to create the exciting and hugely important field of experimental tests of Bell’s theorem, straddling the division between experimental physics and philosophy.
But right there is the key insight. Hard work. It takes hard work. I know a theologian who has co-authored world-class experimental physics papers, and a student of Schrödinger’s who went on to be one of the world’s most important voices on science and theology. What they had in common was that they devoted years to mastering the other relevant discipline before daring to think and work on both sides of the fence. As it happens, I also know some world-famous physicists who have caused only embarrassment when they tried to refashion themselves as theologians, and world famous theologians who caused equal embarrassment when they pretended to find in contemporary physics the explanation of theological mysteries. And the problem in those cases was, precisely, that the individuals in question didn’t do the hard work to master the other field.
Years ago I was fond of joking that the call for interdisciplinarity was really just a plea to be allowed to do badly in two fields what one perhaps couldn’t even do well in one. That might be a slightly uncharitable way to put the point, because we rightly celebrate interests that stretch beyond one’s home domain and we rightly encourage dialogue of all kinds. Moreover, we rightly strive to create more flexible and accommodating administrative structures, as with the Arizona State experiment. But the real problem of interdisciplinarity is, in most cases, that of a lack of effort or of talent, a failure to do what needs to be done to earn the respect of one’s colleagues in other fields, respect born out of study and demonstrated achievement. I’m sorry to be so harsh, but too many of the complainers are just lazy dilettantes. Hard working, smart folk see barriers as just bumps in the road on the way to the construction of richly interdisciplinary research careers, educational programs, professional associations, and whatever else is needed to get the job done. Confronted by a befuddled dean or a reluctant provost, they don’t stop, they accelerate.
History teaches us another lesson. It teaches us that the problems themselves always play the leading role in disciplinary change. Many of the most interesting problems grow up at the interfaces between different fields. Thus, as I explain to my students, the quantum revolution had its start at the end of the nineteenth century, when theoretical physicists began to pay attention to exciting new work on precision measurements in industrial labs. It was the engineers and the materials scientists whose work first alerted the theorists to the problem of anomalous specific heats and to the curious features of the black-body radiation spectrum. In Germany, in 1887, the government created the Physikalisch-Technische Reichsanstalt [Imperial Physical-Technical Institute] specifically as a space in which such collaborations between industrial and academic scientists and engineers could flourish. That was a very smart move. And it teaches us that nimble and flexible administrative structures are needed in order to make it possible for the problems to play the leading role. “Aha!” say the whiners, “that’s just the point. University administrations are inflexible.” Well, if that’s so, then please explain how it’s possible that, ever since the birth of the modern university, all of the wonderful experiments in boundary busting adduced in this short essay (and many more besides) could have occurred. They occurred when university presidents, agency directors, and program managers rightly said to people proposing new centers and labs, “convince me,” and then the champions of the new did the hard work to do just that.
Adapted from remarks delivered at the conference “Transcending Orthodoxies: Re-examining Academic Freedom in Religiously-Affiliated Colleges and Universities,” University of Notre Dame, October 29-November 1, 2015.
Like it or not, the global economy still depends on a large, steady supply of oil, natural gas, and refined petroleum products. That must change if we are to solve the problem of carbon dioxide and methane emissions and associated, global climate change. But the nations of the world have yet to evince either the political will or the technical capability to shift us to a totally green energy economy in the near future. All plausible scenarios still leave us dependent upon fossil fuels for decades to come. That being so, demand for fossil fuels, especially oil and natural gas, will remain strong for the foreseeable future. Which brings us to the question of the Keystone pipeline.
Start with some facts. First, the Keystone pipeline already exists and is moving hundreds of thousands of barrels of oil per day from the oil sands area of Alberta and the Bakken region in North Dakota to refineries, storage facilities, and shipping terminals in Illinois, Nebraska, Oklahoma, and Louisiana. Second, the pipeline carries both heavy oil-sands crude and light crude, along with “dilbit,” which is bitumen diluted with lighter materials that are typical by-products of natural gas production. What generates all the excitement and controversy today is only the completion of phase IV of the pipeline, which would functionally replace the segment of the phase I line from Hardisty, Alberta to Steele City, Nebraska, and make possible also the addition to the pipeline of US-produced crude at a station in Baker, Montana, in the Bakken formation. So, the pipeline is built, and oil has been flowing for over four years. The question now is only whether to replace one segment with another, shorter, higher-capacity line.
There are perfectly reasonable questions about environmental risk in some especially environmentally sensitive areas through which the phase IV pipeline would pass, such as the Sand Hills region of Nebraska, and about impacts on some Native American and First Nations lands. These questions must be addressed in ways satisfactory to all relevant parties.
But many of the pipeline’s opponents object to its construction not only because of such local concerns. They object also on broader environmentalist grounds that boil down to opposition to a fossil fuels energy economy in the first place. That objection, however, misses the point that should be the focus of debate. More or less everyone agrees that a green energy economy is the long-term goal. But oil will be needed for decades to come. Oil will be extracted, shipped, refined, and marketed. The question is not whether we should do that. We have to do that. The question is how to do it in the most environmentally and socially responsible way. Which brings us back to the question of the Keystone pipeline.
There are two main technologies for overland shipping of large volumes of oil: pipelines and rail. So the only real question that should be up for debate concerns which is the safer, more environmentally and socially responsible way to move large volumes of oil from producing fields to refineries and on to markets. And the answer to that question is, indisputably, pipeline transport.
At present, the Bakken region is producing oil at a prodigious rate, about one million barrels per day, far outpacing our capacity to ship it with existing pipelines. The result is that Bakken oil is moving by rail. But US carriers lack the tanker car and engine capacity, as well as bandwidth on the rails, to move the oil. That means that hundreds of thousands of older and poorly designed tank cars (DOT 111 model) have been pressed into service, while engines and track have been diverted from delivering other essential goods, such as grain from the Great Plains, to delivering oil. The economic cost to farmers, food companies, and consumers is huge. We are all paying a hidden tax at the supermarket for our lack of critical oil transport infrastructure. (Federman 2014.)
But that economic cost is minor compared with the huge environmental risks and social impacts of transporting oil by rail in antiquated rolling stock over rail lines pushed well beyond designed capacity. Nor are the risks and impacts merely theoretical.
The single largest, transportation-related oil spill on land in the US and Canada over the past ten years was not the Enbridge pipeline spill in Michigan in July 2010 or the Little Buffalo spill in Alberta in April of 2011. No, it was this:
July 6, 2013, in Lac-Mégantic, Quebec. A 74-car trainload of Bakken crude exploded in the center of town. Nearly 5,000 metric tons of oil were spilled, forty-seven people were killed, and thirty buildings were destroyed. This is the price we pay for not being able to ship that oil by pipeline.
And there is more. On November 30, 2013, a train carrying 2.7 million gallons of Bakken crude derailed and exploded outside of Aliceville, Alabama. On December 30, 2013, an oil train collided with another train outside of Casselton, North Dakota, spilling more than 400,000 gallons of oil.
Then, on April 30 of this year, an oil train derailed and exploded in downtown Lynchburg, Virginia, spilling perhaps as much as 30,000 gallons of oil, some of it into the James River.
Fortunately no one was killed, and the damage was much less extensive than in the Lac-Mégantic derailment.
These are only the most serious of numerous recent oil train accidents. According to one estimate, in 2013 alone, oil train accidents led to the spilling of more than 1.15 million gallons of oil just in the United States. The spillage from pipeline accidents pales in comparison.
Two additional factors increase the risk from rail transport. The first concerns the Bakken crude that constitutes the bulk of what is now being shipped by rail: some are convinced that it is an uncommonly volatile mix of oil and lighter, hence more explosive and combustible, components like butane and propane. Some are calling for pre-processing to remove those components before shipping. But this would be less of a problem with pipeline shipment. (Dawson and Gold 2014.)
The other factor increasing the risk from rail transport is that rail lines tend to pass right through the hearts of densely populated urban areas, whereas oil pipelines are deliberately constructed so as to avoid urban areas to the greatest extent possible. For example, Norfolk Southern alone ships between 13 and 24 million gallons of North Dakota oil through the center of my home town, South Bend, Indiana, every week. That rail line passes two blocks away from my son’s high school. (Widener 2014.) And, yes, there is also the threat of terrorist attacks, a really worrisome prospect in city centers.
The point is simply this. We have to move oil from oil fields to refineries and distribution centers. Pipelines break, but our recent experience has shown that it is far more risky to move the oil by rail, which is the only alternative. And those risks extend not just to environmental consequences but to human suffering and economic loss.
Of course pipelines also fail, and when they do, the consequences can be quite serious. But it is instructive to examine some of the recent pipeline spills, such as the Alberta and Michigan accidents mentioned above. In most such cases, the problems go back to aging pipeline infrastructure combined with poor maintenance and monitoring. But the lesson from those episodes is not to abandon pipelines for rail transport. It is to replace aging pipelines with new ones, which is exactly what Enbridge did after the Michigan spill. (Enbridge 2014.)
The recent decline in oil prices could change the equation, squeezing the profit margin on crude from Alberta tar sands and the Bakken formation. The break-even point for Bakken crude is now estimated to be around $73/barrel. And some of the tar sands producers, especially those smaller firms extracting harder-to-produce oil, are already in trouble. But oil would have to fall well below $70/barrel and stay there for some time before any significant effect on production would be seen, and, as oil prices fall, energy from renewable sources like wind and solar will become less competitive, thus probably increasing demand for fossil fuels. Moreover, other producer nations, like Saudi Arabia, are far more heavily affected by oil price declines, so if there are to be production cutbacks, those are far more likely to occur with other sources of oil. It is, thus, hard to imagine economic circumstances that would lead to a halt or a significant decline in production from the sources served by the Keystone pipeline. (Randall 2014.)
So let’s summarize the argument. Oil will be an essential part of the global energy economy for decades to come. It will be extracted and shipped. Overland shipment is possible only by pipeline or rail. Rail transport of oil is far more risky from both an environmental and a social point of view. If, therefore, you are an environmentalist who also cares about human well-being, you will support the Keystone pipeline. It, like other such pipeline projects, is the only environmentally and socially responsible choice.
For decades, risk analysis has been the main tool guiding policy in many domains, from environment and public health to workplace and transportation safety, and even to nuclear weapons. One estimates the costs and benefits from various courses of action and their conceivable outcomes, this typically in the form of human suffering and well-being, though other goods, like tactical or strategic advantage, might dominate in some contexts. One also estimates the likelihoods of those outcomes. One multiplies probability times cost or benefit and then sums over all outcomes to produce an expected utility for each course of action. The course of action that maximizes benefit and minimizes harm is recommended. In its most general form, we call this “cost-benefit analysis.” We call it “risk analysis” when our concern is mainly with the down-side consequences, such as the risk of a core meltdown at a nuclear power facility.
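For readers who want the mechanics spelled out, here is a minimal sketch of that bookkeeping in Python; the outcomes, probabilities, and utility numbers are invented purely for illustration and are not estimates of anything real:

```python
# Minimal sketch of expected-utility / cost-benefit bookkeeping.
# Every outcome, probability, and utility value below is invented
# for illustration only; none of them estimates anything real.

def expected_utility(outcomes):
    """Sum probability * utility over all catalogued outcomes."""
    return sum(p * u for p, u in outcomes)

# Each course of action is a catalogue of (probability, utility) pairs,
# with negative utilities for harms and positive utilities for benefits.
courses_of_action = {
    "proceed": [
        (0.90, 50.0),     # likely benefit
        (0.09, -10.0),    # modest harm
        (0.01, -500.0),   # rare, serious harm
    ],
    "abstain": [
        (1.00, 0.0),      # status quo
    ],
}

for action, outcomes in courses_of_action.items():
    print(action, expected_utility(outcomes))

# The formal recommendation is simply whichever action scores highest.
# Everything contentious lives in the inputs: which outcomes get listed,
# what utilities they are assigned, and what probabilities they are given.
```

The sketch shows only where the numbers enter the calculation; the worries developed below are all about where those numbers come from.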
I have long been uneasy about the pseudo-rationalism of such analyses. An elaborate formal apparatus conveys the impression of rigor and theoretical sophistication, whereas the widely varying conclusions – one can “justify” almost any policy one chooses – suggest a high degree of subjectivity, if not outright agenda-driven bias. But my recent entanglement in the debate about the risks and benefits of gain-of-function (GOF) experiments involving pathogens with pandemic potential (PPP) moves me to think and say more about why I am troubled by the risk analysis paradigm (Casadevall, Howard, Imperiale 2014).
The essential point is suggested by my subtitle: “Garbage In, Garbage Out.” Let’s think about each piece of a cost-benefit analysis in turn. Start with cost and benefit in the form of human suffering and well-being. The question is: “How does one measure pain and pleasure?” Is a full belly more pleasurable than warmth on a cold winter’s night? Is chronic pain worse than fear of torture? There are no objective hedonic and lupenic metrics. And whose pain or pleasure counts for more? Does the well-being of my immediate community or nation trump that of other peoples? Does the suffering of the living count more heavily than the suffering of future generations? Most would say that the unborn get some kind of vote. But, then, how can we estimate the numbers of affected individuals even just twenty years from now, let alone fifty or one hundred years in the future? And if we include the welfare of too many generations, then our own, contemporary concerns disappear in the calculation.
Next, think about cataloging possible outcomes. Some consequences of our actions are obvious. Punch someone in anger and you are likely to cause pain and injury both to your victim and yourself. We could not function as moral agents were there not some predictability in nature, including human nature and society. But the obvious and near-term consequences form a subset of measure zero within the set of all possible consequences of our actions. How can we even begin to guess the distant and long-term consequences of our actions? Forget chaos theory and the butterfly effect, though those are real worries. Who could have predicted in 1905, for example, that Einstein’s discovery that E = mc² contained within it the potential for annihilating all higher organic life on earth? Who could have foreseen in the 1930s that the discovery of penicillin, while saving millions of lives in the near term, carried with it the threat of producing super-bacteria, resistant to all standard antibiotics, risking many more deaths than penicillin, itself, ever prevented? A risk analysis is only as good as one’s catalog of possible outcomes, and history teaches us that we do a very poor job of anticipating many of the most important.
Then think about estimating the probabilities of outcomes. Some of these estimates, such as estimating the probability of injury or death from driving a car five miles on a sunny day with light traffic, are robust because they are data driven. We have lots of data on accident rates with passenger vehicles. But when we turn to the exceptional and the unusual, there is little or no data to guide us, precisely because the events in question are so rare. We cannot even estimate reliably the risk of injury accidents from transporting oil by rail, because transporting oil by rail used to be uncommon, but now it is very common, and the scant evidence from past experience does not scale in any clear-cut way for the new oil transportation economy. Would pipeline transportation be better or worse? Who knows? When real data are lacking, one tries reasoning by analogy to other, relevantly similar practices. But who can define “relevantly similar”?
It is especially when one comes to extremely rare events, such as a global pandemic, that the whole business of making probability estimates collapses in confusion and disarray. By definition, there is no data on which to base estimates of the probabilities of one-of-a-kind events. Doing it by theory, instead of basing the estimate on data, is a non-starter, for in a vacuum of data there is also a vacuum of theory, theories requiring data for their validation. We are left with nothing but blind guessing.
Put the pieces of the calculation back together again. There are no objective ways of measuring human suffering and well-being. We cannot survey all of the possible outcomes of our actions. Our probability estimates are, in the really important cases, pure fiction. The result is that one can easily manipulate all three factors – measures of pain and pleasure, outcome catalogues, and probability estimates – to produce any result one wishes.
And therein lies both the moral and the intellectual bankruptcy of risk and cost-benefit analysis.
But it’s worse than that. It’s not just that such analyses can be manipulated to serve any end. There is also the problem of deliberate deception. The formal apparatus of risk and cost-benefit analysis – all those graphs and tables and numbers and formulas – creates a pretense of scientific rigor where there is none. Too often that is the point, to use the facade of mathematical precision in order to quash dissent and silence the skeptic.
Back to rare and catastrophic events, like a possible global pandemic produced by a GOF/PPP experiment gone awry. What number to assign to the suffering? However low one’s probability estimate – and, yes, the chances of such a pandemic are low – the catastrophic character of a pandemic gets a suffering score that sends the final risk estimate off the charts. But wait a minute. Didn’t we just say that we cannot anticipate all of the consequences of our actions? Isn’t it possible that any course of action, any innovation, any discovery could lead to a yet unforeseen catastrophe? Unlikely perhaps, but that doesn’t matter, because the consequences would be so dire as to overwhelm even the lowest of probability estimates. Best not to do anything. Which is, of course, an absurd conclusion.
This “apocalypse fallacy,” the invocation of possible catastrophic consequences that overwhelm the cost-benefit calculation, is an all-too-common trope in policy debates. Should nuclear weapons have been eliminated immediately upon the discovery that a “nuclear winter” was possible? There are good arguments for what’s now termed the “nuclear zero” option, but this is not one of them. Should the remote possibility of creating a mini-black hole that would swallow the earth have stopped the search for the Higgs boson at CERN? Well, sure, do some more calculations and theoretical modeling to fix limits on the probability, but don’t stop just because the probability remains nonzero.
So when the pundits tell you not to invest in new nuclear power generation technologies as the surest and quickest route to a green energy economy because there is a chance of a super-Fukushima nightmare, ignore them. They literally don’t know what they’re talking about. When a do-gooder urges the immediate, widespread use of an Ebola vaccine that has not undergone clinical trials, arguing that the chance of saving thousands, perhaps even millions of lives, outweighs any imaginable untoward consequences, ignore him. He literally does not know what he’s talking about. Does this mean that we should rush into nuclear power generation or that we should refuse the cry for an Ebola vaccine from Africa? Of course not, in both cases. What it means is that we should choose to do or not do those things for good reasons, not bad ones. And risk or cost-benefit arguments, especially in the case of rare eventualities, are always bad arguments.
It would be wrong to conclude from what I’ve just argued that I’m counseling our throwing caution to the wind as we do whatever we damned well please. No. The humble, prudential advice is still good, the advice that one think before acting, that one consider the possible consequences of one’s actions, and that one weigh the odds as best one can. It’s just that one must be aware and wary of the agenda-driven abuse of such moral reflection in the pseudo-rational form of risk and cost-benefit analysis.
That said, there is even still value in the intellectual regimen of risk and cost-benefit analysis, at least in the form of the obligation entailed by that regimen to be as thorough and as objective as one can in assaying the consequences of one’s actions, even if the exercise cannot be reduced to an algorithm. But that is just another way of saying that, to the moral virtue of prudence must be added the intellectual virtues (which are also moral virtues) of honesty and perseverance.
Arturo Casadevall, Don Howard, and Michael J. Imperiale. 2014. “An Epistemological Perspective on the Value of Gain-of-Function Experiments Involving Pathogens with Pandemic Potential,” mBio 5(5): e01875-14. doi:10.1128/mBio.01875-14. (http://mbio.asm.org/content/5/5/e01875-14.full)
The new AAAS web site on climate change, “What We Know,” asserts: “As scientists, it is not our role to tell people what they should do or must believe about the rising threat of climate change. But we consider it to be our responsibility as professionals to ensure, to the best of our ability, that people understand what we know.” Am I the only one dismayed by this strong disavowal of any responsibility on the part of climate scientists beyond informing the public? Of course I understand the complicated politics of climate change and the complicated political environment in which an organization like AAAS operates. Still, I think that this is an evasion of responsibility.
Contrast the AAAS stance with the so-called “Franck Report,” a remarkable document drawn up by refugee German physicist James Franck and colleagues at the University of Chicago’s “Metallurgical Laboratory” (part of the Manhattan Project) in the spring of 1945 in a vain effort to dissuade the US government from using the atomic bomb in a surprise attack on a civilian target. They started from the premise that the scientist qua scientist has a responsibility to advise and advocate, not just inform, arguing that their technical expertise entailed an obligation to act:
“The scientists on this project do not presume to speak authoritatively on problems of national and international policy. However, we found ourselves, by the force of events, during the last five years, in the position of a small group of citizens cognizant of a grave danger for the safety of this country as well as for the future of all the other nations, of which the rest of mankind is unaware. We therefore feel it is our duty to urge that the political problems, arising from the mastering of nuclear power, be recognized in all their gravity, and that appropriate steps be taken for their study and the preparation of necessary decisions.”
I have long thought that the Franck Report is a model for how the scientist’s citizen responsibility should be understood. At the time, the view among the signatories to the Franck Report stood in stark contrast to J. Robert Oppenheimer’s definition of the scientist’s responsibility as being only to provide technical answers to technical questions. Oppenheimer wrote: “We didn’t think that being scientists especially qualified us as to how to answer this question of how the bombs should be used” (Jungk 1958, 186).
The key argument advanced by Franck and colleagues was, again, that it was precisely their distinctive technical expertise that entailed a moral “duty . . . to urge that the political problems . . . be recognized in all their gravity.” Of course they also urged their colleagues to inform the public so as to enable broader citizen participation in the debate about atomic weapons, a sentiment that eventuated in the creation of the Federation of American Scientists and the Bulletin of the Atomic Scientists. The key point, however, was the link between distinctive expertise and the obligation to act. Obvious institutional and professional pressures rightly enforce a boundary between science and advocacy in the scientist’s day-to-day work. Even the cause of political advocacy requires a solid empirical and logical foundation for that action. But that there might be extraordinary circumstances in which the boundary between science and advocacy must be crossed seems equally obvious. And one is hard pressed to find principled reasons for objecting to that conclusion. Surely there is no easy argument leading from scientific objectivity to a disavowal of any such obligations.
Much of the Franck Report was written by Eugene Rabinowitch, who went on to become a major figure in the Pugwash movement, the leader of which, Joseph Rotblat, was awarded the 1995 Nobel Peace Prize for his exemplary efforts in promoting international communication and understanding among nuclear scientists from around the world during the worst of the Cold War. The seemingly omnipresent Leo Szilard also played a significant role in drafting the report, and since 1974 the American Physical Society has given an annual Leo Szilard Lectureship Award to honor physicists who “promote the use of physics to benefit society.” Is it ironic that the 2007 winner was NASA atmospheric physicist James E. Hansen, who has become controversial in the climate science community precisely because he decided to urge action on climate change?
That distinctive expertise entails an obligation to act is, in other settings, a principle to which we all assent. An EMT, even when off duty, is expected to help a heart attack victim precisely because he or she has knowledge, skills, and experience not common among the general public. Why should we not think about scientists and engineers as intellectual first responders?
Physicists, at least, seem to have assimilated within their professional culture a clear understanding that specialist expertise sometimes entails an obligation to take political action. That fact will, no doubt, surprise many who stereotype physics as the paradigm of a morally and politically disengaged discipline. There are many examples from other disciplines of scientists who have gone so far as to risk their careers to speak out in service to a higher good, including climate scientists like Michael Mann, who recently defended the scientist’s obligation to speak up in a blunt op-ed in the New York Times, “If You See Something, Say Something.” The question remains why, nonetheless, the technical community has, for the most part, followed the lead of Oppenheimer, not Franck, when, in fact, our very identity as scientists does, sometimes, entail a moral obligation “to tell people what they should do” about the most compelling problems confronting our nation and our world.
Reference
Jungk, Robert (1958). Brighter than a Thousand Suns: A Personal History of the Atomic Scientists. New York: Harcourt, Brace and Company.
(Originally written for presentation as part of a panel discussion on “Machine/Human Interface” at the 2013 Fall conference, “Fearfully and Wonderfully Made: The Body and Human Identity,” Notre Dame Center for Ethics and Culture, 8 November 2013.)
Our topic today is supposed to be the “machine/human interface.” But I’m not going to talk about that, at least not under that description. Why not? The main reason, to be elaborated in a moment, is that the metaphor of the “interface” entails assumptions about the technology of biomechanical and bioelectric engineering that are already surprisingly obsolete. And therein lies a lesson of paramount importance for those of us interested in technoethics, namely, that the pace of technological change is such as often to leave us plodding humanists arguing about the problems of yesterday, not the problems of tomorrow. Some see here a tragic irony of modernity, that moral reflection cannot, perhaps as a matter of principle, keep pace with technical change. We plead the excuse that careful and thorough philosophical and theological reflection take time. But I don’t buy that. Engineering problems are just as hard as philosophical ones. The difference is that the engineers hunker down and do the work, whereas we humanists are a lazy bunch. And we definitely don’t spend enough time reading the technical literature if our goal is to see over the horizon.
Back to the issue of the moment. What’s wrong with the “interface” metaphor? It’s that it assumes a spatially localized mechanism and a spatially localized part of a human that meet or join in a topologically simple way, in a plane or a plug and socket, perhaps like a USB port in one’s temple. We all remember Commander Data’s data port, which looked suspiciously like a 1990s-vintage avionics connector. There are machine/human or machine/animal interfaces of that kind already. They are known, collectively, as “brain-computer interfaces” or BCIs, and they have already made possible some remarkable feats, such as partial restoration of hearing in the deaf, direct brain control of a prosthesis, implanting false memories in a rat, and downloading a rat’s memory of how to press a lever to get food and then uploading the memory after the original memory has been chemically destroyed. And there will be more such feats.
The problem for us, today, is that plugs, and ports, and all such interfaces are already an inelegant technology that represents no more than a transitional form, one that will soon seem as quaint as a crank starter for an automobile, a dial on a telephone, or broadcast television. What the future will be could be glimpsed in an announcement from just over a year ago. A joint MIT, Harvard, and Boston Children’s Hospital research team led by Robert Langer, Charles Lieber, and Daniel Kohane developed a technique for growing synthetic biological tissue on a substrate containing biocompatible, nanoscale wires, the wiring eventually becoming a permanent part of the fully grown tissue (Tian et al. 2012). This announcement came a little more than a year after the announcement in London of the first ever successful implantation of a synthetic organ, a fully functional trachea grown from the patient’s own stem cells, work led by the pioneering researcher, Paolo Macchiarini (Melnick 2011). Taken together, these two announcements opened a window on a world that will be remarkably different from the one we inhabit today.
The near-term professed aim of the work on nanoscale wiring implanted in synthetic tissue is to provide sensing and remote adjustment capabilities with implants. But the mind quickly runs to far more exotic scenarios. Wouldn’t you like full-color, video tattoos, ones that you can turn off for a day in the office and turn on for a night of clubbing, all thanks to grafted, synthetic nanowired skin? Or what about vastly enhanced control capabilities for a synthetic heart the pumping rate and capacity of which could be fine-tuned to changing demands and environmental circumstances, with actuators in the heart responding to data from sensors in the lung and limbs? And if we can implant wiring, then, in principle, we can turn the body or any part of it into a computer.
With that the boundary between human and machine dissolves. The human is a synthetic machine, all the way down to the sub-cellular level. And the synthetic machine is, itself, literally, a living organism. No plugs, ports, and sockets. No interfaces, except in the most abstract, conceptual sense. The natural and the artificial merge in a seamlessly integrated whole. I am Watson; Deep Blue is me.
Here lies the really important challenge from the AI and robotics side to received notions of the body and human identity, namely, the deep integration of computing and electronics as a functional part of the human body, essential in ever more cases and ways to the maintenance of life and the healthy functioning of the person.
Such extreme, deep integration of computing and electronics with the human body surely elicits in most people a sense that we have crossed a boundary that shouldn’t be crossed. But explaining how and why is not easy. After all, most of us have no problem with prosthetic limbs, even those directly actuated by the brain, nor with pacemakers, cochlear implants, or any of the other now long domesticated, implantable, artificial, electronic devices that we use to enhance and prolong life. Should we think differently about merely shrinking the scale of the implants and increasing the computing power? “Proceed with caution” is good advice with almost all technical innovations. But “do not enter” seems more the sentiment of many when first confronted by the prospect of such enhanced human-electronic integration. Why?
One guess is that boundaries are important for defining personhood, the skin being the first and most salient. Self is what lies within; other is without. The topologically simple “interface” allows us still to preserve a notion of boundedness, even if some of the boundaries are wholly under the skin, as with a pacemaker. But the boundedness of the person is at risk with integrated nanoscale electronics.
Control is surely another important issue implicated by enhanced human-electronic integration. One of the main points of the new research is precisely to afford greater capabilities for control from the outside. The aim, at present, is therapeutic, as with our current abilities to recharge and reprogram a pacemaker via RF signals. But anxieties about loss of control already arise with such devices, as witness Dick Cheney’s having the wireless capability in his implanted defibrillator turned off. Integrated nanoscale electronics brings with it the technical possibility of much more extensive and intrusive interventions running the gamut from malicious hacking to sinister social and psychological manipulation.
Integrity might name another aspect of personhood put at risk by the dissolution of the machine-human distinction. But it is harder to explain in non-metaphorical terms wherein this integrity consists – “oneness” and “wholeness” are just synonyms, not explicanda – and, perhaps for that reason, it is harder to say exactly how integrated nanoscale electronics threatens the integrity of the human person. After all, the reason why such technology is novel and important is, precisely, that it is so deeply and thoroughly integrated with the body. A machine-human hybrid wouldn’t be less integrated, it would just be differently integrated. And it can’t be that bodily and personal integrity are threatened by the mere incorporation of something alien within the body, for then a hip replacement or an organ transplant would equally threaten human integrity, as would a cheese sandwich.
A blurring or transgressing of bodily boundaries and a loss of personal control are both very definitely threatened by one of the more noteworthy technical possibilities deriving from integrated nanoscale electronics, which is that wired bodies can be put into direct communication with one another all the way down at the cellular level and below. If my doctor can get real-time data about the performance of an implanted, wired organ and can reprogram some of its functions, then it’s only a short step to my becoming part of a network of linked human computers. The technical infrastructure for creating the Borg Collective has arrived. You will be assimilated. Resistance is futile. Were this our future, it would entail a radical transformation in the concept of human personhood, one dense with implications for psychology, philosophy, theology, and even the law.
Or would it? We are already, in a sense, spatially extended and socially entangled persons. I am who I am thanks in no small measure to the pattern of my relationships with others. Today those relationships are mediated by words and pheromones. Should adding Bluetooth make a big difference? This might be one of those situations in which a difference of degree becomes a difference in kind, for RF networking down to the nanoscale would bring with it dramatically enhanced capabilities for extensive, real-time, coordination.
On the other hand, science in an entirely different domain has recently forced us to think about the possibility that the human person really is and always has been socially networked, not an atomic individual, and this at a very basic, biological level. Study of what is termed the “human microbiome,” the microbial ecosystem that each of us hosts, has made many surprising new discoveries. For one thing, we now understand that there are vastly more microbial genes contained within and upon our bodies than somatic genes. In that sense, I am, from a genetic point of view, much more than just my “own” DNA, so much so that some thinkers now argue that the human person should be viewed not as an individual, but as a collective. Moreover, we are learning that our microbes are crucial to much more than just digestion. They play a vital role in things like mood regulation, recent work pointing to connections between, say, depression and our gut bacteria colonies, microbial purges and transplants now being suggested as therapies for psychological disorders. This is interesting because we tend to think of mood and state of mind as being much more intimately related to personhood than the accident of the foodstuffs passing through our bodies. There is new evidence that our microbes play an essential role in immune response. One study released just a couple of days ago suggested a role for gut bacteria in cases of severe rheumatoid arthritis, for example (Scher et al. 2013). This is interesting because the immune system is deeply implicated in any discussion of the self-other distinction.
Most relevant to the foregoing discussion, however, is new evidence that our regularly exchanging microbes when we sneeze, shake hands, and share work surfaces does much more than communicate disease. It establishes enduring, shared, microbial communities among people who more regularly group together, from families, friends, and office mates to church groups and neighborhoods. And some researchers think that this sharing of microbial communities plays a crucial role in our subtle, only half-conscious sense of wellness and belonging when we are with our family and friends rather than total strangers. Indeed, the definition of “stranger” might now have to be “one with whom I share comparatively few microbial types.” In other words, being a socially networked individual might already be part of my essence down at the microbial level. If so, that is important, because it means that purely natural, as opposed to artificial, circumstances already put serious pressure on the notion of the self as something wholly contained within one’s skin.
We started with my challenging the notion of the “interface” as the most helpful metaphor for understanding the ever more sophisticated interminglings of computers and biological humans that are now within our technical reach. We talked about new technologies for growing artificial human tissue with embedded, nanoscale, biocompatible wiring, which implies a deep integration of electronics and computing of a kind that annihilates the distinction between the human and the machine, and perhaps also the distinction between the natural and the artificial. And we ended with a vision of such wired persons becoming thereby members of highly interconnected social networks in which the bandwidth available for those interconnections is such as perhaps to make obsolete the notion of the atomic individual.
We face a new world. It simply won’t do to stamp our feet and just say “no.” The technology will move forward at best only a little slowed down by fretting and harangue from the humanists. The important question is not “whether?”, but “how?” Philosophers, theologians, and thoughtful people of every kind, including scientists and engineers, must be part of that conversation.
References
Melnick, Meredith (2011). “Cancer Patient Gets World’s First Artificial Trachea.” Time Magazine, July 8, 2011. http://healthland.time.com/2011/07/08/cancer-patient-gets-worlds-first-artificial-trachea/
Scher, J. U. et al. (2013). “Expansion of Intestinal Prevotella copri Correlates with Enhanced Susceptibility to Arthritis.” eLife 2: e01202. DOI: 10.7554/eLife.01202
Tian, Bozhi et al. (2012). “Macroporous Nanowire Nanoelectronic Scaffolds for Synthetic Tissues.” Nature Materials 11, 986-994.
No one wants war with Iran over its nuclear ambitions. But the euphoria over the EU3+3 interim agreement with Iran, as well as many of the political attacks on the agreement, obscure core technical issues that should be fundamental to any assessment of what has really been achieved. There is no denying that much has been gained by way of Iran’s agreeing temporarily to cease uranium enrichment beyond the 5% level necessary for energy production and its agreeing to on-site inspections at its Fordow and Natanz facilities. But important questions remain about what is not included in the interim agreement. Here are four issues that should be more prominent in the debate:
1. The Interim Agreement Mandates No Reduction in Iran’s Capability for Uranium Enrichment. Iran agrees to cease uranium enrichment beyond the 5% level necessary for energy production and not to expand or enhance its uranium enrichment capabilities, for the duration of the interim agreement. Moreover, Iran agrees to dilute half of its 20%-enriched uranium hexafluoride (UF6) to a 5% level and to convert the remaining half to uranium oxide (UO2) for use in making fuel for its Tehran research reactor. But Iran has not agreed to any permanent reduction of its capability for uranium enrichment, a capability that significantly exceeds what is necessary for energy production. It is hoped that a yet-to-be-negotiated, long-term agreement will include a reduction in that capability. But the interim agreement requires no such reduction. At any moment, Iran could resume enrichment to bomb-grade levels. Moreover, the UF6 that is to be converted to UO2 can be reconverted to UF6 and then further enriched.
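To get a feel for what the blend-down amounts to in material terms, here is a minimal sketch of the arithmetic, assuming simple conservation of U-235 mass and natural uranium (0.711% U-235) as the diluent; the quantities are illustrative and are not figures taken from the interim agreement.

```python
# Illustrative blend-down arithmetic: how much diluent is needed to bring
# 20%-enriched uranium down to a 5% assay?  Assumes conservation of U-235 mass
# and natural uranium (0.711% U-235) as the diluent; the 1 kg starting mass is
# hypothetical, not a quantity taken from the interim agreement.

def diluent_mass(product_mass_kg, x_product, x_target, x_diluent=0.00711):
    """Mass of diluent needed so that the blended assay equals x_target."""
    return product_mass_kg * (x_product - x_target) / (x_target - x_diluent)

m20 = 1.0                               # 1 kg of 20%-enriched uranium (illustrative)
m_nat = diluent_mass(m20, 0.20, 0.05)   # roughly 3.5 kg of natural uranium
blended = m20 + m_nat                   # roughly 4.5 kg of 5%-enriched material

print(f"Diluting {m20:.1f} kg at 20% requires {m_nat:.2f} kg of natural uranium,")
print(f"yielding {blended:.2f} kg of 5%-enriched material.")
```

The point of the arithmetic is that blending down enlarges the 5% stockpile rather than eliminating material, and, since the enrichment infrastructure itself is left intact, nothing in the interim agreement prevents that stockpile from being re-enriched later.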
2. The Interim Agreement Requires No Inspections at the Arak (IR-40) Heavy Water Reactor. As explained in a helpful recent article by Jeremy Bernstein, the Arak reactor is central to any evaluation of Iran’s nuclear ambitions. It is not designed as a reactor for power generation. Though Iran says that the reactor will be used to produce medical isotopes, its most plausible purpose is to be a breeder reactor for the production of plutonium, which is the other standard fuel for atomic weapons that rely upon the process of nuclear fission (as with the North Korean bomb). It was Iran’s refusal to allow on-site inspections at the Arak reactor that stalled the talks a couple of weeks ago when France demanded more access to Arak. The new interim agreement does require Iran to provide to the International Atomic Energy Agency (IAEA) an updated “Design Information Questionnaire” regarding the Arak reactor; it stipulates that there will be no “further advances of [Iran’s] activities at Arak”; it obligates Iran to take “steps to agree with the IAEA on conclusion of the Safeguards Approach for the reactor at Arak” (whatever that means); and Iran agrees to do no reprocessing of spent fuel (the main purpose of which would be to extract plutonium) and not to construct reprocessing facilities. But the interim agreement does not obligate Iran to allow on-site inspections at Arak. Inspections are stipulated for the Fordow and Natanz uranium enrichment facilities, but not at Arak. Iran’s intransigence on this point should give us pause as we try to determine the real purpose of that reactor. If plutonium production is the goal, then our obsession with Iran’s uranium enrichment capability could be distracting us from a more serious threat. A quick route to an Iranian atomic bomb could well be via plutonium produced at Arak. And, at present, Iran has agreed to no degradation of this potential plutonium production capability.
3. The Interim Agreement Does Not Address the Question of Weapons Delivery Systems. Iran is a technically sophisticated nation that has made impressive advances in missile technology in recent years. Much of this missile technology was borrowed from earlier Russian and North Korean models. But the new solid-fuel Sejil-2 rocket, which was first tested five years ago, is an original Iranian design. It has an impressive 2,000-km range with a 750-kg payload capacity and anti-radar coatings. The Sejil-2 could put a nuclear warhead on a target as far away as Cairo, Athens, or Kiev. Moreover, Iran has been making gains in its guidance technology.
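To put the 2,000-km figure in rough geographic perspective, here is a minimal sketch using the standard haversine great-circle formula; the launch point (northwestern Iran, near Tabriz) and the rounded city coordinates are assumptions made purely for illustration, not claims about actual basing.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, r_earth_km=6371.0):
    """Great-circle distance between two points on the Earth, in kilometers."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * r_earth_km * asin(sqrt(a))

# Assumed launch point in northwestern Iran, near Tabriz -- illustrative only.
launch = (38.1, 46.3)
targets = {"Cairo": (30.0, 31.2), "Athens": (38.0, 23.7), "Kiev": (50.5, 30.5)}

for city, (lat, lon) in targets.items():
    d = haversine_km(*launch, lat, lon)
    status = "within" if d <= 2000 else "beyond"
    print(f"{city}: {d:,.0f} km ({status} the 2,000-km range)")
```

From launch sites farther east, Athens and Kiev would fall beyond the 2,000-km radius, a reminder that quoted range figures mean little without assumptions about basing.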
That we should be paying attention to Iranian weapons delivery capabilities was made clear when, two days after the announcement of the interim agreement, Brigadier General Hossein Salami, the Lieutenant Commander of the Iranian Revolutionary Guard Corps (IRGC), announced that Iran’s indigenous ballistic missile capability had recently achieved a “near zero” margin of error in targeting accuracy.
That it was General Salami who made the announcement about advances in Iranian ballistic missile technology reminds us of a political, not technical, issue that has also received insufficient attention in the public debate about the interim agreement. The question is, “Who is really in control?” The interim agreement was negotiated by Iranian Foreign Minister Mohammad Javad Zarif on behalf of the government of President Hassan Rouhani. But the Revolutionary Guard functions as almost a shadow government, with considerable independent authority. And much of the most impressive Iranian ballistic missile research and development has been conducted in facilities under IRGC control, such as the IRGC missile base at Bid Kaneh, where a mysterious explosion during a missile test in November 2011 killed General Hassan Tehrani Moqaddam, who was the head of the IRGC’s “Arms and Military Equipment Self-Sufficiency Program.”
4. The Interim Agreement Does Not Address Aspects of Nuclear Weapons Technology Aside from the Production of Fissile Materials. Nothing in the interim agreement restricts Iran’s ability to continue developing other technologies essential to nuclear weapons production, such as timing circuitry, detonators, and the refined conventional-explosives techniques involved in the assembly of a critical mass of fissile material. It is perhaps not well and widely enough understood that some of the bigger technical challenges for a nation seeking nuclear weapons lie not in the production of fissile material but in areas such as these. Consider the basic design of a plutonium bomb of the kind dropped on Nagasaki. A critical mass of plutonium is achieved by compressing the plutonium with a spherical blast wave from a spherical shell of conventional explosives. The precise shaping of those conventional explosive charges and their precise, simultaneous detonation are among the most difficult technical challenges in bomb design and manufacture. By contrast, while enriching uranium and breeding plutonium require a major technical infrastructure, the physical, chemical, and engineering processes involved are widely understood and, in principle, not all that difficult to achieve. But the interim agreement places no obstacles in the way of research and development on these other aspects of nuclear weapons design. Iran is free to pursue such research as vigorously as it will and to produce a fully functional nuclear weapon awaiting only the insertion of the fissile material.
An assessment of what has been achieved with the interim agreement depends crucially upon a prior assessment of Iran’s goals with respect to nuclear weapons capability. If Iran’s aim had been to produce nuclear weapons as soon as possible, then the interim agreement at least slows down progress toward that goal. But another view is that Iran’s aim all along has been to develop the basic technical infrastructure for the rapid production of bomb-grade fissile material for use if and when it chooses. If that is Iran’s aim, then the interim agreement achieves much less by way of delaying progress to the goal.
We have to wait and see how the interim agreement works. But the celebration of seeming progress on the diplomatic front must be tempered by a clear understanding of the technical issues that are not addressed in the interim agreement, issues that must be the focus of any longer-term, follow-on agreement. Should there be no progress on enrichment capabilities, the Arak reactor, delivery systems, and the fundamentals of bomb design, then options other than diplomacy might have to be explored, starting with the re-imposition of sanctions.
Driverless cars are a reality, not just in California and not just as test vehicles being run by Google. They are now legal in three states: California, Florida, and Nevada. Semi-autonomous vehicles are already the norm, incorporating capabilities like adaptive cruise control and braking, rear-collision prevention, and self-parking. All of the basic technical problems have been solved, although work is still to be done on problems like sensor reliability, robust GPS connections, and security against hacking. Current design concepts enable easy integration with existing driver-controlled vehicles, which will make possible a smooth transition as the percentage of driverless cars on the road rises steadily. Every major manufacturer has announced plans to market fully autonomous vehicles within the next few years; Volvo, for example, promises to have them in the showroom by 2018. The question is not “whether?”, but “when?”
And the answer to that question is, “as soon as humanly possible,” this rapid transition in transportation technology being among the foremost moral imperatives of the day. We must do this, and we must do it now, for moral reasons. Here are three such reasons.
1. We will save over one million lives per year.
Approximately 1.24 million people die every year, world-wide, from automobile accidents, with somewhere between 20 million and 50 million people suffering non-fatal injuries (WHO 2013a). The Campaign for Global Road Safety labels this an “epidemic” of “crisis proportions” (Watkins 2012). Can you name any other single technology or technological system that kills and injures at such a rate? Can you think of any even remotely comparable example of our having compromised human health and safety for the sake of convenience and economic gain?
But as driverless cars replace driver-controlled cars, we will reduce the rate of death and injury to near zero. This is because the single largest cause of death and injury from automobile accidents is driver impairment, whether through drunkenness, stupidity, sleep deprivation, road rage, inattention, or poor driver training. All of that goes away with the driverless car, as will contributing causes like limited human sensing capabilities. There will still be equipment failures, and so there will still be accidents, but equipment failure represents only a tiny fraction of the causes of automobile accidents. There are new risks, such as hacking, but there are technical ways to reduce such risks.
Thus, the most rapid possible transition to a transportation system built around autonomous vehicles will save one million lives and prevent as many as fifty million non-fatal injuries annually. And this transition entails only the most minimal economic cost, with no serious negative impact of any other kind. To my mind, then, a rapid transition to a transportation system designed around the driverless car is a moral imperative. Any delay whatsoever, whether on the part of designers, manufacturers, regulators, or consumers, will be a moral failing on a monumental scale. If you have the technical capability to prevent so much death and suffering but do nothing or even drag your feet, then you have blood on your hands. I’m sorry to be so blunt, but I see no way around that conclusion.
2. The lives of the disabled will be enriched.
Consider first the blind. The World Health Organization estimates that there are 39 million blind people around the world (WHO 2013b). Since 90% of those people live in the developing world, not all of them have access even to adequate roads, nor can they afford a vehicle of any kind. But many of them do and can. The driverless car restores to the blind more or less total mobility under individual, independent control. Can you think of any other technical innovation that will, by itself, so dramatically empower the disabled and enhance the quality of their lives? I cannot. Add to the list the amputee just returned from Afghanistan, the brilliant mind trapped in a body crippled by cerebral palsy, your octogenarian grandparents, and your teenaged son on his way home from a party. Get the picture?
If you have the means to help so many people lead more fulfilling and more independent lives and you do nothing, then you have done a serious wrong.
3. Our failing cities will be revitalized.
Think now mainly of the United States. After the devolution of our manufacturing economy and the export of so many manufacturing jobs overseas, the single largest cause of the decline of American cities, especially mid-size cities in the industrial heartland, has been the exodus of the white middle class to the suburbs. And that exodus was driven, if you will, by the rapid rise in private automobile ownership, which made possible one’s working and living in widely separated locations. Once that transition was complete, with most of us dependent upon the private automobile for transportation, the commercial cores of our cities were destroyed as congestion and lack of access to parking pushed shops and restaurants out to the suburbs. Many people still drive to work in our cities, but the department stores, even the supermarkets and the pharmacies are gone. Once that commercial infrastructure goes, then even those who might otherwise want to live in town find it hard to do so.
The solution is at hand. Combine the driverless car with the zip car. As an alternative to the private ownership of autonomous cars, let people buy membership in a driverless zip car program. Pay a modest annual fee and a modest per-mile charge, perhaps also carry your own insurance. Then, whenever you need a ride, click the app on your mobile phone, the zip car takes you wherever you need to go, then hurries off to ferry the next passenger to another destination. When you are done with your shopping or your night on the town, click again and the driverless zip car shows up at the restaurant door. You don’t have to worry about parking. With that, the single largest impediment to the return of commercial business to our city centers is gone.
The impact will be differential. Megacities like New York, with good public transportation, will benefit less, though a big disincentive to my driving into Manhattan or midtown Chicago is, again, the problem of parking. But the impact on cities like South Bend could be enormous.
I happen to think that restoring our failing cities is a moral imperative, because more than a flourishing business economy is at stake, adequate funding for public schools, for example, but about that we might disagree. Surely, though, you will agree that, even if it doesn’t rise to the level of a moral imperative, it would at least be a social good were we to make our cities thrive again.
So there you have three reasons why the most rapid possible transition to a transportation system based on the driverless car is a moral imperative. Indeed, it is one of the most compelling moral challenges of our day. If we have the means to save one million lives a year, and we do, then we must do all that we can as quickly as we can to bring about that change.
Yet many people resist the idea. To me, that’s a great puzzle. We are all now perfectly comfortable with air travel in highly automated aircraft that can and often do fly almost gate to gate entirely on autopilot. Yes, the human pilot is in the cockpit to monitor the controls and deal with any problems that might arise, as will be the case with the “driver” in driverless cars, at least for the near term. But many of the most serious airplane accidents these days are due to human error. The recent Asiana Airlines crash upon landing at San Francisco in July was evidently due to pilot error. One of the most edifying recent examples is the crash of Air France flight 447 from Rio to Paris in June of 2009. A sensor malfunction due to ice crystals caused the autopilot to disengage, as per its design specifications, but the human crew then responded incorrectly to the loss of reliable airspeed indications, putting the aircraft into a stall. In this case, the aircraft probably would have performed better had the switch to manual not been designed into the system (BEA 2012). If we can safely fly thousands of aircraft and millions of passengers around the world every day on highly automated aircraft, we can surely do the same with automobiles.
BEA 2012. Final Report On the Accident on 1st June 2009 to the Airbus A330-203 Registered F-GZCP Operated by Air France Flight AF 447 Rio de Janeiro – Paris. Bureau d’Enquêtes et d’Analyses pour la sécurité de l’aviation civile. Paris.
Watkins, Kevin 2012. Safe and Sustainable Roads: An Agenda for Rio+20. The Campaign for Global Road Safety. http://www.makeroadssafe.org/publications/Documents/Rio_20_Report_lr.pdf
WHO 2013a. Global Status Report on Road Safety 2013: Supporting a Decade of Action. World Health Organization. http://www.who.int/iris/bitstream/10665/78256/1/9789241564564_eng.pdf
WHO 2013b. “Visual Impairment and Blindness.” Fact Sheet No. 282, updated October 2013. World Health Organization. http://www.who.int/mediacentre/factsheets/fs282/en/index.html
Sometime over the weekend of September 28-29, Mojtaba Ahmadi, a specialist in cyber-defense and the Commander of Iran’s Cyber War Headquarters, was found dead with two bullets to the heart. Nothing has been said officially, but it is widely suspected that Ahmadi was targeted for assassination, some pointing the finger of blame at Israel. The method of the attack, reportedly assassins on motorbikes, is reminiscent of earlier assassinations or attempted assassinations of five Iranian nuclear scientists going back to 2007, those attacks also widely assumed to have been the work of Israeli operatives.
Noteworthy is the fact that, as with those earlier assassinations, this latest attack is receiving scant attention in the mainstream press. Nor has it occasioned the kind of protest that one might have expected from the international scientific community. This silence is worrisome for several reasons.
Were Iran in a state of armed conflict with an adversary, as defined by the international law of armed conflict (ILOAC), and if one of its technical personnel were directly involved in weapons development, then that individual would be a legitimate target, as when the OSS targeted Werner Heisenberg for assassination in WWII owing to his role at the head of the German atomic bomb project. But such is not the case. Iran is not in a state of armed conflict with any potential adversary. That being so, the silence on the part of other governments and the lack of protest from NGOs, professional associations, and other stakeholders means that we are allowing a precedent to be set that could have the effect of legitimating such assassinations as part of customary law.
Were this to become accepted practice, then the consequences would be profound. It would then be perfectly legal for a targeted nation, such as Iran, to retaliate in kind with attacks targeted against technical personnel within countries reasonably deemed responsible for sponsoring the original attack. Thus, were it to emerge that the US had a hand in these events, even if only by way of logistical or intelligence support, then any US cyberwarfare specialist would become a legitimate target, as would be any US nuclear weapons technical personnel. Quite frankly, I worry that it is only a matter of time before Iran attempts precisely that, and the US being a softer target than Israel, I worry that it may happen here first.
Technical professional associations such as IEEE or the American Physical Society have, I think, a major responsibility to make this a public issue and to take a stand calling for a cessation of such attacks.
The alternative is to condone the globalization and domestication of the permanent state of undeclared conflict in which we seem to find ourselves today. Critics of US foreign and military policy might applaud this as just deserts for unwarranted meddling in the affairs of other nations. That is most definitely not my view, for I believe that bad actors have to be dealt with firmly by all legal means. My concern is that these targeted assassinations, while currently illegal, may become accepted practice. And I don’t want our children to grow up in the kind of world that would result.