Apocalyptomania – Why We Should Not Fear an AI Apocalypse

Elon Musk is wrong. In a speech to the 2017 annual meeting of the National Governors Association, Musk warned that artificial intelligence (AI) constitutes an “existential risk” to humankind. Musk has sounded this alarm before, as have other figures, including Stephen Hawking and Bill Gates. The argument is rarely spelled out in detail, but the basic idea is that, as futurist Ray Kurzweil has long been predicting, there will soon come a time – the “singularity” – when AI will outstrip human intelligence, and that, when that happens, super-smart AI will decide that humans are to be treated as pets, or that humans are expendable, or, in the worst-case scenario, that humans represent a threat that must be exterminated. Musk argues that, given the magnitude of the risk, we must begin now to regulate the development of AI in ways that will guarantee human control.

From “Terminator 2: Judgment Day,” 1991.

That we should be thoughtful and prudent about how we develop and deploy AI is not controversial. But, ironically, Musk’s alarmist pronouncements make that task harder rather than easier. Let me explain why.

Start with this. Technology forecasting is a wicked hard problem. Our track record in predicting how technology will change human life is wretched. No one foresaw, in 1985, how the internet would radically transform our culture, our economy, or our political system. A famous 1937 report on “Technological Trends and National Policy,” commissioned by President Roosevelt, failed to anticipate nuclear energy, radar, antibiotics, jet aircraft, rocketry, space exploration, computers, microelectronics, and genetic engineering, even though the scientific and technical bases for nearly all of these developments were already in place in 1937. Kurzweil’s predictions about a coming AI singularity are based on the highly questionable assumption that the exponential growth in the density of transistors that can be crammed onto an integrated circuit (Moore’s Law) will be replicated in all other areas of computing, robotics, and AI. Even granting that memory capacity and computing speed have followed somewhat similar growth curves, can we extrapolate from those trends to the growing sophistication of AI? Is there even a measure of “sophistication” that we can plot on a graph?

If, instead, we look for guidance to the way AI is currently being developed by folks like Musk himself, or rather his company, Tesla, what stands out is that, while rapid progress is being made, it is all in the form of domain-specific AI. Tesla and Waymo are engineering ever better AI to control self-driving cars. Google is developing AI for machine translation. IBM is designing AI for medical diagnostics. And Google’s DeepMind is pioneering AI that can beat the world’s best players at the game of Go. No one is trying to build an all-encompassing, universal AI, however much the fearmongers fantasize about such a future. Why not? For the simple reason that domain-specific AI is what the market demands and what the prudent investment of research dollars dictates. Isn’t it more reasonable to extrapolate this trendline? Of course the AI will get better and better, but I see no reason to think that future AI won’t also be tailored to specific tasks. There is no efficiency or cost advantage to developing universal AI. God or evolution might have engineered human intelligence in a general form. But the fact that most humans are really bad at performing most tasks – like driving cars, or playing the piano, or slam-dunking a basketball – proves that specialized intelligence is almost always the better way to go.

Finally, an obsession with a fictional AI apocalypse frustrates rational thinking about our technological future, for two reasons. First, if we assign infinite negative value – existential risk, the extermination of all human life – to an imagined future, however slight the probability of that future, then that infinite risk swamps all other considerations in a rational assessment of risk and benefit. No matter what the promised benefits if things turn out well, we should not move forward if the risk, however unlikely, is the total annihilation of humankind. But that’s an absurd way to think about the future, because every innovation carries with it a tiny, tiny risk of some as yet unimagined cataclysmic consequences. [I have explored the errors of this kind of reasoning about apocalypse in another blog post on risk analysis and in an editorial on the influenza virus gain-of-function debate.]

From Karel Čapek’s 1921 play, “R.U.R.” (Rossumovi Univerzální Roboti [Rossum’s Universal Robots]).

Second, as we obsess about an AI apocalypse, we are distracted from the much more important, near-term, ethical and policy challenges of more sophisticated AI, whether those be in the domain of autonomous weapons, predictive policing, technological unemployment, or intrusive and pervasive technologies of surveillance. Our intelligence and moral energies are far better spent in grappling with such real, present-day problems with AI.

One is reminded of the fable of Chicken Little, who, when an acorn falls on his head, immediately concludes that the sky is falling. Chicken Little persuades Henny Penny and Ducky Lucky that apocalypse is near. Along comes Foxey Loxey, who offers them all shelter from impending doom in his lair. And then he eats them all alive.

On the Moral and Intellectual Bankruptcy of Risk Analysis: Garbage In, Garbage Out

Don Howard

For decades, risk analysis has been the main tool guiding policy in many domains, from environment and public health to workplace and transportation safety, and even to nuclear weapons. One estimates the costs and benefits of various courses of action and their conceivable outcomes, typically in the form of human suffering and well-being, though other goods, like tactical or strategic advantage, might dominate in some contexts. One also estimates the likelihoods of those outcomes. One multiplies probability times cost or benefit and then sums over all outcomes to produce an expected utility for each course of action. The course of action that maximizes benefit and minimizes harm is recommended. In its most general form, we call this “cost-benefit analysis.” We call it “risk analysis” when our concern is mainly with the downside consequences, such as the risk of a core meltdown at a nuclear power facility.
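To make that bookkeeping concrete, here is a minimal sketch in Python of the expected-utility calculation just described. The actions, outcomes, utilities, and probabilities are invented placeholders, not figures from any actual analysis; the rest of this post is about why every one of those inputs is untrustworthy.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    probability: float   # estimated likelihood of this outcome
    utility: float       # estimated benefit (+) or harm (-) in some common unit

def expected_utility(outcomes):
    """Multiply probability times cost or benefit and sum over all outcomes."""
    return sum(o.probability * o.utility for o in outcomes)

# Two hypothetical courses of action, each with its catalog of possible outcomes.
actions = {
    "build the plant": [Outcome(0.95, 100.0), Outcome(0.05, -500.0)],
    "do nothing":      [Outcome(1.00, 10.0)],
}

# The recommendation simply falls out of a comparison of the sums.
scores = {name: expected_utility(outs) for name, outs in actions.items()}
print(scores, "->", max(scores, key=scores.get))
```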

I have long been uneasy about the pseudo-rationalism of such analyses. An elaborate formal apparatus conveys the impression of rigor and theoretical sophistication, whereas the widely varying conclusions – one can “justify” almost any policy one chooses – suggest a high degree of subjectivity, if not outright agenda-driven bias. But my recent entanglement in the debate about the risks and benefits of gain-of-function (GOF) experiments involving pathogens with pandemic potential (PPP) moves me to think and say more about why I am troubled by the risk analysis paradigm (Casadevall, Howard, and Imperiale 2014).

The H1N1 Influenza Virus

The essential point is suggested by my subtitle: “Garbage In, Garbage Out.” Let’s think about each piece of a cost-benefit analysis in turn. Start with cost and benefit in the form of human suffering and well-being. The question is: “How does one measure pain and pleasure?” Is a full belly more pleasurable than warmth on a cold winter’s night? Is chronic pain worse than fear of torture? There are no objective hedonic and lupenic metrics. And whose pain or pleasure counts for more? Does the well-being of my immediate community or nation trump that of other peoples? Does the suffering of the living count more heavily than the suffering of future generations? Most would say that the unborn get some kind of vote. But, then, how can we estimate the numbers of affected individuals even just twenty years from now, let alone fifty or one hundred years in the future? And if we include the welfare of too many generations, then our own, contemporary concerns disappear in the calculation.

Next, think about cataloging possible outcomes. Some consequences of our actions are obvious. Punch someone in anger and you are likely to cause pain and injury both to your victim and yourself. We could not function as moral agents were there not some predictability in nature, including human nature and society. But the obvious and near-term consequences form a subset of measure zero within the set of all possible consequences of our actions. How can we even begin to guess the distant and long-term consequences of our actions? Forget chaos theory and the butterfly effect, though those are real worries. Who could have predicted in 1905, for example, that Einstein’s discovery that E = mc² contained within it the potential for annihilating all higher organic life on earth? Who could have foreseen in the 1930s that penicillin, while saving millions of lives in the near term, carried with it the threat of producing super-bacteria, resistant to all standard antibiotics, risking many more deaths than penicillin itself ever prevented? A risk analysis is only as good as one’s catalog of possible outcomes, and history teaches us that we do a very poor job of anticipating many of the most important.

Then think about estimating the probabilities of outcomes. Some of these estimates, such as the probability of injury or death from driving a car five miles on a sunny day in light traffic, are robust because they are data-driven: we have lots of data on accident rates with passenger vehicles. But when we turn to the exceptional and the unusual, there is little or no data to guide us, precisely because the events in question are so rare. We cannot even reliably estimate the risk of injury accidents from transporting oil by rail: the practice used to be uncommon, is now very common, and the scant evidence from past experience does not scale in any clear-cut way to the new oil transportation economy. Would pipeline transportation be better or worse? Who knows? When real data are lacking, one tries reasoning by analogy to other, relevantly similar practices. But who can define “relevantly similar”?
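A minimal sketch, with wholly hypothetical counts, of the difference between a data-rich and a data-poor estimate; the “rule of three” used at the end is a standard statistical rule of thumb for the case of zero observed events, not anything specific to the oil or GOF examples.

```python
def frequency(events: int, trials: int) -> float:
    """A frequency estimate is just observed events divided by opportunities."""
    return events / trials

# Plenty of data (hypothetical counts): the estimate is stable and useful.
print(frequency(3_000, 10_000_000))   # 0.0003 injuries per trip

# Almost no data: zero observed events does not mean zero risk.  The "rule of
# three" puts the 95% upper confidence bound at roughly 3/n for n event-free trials.
n = 50                                # only fifty comparable shipments on record
print(frequency(0, n), "point estimate;", 3 / n, "approximate 95% upper bound")
```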

It is especially when one comes to extremely rare events, such as a global pandemic, that the whole business of making probability estimates collapses in confusion and disarray. By definition, there is no data on which to base estimates of the probabilities of one-of-a-kind events. Doing it by theory, instead of basing the estimate on data, is a non-starter, for in a vacuum of data there is also a vacuum of theory, theories requiring data for their validation. We are left with nothing but blind guessing.

Put the pieces of the calculation back together again. There are no objective ways of measuring human suffering and well-being. We cannot survey all of the possible outcomes of our actions. Our probability estimates are, in the really important cases, pure fiction. The result is that one can easily manipulate all three factors – measures of pain and pleasure, outcome catalogs, and probability estimates – to produce any result one wishes.

And therein lies both the moral and the intellectual bankruptcy of risk and cost-benefit analysis.

But it’s worse than that. It’s not just that such analyses can be manipulated to serve any end. There is also the problem of deliberate deception. The formal apparatus of risk and cost-benefit analysis – all those graphs and tables and numbers and formulas – creates a pretense of scientific rigor where there is none. Too often that is the point: to use the facade of mathematical precision to quash dissent and silence the skeptic.

Back to rare and catastrophic events, like a possible global pandemic produced by a GOF/PPP experiment gone awry. What number to assign to the suffering? However low one’s probability estimate – and, yes, the chances of such a pandemic are low – the catastrophic character of a pandemic gets a suffering score that sends the final risk estimate off the charts. But wait a minute. Didn’t we just say that we cannot anticipate all of the consequences of our actions? Isn’t it possible that any course of action, any innovation, any discovery could lead to a yet unforeseen catastrophe? Unlikely perhaps, but that doesn’t matter, because the consequences would be so dire as to overwhelm even the lowest of probability estimates. Best not to do anything. Which is, of course, an absurd conclusion.
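To see the swamping in bare arithmetic, here is a minimal sketch; the probability, harm, and benefit figures are invented purely for illustration.

```python
# Multiply an "off the charts" harm by even the tiniest probability and the
# product still dwarfs every realistic benefit (all numbers invented).
p_pandemic = 1e-9          # an optimistically tiny probability estimate
harm_pandemic = -1e16      # the catastrophic suffering score
expected_benefit = 1e6     # the hoped-for benefit of doing the research

print(expected_benefit + p_pandemic * harm_pandemic)  # -9000000.0: the benefit never matters

# Push the harm toward negative infinity and no probability is small enough to
# rescue the calculation, which is the "apocalypse fallacy" described next.
print(expected_benefit + p_pandemic * float("-inf"))  # -inf
```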

This “apocalypse fallacy,” the invocation of possible catastrophic consequences that overwhelm the cost-benefit calculation, is an all-too-common trope in policy debates. Should nuclear weapons have been eliminated immediately upon the discovery that a “nuclear winter” was possible? There are good arguments for what’s now termed the “nuclear zero” option, but this is not one of them. Should the remote possibility of creating a mini-black hole that would swallow the earth have stopped the search for the Higgs boson at CERN? Well, sure, do some more calculations and theoretical modeling to fix limits on the probability, but don’t stop just because the probability remains nonzero.

So when the pundits tell you not to invest in new nuclear power generation technologies as the surest and quickest route to a green energy economy because there is a chance of a super-Fukushima nightmare, ignore them. They literally don’t know what they’re talking about. When a do-gooder urges the immediate, widespread use of an Ebola vaccine that has not undergone clinical trials, arguing that the chance of saving thousands, perhaps even millions of lives, outweighs any imaginable untoward consequences, ignore him. He literally does not know what he’s talking about. Does this mean that we should rush into nuclear power generation or that we should refuse the cry for an Ebola vaccine from Africa? Of course not, in both cases. What it means is that we should choose to do or not do those things for good reasons, not bad ones. And risk or cost-benefit arguments, especially in the case of rare eventualities, are always bad arguments.

It would be wrong to conclude from what I’ve just argued that I’m counseling that we throw caution to the wind and do whatever we damned well please. No. The humble, prudential advice is still good, the advice that one think before acting, that one consider the possible consequences of one’s actions, and that one weigh the odds as best one can. It’s just that one must be aware and wary of the agenda-driven abuse of such moral reflection in the pseudo-rational form of risk and cost-benefit analysis.

That said, there is still value in the intellectual regimen of risk and cost-benefit analysis, at least in the form of the obligation it entails to be as thorough and as objective as one can in assaying the consequences of one’s actions, even if the exercise cannot be reduced to an algorithm. But that is just another way of saying that to the moral virtue of prudence must be added the intellectual virtues (which are also moral virtues) of honesty and perseverance.

Acknowledgment

Sincere thanks to Arturo Casadevall and Michael Imperiale for conversations that sharpened and enriched my thinking about this issue.

Reference

Arturo Casadevall, Don Howard, and Michael J. Imperiale. 2014. “An Epistemological Perspective on the Value of Gain-of-Function Experiments Involving Pathogens with Pandemic Potential.” mBio 5(5): e01875-14. doi:10.1128/mBio.01875-14. (http://mbio.asm.org/content/5/5/e01875-14.full)