Apocalyptomania – Why We Should Not Fear an AI Apocalypse

Elon Musk is wrong. In a speech to the 2017 annual meeting of the National Governors Association, Musk warned that artificial intelligence (AI) constitutes an “existential risk” to humankind. Musk has sounded this alarm before, as have other figures, including Stephen Hawking and Bill Gates. The argument is rarely spelled out in detail, but the basic idea is that, as futurist Ray Kurzweil has long been predicting, there will soon come a time, the “singularity,” when AI will outstrip human intelligence, and that, when that happens, super-smart AI will decide that humans are to be treated as pets, or that humans are expendable, or that, in the worst-case scenario, humans represent a threat that must be exterminated. Musk argues that, given the magnitude of the risk, we must begin now to regulate the development of AI in ways that will guarantee human control.

From “Terminator 2: Judgment Day,” 1991.

That we should be thoughtful and prudent about how we develop and deploy AI is not controversial. But, ironically, Musk’s alarmist pronouncements make that task harder rather than easier. Let me explain why.

Start with this. Technology forecasting is a wicked hard problem. Our track record in predicting how technology will change human life is wretched. No one foresaw, in 1985, how the internet would radically transform our culture, our economy, or our political system. A famous 1937 report on “Technological Trends and National Policy,” commissioned by President Roosevelt, failed to anticipate nuclear energy, radar, antibiotics, jet aircraft, rocketry, space exploration, computers, microelectronics, and genetic engineering, even though the scientific and technical bases for nearly all of these developments were already in place in 1937. Kurzweil’s predictions about a coming AI singularity rest on the highly questionable assumption that the exponential growth over time in the density of transistors that can be crammed into an integrated circuit – Moore’s law – will be replicated in all other areas of computing, robotics, and AI. Even granting that memory capacity and computing speed have followed somewhat similar growth curves, can we extrapolate from those trends to the growing sophistication of AI? Is there even a measure of “sophistication” that we can plot on a graph?
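
To make the extrapolation worry concrete, here is a toy sketch (my illustration, not the author’s or Kurzweil’s; all numbers are assumed, order-of-magnitude inputs) of why a Moore’s-law-style projection is well defined for transistor counts but has no analogue for AI:

```python
# Toy model of exponential extrapolation. Transistor counts can be projected
# because we have a measurable quantity and an empirical doubling time; no
# comparable metric exists for the "sophistication" of AI.

def extrapolate(value_now: float, doubling_time_years: float, years_ahead: float) -> float:
    """Project a quantity forward assuming steady exponential doubling."""
    return value_now * 2 ** (years_ahead / doubling_time_years)

# Illustrative inputs only: a rough mid-2010s transistor count, a two-year
# doubling time, a twenty-year horizon.
print(f"{extrapolate(2e10, 2.0, 20.0):.2e}")  # ~2.05e13
# The projection works only because transistor count is something we can count.
```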

If, instead, we look for guidance to the way AI is currently being developed by folks like Musk himself, or rather his company, Tesla, what stands out is that, while rapid progress is being made, it is all in the form of domain-specific AI. Tesla and Waymo are engineering ever better AI to control self-driving cars. Google is developing AI for machine translation. IBM is designing AI for medical diagnostics. And Google’s DeepMind has built AI that can beat world champions at the game of Go. No one is trying to build an all-encompassing, universal AI, however much the fearmongers fantasize about such a future. Why not? For the simple reason that domain-specific AI is what the market demands and what the prudent investment of research dollars dictates. Isn’t it more reasonable to extrapolate this trendline? Of course the AI will get better and better, but I see no reason to think that future AI won’t also be tailored to specific tasks. There is no efficiency or cost advantage to developing universal AI. God or evolution might have engineered human intelligence in a general form. But the fact that most humans are really bad at performing most tasks – like driving cars, or playing the piano, or slam-dunking a basketball – proves that specialized intelligence is almost always the better way to go.

Finally, an obsession with a fictional AI apocalypse frustrates rational thinking about our technological future, for two reasons. First, if we assign infinite negative value – existential risk, the extermination of all human life – to an imagined future, however slight the probability of that future, then that infinite risk swamps all other considerations in a rational assessment of risk and benefit. No matter what the promised benefits if things turn out well, we should not move forward if the risk, however unlikely, is the total annihilation of humankind. But that’s an absurd way to think about the future, because every innovation carries with it a tiny, tiny risk of some as yet unimagined cataclysmic consequences. [I have explored the errors of this kind of reasoning about apocalypse in another blog post on risk analysis and in an editorial on the influenza virus gain-of-function debate.]
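
The arithmetic behind that swamping is worth seeing. Here is a minimal sketch (my toy numbers, not the author’s) of how an unbounded harm dominates an expected-value calculation no matter how small its probability:

```python
import math

# Expected value of an action with a near-certain large benefit plus a
# vanishingly improbable but "infinitely" bad outcome.
p_catastrophe = 1e-9  # one in a billion, assumed purely for illustration
expected_value = (1 - p_catastrophe) * 1e6 + p_catastrophe * (-math.inf)
print(expected_value)  # -inf: the catastrophe term swamps every finite benefit
# Since every innovation carries some such tiny risk, this decision rule
# forbids acting at all -- the absurdity the paragraph above describes.
```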

From Karel Čapek’s 1921 play, “R.U.R.” (Rossumovi Univerzální Roboti [Rossum’s Universal Robots]).
Second, as we obsess about an AI apocalypse, we are distracted from the much more important, near-term, ethical and policy challenges of more sophisticated AI, whether those be in the domain of autonomous weapons, predictive policing, technological unemployment, or intrusive and pervasive technologies of surveillance. Our intelligence and moral energies are far better spent in grappling with such real, present-day problems with AI.

One is reminded of the fable of Chicken Little, who, when an acorn falls on his head, immediately concludes that the sky is falling. Chicken Little persuades Henny Penny and Ducky Lucky that apocalypse is near. Along comes Foxey Loxey, who offers them all shelter from impending doom in his lair. And then he eats them all alive.

On the Moral and Intellectual Bankruptcy of Risk Analysis: Garbage In, Garbage Out

Don Howard

For decades, risk analysis has been the main tool guiding policy in many domains, from environment and public health to workplace and transportation safety, and even to nuclear weapons. One estimates the costs and benefits from various courses of action and their conceivable outcomes, this typically in the form of human suffering and well-being, though other goods, like tactical or strategic advantage, might dominate in some contexts. One also estimates the likelihoods of those outcomes. One multiplies probability times cost or benefit and then sums over all outcomes to produce an expected utility for each course of action. The course of action that maximizes benefit and minimizes harm is recommended. In its most general form, we call this “cost-benefit analysis.” We call it “risk analysis” when our concern is mainly with the down-side consequences, such as the risk of a core meltdown at a nuclear power facility.
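
In code, the whole procedure comes to only a few lines, which is part of what makes its aura of rigor so seductive. Here is a bare-bones sketch of the calculus just described, with hypothetical action names and numbers chosen purely for illustration:

```python
# Expected-utility comparison of courses of action, as described above:
# multiply each outcome's probability by its utility, sum, and pick the maximum.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs; returns the sum of p * u."""
    return sum(p * u for p, u in outcomes)

# Hypothetical inputs for two courses of action.
actions = {
    "build the plant": [(0.99, +100.0), (0.01, -500.0)],  # usual benefit, rare costly failure
    "do nothing":      [(1.00,    0.0)],
}
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # build the plant 94.0
# Every input -- the utilities, the outcome catalog, the probabilities -- is
# one the analyst is free to choose, which is the complaint developed below.
```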

I have long been uneasy about the pseudo-rationalism of such analyses. An elaborate formal apparatus conveys the impression of rigor and theoretical sophistication, whereas the widely varying conclusions – one can “justify” almost any policy one chooses – suggest a high degree of subjectivity, if not outright agenda-driven bias. But my recent entanglement in the debate about the risks and benefits of gain-of-function (GOF) experiments involving pathogens with pandemic potential (PPP) moves me to think and say more about why I am troubled by the risk analysis paradigm (Casadevall, Howard, and Imperiale 2014).

The H1N1 Influenza Virus

The essential point is suggested by my subtitle: “Garbage In, Garbage Out.” Let’s think about each piece of a cost-benefit analysis in turn. Start with cost and benefit in the form of human suffering and well-being. The question is: “How does one measure pain and pleasure?” Is a full belly more pleasurable than warmth on a cold winter’s night? Is chronic pain worse than fear of torture? There are no objective hedonic and lupenic metrics. And whose pain or pleasure counts for more? Does the well-being of my immediate community or nation trump that of other peoples? Does the suffering of the living count more heavily than the suffering of future generations? Most would say that the unborn get some kind of vote. But, then, how can we estimate the numbers of affected individuals even just twenty years from now, let alone fifty or one hundred years in the future? And if we include the welfare of too many generations, then our own, contemporary concerns disappear in the calculation.

Next, think about cataloging possible outcomes. Some consequences of our actions are obvious. Punch someone in anger and you are likely to cause pain and injury both to your victim and yourself. We could not function as moral agents were there not some predictability in nature, including human nature and society. But the obvious and near-term consequences form a subset of measure zero within the set of all possible consequences of our actions. How can we even begin to guess the distant and long-term consequences of our actions? Forget chaos theory and the butterfly effect, though those are real worries. Who could have predicted in 1905, for example, that Einstein’s discovery that E = mc² contained within it the potential for annihilating all higher organic life on earth? Who could have foreseen in the 1930s that the discovery of penicillin, while saving millions of lives in the near term, carried with it the threat of producing super-bacteria, resistant to all standard antibiotics, risking many more deaths than penicillin itself ever prevented? A risk analysis is only as good as one’s catalog of possible outcomes, and history teaches us that we do a very poor job of anticipating many of the most important.

Then think about estimating the probabilities of outcomes. Some of these estimates, such as the probability of injury or death from driving a car five miles on a sunny day with light traffic, are robust because they are data driven. We have lots of data on accident rates with passenger vehicles. But when we turn to the exceptional and the unusual, there is little or no data to guide us, precisely because the events in question are so rare. We cannot even reliably estimate the risk of injury accidents from transporting oil by rail: transporting oil by rail used to be uncommon, but now it is very common, and the scant evidence from past experience does not scale in any clear-cut way for the new oil transportation economy. Would pipeline transportation be better or worse? Who knows? When real data are lacking, one tries reasoning by analogy to other, relevantly similar practices. But who can define “relevantly similar”?
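
For contrast, here is what a robust, data-driven estimate of the first kind looks like in miniature (the fatality rate below is the oft-cited US average of roughly 1.1 deaths per 100 million vehicle-miles – my assumed figure, not one from the text):

```python
# A frequency-based risk estimate of the kind the author calls robust:
# common events generate stable statistics to divide through by.
deaths_per_vehicle_mile = 1.1 / 100_000_000  # assumed US-average fatality rate
trip_miles = 5
print(deaths_per_vehicle_mile * trip_miles)  # ~5.5e-08 risk of death for the trip
# Rare, one-of-a-kind events offer no such denominator, which is exactly
# the problem the next paragraph takes up.
```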

It is especially when one comes to extremely rare events, such as a global pandemic, that the whole business of making probability estimates collapses in confusion and disarray. By definition, there are no data on which to base estimates of the probabilities of one-of-a-kind events. Doing it by theory, instead of basing the estimate on data, is a non-starter, for in a vacuum of data there is also a vacuum of theory, theories requiring data for their validation. We are left with nothing but blind guessing.

Put the pieces of the calculation back together again. There are no objective ways of measuring human suffering and well-being. We cannot survey all of the possible outcomes of our actions. Our probability estimates are, in the really important cases, pure fiction. The result is that one can easily manipulate all three factors – measures of pain and pleasure, outcome catalogs, and probability estimates – to produce any result one wishes.

And therein lies both the moral and the intellectual bankruptcy of risk and cost-benefit analysis.

But it’s worse than that. It’s not just that such analyses can be manipulated to serve any end. There is also the problem of deliberate deception. The formal apparatus of risk and cost-benefit analysis – all those graphs and tables and numbers and formulas – creates a pretense of scientific rigor where there is none. Too often that is the point, to use the facade of mathematical precision in order to quash dissent and silence the skeptic.

Back to rare and catastrophic events, like a possible global pandemic produced by a GOF/PPP experiment gone awry. What number to assign to the suffering? However low one’s probability estimate – and, yes, the chances of such a pandemic are low – the catastrophic character of a pandemic gets a suffering score that sends the final risk estimate off the charts. But wait a minute. Didn’t we just say that we cannot anticipate all of the consequences of our actions? Isn’t it possible that any course of action, any innovation, any discovery could lead to a yet unforeseen catastrophe? Unlikely perhaps, but that doesn’t matter, because the consequences would be so dire as to overwhelm even the lowest of probability estimates. Best not to do anything. Which is, of course, an absurd conclusion.

This “apocalypse fallacy,” the invocation of possible catastrophic consequences that overwhelm the cost-benefit calculation, is an all-too-common trope in policy debates. Should nuclear weapons have been eliminated immediately upon the discovery that a “nuclear winter” was possible? There are good arguments for what’s now termed the “nuclear zero” option, but this is not one of them. Should the remote possibility of creating a mini-black hole that would swallow the earth have stopped the search for the Higgs boson at CERN? Well, sure, do some more calculations and theoretical modeling to fix limits on the probability, but don’t stop just because the probability remains nonzero.

So when the pundits tell you not to invest in new nuclear power generation technologies as the surest and quickest route to a green energy economy because there is a chance of a super-Fukushima nightmare, ignore them. They literally don’t know what they’re talking about. When a do-gooder urges the immediate, widespread use of an Ebola vaccine that has not undergone clinical trials, arguing that the chance of saving thousands, perhaps even millions of lives, outweighs any imaginable untoward consequences, ignore him. He literally does not know what he’s talking about. Does this mean that we should rush into nuclear power generation or that we should refuse the cry for an Ebola vaccine from Africa? Of course not, in both cases. What it means is that we should choose to do or not do those things for good reasons, not bad ones. And risk or cost-benefit arguments, especially in the case of rare eventualities, are always bad arguments.

It would be wrong to conclude from what I’ve just argued that I’m counseling us to throw caution to the wind and do whatever we damned well please. No. The humble, prudential advice is still good, the advice that one think before acting, that one consider the possible consequences of one’s actions, and that one weigh the odds as best one can. It’s just that one must be aware and wary of the agenda-driven abuse of such moral reflection in the pseudo-rational form of risk and cost-benefit analysis.

That said, there is still value in the intellectual regimen of risk and cost-benefit analysis, at least in the form of the obligation it entails to be as thorough and as objective as one can in assaying the consequences of one’s actions, even if the exercise cannot be reduced to an algorithm. But that is just another way of saying that, to the moral virtue of prudence, must be added the intellectual virtues (which are also moral virtues) of honesty and perseverance.

Acknowledgment

Sincere thanks to Arturo Casadevall and Michael Imperiale for conversations that sharpened and enriched my thinking about this issue.

Reference

Arturo Casadevall, Don Howard, and Michael J. Imperiale. 2014. “An Epistemological Perspective on the Value of Gain-of-Function Experiments Involving Pathogens with Pandemic Potential.” mBio 5(5): e01875-14. doi:10.1128/mBio.01875-14. (http://mbio.asm.org/content/5/5/e01875-14.full)

Robots on the Road: The Moral Imperative of the Driverless Car

Don Howard

Driverless cars are a reality, not just in California and not just as test vehicles being run by Google. They are now legal in three states: California, Florida, and Nevada. Semi-autonomous vehicles are already the norm, incorporating capabilities like adaptive cruise control and braking, rear-collision prevention, and self-parking. All of the basic technical problems have been solved, although work is still to be done on problems like sensor reliability, robust GPS connections, and security against hacking. Current design concepts enable easy integration with existing driver-controlled vehicles, which will make possible a smooth transition as the percentage of driverless cars on the road rises steadily. Every major manufacturer has announced plans to market fully autonomous vehicles within the next few years; Volvo, for example, promises to have them in the showroom by 2018. The question is not “whether?”, but “when?”

And the answer to that question is, “as soon as humanly possible,” this rapid transition in transportation technology being among the foremost moral imperatives of the day. We must do this, and we must do it now, for moral reasons. Here are three such reasons.

1. We will save over one million lives per year.

Approximately 1.24 million people die every year, worldwide, from automobile accidents, with somewhere between 20 million and 50 million people suffering non-fatal injuries (WHO 2013a). The Campaign for Global Road Safety labels this an “epidemic” of “crisis proportions” (Watkins 2012). Can you name any other single technology or technological system that kills and injures at such a rate? Can you think of any even remotely comparable example of our having compromised human health and safety for the sake of convenience and economic gain?

But as driverless cars replace driver-controlled cars, we will reduce the rate of death and injury to near zero. This is because the single largest cause of death and injury from automobile accidents is driver impairment, whether through drunkenness, stupidity, sleep deprivation, road rage, inattention, or poor driver training. All of that goes away with the driverless car, as will contributing causes like limited human sensing capabilities. There will still be equipment failures, and so there will still be accidents, but equipment failure represents only a tiny fraction of the causes of automobile accidents. There are new risks, such as hacking, but there are technical ways to reduce such risks.

Thus, the most rapid possible transition to a transportation system built around autonomous vehicles will save one million lives and prevent as many as fifty million non-fatal injuries annually. And this transition entails only the most minimal economic cost, with no serious negative impact of any other kind. To my mind, then, a rapid transition to a transportation system designed around the driverless car is a moral imperative. Any delay whatsoever, whether on the part of designers, manufacturers, regulators, or consumers, will be a moral failing on a monumental scale. If you have the technical capability to prevent so much death and suffering but do nothing or even drag your feet, then you have blood on your hands. I’m sorry to be so blunt, but I see no way around that conclusion.

2. The lives of the disabled will be enriched.

Consider first the blind. The World Health Organization estimates that there are 39 million blind people around the world (WHO 2013b). Since 90% of those people live in the developing world, not all of them have access even to adequate roads, nor can they afford a vehicle of any kind. But many of them do and can. The driverless car restores to the blind more or less total mobility under individual, independent control. Can you think of any other technical innovation that will, by itself, so dramatically empower the disabled and enhance the quality of their lives? I cannot. Add to the list the amputee just returned from Afghanistan, the brilliant mind trapped in a body crippled by cerebral palsy, your octogenarian grandparents, and your teenaged son on his way home from a party. Get the picture?

If you have the means to help so many people lead more fulfilling and more independent lives and you do nothing, then you have done a serious wrong.

3. Our failing cities will be revitalized.

Think now mainly of the United States. After the devolution of our manufacturing economy and the export of so many manufacturing jobs overseas, the single largest cause of the decline of American cities, especially mid-size cities in the industrial heartland, has been the exodus of the white middle class to the suburbs. And that exodus was driven, if you will, by the rapid rise in private automobile ownership, which made possible one’s working and living in widely separated locations. Once that transition was complete, with most of us dependent upon the private automobile for transportation, the commercial cores of our cities were destroyed as congestion and lack of access to parking pushed shops and restaurants out to the suburbs. Many people still drive to work in our cities, but the department stores, even the supermarkets and the pharmacies are gone. Once that commercial infrastructure goes, then even those who might otherwise want to live in town find it hard to do so.

The solution is at hand. Combine the driverless car with the zip car. As an alternative to the private ownership of autonomous cars, let people buy membership in a driverless zip car program. Pay a modest annual fee and a modest per-mile charge, perhaps also carry your own insurance. Then, whenever you need a ride, click the app on your mobile phone, the zip car takes you wherever you need to go, then hurries off to ferry the next passenger to another destination. When you are done with your shopping or your night on the town, click again and the driverless zip car shows up at the restaurant door. You don’t have to worry about parking. With that, the single largest impediment to the return of commercial business to our city centers is gone.

The impact will be differential. Megacities like New York, with good public transportation, will benefit less, though a big disincentive to my driving into Manhattan or midtown Chicago is, again, the problem of parking. But the impact on cities like South Bend could be enormous.

I happen to think that restoring our failing cities is a moral imperative, because more than just a flourishing business economy is implicated – adequate funding for public schools, for example – but about that we might disagree. Surely, though, you agree that, if it doesn’t rise to the level of a moral imperative, it would be at least a social good were we to make our cities thrive again.

So there you have three reasons why the most rapid possible transition to a transportation system based on the driverless car is a moral imperative. Indeed, it is one of the most compelling moral challenges of our day. If we have the means to save one million lives a year, and we do, then we must do all that we can as quickly as we can to bring about that change.

Yet many people resist the idea. To me, that’s a great puzzle. We are all now perfectly comfortable with air travel in totally autonomous aircraft that can and often do fly almost gate to gate entirely on autopilot. Yes, the human pilot is in the cockpit to monitor the controls and deal with any problems that might arise, as will be the case with the “driver” in driverless cars, at least for the near term. But many of the most serious airplane accidents these days are due to human error. The recent Asiana Airlines crash upon landing at San Francisco in July was evidently due to pilot error. One of the most edifying recent examples is the crash of Air France flight 447 from Rio to Paris in June of 2009. A sensor malfunction due to ice crystals caused the autopilot to disengage per its design specifications, but then the human crew reacted wrongly to turbulence, putting the aircraft into a stall. In this case, the aircraft probably would have performed better had the switch to manual not been designed into the system (BEA 2012). If we can safely fly thousands of aircraft and tens of thousands of passengers around the world every day on totally automated aircraft, we can surely do the same with automobiles.

And if we can do it, then we must.

Acknowledgement:

Many thanks to Mark P. Mills (http://www.forbes.com/sites/markpmills/)
for helpful and stimulating conversation about the issues addressed here.

References:

BEA 2012. Final Report On the Accident on 1st June 2009 to the Airbus A330-203 Registered F-GZCP Operated by Air France Flight AF 447 Rio de Janeiro – Paris. Bureau d’Enquêtes et d’Analyses pour la sécurité de l’aviation civile. Paris.

Watkins, Kevin 2012. Safe and Sustainable Roads: An Agenda for Rio+20. The Campaign for Global Road Safety. http://www.makeroadssafe.org/publications/Documents/Rio_20_Report_lr.pdf

WHO 2013a. Global Status Report on Road Safety 2013: Supporting a Decade of Action. World Health Organization. http://www.who.int/iris/bitstream/10665/78256/1/9789241564564_eng.pdf

WHO 2013b. “Visual Impairment and Blindness.” Fact Sheet No. 282, updated October 2013. World Health Organization. http://www.who.int/mediacentre/factsheets/fs282/en/index.html

“I Sing the Body Electric”

Don Howard

(Originally written for presentation as part of a panel discussion on “Machine/Human Interface” at the 2013 Fall conference, “Fearfully and Wonderfully Made: The Body and Human Identity,” Notre Dame Center for Ethics and Culture, 8 November 2013.)

Our topic today is supposed to be the “machine/human interface.” But I’m not going to talk about that, at least not under that description. Why not? The main reason, to be elaborated in a moment, is that the metaphor of the “interface” entails assumptions about the technology of biomechanical and bioelectric engineering that are already surprisingly obsolete. And therein lies a lesson of paramount importance for those of us interested in technoethics, namely, that the pace of technological change is such as often to leave us plodding humanists arguing about the problems of yesterday, not the problems of tomorrow. Some see here a tragic irony of modernity, that moral reflection cannot, perhaps as a matter of principle, keep pace with technical change. We plead the excuse that careful and thorough philosophical and theological reflection take time. But I don’t buy that. Engineering problems are just as hard as philosophical ones. The difference is that the engineers hunker down and do the work, whereas we humanists are a lazy bunch. And we definitely don’t spend enough time reading the technical literature if our goal is to see over the horizon.

Biocompatible nanoscale wiring embedded in synthetic tissue.

Back to the issue of the moment. What’s wrong with the “interface” metaphor? It’s that it assumes a spatially localized mechanism and a spatially localized part of a human that meet or join in a topologically simple way, in a plane or a plug and socket, perhaps like a USB port in one’s temple. We all remember Commander Data’s data port, which looked suspiciously like a 1990s-vintage avionics connector. There are machine/human or machine/animal interfaces of that kind already. They are known, collectively, as “brain-computer interfaces,” or BCIs, and they have already made possible some remarkable feats, such as partial restoration of hearing in the deaf, direct brain control of a prosthesis, implanting false memories in a rat, and downloading a rat’s memory of how to press a lever to get food and then uploading the memory after the original memory has been chemically destroyed. And there will be more such feats.

The problem for us, today, is that plugs, and ports, and all such interfaces are already an inelegant technology that represents no more than a transitional form, one that will soon seem as quaint as a crank starter for an automobile, a dial on a telephone, or broadcast television. What the future holds could be glimpsed in an announcement from just over a year ago. A joint MIT, Harvard, and Boston Children’s Hospital research team led by Robert Langer, Charles Lieber, and Daniel Kohane developed a technique for growing synthetic biological tissue on a substrate containing biocompatible, nanoscale wires, the wiring eventually becoming a permanent part of the fully-grown tissue (Tian et al. 2012). That announcement came about a year after the announcement in London of the first-ever successful implantation of a synthetic organ, a fully-functional trachea grown from the patient’s own stem cells, work led by the pioneering researcher Paolo Macchiarini (Melnick 2011). Taken together, these two announcements opened a window on a world that will be remarkably different from the one we inhabit today.

The professed near-term aim of the work on nanoscale wiring embedded in synthetic tissue is to provide sensing and remote-adjustment capabilities for implants. But the mind quickly runs to far more exotic scenarios. Wouldn’t you like full-color, video tattoos, ones that you can turn off for a day in the office and turn on for a night of clubbing, all thanks to grafted, synthetic, nanowired skin? Or what about vastly enhanced control capabilities for a synthetic heart, the pumping rate and capacity of which could be fine-tuned to changing demands and environmental circumstances, with actuators in the heart responding to data from sensors in the lungs and limbs? And if we can implant wiring, then, in principle, we can turn the body or any part of it into a computer.

With that the boundary between human and machine dissolves. The human is a synthetic machine, all the way down to the sub-cellular level. And the synthetic machine is, itself, literally, a living organism. No plugs, ports, and sockets. No interfaces, except in the most abstract, conceptual sense. The natural and the artificial merge in a seamlessly integrated whole. I am Watson; Deep Blue is me.

Here lies the really important challenge from the AI and robotics side to received notions of the body and human identity, namely, the deep integration of computing and electronics as a functional part of the human body, essential in ever more cases and ways to the maintenance of life and the healthy functioning of the person.

Such extreme, deep integration of computing and electronics with the human body surely elicits in most people a sense that we have crossed a boundary that shouldn’t be crossed. But explaining how and why is not easy. After all, most of us have no problem with prosthetic limbs, even those directly actuated by the brain, nor with pacemakers, cochlear implants, or any of the other now long-domesticated, implantable, artificial electronic devices that we use to enhance and prolong life. Should we think differently about merely shrinking the scale of the implants and increasing the computing power? “Proceed with caution” is good advice with almost all technical innovations. But “do not enter” seems more the sentiment of many when first confronted by the prospect of such enhanced human-electronic integration. Why?

One guess is that boundaries are important for defining personhood, the skin being the first and most salient. Self is what lies within; other is without. The topologically simple “interface” allows us still to preserve a notion of boundedness, even if some of the boundaries are wholly under the skin, as with a pacemaker. But the boundedness of the person is at risk with integrated nanoscale electronics.

Control is surely another important issue implicated by enhanced human-electronic integration. One of the main points of the new research is precisely to afford greater capabilities for control from the outside. The aim, at present, is therapeutic, as with our current abilities to recharge and reprogram a pacemaker via RF signals. But anxieties about loss of control already arise with such devices, witness Dick Cheney’s having the wireless capability of his implanted defibrillator disabled. Integrated nanoscale electronics brings with it the technical possibility of much more extensive and intrusive interventions, running the gamut from malicious hacking to sinister social and psychological manipulation.

Integrity might name another aspect of personhood put at risk by the dissolution of the machine-human distinction. But it is harder to explain in non-metaphorical terms wherein this integrity consists – “oneness” and “wholeness” are just synonyms, not explicanda – and, perhaps for that reason, it is harder to say exactly how integrated nanoscale electronics threatens the integrity of the human person. After all, the reason why such technology is novel and important is, precisely, that it is so deeply and thoroughly integrated with the body. A machine-human hybrid wouldn’t be less integrated, it would just be differently integrated. And it can’t be that bodily and personal integrity are threatened by the mere incorporation of something alien within the body, for then a hip replacement or an organ transplant would equally threaten human integrity, as would a cheese sandwich.

A blurring or transgressing of bodily boundaries and a loss of personal control are both very definitely threatened by one of the more noteworthy technical possibilities deriving from integrated nanoscale electronics, which is that wired bodies can be put into direct communication with one another all the way down at the cellular level and below. If my doctor can get real-time data about the performance of an implanted, wired organ and can reprogram some of its functions, then it’s only a short step to my becoming part of a network of linked human computers. The technical infrastructure for creating the Borg Collective has arrived. You will be assimilated. Resistance is futile. Were this our future, it would entail a radical transformation in the concept of human personhood, one dense with implications for psychology, philosophy, theology, and even the law.

Or would it? We are already, in a sense, spatially extended and socially entangled persons. I am who I am thanks in no small measure to the pattern of my relationships with others. Today those relationships are mediated by words and pheromones. Should adding Bluetooth make a big difference? This might be one of those situations in which a difference of degree becomes a difference in kind, for RF networking down to the nanoscale would bring with it dramatically enhanced capabilities for extensive, real-time, coordination.

On the other hand, science in an entirely different domain has recently forced us to think about the possibility that the human person really is and always has been socially networked, not an atomic individual, and this at a very basic, biological level. Study of what is termed the “human microbiome,” the microbial ecosystem that each of us hosts, has yielded many surprising new discoveries. For one thing, we now understand that there are vastly more microbial genes contained within and upon our bodies than somatic genes. In that sense, I am, from a genetic point of view, much more than just my “own” DNA, so much so that some thinkers now argue that the human person should be viewed not as an individual, but as a collective. Moreover, we are learning that our microbes are crucial to much more than just digestion. They play a vital role in things like mood regulation, with recent work pointing to connections between, say, depression and our gut bacterial colonies, and microbial purges and transplants now being suggested as therapies for psychological disorders. This is interesting because we tend to think of mood and state of mind as being much more intimately related to personhood than the accident of the foodstuffs passing through our bodies. There is new evidence that our microbes play an essential role in immune response. One study released just a couple of days ago suggested a role for gut bacteria in cases of severe rheumatoid arthritis, for example (Scher et al. 2013). This is interesting because the immune system is deeply implicated in any discussion of the self-other distinction.

Most relevant to the foregoing discussion, however, is new evidence that our regularly exchanging microbes when we sneeze, shake hands, and share work surfaces does much more than communicate disease. It establishes enduring, shared, microbial communities among people who more regularly group together, from families, friends, and office mates to church groups and neighborhoods. And some researchers think that this sharing of microbial communities plays a crucial role in our subtle, only half-conscious sense of wellness and belonging when we are with our family and friends rather than total strangers. Indeed, the definition of “stranger” might now have to be “one with whom I share comparatively few microbial types.” In other words, my being as part of my essence a socially networked individual might already occur down at the microbial level. If so, that is important, because it means that purely natural, as opposed to artificial, circumstances already put serious pressure on the notion of the self as something wholly contained within one’s skin.

We started with my challenging the notion of the “interface” as the most helpful metaphor for understanding the ever more sophisticated interminglings of computers and biological humans that are now within our technical reach. We talked about new technologies for growing artificial human tissue with embedded, nanoscale, biocompatible wiring, which implies a deep integration of electronics and computing of a kind that annihilates the distinction between human and the machine, perhaps also the distinction between the natural and the artificial. And we ended with a vision of such wired persons becoming thereby members of highly interconnected social networks in which the bandwidth available for those interconnections is such as perhaps to make obsolete the notion of the atomic individual.

We face a new world. It simply won’t do to stamp our feet and just say “no.” The technology will move forward at best only a little slowed down by fretting and harangue from the humanists. The important question is not “whether?”, but “how?” Philosophers, theologians, and thoughtful people of every kind, including scientists and engineers, must be part of that conversation.

References

Melnick, Meredith (2011). “Cancer Patient Gets World’s First Artificial Trachea.” Time Magazine, July 8, 2011. http://healthland.time.com/2011/07/08/cancer-patient-gets-worlds-first-artificial-trachea/

Scher, J. U. et al. (2013). “Expansion of Intestinal Prevotella copri Correlates with Enhanced Susceptibility to Arthritis.” eLife 2: e01202. doi:10.7554/eLife.01202

Tian, Bozhi et al. (2012). “Macroporous Nanowire Nanoelectronic Scaffolds for Synthetic Tissues.” Nature Materials 11, 986-994.

Science in the Crosshairs

Don Howard

Sometime over the weekend of September 28-29, Mojtaba Ahmadi, a specialist in cyber-defense and the Commander of Iran’s Cyber War Headquarters, was found dead with two bullets to the heart. Nothing has been said officially, but it is widely suspected that Ahmadi was targeted for assassination, some pointing the finger of blame at Israel. The method of the attack, reportedly assassins on motorbikes, is reminiscent of earlier assassinations or attempted assassinations of five Iranian nuclear scientists going back to 2007, those attacks also widely assumed to have been the work of Israeli operatives.

Noteworthy is the fact that, as with those earlier assassinations, this latest attack is receiving scant attention in the mainstream press. Nor has it occasioned the kind of protest that one might have expected from the international scientific community. This silence is worrisome for several reasons.

Were Iran in a state of armed conflict with an adversary, as defined by the international law of armed conflict (ILOAC), and if one of its technical personnel were directly involved in weapons development, then that individual would be a legitimate target, as when the OSS targeted Werner Heisenberg for assassination in WWII owing to his role at the head of the German atomic bomb project. But such is not the case. Iran is not in a state of armed conflict with any potential adversary. That being so, the silence on the part of other governments and the lack of protest from NGOs, professional associations, and other stakeholders means that we are allowing a precedent to be set that could have the effect of legitimating such assassinations as part of customary law.

Were this to become accepted practice, then the consequences would be profound. It would then be perfectly legal for a targeted nation, such as Iran, to retaliate in kind with attacks targeted against technical personnel within countries reasonably deemed responsible for sponsoring the original attack. Thus, were it to emerge that the US had a hand in these events, even if only by way of logistical or intelligence support, then any US cyberwarfare specialist would become a legitimate target, as would any US nuclear weapons technical personnel. Quite frankly, I worry that it is only a matter of time before Iran attempts precisely that, and, the US being a softer target than Israel, that it may happen here first.

Technical professional associations such as the IEEE or the American Physical Society have, I think, a major responsibility to make this a public issue and to take a stand calling for a cessation of such attacks.

The alternative is to condone the globalization and domestication of the permanent state of undeclared conflict in which we seem to find ourselves today. Critics of US foreign and military policy might applaud this as just deserts for unwarranted meddling in the affairs of other nations. That is most definitely not my view, for I believe that bad actors have to be dealt with firmly by all legal means. My concern is that these targeted assassinations, while currently illegal, may become accepted practice. And I don’t want our children to grow up in the kind of world that would result.