On the Pseudoproblem of Interdisciplinarity

Don Howard

This essay is dedicated to two extraordinary individuals whose leadership made possible the growth of institutions fostering interdisciplinarity, institutions crucial to my career:

Frederick B. Dutton (1906-1995), chemist, science educator, and founding dean of Lyman Briggs College at Michigan State University, 1967.
John D. (“Jack”) Reilly (1942-2014), engineer, businessman, and founding donor of the Reilly Center for Science, Technology, and Values at the University of Notre Dame, 1985.

From the beginning of my life in the academy, back in the 1960s, I have heard again, and again, and again the complaint that the modern university and other institutions of research and intellection erect too many barriers to inter-, trans-, and cross-disciplinary interaction. Specialization and fragmentation are portrayed as the cause of a great cultural crisis. It is said that they encourage the development of science and technology bereft of value and of philosophy and theology ignorant of the way the world really works. We are warned that they have engendered a deep spiritual crisis of modernity as the human soul, itself, is fractured. It is argued that breaching disciplinary walls is necessary for solving many of the problems that humankind faces, like anthropogenic climate change, the threat of artificial intelligence run amok, and endemic poverty and disease in less developed parts of the world, but that the “silo” structure of the modern academy stands in the way.

On the other hand, from the beginning of my life in the academy, I have been deeply puzzled by all of this wailing and gnashing of teeth. Those in this chorus of lament seem to inhabit an intellectual and institutional landscape remarkably different from the one in which I learned and now live. Of course I’ve encountered obstacles to collaboration across boundaries, but the world as I see it is one in which those obstacles are usually little more than annoyances, impediments easily overcome with a bit of effort. The world as I see it is one in which transgressing boundaries is commonplace and often richly rewarded. I’m left wondering how my experience can have been so different from that of the complainers.

Frederick B. Dutton (1906-1995). Founding Dean of Michigan State’s Lyman Briggs College.

Let me begin by acknowledging that mine might be an unusual perspective. My home discipline of the history and philosophy of science is radically interdisciplinary in construction and function, and has been so since its inception more than one hundred and fifty years ago. My still more local niches of the philosophical foundations of physics and technology ethics are, likewise, radically interdisciplinary, and have been so from the very beginning. My first degree was in physics, pursued within Lyman Briggs College, Michigan State University’s purposely interdisciplinary, residential, undergraduate science studies college. My postgraduate degrees are both in philosophy, from a philosophy department at Boston University where, in the 1970s, advanced course work in the sciences was strongly encouraged and where three of my philosophy faculty had cross appointments in physics, one of them, Robert Cohen, having chaired both departments. I live today between the worlds of physics and philosophy. My tenure is in philosophy, but I am a Fellow of the American Physical Society, where I have held, and still hold, important leadership responsibilities. And I have directed, at Notre Dame, both the History and Philosophy of Science Graduate Program and the Reilly Center for Science, Technology, and Values, the name of which bespeaks the interdisciplinary ambitions with which it was built thirty years ago and which it has achieved many times over.

Jack Reilly (1942-2014). Founding donor of Notre Dame’s Reilly Center for Science, Technology, and Values.

Well and good, you say, but surely yours is an exceptional case. To which I respond: No, it is not. Remember that I am, among other things, a historian of science. When I survey the history of the map of the disciplines from the founding of the modern university in the nineteenth century to the present, what I see is not a static but a highly dynamic landscape, with lots of seismic and tectonic activity. Disciplines come and disciplines go. Some disciplines bifurcate or trifurcate. Philosophy, psychology, and pedagogy were commonly one department in the late-nineteenth century. Some disciplines merge or birth hybrid offspring. The great revolution in the biosciences in the twentieth century came about through the creation of wholly new fields, like biophysics, biochemistry, and molecular biology. Especially at the allegedly impermeable boundaries of the disciplines, lots of smart, creative, entrepreneurial types crafted and today still craft exciting, new, intellectual formations, such as digital humanities, network analysis, bioinformatics, and big data analytics, which latter is reshaping everything from genomics to national security and medical discovery. Just last fall, I learned of a new field of “biomechatronics” – a synthesis of biomechanics, robotics, prosthetics, and artificial intelligence – with its own new center at MIT. Here at my own university, I have watched a civil engineering department become a Department of Civil and Environmental Engineering and Earth Science. I have witnessed the creation of remarkable new, purposely interdisciplinary centers, such as the Wireless Institute, the Environmental Change Initiative, the Energy Center, the Center for Nano Science and Technology, and the Advanced Diagnostics and Therapeutics Initiative. Nor is this a uniquely Notre Dame phenomenon, some special fruit of our being a Catholic institution. No, it is the norm at all of the better institutions. 
Thus, at the University of South Carolina, two of my philosophy friends have served as assistant director of USC’s world-class Nano Center. And, more than a few years ago, Arizona State University simply blew up the old departmental structure, replacing it with topically-focused “Schools” of this and that, which explains how a sociologist can be the director of ASU’s Center for Nanotechnology.

Within each of these new formations a new disciplinarity emerges, of course. But that is right and good, for the word, “discipline,” denotes both an institutional structure and standards of rigor and quality within a field. It’s a good thing that we don’t give the amateurs a vote. There are better and worse ways of knowledge making – we philosophers of science have spent decades articulating that point. While most opinions deserve our respect, and while “outsiders” can sometimes reshape a whole field (think of Luis Alvarez, iridium, and the Cretaceous extinction), that is the exception, not the norm. Those willing to do the hard work of mastering techniques and knowledge bases should be and are welcome, as when my Doktorvater, Abner Shimony, added to his Yale philosophy Ph.D. a Princeton physics Ph.D. under the direction of Eugene Wigner and went on to create the exciting and hugely important field of experimental tests of Bell’s theorem, straddling the division between experimental physics and philosophy.

But right there is the key insight. Hard work. It takes hard work. I know a theologian who has co-authored world-class experimental physics papers, and a student of Schrödinger’s who went on to be one of the world’s most important voices on science and theology. What they had in common was that they devoted years to mastering the other relevant discipline before daring to think and work on both sides of the fence. As it happens, I also know some world-famous physicists who have caused only embarrassment when they tried to refashion themselves as theologians, and world famous theologians who caused equal embarrassment when they pretended to find in contemporary physics the explanation of theological mysteries. And the problem in those cases was, precisely, that the individuals in question didn’t do the hard work to master the other field.

Years ago I was fond of joking that the call for interdisciplinarity was really just a plea to be allowed to do badly in two fields what one perhaps couldn’t even do well in one. That might be a slightly uncharitable way to put the point, because we rightly celebrate interests that stretch beyond one’s home domain and we rightly encourage dialogue of all kinds. Moreover, we rightly strive to create more flexible and accommodating administrative structures, as with the Arizona State experiment. But the real problem of interdisciplinarity is, in most cases, that of a lack of effort or of talent, a failure to do what needs to be done to earn the respect of one’s colleagues in other fields, respect born out of study and demonstrated achievement. I’m sorry to be so harsh, but too many of the complainers are just lazy dilettantes. Hard working, smart folk see barriers as just bumps in the road on the way to the construction of richly interdisciplinary research careers, educational programs, professional associations, and whatever else is needed to get the job done. Confronted by a befuddled dean or a reluctant provost, they don’t stop, they accelerate.

History teaches us another lesson. It teaches us that it is the problems, themselves, that always play the leading role in disciplinary change. Many of the most interesting problems grow up at the interfaces between different fields. Thus, as I explain to my students, the quantum revolution had its start at the end of the nineteenth century, when theoretical physicists began to pay attention to exciting new work on precision measurements in industrial labs. It was the engineers and the materials scientists whose work first alerted the theorists to the problem of anomalous specific heats and to the curious features of the black-body radiation spectrum. In Germany, in 1887, the government created the Physikalisch-Technische Reichsanstalt [Imperial Physical-Technical Institute] specifically as a space in which such collaborations between industrial and academic scientists and engineers could flourish. That was a very smart move. And it teaches us that nimble and flexible administrative structures are needed in order to make it possible for the problems to play the leading role. “Aha!” say the whiners, “that’s just the point. University administrations are inflexible.” Well, if that’s so, then please explain how it’s possible that, ever since the birth of the modern university, all of the wonderful experiments in boundary busting adduced in this short essay (and many more besides) could have occurred. They occurred when university presidents, agency directors, and program managers rightly said to people proposing new centers and labs, “convince me,” and then the champions of the new did the hard work to do just that.

Adapted from remarks delivered at the conference “Transcending Orthodoxies: Re-examining Academic Freedom in Religiously-Affiliated Colleges and Universities,” University of Notre Dame, October 29-November 1, 2015.

On the Moral and Intellectual Bankruptcy of Risk Analysis: Garbage In, Garbage Out

Don Howard

For decades, risk analysis has been the main tool guiding policy in many domains, from environment and public health to workplace and transportation safety, and even to nuclear weapons. One estimates the costs and benefits of various courses of action and their conceivable outcomes, typically in the form of human suffering and well-being, though other goods, like tactical or strategic advantage, might dominate in some contexts. One also estimates the likelihoods of those outcomes. One multiplies probability times cost or benefit and then sums over all outcomes to produce an expected utility for each course of action. The course of action that maximizes benefit and minimizes harm is recommended. In its most general form, we call this “cost-benefit analysis.” We call it “risk analysis” when our concern is mainly with the downside consequences, such as the risk of a core meltdown at a nuclear power facility.
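The bookkeeping just described is simple to state in code. The sketch below is purely illustrative: the actions, outcome catalogs, probabilities, and utility figures are all invented assumptions, not data from any actual analysis.

```python
# A minimal sketch of cost-benefit (expected-utility) analysis.
# Each action maps to a list of (probability, utility) pairs;
# negative utilities represent harms. All numbers are invented.

def expected_utility(outcomes):
    """Sum probability-weighted utilities over all outcomes."""
    return sum(p * u for p, u in outcomes)

# Hypothetical actions and outcome catalogs (illustrative only):
actions = {
    "build_plant": [(0.989, 100.0),       # normal operation: benefit
                    (0.010, -50.0),       # minor incident: modest harm
                    (0.001, -10_000.0)],  # meltdown: catastrophe
    "do_nothing":  [(1.0, 0.0)],
}

for name, outcomes in actions.items():
    print(f"{name}: EU = {expected_utility(outcomes):.1f}")

# The recommendation is the action with maximal expected utility:
best = max(actions, key=lambda a: expected_utility(actions[a]))
print("recommended:", best)
```

The essay’s worry is precisely that every number in `actions` is up for grabs: nudge the catastrophe’s utility or its probability and the recommendation flips.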

I have long been uneasy about the pseudo-rationalism of such analyses. An elaborate formal apparatus conveys the impression of rigor and theoretical sophistication, whereas the widely varying conclusions – one can “justify” almost any policy one chooses – suggest a high degree of subjectivity, if not outright, agenda-driven bias. But my recent entanglement in the debate about the risks and benefits of gain-of-function (GOF) experiments involving pathogens with pandemic potential (PPP) moves me to think and say more about why I am troubled by the risk analysis paradigm (Casadevall, Howard, Imperiale 2014).

The H1N1 Influenza Virus

The essential point is suggested by my subtitle: “Garbage In, Garbage Out.” Let’s think about each piece of a cost-benefit analysis in turn. Start with cost and benefit in the form of human suffering and well-being. The question is: “How does one measure pain and pleasure?” Is a full belly more pleasurable than warmth on a cold winter’s night? Is chronic pain worse than fear of torture? There are no objective hedonic and lupenic metrics. And whose pain or pleasure counts for more? Does the well-being of my immediate community or nation trump that of other peoples? Does the suffering of the living count more heavily than the suffering of future generations? Most would say that the unborn get some kind of vote. But, then, how can we estimate the numbers of affected individuals even just twenty years from now, let alone fifty or one hundred years in the future? And if we include the welfare of too many generations, then our own, contemporary concerns disappear in the calculation.

Next, think about cataloging possible outcomes. Some consequences of our actions are obvious. Punch someone in anger and you are likely to cause pain and injury both to your victim and yourself. We could not function as moral agents were there not some predictability in nature, including human nature and society. But the obvious and near-term consequences form a subset of measure zero within the set of all possible consequences of our actions. How can we even begin to guess the distant and long-term consequences of our actions? Forget chaos theory and the butterfly effect, though those are real worries. Who could have predicted in 1905, for example, that Einstein’s discovery that E = mc² contained within it the potential for annihilating all higher organic life on earth? Who could have foreseen in the 1930s that the discovery of penicillin, while saving millions of lives in the near term, carried with it the threat of producing super-bacteria, resistant to all standard antibiotics, risking many more deaths than penicillin, itself, ever prevented? A risk analysis is only as good as one’s catalog of possible outcomes, and history teaches us that we do a very poor job of anticipating many of the most important.

Then think about estimating the probabilities of outcomes. Some of these estimates, such as the probability of injury or death from driving a car five miles on a sunny day in light traffic, are robust because they are data driven. We have lots of data on accident rates for passenger vehicles. But when we turn to the exceptional and the unusual, there is little or no data to guide us, precisely because the events in question are so rare. We cannot reliably estimate even the risk of injury accidents from transporting oil by rail: rail transport of oil was until recently uncommon, it is now very common, and the scant evidence from past experience does not scale in any clear-cut way to the new oil transportation economy. Would pipeline transportation be better or worse? Who knows? When real data are lacking, one tries reasoning by analogy to other, relevantly similar practices. But who can define “relevantly similar”?

It is especially when one comes to extremely rare events, such as a global pandemic, that the whole business of making probability estimates collapses in confusion and disarray. By definition, there is no data on which to base estimates of the probabilities of one-of-a-kind events. Doing it by theory, instead of basing the estimate on data, is a non-starter, for in a vacuum of data there is also a vacuum of theory, theories requiring data for their validation. We are left with nothing but blind guessing.

Put the pieces of the calculation back together again. There are no objective ways of measuring human suffering and well-being. We cannot survey all of the possible outcomes of our actions. Our probability estimates are, in the really important cases, pure fiction. The result is that one can easily manipulate all three factors – measures of pain and pleasure, outcome catalogues, and probability estimates – to produce any result one wishes.

And therein lies both the moral and the intellectual bankruptcy of risk and cost-benefit analysis.

But it’s worse than that. It’s not just that such analyses can be manipulated to serve any end. There is also the problem of deliberate deception. The formal apparatus of risk and cost-benefit analysis – all those graphs and tables and numbers and formulas – creates a pretense of scientific rigor where there is none. Too often that is the point, to use the facade of mathematical precision in order to quash dissent and silence the skeptic.

Back to rare and catastrophic events, like a possible global pandemic produced by a GOF/PPP experiment gone awry. What number to assign to the suffering? However low one’s probability estimate – and, yes, the chances of such a pandemic are low – the catastrophic character of a pandemic gets a suffering score that sends the final risk estimate off the charts. But wait a minute. Didn’t we just say that we cannot anticipate all of the consequences of our actions? Isn’t it possible that any course of action, any innovation, any discovery could lead to a yet unforeseen catastrophe? Unlikely perhaps, but that doesn’t matter, because the consequences would be so dire as to overwhelm even the lowest of probability estimates. Best not to do anything. Which is, of course, an absurd conclusion.
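The arithmetic behind this fallacy is easy to exhibit. In the toy calculation below (every figure invented for illustration), a fixed, tiny catastrophe probability is held constant while the posited harm grows; past some point the catastrophe term swamps any benefit, and the analysis condemns the action no matter how improbable the disaster.

```python
# The "apocalypse fallacy" in miniature: a sufficiently dire harm
# overwhelms even the lowest probability estimate. All numbers are
# invented for illustration.

def expected_utility(benefit, p_catastrophe, harm):
    """Benefit of acting minus the probability-weighted catastrophe."""
    return benefit - p_catastrophe * harm

BENEFIT = 1_000.0   # assumed benefit of the action
P = 1e-9            # an extremely low catastrophe probability

for harm in (1e9, 1e12, 1e15):
    eu = expected_utility(BENEFIT, P, harm)
    verdict = "act" if eu > 0 else "do nothing"
    print(f"posited harm = {harm:.0e}: EU = {eu:.3g} -> {verdict}")
```

Since the posited harm can always be imagined larger, “do nothing” eventually wins for every action, which is the absurd conclusion just described.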

This “apocalypse fallacy,” the invocation of possible catastrophic consequences that overwhelm the cost-benefit calculation, is an all-too-common trope in policy debates. Should nuclear weapons have been eliminated immediately upon the discovery that a “nuclear winter” was possible? There are good arguments for what’s now termed the “nuclear zero” option, but this is not one of them. Should the remote possibility of creating a mini-black hole that would swallow the earth have stopped the search for the Higgs boson at CERN? Well, sure, do some more calculations and theoretical modeling to fix limits on the probability, but don’t stop just because the probability remains finite.

So when the pundits tell you not to invest in new nuclear power generation technologies as the surest and quickest route to a green energy economy because there is a chance of a super-Fukushima nightmare, ignore them. They literally don’t know what they’re talking about. When a do-gooder urges the immediate, widespread use of an Ebola vaccine that has not undergone clinical trials, arguing that the chance of saving thousands, perhaps even millions of lives, outweighs any imaginable untoward consequences, ignore him. He literally does not know what he’s talking about. Does this mean that we should rush into nuclear power generation or that we should refuse the cry for an Ebola vaccine from Africa? Of course not, in both cases. What it means is that we should choose to do or not do those things for good reasons, not bad ones. And risk or cost-benefit arguments, especially in the case of rare eventualities, are always bad arguments.

It would be wrong to conclude from what I’ve just argued that I’m counseling our throwing caution to the wind as we do whatever we damned well please. No. The humble, prudential advice is still good, the advice that one think before acting, that one consider the possible consequences of one’s actions, and that one weigh the odds as best one can. It’s just that one must be aware and wary of the agenda-driven abuse of such moral reflection in the pseudo-rational form of risk and cost-benefit analysis.

That said, there is even still value in the intellectual regimen of risk and cost-benefit analysis, at least in the form of the obligation entailed by that regimen to be as thorough and as objective as one can in assaying the consequences of one’s actions, even if the exercise cannot be reduced to an algorithm. But that is just another way of saying that, to the moral virtue of prudence must be added the intellectual virtues (which are also moral virtues) of honesty and perseverance.


Sincere thanks to Arturo Casadevall and Michael Imperiale for conversations that sharpened and enriched my thinking about this issue.


Arturo Casadevall, Don Howard, and Michael J. Imperiale. 2014. “An Epistemological Perspective on the Value of Gain-of-Function Experiments Involving Pathogens with Pandemic Potential.” mBio 5(5): e01875-14. doi:10.1128/mBio.01875-14. (http://mbio.asm.org/content/5/5/e01875-14.full)

The Scientist qua Scientist Has a Duty to Advocate and Act

Don Howard

The new AAAS web site on climate change, “What We Know,” asserts: “As scientists, it is not our role to tell people what they should do or must believe about the rising threat of climate change. But we consider it to be our responsibility as professionals to ensure, to the best of our ability, that people understand what we know.” Am I the only one dismayed by this strong disavowal of any responsibility on the part of climate scientists beyond informing the public? Of course I understand the complicated politics of climate change and the complicated political environment in which an organization like AAAS operates. Still, I think that this is an evasion of responsibility.

Contrast the AAAS stance with the so-called “Franck Report,” a remarkable document drawn up by refugee German physicist James Franck and colleagues at the University of Chicago’s “Metallurgical Laboratory” (part of the Manhattan Project) in the spring of 1945 in a vain effort to dissuade the US government from using the atomic bomb in a surprise attack on a civilian target. They started from the premise that the scientist qua scientist has a responsibility to advise and advocate, not just inform, arguing that their technical expertise entailed an obligation to act:

“The scientists on this project do not presume to speak authoritatively on problems of national and international policy. However, we found ourselves, by the force of events, during the last five years, in the position of a small group of citizens cognizant of a grave danger for the safety of this country as well as for the future of all the other nations, of which the rest of mankind is unaware. We therefore feel it is our duty to urge that the political problems, arising from the mastering of nuclear power, be recognized in all their gravity, and that appropriate steps be taken for their study and the preparation of necessary decisions.”

James Franck. Director of the Manhattan Project’s Metallurgical Laboratory at the University of Chicago and primary author of the “Franck Report.”

I have long thought that the Franck Report is a model for how the scientist’s citizen responsibility should be understood. At the time, the view of the signatories to the Franck Report stood in stark contrast to J. Robert Oppenheimer’s position that the scientist’s responsibility was only to provide technical answers to technical questions. Oppenheimer wrote: “We didn’t think that being scientists especially qualified us as to how to answer this question of how the bombs should be used” (Jungk 1958, 186).


J. Robert Oppenheimer
Director of the Manhattan Project

The key argument advanced by Franck and colleagues was, again, that it was precisely their distinctive technical expertise that entailed a moral “duty . . . to urge that the political problems . . . be recognized in all their gravity.” Of course they also urged their colleagues to inform the public so as to enable broader citizen participation in the debate about atomic weapons, a sentiment that eventuated in the creation of the Federation of American Scientists and the Bulletin of the Atomic Scientists. The key point, however, was the link between distinctive expertise and the obligation to act. Obvious institutional and professional pressures rightly enforce a boundary between science and advocacy in the scientist’s day-to-day work. Even political advocacy requires a solid empirical and logical foundation. But that there might be extraordinary circumstances in which the boundary between science and advocacy must be crossed seems equally obvious. And one is hard pressed to find principled reasons for objecting to that conclusion. Surely there is no easy argument leading from scientific objectivity to a disavowal of any such obligations.

Much of the Franck Report was written by Eugene Rabinowitch, who went on to become a major figure in the Pugwash movement, whose leader, Joseph Rotblat, was awarded the 1995 Nobel Peace Prize for his exemplary efforts in promoting international communication and understanding among nuclear scientists from around the world during the worst of the Cold War. The seemingly omnipresent Leo Szilard also played a significant role in drafting the report, and since 1974 the American Physical Society has given an annual Leo Szilard Lectureship Award to honor physicists who “promote the use of physics to benefit society.” Is it ironic that the 2007 winner was NASA atmospheric physicist James E. Hansen, who has become controversial in the climate science community precisely because he decided to urge action on climate change?

That distinctive expertise entails an obligation to act is, in other settings, a principle to which we all assent. An EMT, even when off duty, is expected to help a heart attack victim precisely because he or she has knowledge, skills, and experience not common among the general public. Why should we not think about scientists and engineers as intellectual first responders?

Physicists, at least, seem to have assimilated within their professional culture a clear understanding that specialist expertise sometimes entails an obligation to take political action. That fact will, no doubt, surprise many who stereotype physics as the paradigm of a morally and politically disengaged discipline. There are many examples from other disciplines of scientists who have gone so far as to risk their careers to speak out in service to a higher good, including climate scientists like Michael Mann, who recently defended the scientist’s obligation to speak up in a blunt New York Times op-ed, “If You See Something, Say Something.” The question remains why, nonetheless, the technical community has, for the most part, followed the lead of Oppenheimer, not Franck, when, in fact, our very identity as scientists does sometimes entail a moral obligation “to tell people what they should do” about the most compelling problems confronting our nation and our world.


Jungk, Robert (1958). Brighter than a Thousand Suns: A Personal History of the Atomic Scientists. New York: Harcourt, Brace and Company.

Science in the Crosshairs

Don Howard

Sometime over the weekend of September 28-29, Mojtaba Ahmadi, a specialist in cyber-defense and the Commander of Iran’s Cyber War Headquarters, was found dead with two bullets to the heart. Nothing has been said officially, but it is widely suspected that Ahmadi was targeted for assassination, some pointing the finger of blame at Israel. The method of the attack, reportedly assassins on motorbikes, is reminiscent of earlier assassinations or attempted assassinations of five Iranian nuclear scientists going back to 2007, those attacks also widely assumed to have been the work of Israeli operatives.

Noteworthy is the fact that, as with those earlier assassinations, this latest attack is receiving scant attention in the mainstream press. Nor has it occasioned the kind of protest that one might have expected from the international scientific community. This silence is worrisome for several reasons.

Were Iran in a state of armed conflict with an adversary, as defined by the international law of armed conflict (ILOAC), and if one of its technical personnel were directly involved in weapons development, then that individual would be a legitimate target, as when the OSS targeted Werner Heisenberg for assassination in WWII owing to his role at the head of the German atomic bomb project. But such is not the case. Iran is not in a state of armed conflict with any potential adversary. That being so, the silence on the part of other governments and the lack of protest from NGOs, professional associations, and other stakeholders means that we are allowing a precedent to be set that could have the effect of legitimating such assassinations as part of customary law.

Were this to become accepted practice, then the consequences would be profound. It would then be perfectly legal for a targeted nation, such as Iran, to retaliate in kind with attacks targeted against technical personnel within countries reasonably deemed responsible for sponsoring the original attack. Thus, were it to emerge that the US had a hand in these events, even if only by way of logistical or intelligence support, then any US cyberwarfare specialist would become a legitimate target, as would be any US nuclear weapons technical personnel. Quite frankly, I worry that it is only a matter of time before Iran attempts precisely that, and the US being a softer target than Israel, I worry that it may happen here first.

Technical professional associations such as IEEE or the American Physical Society have, I think, a major responsibility to make this a public issue and to take a stand calling for a cessation of such attacks.

The alternative is to condone the globalization and domestication of the permanent state of undeclared conflict in which we seem to find ourselves today. Critics of US foreign and military policy might applaud this as just deserts for unwarranted meddling in the affairs of other nations. That is most definitely not my view, for I believe that bad actors have to be dealt with firmly by all legal means. My concern is that these targeted assassinations, while currently illegal, may become accepted practice. And I don’t want our children to grow up in the kind of world that would result.

Science and Values – 1. The Challenge for the Philosopher

Don Howard

Science, by which I mean also the technologies that flow from and inform it, is a form of social practice. It has evolved distinct institutions and a distinct sociology. It has accumulated and refined an array of formal techniques and instrumental means for knowledge production and certification. That it is also socially embedded, affected by and affecting every aspect of human life, is a trivial truth. The only question, albeit a large one, is, “How?” By what means, in which respects, and to what extent does science change our world and does the world change science? Some changes are obvious, as with the accelerating transformation of material culture effected by science, and changes in our understanding of self, the worlds our selves inhabit, their relation to one another, and the relation of both to nature and spirit. Other changes, and the manner of the change, are less so, as with the content and modes of production of scientific knowledge. Does it make a difference when science is done in a democracy? Does it make a difference when research is funded by the private sector rather than the state? Is science neutral, objective, and above the fray? Understanding how science affects and is affected by its surround is necessary if we wish to effect intelligent control over science and the part of human life that it touches, which is well nigh the whole of the human experience.

Philosophers of science are supposed to understand the structure, methods, and interpretation of science. But apart from modest progress on the formal side and a few helpful insights in the foundations of some individual sciences, philosophy’s record from the early twentieth century has been, until lately, rather spotty. In the main, when it comes to all but the more formal questions, philosophers of science have handed the task to their colleagues in history and sociology. History has given good service. Fans of technical history of science have been a tad disappointed in recent decades, but otherwise the history of science is a thriving field, with an expanding scope and a healthy plurality of approaches. Historians have taught us much about how science works and how it lives in its many contexts. But history remains, for the most part, a descriptive, narrative, or hermeneutic enterprise, deliberately eschewing critique and normativity. We may argue about how good a job the sociologists have done since the advent of the “strong programme” (“strong” = context shapes the content of science, not just its aims and institutions) some thirty-plus years ago. Instead let’s thank them for forcing everyone to take the question of context seriously and for unsettling our lazy assumptions about the distinctive superiority of science among other social practices, its objectivity, and its social detachment. Subversion of prejudice is a form of critique, but sociology of science, like history of science, remains a largely descriptive, not critical, enterprise.

Which brings us back to the challenge for the philosophers of science, my native tribe. Until lately, we have struggled to say much that is helpful about the embedding of science in society because we were in thrall to an ideology of value neutrality and the social detachment of science, wrongly thinking these to be necessary conditions for scientific objectivity. We used to credit logical positivism for this deep insight into objectivity, citing Hans Reichenbach’s distinction between the “context of justification” and the “context of discovery,” the latter being the dustbin into which history, sociology, and all interesting questions about context were cast, the former being the sandbox in which elite philosophers of science alone were allowed to play. Now we regard such dogma not as insight, but as blindness, and the newer historiography explains it as not just the conclusion of a bad argument but as the discipline’s defensive response to political persecution before (Hitler) and after (Joe McCarthy) the Second World War. Weak inductive evidence for the new historiography is afforded by the fact that, curiously, philosophers of science began to overcome their fear of values talk at about the same time that the Berlin Wall came down.

Today, one is happy to report, everyone is eager to get on the science and values bandwagon. There are conferences, books, anthologies, special issues of journals, albeit, as yet, no prizes. Philosophers of science are eager to learn about science policy. They now invite historians and sociologists to their meetings, and they try hard to be respectful, even as they struggle to figure out exactly how empirical evidence bearing on the actual practice of science is supposed to inform their philosophical questions. But that is precisely the problem, for what the philosophy of science still lacks are the tools for theorizing the manner and consequences of the social embedding of science.

This is not for want of trying. Our feminist colleagues have been at it for thirty or more years. They have taught us a lot about episodes where science has been more deeply affected by its social embedding – read now its “gendering” – than many of us had thought or wanted to think. Among them there is a proliferation of analytical frameworks, from feminist empiricism to standpoint theory, difference feminism, and postmodernist feminism, each of which has taught us new ways to query once-settled pieties. Phil Kitcher is probably the most prominent philosopher of science otherwise to have taken the plunge, borrowing ideas from John Rawls to think about the place of science in democracy while holding onto what some think are rather shopworn notions of truth and realism (perhaps also a shopworn notion of democracy). Most interesting to me are those projects that mine the past for fresh insights on science, values, and social embedding, as with Heather Douglas’s re-reading of Richard Rudner, Tom Uebel’s rehabilitation of Otto Neurath, and Matt Brown’s resuscitation of John Dewey (more on all of which anon). New theoretical ideas emerge, thus, from attentive history that is more than mere antiquarianism and rational reconstruction.

Lots of commotion. Still we lack, by my lights, the kinds of theoretical tools needed to answer the “How?” question posed above: “By what means, in which respects, and to what extent does science change our world and does the world change science?” We need a theory of science that integrates the history, philosophy, anthropology, psychology, sociology, and even biology of science and scientists into a comprehensive project. In its critical and reformist aspects this theory of science must learn to be normative not just after the fashion of the inductive logician but also in the way of the political theorist and the moral theorist. Promotion of the common good should be the guiding principle. And it would be fun if it could even be a bit utopian.

The next post will set us on our way with a more specific list of necessary conditions for the possibility of such a theory of science.