Scientific autonomy, objectivity, and the value-free ideal
In Science, Policy, and the Value-Free Ideal, Heather E. Douglas argues that notions of scientific autonomy, and the ideal of a science isolated from questions of value (political or otherwise), are mistaken, and that this picture of science as indifferent to value questions (apart from epistemic virtues) itself contributes to harmful consequences when science informs policy. She attributes the value-free ideal to post-1940 philosophy of science, though the idea of scientific autonomy appears to me to have roots much further back, including in Galileo’s “Letter to Castelli” and “Letter to the Grand Duchess Christina” and in John Tyndall’s 1874 Belfast Address, which were more concerned with arguing that religion should not intrude into the domain of science than the reverse. (As I noted in a previous post about Galileo, he did not carve out complete autonomy for natural philosophy from theology, only for those things which can be demonstrated or proven, which he argued scripture could not contradict--and where it apparently does, scripture must be interpreted allegorically.)
Douglas describes a “topography of values” in the categories of cognitive, ethical, and social values, and distinguishes direct and indirect roles for them. Within the “cognitive” category go values pertaining to our ability to understand evidence, such as simplicity, parsimony, fruitfulness, coherence, generality, and explanatory power, but excluding truth-linked epistemic virtues such as internal consistency and predictive competency or adequacy, which she identifies not as values but as minimal negative conditions that theories must meet. Ethical values and social values are overlapping categories, the former concerned with what’s good or right and the latter with what a particular society values, such as “justice, privacy, freedom, social stability, or innovation” (Douglas, p. 92). The distinction between the roles is this: in a direct role, values act as reasons for decisions in their own right, while in an indirect role they are a factor in decision-making only where the evidence is uncertain.
Douglas argues that values can legitimately play a direct role in certain phases of science, such as problem selection, selection of methodology, and in the policy-making arena, but should be restricted to an indirect role in phases such as data collection and analysis and drawing conclusions from evidence. She identifies some exceptions, however--problem selection and method selection can’t legitimately be guided by values in a way that undermines the science by forcing a pre-determined conclusion (e.g., by selecting a method that is guaranteed to be misleading), and a direct role for ethical values can surface in later stages by discovering that research is causing harm.
Her picture of science is one where values cannot directly intrude between the collection of data and the inference of facts from that data, but the space between evidence and fact claims is somewhat more complex than she describes. There is the inference by a scientist of a fact from the evidence, the communication of that fact to other scientists, the publication of that fact in the scientific literature, and its communication to the general public and policy makers. Only the first of these is purely epistemic; the rest are also forms of conduct. It seems to me that there is, in fact, a potential direct role for ethical values, at the very least, in each such form of conduct under particular circumstances, one which could merit withholding the fact claim. For example, a scientist in Nazi Germany could behave ethically by withholding information about how to build an atomic bomb.
Douglas argues that the motivation for the value-free ideal is as a mechanism for preserving scientific objectivity; she therefore gives an account of objectivity that comports with her account of science with values. She identifies seven types of objectivity relevant across three different domains (plus an eighth, which she rejects), all of which have to do with a shared ground for trust. First, within the domain of human interactions with the world, are “manipulable objectivity,” the ability to repeatably and reliably make interventions in nature that give the same result, and “convergent objectivity,” having support for a conclusion from multiple independent lines of evidence. Second, in the domain of individual thought processes, she identifies “detached objectivity”--scientific disinterest, freedom from bias, and eschewing the use of values in place of evidence. There’s also “value-free objectivity,” the notion behind the value-free ideal, which she rejects. And there’s “value-neutral objectivity,” or leaving personal views aside in, e.g., conducting a review of the literature in a field and identifying possible sets of explanations, or taking a “centrist” or “balanced” view of potentially relevant values. Finally, in the domain of social processes, Douglas identifies “procedural objectivity,” where use of the same procedures produces the same results regardless of who engages in the procedure, and “intersubjectivity” in two senses--“concordant objectivity,” agreement in judgments between different people, and “interactive objectivity,” agreement reached through argument and deliberation.
Douglas writes clearly and concisely, and makes a strong case for the significance of values within science as well as in its application to public policy. Though she limits her discussion to natural science (and focuses on scientific discovery rather than fields that involve the production of new materials, an area where a more direct use of values is likely appropriate), her account could probably be extended to those areas with the introduction of a bit more complexity. While I don’t think she has identified all or even the primary causes of the “science wars,” which she discusses at the beginning of her book, I think her account is more useful in adjudicating the “sound science”/“junk science” debate that she also discusses, as well as in identifying a number of ways in which science isn’t and shouldn’t be autonomous from other areas of society.
[A slightly different version of the above was written as a comment for my Human and Social Dimensions of Science and Technology core seminar. Thanks to Judd A. for his comments.]