“Taking European Knowledge Society Seriously,” a 2007 “Report by the Expert Group on Science and Governance to the Science, Economy and Society Directorate, Directorate-General for Research” of the European Commission, spends much of its third chapter criticizing this division and the idea that risk assessment can be performed in a value-free way. Some of the Report’s objections are similar to those made by Heather Douglas in her book Science, Policy, and the Value-Free Ideal, and her topography of the places where values enter science complements the Report’s analysis. The selection of what counts as input into the risk assessment process, for example, is a value-laden decision analogous to Douglas’ discussion of problem selection. Health and safety concerns are commonly paramount, while other potential risks (to the environment, the economy, or social institutions) may be minimized, dismissed, or ignored. The selection of methods of measurement can also implicitly involve values, as Douglas likewise observes. The Report notes that “health can be measured alternatively as frequency or mode of death or injury, disease morbidity, or quality of life,” and questions arise about how to aggregate and weight different populations, how to compare humans to nonhumans, and how to weigh future generations against present ones.
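To make that aggregation point concrete, here is a purely illustrative sketch in Python, with invented numbers and a hypothetical aggregate_risk function (not anything drawn from the Report or from Douglas), showing how identical projected harms can yield different overall risk figures depending on two value-laden weighting choices:

```python
# Illustrative only: hypothetical harms per decade as (human deaths, nonhuman losses),
# present decade first. The "facts" below are invented for the example.
harms_by_decade = [(10, 1000), (20, 5000), (40, 20000)]

def aggregate_risk(harms, discount_rate, nonhuman_weight):
    """Sum harms across decades, discounting future decades and converting
    nonhuman losses into human-equivalent units. Both parameters encode
    value judgments, not empirical findings."""
    total = 0.0
    for decade, (human, nonhuman) in enumerate(harms):
        weight = (1 - discount_rate) ** decade  # value choice: how much the future counts
        total += weight * (human + nonhuman_weight * nonhuman)  # value choice: nonhuman weighting
    return total

# Same projected harms, different value choices, different aggregate "risk":
print(aggregate_risk(harms_by_decade, discount_rate=0.5, nonhuman_weight=0.0))    # 30.0
print(aggregate_risk(harms_by_decade, discount_rate=0.0, nonhuman_weight=0.001))  # 96.0
```

The point of the sketch is only that the disagreement between the two outputs is not a disagreement about the data; it is entirely a product of the weighting decisions.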
In practice, scientists tend to recognize questions of this sort, and to recognize that they are value-laden. This can bog the process down: scientists want policy-makers to answer the value questions before they perform their risk assessment, while policy-makers insist that they just want the scientific facts of the matter before making any value-based decisions. Because science is a powerful justification for policy, it is in the policy-maker’s interest to push as much as possible to the science side of the equation. We see this in Congress, which tends to pass broad-brush statutes that “do something” about a problem but leave the details to regulatory agencies, so that Congress can take credit for action yet blame the agencies if things don’t work as expected. We see it in judicial decisions, where courts tend to be extremely deferential to science. And we see it within regulatory agencies themselves, as when EPA Administrator Carol Browner moved from saying “The question is not one of science, the question is one of judgment” (Dec. 1996, upon initially proposing ozone standards) to “I think it is not a question of judgment, I think it is a question of science” (March 1997, about those same standards). The former position is open to challenge in ways that the latter is not.
In reality, any thorough system of risk management needs to be iterative, involving both scientific judgments about facts and political decisions that take values into account, with care taken not to use values to reach predetermined conclusions but to recognize which sets of interests and concerns are significant. This doesn’t preclude the standardization of methods of quantification and assessment; it just means that those methods need to be able to evolve in response to feedback, and that they begin from a state in which values are explicitly used to identify which facts need to be assessed.
[A slightly different version of the above was written as a comment for my Human and Social Dimensions of Science and Technology core seminar. Thanks to Tim K. for his comments.]