Sunday, May 02, 2010

Politics and science in risk assessment

There’s a widespread recognition that public policy should be informed both by scientifically verifiable factual information and by social values.  It’s commonly assumed that science should provide the facts for policy-makers, and that policy-makers should then combine those facts with the social and political values of the citizens they represent to make policy.  This division between fact and value is institutionalized in processes that separate risk assessment, performed by scientists concerned solely with the facts, from subsequent risk management, which also involves values and takes place in the sphere of politics.  This neat division, however, doesn’t work all that well in practice.

“Taking European Knowledge Society Seriously,” a 2007 “Report by the Expert Group on Science and Governance to the Science, Economy and Society Directorate, Directorate-General for Research” of the European Commission, spends much of its third chapter criticizing this division and the idea that risk assessment can be performed in a value-free way.  Some of the Report’s objections are similar to those made by Heather Douglas in her book Science, Policy, and the Value-Free Ideal, and her analysis of a topography of values complements the Report.  The selection of what counts as input into the risk assessment process, for example, is a value-laden decision analogous to Douglas’ discussion of problem selection.  Health and safety concerns are commonly paramount, but other potential risks--to environment, to economy, to social institutions--may be minimized, dismissed, or ignored.  The selection of measurement methods can also implicitly involve values, as Douglas likewise observes.  The Report notes that "health can be measured alternatively as frequency or mode of death or injury, disease morbidity, or quality of life," and questions arise about how to aggregate and weight different populations, how to compare humans to nonhumans, and how to weigh future generations against present generations.
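
To make that concrete, here’s a toy sketch.  The hazards, numbers, and weighting options below are entirely invented, not drawn from the Report or from Douglas; the point is just that the choice of metric and the discount applied to future generations, both value judgments, can determine which hazard comes out as the bigger risk.

```python
# Toy illustration only: two hypothetical hazards with made-up numbers.
# Metric choice and generational discounting (both value judgments)
# determine which hazard comes out as "riskier".

hazards = {
    "Hazard A": {"deaths": 10, "qaly_loss": 400, "future_share": 0.1},
    "Hazard B": {"deaths": 4,  "qaly_loss": 500, "future_share": 0.8},
}

def risk_score(h, metric, future_discount):
    """Score a hazard under a chosen metric, discounting the portion of
    harm that falls on future generations."""
    present = h[metric] * (1 - h["future_share"])
    future = h[metric] * h["future_share"] * future_discount
    return present + future

for metric in ("deaths", "qaly_loss"):
    for discount in (1.0, 0.3):  # 1.0 = future generations count fully
        riskiest = max(hazards, key=lambda n: risk_score(hazards[n], metric, discount))
        print(f"metric={metric:9} discount={discount:.1f} -> riskiest: {riskiest}")
```

With these invented numbers, the mortality metric always flags Hazard A, while the quality-of-life metric flags Hazard B unless harms to future generations are heavily discounted, in which case A comes out on top again.  Nothing in the data dictates which of those framings is the right one; that is a value choice.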

In practice, scientists tend to recognize these sorts of questions, as well as the fact that they are value-laden.  This can bog down the process: scientists want policy-makers to answer the value questions before they perform their risk assessment, while policy-makers insist that they just want the scientific facts of the matter before making any value-based decisions.  Because science is a powerful justification for policy, it’s in the interest of the policy-maker to push as much as possible to the science side of the equation.  We see this in Congress, which tends to pass broad-brush statutes that “do something” about a problem but push all the details to regulatory agencies, so that Congress can take credit for action while blaming the regulatory agencies if the policy doesn’t work as expected.  We see it in judicial decisions, where the courts tend to be extremely deferential to science.  And we see it within regulatory agencies themselves, as when EPA Administrator Carol Browner went from saying that “The question is not one of science, the question is one of judgment” (Dec. 1996, upon initially proposing ozone standards) to “I think it is not a question of judgment, I think it is a question of science” (March 1997, about those same standards).  The former position is subject to challenge in ways that the latter is not.

In reality, any thorough system of risk management needs to be iterative, involving both scientific judgments about facts and political decisions that take values into account.  The aim is not to use values to reach predetermined conclusions, but to recognize which sets of interests and concerns are significant.  This doesn’t preclude standardizing methods of quantification and assessment; it just means that those methods need to begin from a state where values are explicitly used in identifying which facts need to be assessed, and that they need to be able to evolve in response to feedback.
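
As a purely schematic sketch of that loop, with invented endpoint names and placeholder functions standing in for real assessment and deliberation, it might look something like this:

```python
# Schematic sketch of the iterative structure described above.
# Endpoint names, the update rule, and the placeholders are invented for
# illustration; a real process involves people and institutions, not lambdas.

def run_round(endpoints, assess, weigh, critique):
    """One round: values pick the endpoints, science estimates them,
    politics weighs the estimates, and feedback revises what counts."""
    estimates = {e: assess(e) for e in endpoints}   # fact side: measure what values say matters
    decision = weigh(estimates)                     # value side: turn estimates into a choice
    endpoints = critique(decision, endpoints)       # feedback: revise endpoints for the next round
    return decision, endpoints

# Hypothetical usage: start with mortality only; critique adds an ecosystem endpoint.
decision, endpoints = run_round(
    endpoints=["mortality"],
    assess=lambda e: 0.0,                                   # placeholder estimator
    weigh=lambda est: "tighten" if any(v > 0 for v in est.values()) else "hold",
    critique=lambda d, eps: eps + ["ecosystem_damage"],
)
print(decision, endpoints)   # -> hold ['mortality', 'ecosystem_damage']
```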

[A slightly different version of the above was written as a comment for my Human and Social Dimensions of Science and Technology core seminar. Thanks to Tim K. for his comments.]

2 comments:

  1. Perhaps the problem is with the term itself. "Risk Assessment" implies, even mandates, a value system. How does one determine what is a risk and how risky it is without referring to the "pro/con" checklist in their heads?

    Science should be called on to assess possible outcomes, intended or unintended; then those outcomes can be evaluated. I absolutely agree it must be iterative and must evolve based on feedback. The question is, whose feedback?

    Clearly some outcomes can be easily judged. The possible outcome of "the death of all living creatures" falls (almost without question) into the "cons" side of the chart. But how does science avoid the risk [pun intended] of dismissing relevant outcomes as unimportant or harmless based on its own value system?

    More interestingly, what if the scientist is right? For example, let's say society, in general, has overwhelmingly decided "the death of all living things" is not really a problem and the scientist finds that their latest solar panel technology will result in the DOALT. Knowing that this outcome will be ignored by, or unknown to, all living things, should he alter his results? Should she sabotage the project? Are they right to work to protect society or humanity from itself?

    Granted, the hyperbolic nature of that scenario can seem ridiculous, so what about a drug that has a high success rate of leukemia remission, lowers cholesterol, and will almost certainly cause renal failure? Johnny Double-Double with Cheese is likely going to opt for a salad and some exercise, but the blood cancer patient may relish a life of dialysis. The level of acceptable risk here is subjective, and therein lies the core problem. Who is objective enough to subjectively assess each person's risk? Outside of each patient, I struggle to find an answer.

    In either scenario, should science make that judgment call or just produce scientific data and technology to be interpreted and implemented by policy makers?

    I tend to think (perhaps misguidedly) that scientists, in general, approach their vocation with reverence and the earnestness required to responsibly pursue whatever the science may lead them to, but the weakness is the same. Humans have to make this assessment, and odds are we'll do a poor job. So what's left? Scientists, politicians, referendum? Skynet?

  2. Fred:

    Thanks for the comment. I agree that there is always a value system implied in risk assessment. I can't think of a better approach than having transparent standards of what counts, which are then subject to critique about what's being left out, including speculation about possible "unknown unknowns" that we don't even know about.

    Defining what counts as acceptable risk seems to me to fit on the political side of the divide, just as it fits on the business side of an information security risk assessment.

    The trend in science and technology studies (STS), exemplified in the European Commission Report, is towards involving more views of the general public in the process in some manner, with lots of different proposals on how that might be accomplished. Mark Brown's book, which I criticized in a prior blog post, gave an account of how to get a diverse set of views represented in scientific advisory panels without doing things like big focus groups.

    I think I'm OK with sabotaging Skynet, if you've really got good evidence that it's going to lead to a mass death scenario.
