Wednesday, February 24, 2010

Science as performance

The success of science in the public sphere is determined not just by the quality of research but by the ability to persuade. Stephen Hilgartner’s Science on Stage: Expert Advice as Public Drama uses a theatrical metaphor, drawing on the work of Erving Goffman, to explain the divergent outcomes of three successive reports on diet and nutrition issued by the National Academy of Sciences: one was widely criticized by scientists, one was criticized by food industry groups, and one was never published. The reports differed in “backstage” features such as how the committees coordinated their work and what sources they drew upon, in “onstage” features such as the composition of experts on their committees and how they communicated their results, and in how they responded to criticism.

The techniques that Hilgartner identifies as enhancing perceptions of credibility--techniques of rhetoric and performance--are the very ones relied upon by con artists. If there is no way to distinguish their use by con artists from their use by genuine practitioners, if all purported experts are on equal footing and only the on-stage performances are visible, then we have a bit of a problem. All purported experts of comparable performing ability are on equal footing, and we may as well flip coins to distinguish between them. But part of a performance includes its propositional content--the arguments and evidence deployed--and these are evaluated not just on aesthetic grounds but with respect to logical coherence and compatibility with what the audience already knows. Further, the performance itself includes an interaction with the audience that strains the stage metaphor. Hilgartner describes this as members of the audience themselves taking the stage, yet audience members in his metaphor also interact with each other, individually and in groups, through complex webs of social relationships.

The problem of expert-layman interaction is that the layman in most cases lacks the interactional expertise even to communicate about the details of the evidence supporting a scientific position, and must rely upon other markers of credibility, which may be mere rhetorical flourishes. This is the problem of Plato’s “Charmides,” in which Socrates asserts that only a genuine doctor can distinguish a sufficiently persuasive quack from a genuine doctor. A similar position is endorsed by philosopher John Hardwig in his paper “Epistemic Dependence” (PDF) and by law professor Scott Brewer in “Scientific Expert Testimony and Intellectual Due Process,” which points out that the same problem faces judges and juries. There are some features which enable successful distinctions between genuine and fake experts in at least the more extreme circumstances--examination of track records, credentials, and evaluations by other experts or meta-experts (e.g., experts in methods used across multiple domains, such as logic and mathematics). Brewer enumerates four strategies of nonexperts in evaluating expert claims: (1) “substantive second-guessing,” (2) “using general canons of rational evidentiary support,” (3) “evaluating demeanor,” and (4) “evaluating credentials.” Of these, only (3) is an examination of the merely surface appearances of the performance (which is not to say that it can’t be a reliable, though fallible, mechanism). But when the evaluation is directed not at distinguishing a genuine expert from a fake but at adjudicating conflicting claims between two genuine experts, the nonexpert may find that none of these strategies is effective and that only time (if anything) will tell--yet in some domains, such as the legal arena, a decision may need to be reached long before such a resolution becomes available.

One novel suggestion for institutionalizing a form of expertise that fits into Hilgartner’s metaphor is philosopher Don Ihde’s proposal of “science critics,” in which individuals with at least interactional expertise within the domain they criticize serve a role similar to that of art and literary critics in evaluating a performance, including its content and not just its rhetorical flourishes.

[A slightly different version of the above was written as a comment for my Human and Social Dimensions of Science and Technology core seminar. The Hardwig and Brewer articles are both reprinted in Evan Selinger and Robert P. Crease, editors, The Philosophy of Expertise. NY: Columbia University Press, 2006, along with an excellent paper I didn't mention above, Alvin I. Goldman's "Experts: Which Ones Should You Trust?" (PDF). The term "interactional expertise" comes from Harry M. Collins and Robert Evans, "The Third Wave of Science Studies: Studies of Expertise and Experience," also reprinted in the Selinger & Crease volume; a case study of such expertise is in Steven Epstein's Impure Science: AIDS, Activism, and the Politics of Knowledge, Berkeley: University of California Press, 1996. Thanks to Tim K. for his comments on the above.]


Eamon Knight said... "...in which individuals with at least interactional expertise within the domain they criticize serve a role similar to art and literary critics in evaluating a performance, including its content and not just its rhetorical flourishes."

E.g.: Carl Zimmer, perhaps? As well as many of the better science bloggers.

Jim Lippard said...

Eamon: I think you're right that a role very much like what Ihde describes is already filled at least by some of the better science bloggers and science writers; science writing in general tends to be a bit more descriptive and promotional than critical, however.

The Open Peer Commentary of the _Behavioral and Brain Sciences_ journal is another example, and one that's cross-disciplinary and often *very* critical.