Wednesday, February 24, 2010

Science as performance

The success of science in the public sphere is determined not just by the quality of research but by the ability to persuade. Stephen Hilgartner’s Science on Stage: Expert Advice as Public Drama uses a theatrical metaphor, drawing on the work of Erving Goffman, to explain the differing fates of three successive reports on diet and nutrition issued by the National Academy of Sciences: one was widely criticized by scientists, one was criticized by food industry groups, and one was never published at all. The reports differed in “backstage” features, such as how the committees coordinated their work and what sources they drew upon; in “onstage” features, such as the composition of the expert committees and how they communicated their results; and in how they responded to criticism.

The features and techniques that Hilgartner identifies as enhancing perceptions of credibility--rhetoric and performance--are the same ones relied upon by con artists. If there is no way to distinguish their use by con artists from their use by genuine practitioners, if all purported experts are on equal footing and only the on-stage performances are visible, then we have a problem: all purported experts of comparable performing ability are interchangeable, and we may as well flip coins to choose between them. But a performance also includes propositional content--the arguments and evidence deployed--and that content is evaluated not just on aesthetic grounds but for logical coherence and compatibility with what the audience already knows. Further, the performance includes an interaction with the audience that strains the stage metaphor: Hilgartner describes members of the audience as themselves taking the stage, and audience members also interact with each other, individually and in groups, through complex webs of social relationships.

The problem of expert-layman interaction is that the layman in most cases lacks the interactional expertise even to communicate about the details of the evidence supporting a scientific position, and so must rely on other markers of credibility, which may be nothing more than rhetorical flourishes. This is the problem of Plato’s “Charmides,” in which Socrates asserts that only a genuine doctor can distinguish a sufficiently persuasive quack from a genuine doctor. A similar position is endorsed by philosopher John Hardwig in his paper “Epistemic Dependence” (PDF) and by law professor Scott Brewer in “Scientific Expert Testimony and Intellectual Due Process,” which points out that judges and juries face the same problem. There are some features that enable successful distinctions between genuine and fake experts, at least in the more extreme cases: examination of track records, credentials, and evaluations by other experts or meta-experts (e.g., experts in methods used across multiple domains, such as logic and mathematics). Brewer enumerates four strategies nonexperts can use in evaluating expert claims: (1) “substantive second-guessing,” (2) “using general canons of rational evidentiary support,” (3) “evaluating demeanor,” and (4) “evaluating credentials.” Of these, only (3) examines the mere surface appearance of the performance (which is not to say it can’t be a reliable, though fallible, mechanism). But when the task is not distinguishing a genuine expert from a fake but adjudicating between the conflicting claims of two genuine experts, the nonexpert may be stuck in a situation where none of these strategies is effective and only time (if anything) will tell--yet in some domains, such as the legal arena, a decision may need to be reached long before such a resolution becomes available.

One novel suggestion for institutionalizing a form of expertise that fits Hilgartner’s metaphor is philosopher Don Ihde’s proposal of “science critics”: individuals with at least interactional expertise in the domain they criticize, who would serve a role similar to that of art and literary critics, evaluating a performance’s content and not just its rhetorical flourishes.

[A slightly different version of the above was written as a comment for my Human and Social Dimensions of Science and Technology core seminar. The Hardwig and Brewer articles are both reprinted in Evan Selinger and Robert P. Crease, editors, The Philosophy of Expertise. NY: Columbia University Press, 2006, along with an excellent paper I didn't mention above, Alvin I. Goldman's "Experts: Which Ones Should You Trust?" (PDF). The term "interactional expertise" comes from Harry M. Collins and Robert Evans, "The Third Wave of Science Studies: Studies of Expertise and Experience," also reprinted in the Selinger & Crease volume; a case study of such expertise is in Steven Epstein's Impure Science: AIDS, Activism, and the Politics of Knowledge, Berkeley: University of California Press, 1996. Thanks to Tim K. for his comments on the above.]

Monday, February 22, 2010

Is knowledge drowning in a flood of information?

There have long been worries that the mass media are producing a “dumbing down” of American political culture, reducing political understanding to sound bites and spin. The Internet has been blamed for information overload and, like MTV in prior decades, for shrinking attention spans, as the text-based web has become the multimedia web and cell phones have become an ever more common way to use it. Similar worries have been expressed about public understanding of science. Nicholas Carr has asked, “Is Google Making Us Stupid?”

Yaron Ezrahi’s “Science and the political imagination in contemporary democracies” (a chapter in Sheila Jasanoff's States of Knowledge: The Co-Production of Science and Social Order) argues that the post-Enlightenment synthesis of scientific knowledge and politics in democratic societies is in decline, on the basis of a transition of public discourse into easily consumed, bite-sized chunks of vividly depicted information that he calls “outformation.” Prior to the Enlightenment, authority had more of a religious basis and the ideal for knowledge was “wisdom”--a mix of the “cognitive, moral, social, philosophical, and practical” that Ezrahi sees as privileged, unteachable, and a matter of faith. The Enlightenment brought systematized, scientific knowledge to the fore: knowledge that was formalized, objective, universal, impersonal, and teachable--with effort. When that scientific knowledge is made more widely usable, “stripped of its theoretical, formal, logical and mathematical layers” into a “thick knowledge” that is context-dependent and localized, it becomes “information.” And finally, when information is further stripped of its context and of its design for a particular use, yet augmented with “rich and frequently intense” representations that include “cognitive, emotional, aesthetic, and other dimensions of experience,” it becomes “outformation.”

According to Ezrahi, such “outformations” mix references to objective and subjective reality and become “shared references in the context of public discourse and action.” They are taken to be legitimated and authoritative despite lacking any necessary grounding in “observations, experiments, and logic.” He describes this shift as one from a high-cost to a low-cost political reality, where “cost” refers to the effort the recipient must invest to consume it, rather than to the consequences for the polity of its consumption and use as a basis for political participation. This shift, he says, “reflects the diminished propensity of contemporary publics to invest personal or group resources in understanding and shaping politics and the management of public affairs.”

But, I wonder, is this another case of pining for “good old days” that never existed? While new media have made new forms of communication possible, was there really a time when the general public was fully invested in “understanding and shaping politics” rather than responding to simplifications and slogans? And is it really the case, as Ezrahi argues, that while information can be processed and reconstructed into knowledge, the same cannot be done with outformations? Some of us still read books, and for us Google may not be “making us stupid” but rather providing a supplement that lets us quickly search a vast web of interconnected bits of information and assemble them into knowledge--sometimes prompted by a piece of “outformation.”

[A slightly different version of the above was written as a comment on Ezrahi's article for my Human and Social Dimensions of Science and Technology core seminar. Although I wrote about new media, it is apparent that Ezrahi was writing primarily about television and radio, where "outformation" seems to be more prevalent than information. Thanks to Judd A. for his comments on the above.]

UPDATE (April 19, 2010): Part of the above is translated into Italian, with commentary from Ugo Bardi of the University of Florence, at his blog.

Saturday, February 20, 2010

Seeing like a slime mold

Land reforms instituted in Vietnam under French rule, in India under the British, and in rural czarist Russia introduced simplified rights of ownership and standardized measurements of size and shape primarily for the benefit of the state, e.g., for tax purposes. James C. Scott’s Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed gives these and numerous other examples of ways in which standardization and simplification have been used by states to make the resources (and people) within their borders legible and controllable. He recounts how the imposition of such standardization often fails, or at least has unintended negative consequences, as when German scientific forestry introduced monocultures of Norway spruce or Scotch pine designed to maximize lumber production, only to suffer die-offs a century later. (The monoculture problem of reduced resilience and increased vulnerability has been recognized in an information security context as well, e.g., in Dan Geer et al.'s paper on the Microsoft monoculture that got him fired from @stake, and in his more recent work.)

Scott’s examples of state-imposed uniformity should not, however, be taken to imply that every case of uniformity is state-imposed, or that such regularities, even when state-imposed, lack underlying natural constraints. Formalized institutions of property registration and title have appeared in the crevices between states--for example, in the squatter community of Kowloon Walled City, which existed from 1947 to 1993 on a piece of the Kowloon peninsula claimed by both China and Britain yet governed by neither. While the institutions of Kowloon Walled City may have been patterned after those familiar to its residents from the outside world, they were imposed internally rather than by a state.

Patterns of highway network design present another apparent counterexample. Scott describes the highways around Paris as having been designed by the state to deliberately route traffic through the capital, as well as to allow military and law enforcement activity within the city in order to put down insurrections. But motorway patterns in the UK appear to have a more organic structure, as a recent experiment with slime molds oddly confirmed. Two researchers at the University of the West of England constructed a map of the UK out of agar, placing clumps of oat flakes at the locations of the nine most populous cities. They then introduced a slime mold colony, and in many cases it extended tendrils to feed on the oat flakes, creating networks that largely aligned with the existing motorway system, with some variations. A similar experiment with a map of cities around Tokyo duplicated the Tokyo railway network, slime-mold style. The similarity between transportation networks and evolved biological systems for transporting blood and sap may simply be because both are efficient and resilient solutions to similar problems.
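
For the programming-inclined, one simple way to model what the slime mold seems to be doing is as a feedback loop: tubes that carry more flow between food sources get reinforced, while idle tubes wither. Here is a minimal toy simulation along those lines--my own illustration, not the researchers' code, with a made-up five-node graph and arbitrary parameters:

```python
# Toy Physarum-style adaptive network: edges carrying more flow between
# "food" nodes are reinforced; idle edges decay.  Everything here (graph,
# coordinates, parameters) is invented for illustration.
import numpy as np

coords = np.array([[0.0, 0.0], [1.0, 0.5], [1.0, -0.8], [2.0, 0.0], [3.0, 0.0]])
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]
lengths = np.array([np.linalg.norm(coords[i] - coords[j]) for i, j in edges])
conductivity = np.ones(len(edges))      # all "tubes" start out equal
source, sink, flow_in = 0, 4, 1.0       # food at nodes 0 and 4

for step in range(200):
    # Weighted graph Laplacian for the current tube conductivities.
    n = len(coords)
    lap = np.zeros((n, n))
    for k, (i, j) in enumerate(edges):
        w = conductivity[k] / lengths[k]
        lap[i, i] += w; lap[j, j] += w
        lap[i, j] -= w; lap[j, i] -= w
    # Conservation of flow: net current +flow_in at the source, -flow_in at the sink.
    rhs = np.zeros(n)
    rhs[source], rhs[sink] = flow_in, -flow_in
    # Fix the sink's pressure at zero so the linear system has a unique solution.
    lap_red = np.delete(np.delete(lap, sink, 0), sink, 1)
    rhs_red = np.delete(rhs, sink)
    pressure = np.zeros(n)
    pressure[np.arange(n) != sink] = np.linalg.solve(lap_red, rhs_red)
    # Flow through each tube; busy tubes thicken, idle tubes decay.
    flux = np.array([conductivity[k] / lengths[k] * (pressure[i] - pressure[j])
                     for k, (i, j) in enumerate(edges)])
    conductivity += 0.1 * (np.abs(flux) - conductivity)

for k, (i, j) in enumerate(edges):
    print("edge %d-%d: conductivity %.3f" % (i, j, conductivity[k]))
```

After a couple of hundred iterations, the tubes along the shorter route between the two food sources keep their conductivity while the rest decay toward zero--a toy version of the oat-flake networks settling into something like a transportation map.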

These examples, while not refuting Scott’s point about the frequent failure of top-down impositions of order, suggest that states may be able to succeed in certain projects by facilitating the bottom-up development of ordered structures. The state often imposes an order that has already been developed by some other means--e.g., electrical standards were worked out by industry bodies before being codified, and the IETF's standards for IP don't have the force of law yet are implemented globally. In other cases, states may ratify an emerging order by, for example, preempting a diversity of state or local rules with a set that has been demonstrated to be successful--though that runs the risk of turning into the kind of failure Scott describes if there are good local reasons for the diversity.

[A slightly different version of the above was written as a comment on the first two chapters of Scott's book for my Human and Social Dimensions of Science and Technology core seminar. I've ordered a copy of the book since I found the first two chapters to be both lucidly written and extremely interesting. Thanks to Gretchen G. for her comments that I've used to improve (I hope) the above.]

UPDATE (April 25, 2010): Nature 407:470 features "Intelligence: Maze-solving by an amoeboid organism."

Rom Houben not communicating; blogger suppresses the evidence

It has now been demonstrated, to the surprise of no skeptic, that Rom Houben was not communicating via facilitated communication, a discredited method in which facilitators have produced typed messages attributed to autistic children. A proper test was conducted by Dr. Steven Laureys with the help of the Belgian Skeptics, and it found that the communications were coming from the facilitator, not from Houben.

A blogger who was a vociferous critic of James Randi and Arthur Caplan for pointing out that facilitated communication is a bogus technique, and who had tried to use Houben's case to argue that Terri Schiavo may also have been conscious, is not only unwilling to admit he was wrong but is deleting comments that point to the results of this new test. I had posted a comment along the lines of "Dr. Laureys performed additional tests with Houben and the facilitator and found that, in fact, the communications were coming from the facilitator, not Houben," with a link to the Neurologica blog; the blogger called that "spam" (perhaps on the basis of my posting a similar comment on another blog) and "highly misleading" (on the basis of nothing).

As I've said all along, this doesn't mean that Houben isn't "locked in" and conscious, but facilitated communication provides no evidence that he is.

(Previously, previously.)

Friday, February 19, 2010

Another lottery tragedy

From CNN:
A Florida woman has been charged with first-degree murder in connection with the death of a lottery millionaire whose body was found buried under fresh concrete, authorities said.

Dorice Donegan Moore, 37, was arrested last week on charges of accessory after the fact regarding a first-degree murder in the death of Abraham Shakespeare, 43, said Hillsborough County Sheriff David Gee. She remains in the Hillsborough County Jail, he said.

Moore befriended Shakespeare after he won a $31 million Florida lottery prize in 2006 and was named a person of interest in the case after Shakespeare disappeared, authorities said.

Tuesday, February 09, 2010

Where is the global climate model without AGW?

One of the regular critics of creationism on the Usenet talk.origins newsgroup (where the wonderful Talk Origins Archive FAQs were originally developed) was a guy who posted under the name "Dr. Pepper." His posts would always include the same request--"Please state the scientific theory of creationism." It was a request that was rarely responded to, and never adequately answered, because there is no scientific theory of creationism.

A parallel question for those who are skeptical of anthropogenic climate change is to ask for a global climate model that reproduces temperature changes over the last century more accurately than the models used by the IPCC, without including the effects of human greenhouse gas emissions. For comparison, here's a review of the 23 models that contributed to the IPCC AR4 assessment. While these models are clearly not perfect, shouldn't those who deny anthropogenic global warming be able to do better?