Thursday, April 22, 2010

Haven't we already been nonmodern?

Being modern, argues Bruno Latour in We Have Never Been Modern (1993, Harvard Univ. Press), involves drawing a sharp distinction between “nature” and “culture,” through a process of “purification” that separates everything into one or the other of these categories. It also involves breaking with the past: “Modernization consists in continually exiting from an obscure age that mingled the needs of society with scientific truth, in order to enter into a new age that will finally distinguish clearly what belongs to atemporal nature and what comes from humans, what depends on things and what belongs to signs” (p. 71).

But hold on a moment--who actually advocates that kind of sharp division between nature and culture, without acknowledging that human beings and their cultures are themselves a part of the natural order of things? As the 1987 Love and Rockets song “No New Tale to Tell” put it: “You cannot go against nature / because when you do / go against nature / it’s part of nature, too.” Trying to divide the contents of the universe into a sharp dichotomy often yields a fuzzy edge, if not outright paradox. While Latour is right to object to such a sharp distinction (or separation) and to argue for a recognition that much of the world consists of “hybrids” that include natural and cultural aspects (true of both material objects and ideas), I’m not convinced that he’s correctly diagnosed a genuine malady when he writes that “Moderns ... refuse to conceptualize quasi-objects as such. In their eyes, hybrids present the horror that must be avoided at all costs by a ceaseless, even maniacal purification” (p. 112).

Latour writes that anthropologists do not study modern cultures in the manner that they study premodern cultures. For premoderns, an ethnographer will generate “a single narrative that weaves together the way people regard the heavens and their ancestors, the way they build houses and the way they grow yams or manioc or rice, the way they construct their government and their cosmology,” but this is not done for modern societies because “our fabric is no longer seamless” (p. 7). True, but the real obstacle to such ethnography is not that we lack a unified picture of the world (though we do lack one) but that we have massive complexity and specialization--a complexity which Latour implicitly recognizes (pp. 100-101) but doesn’t draw out as a reason.

The argument that Latour makes in the book builds upon this initial division of nature and culture by the process of “purification” with a second division, between “works of purification” and “works of translation”--“translation” being a four-step process from his advocated framework of actor-network theory, which he actually doesn’t discuss much in this book. He proposes that the “modern constitution” contains “works of translation”--networks of hybrid quasi-objects--as a hidden and unrecognized layer that needs to be made explicit in order for us to be “nonmodern” (p. 138) or “amodern” (p. 90) and avoid the paradoxes of modernity (and the other problems of anti-modernity, pre-modernity, and post-modernity).

His attempt to draw the big picture is interesting and often frustrating, as when he makes unargued-for claims that appear to be false, e.g., “as concepts, ‘local’ and ‘global’ work well for surfaces and geometry, but very badly for networks and topology” (p. 119); “the West may believe that universal gravitation is universal even in the absence of any instrument, any calculation, any decoding, any laboratory ... but these are respectable beliefs that comparative anthropology is no longer obliged to share” (p. 120; also p. 24); his speaking of “time” as reversible where he apparently means “change” or perhaps “progress” (p. 73); and his putting “universality” and “rationality” on a list of values of moderns to be rejected (p. 135). I’m not sure how it makes sense to deny the possibility of universal generalizations while putting forth a proposed framework for the understanding of everything.

My favorite parts of the book were his recounting of Steven Shapin and Simon Schaffer’s Leviathan and the Air Pump (pp. 15-29) and his critique of that project, and his summary of objections to postmodernism (p. 90). Latour is correct, I think, in his critique that those who try to explain the results of science solely in terms of social factors are making a mistake that privileges “social” over “natural” in the same way that attempting to explain them without any regard to social factors privileges “natural” over “social.” He writes to the postmodernists (p. 90):

“Are you not fed up at finding yourselves forever locked into language alone, or imprisoned in social representations alone, as so many social scientists would like you to be? We want to gain access to things themselves, not only their phenomena. The real is not remote; rather, it is accessible in all the objects mobilized throughout the world. Doesn’t external reality abound right here among us?”

In a commentary on this post, Gretchen G. observed that we do regularly engage in the process of "purification" about our concepts and attitudes towards propositions in order to make day-to-day decisions--and I think she's right.  We do regard things as scientific or not scientific, plausible or not plausible, true or false, even while we recognize that there may be fuzzy edges and indeterminate cases.  And we tend not to like the fuzzy cases, and to want to put them into one category or the other.  In some cases, this may be merely an epistemological problem of our human (and Humean) predicament where there is a fact of the matter; in others, our very categories may themselves be fuzzy and not fit reality ("carve nature at its joints").

[A slightly different version of the above was written for my Human and Social Dimensions of Science and Technology core seminar. Thanks to Gretchen G. for her comments.  An entertaining critique of Latour's earlier book Science in Action is Olga Amsterdamska's "Surely You're Joking, Monsieur Latour!", Science, Technology, and Human Values vol. 15, no. 4 (1990): 495-504.]

Tuesday, April 20, 2010

Translating local knowledge into state-legible science

James Scott’s Seeing Like a State (about which I've blogged previously) talks about how the state imposes standards in order to make features legible, countable, regulatable, and taxable. J. Stephen Lansing’s Perfect Order: Recognizing Complexity in Bali describes a case where the reverse happened. When the Indonesian government tried to impose a top-down, scientifically designed system of water management on Balinese rice farmers in the name of modernization in the early 1970s, the result was a brief increase in productivity followed by disaster. Rather than leading to more efficient use of water and continued improved crop yields, it produced pest outbreaks which destroyed crops. An investment of $55 million in Romijn gates to control water flow in irrigation canals had the opposite of the intended effect: farmers removed the gates or lifted them out of the water and left them to rust, upsetting the consultants and officials behind the project. Pesticides delivered to farmers bred pesticide-resistant brown leafhoppers, and supplied fertilizers washed into the rivers and killed coral reefs at the rivers’ mouths.

Lansing was part of a team sponsored by the National Science Foundation in 1983 that evaluated the Balinese farmers’ traditional water management system to understand how it worked. The farmers of each village belong to subaks, or organizations that manage rice terraces and irrigation systems, which are referred to in Balinese writings going back at least a thousand years. Lansing notes that “Between them, the village and subak assemblies govern most aspects of a farmer’s social, economic, and spiritual life.”

Lansing’s team found that the Balinese system of water temples, religious ritual, and irrigation managed by the subaks would synchronize fallow periods of contiguous segments of terraces, so that long segments could be kept flooded after harvest, killing pests by depriving them of habitat. But their attempt and that of the farmers to persuade the government to allow the traditional system to continue fell upon deaf ears, and the modernization scheme continued to be pushed.

In 1987, Lansing worked with James Kremer to develop a computer model of the Balinese water temple system, and ran a simulation using historical rainfall data. This translation of the traditional system into scientific explanation showed that the traditional system was more effective than the modernized system, and government officials were persuaded to allow and encourage a return to the traditional system.
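To make the mechanism concrete, here is a minimal toy sketch (my own simplification for illustration--not Lansing and Kremer's actual model, and with made-up parameters) of the dynamic their simulation captured: synchronized fallowing deprives pests of nearby habitat, while staggered schedules let pests migrate between adjacent terraces.

```python
# Toy sketch of the pest-control logic behind synchronized fallow periods.
# NOT Lansing and Kremer's model; all parameters are invented for illustration.
import random

def simulate(n_subaks: int, synchronized: bool, years: int = 20) -> float:
    """Return the average yield (0..1 scale) across subaks over the run."""
    random.seed(0)
    total = 0.0
    for _ in range(years):
        if synchronized:
            # All neighbors fallow/flood together: pest populations crash.
            pest = [0.05] * n_subaks
        else:
            # Staggered schedules: pests survive somewhere nearby...
            pest = [random.uniform(0.2, 0.6) for _ in range(n_subaks)]
            # ...and migrate in from the worse-infested neighbor (ring layout).
            pest = [max(p, max(pest[i - 1], pest[(i + 1) % n_subaks]) * 0.8)
                    for i, p in enumerate(pest)]
        total += sum(1.0 - p for p in pest) / n_subaks
    return total / years

print("synchronized:", round(simulate(10, True), 2))   # higher average yield
print("staggered:   ", round(simulate(10, False), 2))  # pest losses dominate
```

A caricature, of course--but it conveys why running a model of the temple system against historical data could show the traditional schedule outperforming the “modernized” one.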

The Balinese system of farming is an example of how local knowledge can develop and become embedded in a “premodern” society by mechanisms other than conscious and intentional scientific investigation (in this case, probably more like a form of evolution), and be invisible to the state until it is specifically studied. It’s also a case where the religious aspects of the traditional system may have contributed to its dismissal by the modern experts.

What I find of particular interest here is to what extent the local knowledge was simply embedded into the practices, and not known by any of the participants--were they just doing what they've "always" done (with practices that have evolved over the last 1,000 years), in a circumstance where the system as a whole "knows," but no individual had an understanding until Lansing and Kremer built and tested a model of what they were doing?

[A slightly different version of the above was written for my Human and Social Dimensions of Science and Technology core seminar. Thanks to Brenda T. for her comments.  More on Lansing's work in Bali may be found online here.]

Monday, April 19, 2010

Is the general public really that ignorant? Public understanding of science vs. civic epistemology

Studies of the public understanding of science generally produce results that show a disturbingly high level of ignorance.  When asked to agree or disagree with the statement that “ordinary tomatoes do not contain genes, while genetically modified tomatoes do,” only 36% of Europeans answered correctly in 2002 (and only 35% in 1999 and 1996, Eurobarometer Biotechnology Quiz).  Those in the U.S. did better on this question, with 45% getting it right; Canada and the Netherlands had the highest levels of correct answers (52% and 51%, respectively).  Tests of similar items, such as “Electrons are smaller than atoms,” “The earliest human beings lived at the same time as the dinosaurs,” and “How long does it take the Earth to go around the Sun: one day, one month, or one year?” all yield similarly low levels of correct responses.

Public understanding of science research shows individuals surveyed to be remarkably ignorant of particular facts about science, but is that the right measure of how science is understood and used by the public at large?  Such surveys ask about disconnected facts independent from a context in which they might be used, and measure only an individual’s personal knowledge. If, instead, those surveyed were asked who among their friends would they rely upon to obtain the answer to such a question, or how would they go about finding a reliable answer to the question, the results might prove to be quite different.

Context can be quite important. In the Wason selection task, individuals are shown four cards labeled, respectively, “E,” “K,” “4,” and “7,” and are asked which cards they would need to turn over in order to test the rule, “If a card has a vowel on one side, then it has an even number on the other side.” Test subjects do very well at recognizing that the “E” card needs to be turned over (corresponding to the logical rule of modus ponens), but very poorly at recognizing that the “7,” rather than the “4,” needs to be turned over to find out if the rule holds (i.e., they engage in the fallacy of affirming the consequent rather than using the logical rule of modus tollens). But if, instead of letters and numbers, a scenario with more context is constructed, subjects perform much more reliably. In one variant, subjects were told to imagine that they are post office workers sorting letters, looking for those which do not comply with a regulation that requires an additional 10 lire of postage on sealed envelopes. They are then presented with four envelopes (two face down, one opened and one sealed, and two face up, one with a 50-lire stamp and one with a 40-lire stamp) and asked to test the rule “If a letter is sealed, then it has a 50-lire stamp on it.” Subjects then recognize that they need to turn over the sealed face-down envelope and the 40-lire stamped envelope, despite the task’s logical equivalence to the original selection task that they perform poorly on.
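The abstract version of the task is simple enough to capture in a few lines of code. This sketch (mine, purely illustrative) makes the logic explicit: a card needs to be turned over exactly when its visible face could expose a counterexample to the rule.

```python
# Which cards must be flipped to test: "if vowel on one side, even number on the other"?
# A visible vowel could hide an odd number (modus ponens case);
# a visible odd number could hide a vowel (modus tollens case).
# Visible consonants and even numbers can't falsify the rule either way.

VOWELS = set("AEIOU")

def must_turn(face: str) -> bool:
    """Return True if the card showing `face` could falsify the rule."""
    if face.isalpha():
        return face.upper() in VOWELS   # vowel: the back must be even
    return int(face) % 2 != 0           # odd number: the back must not be a vowel

cards = ["E", "K", "4", "7"]
print([c for c in cards if must_turn(c)])  # -> ['E', '7'] -- not '4'
```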

Sheila Jasanoff, in Designs on Nature, argues that measures of the public understanding of science are not particularly relevant to how democracies actually use science. Instead, she devotes chapter 10 of her book to an alternative approach, “civic epistemology,” which is a qualitative framework for understanding the methods and practices of a community’s generation and use of knowledge.  She offers six dimensions of civic epistemologies:
(1) the dominant participatory styles of public knowledge-making; (2) the methods of ensuring accountability; (3) the practices of public demonstration; (4) the preferred registers of objectivity; (5) the accepted bases of expertise; and (6) the visibility of expert bodies.  (p. 259)
She offers the following table of comparison on these six dimensions for the U.S., Britain, and Germany:

United States (contentious); Britain (communitarian); Germany (consensus-seeking):
(1) Public knowledge-making: pluralist, interest-based (U.S.); embodied, service-based (Britain); corporatist, institution-based (Germany).
(2) Accountability: assumptions of distrust, legal (U.S.); assumptions of trust, relational (Britain); assumptions of trust, role-based (Germany).
(3) Public demonstration: sociotechnical experiments (U.S.); empirical science (Britain); expert rationality (Germany).
(4) Registers of objectivity: formal, numerical, reasoned (U.S.); consultative, negotiated (Britain); negotiated, reasoned (Germany).
(5) Bases of expertise: professional skills (U.S.); experience (Britain); training, skills, and experience (Germany).
(6) Visibility of expert bodies: transparent (U.S.); variable (Britain); nontransparent (Germany).

She argues that this multi-dimensional approach provides a meaningful way of evaluating the courses of scientific policy disputes regarding biotech that she describes in the prior chapters of the book, while simply looking at national data on public understanding of science with regard to those controversies offers little explanation.  The nature of those controversies didn’t involve just disconnected facts, or simple misunderstandings of science, but also involved interests and values expressed through various kinds of political participation.

Public understanding of science surveys do provide an indicator of what individuals know that may be relevant to public policy on education, but it is at best a very indirect and incomplete measure of what is generally accepted in a population, and even less informative about how institutional structures and processes use scientific information.  The social structures in modern democracies are responsive to other values beyond the epistemic, and may in some cases amplify rational or radical ignorance of a population, but they may more frequently moderate and mitigate such ignorance.

Sources:
  • Eurobarometer Biotechnology Quiz results from Jasanoff, Designs on Nature, 2005, Princeton University Press, p. 87.
  • U.S., Canada, Netherlands survey results from Thomas J. Hoban slide in Gary Marchant’s “Law, Science, and Technology” class lecture on public participation in science (Nov. 16, 2009).
  • Wason task description from John R. Anderson, Cognitive Psychology and Its Implications, Second Edition, 1985, W.H. Freeman and Company, pp. 268-269.
[A slightly different version of the above was written as a comment for my Human and Social Dimensions of Science and Technology core seminar. Thanks to Brenda T. for her comments.]

Tuesday, April 06, 2010

Against "coloring book" history of science

It's a bad misconception about evolution that it proceeds in a linear progression of one successively evolving species after another displacing its immediate ancestors.  Such a conception of human history is equally mistaken, and is often criticized with terms such as "Whiggish history" or "determinism" with a variety of adjectives (technological, social, cultural, historical).  That includes the history of science, where the first version we often hear is one that has been rationally reconstructed by looking back at the successes and putting them into a linear narrative.  Oh, there are usually a few errors thrown in, but they're typically fit into the linear narrative as challenges that are overcome by the improvement of theories.

The reality is a lot messier, and getting into the details makes it clear not only that a Whiggish history of science is mistaken, but that science doesn't proceed through the algorithmic application of "the scientific method"--indeed, that there is no such thing as "the scientific method."  Rather, there is a diverse set of methods that are themselves evolving in various ways; sometimes methods which are fully endorsed as rational and scientific produce erroneous results, and sometimes methods which have no such endorsement, and are even demonstrably irrational, fortuitously produce correct results.  For example, Johannes Kepler was a neo-Pythagorean number mystic who correctly produced his second law of planetary motion by taking an incorrect version of the law based on his intuitions and deriving the correct version from it by way of a mathematical argument that contained an error.  Although he fortuitously got the right answer and receives credit for devising the law, he was not justified in believing it to be true on the basis of his erroneous proof.  With his first law, by contrast, he followed an almost perfectly textbook version of the hypothetico-deductive model of scientific method, formulating hypotheses and testing them against Tycho Brahe's data.

The history of the scientific revolution includes numerous instances of new developments occurring piecemeal, with many prior erroneous notions being retained.  Copernicus retained not only perfectly circular orbits and celestial spheres, but still needed to add epicycles to get his theory anywhere close to the predictive accuracy of the Ptolemaic models in use.  Galileo insisted on retaining perfect circles and on circular motion as natural motion, refusing to consider Kepler's elliptical orbits.  There seems to be a good case for "path dependence" in science.  Even the most revolutionary changes actually build on bits and pieces that have come before--and sometimes rediscover work that had already been done, like Galileo's derivation of the uniform acceleration of falling bodies, which had already been produced by Nicole Oresme and the Oxford calculators.  And the social and cultural environment--not just the scientific history--has an effect on what kinds of hypotheses are considered and accepted.

This conservatism of scientific change is a double-edged sword.  On the one hand, it suggests that we're not likely to see claims that purport to radically overthrow existing theory (that "everything we know is wrong") succeed--even if they happen to be correct.  And given that there are many more ways to go wrong than to go right, such radical revisions are very likely not to be correct.  Even where new theories are correct in some of their more radical claims (e.g., Copernicus' heliocentric model, or Wegener's continental drift), it often requires other pieces to fall into place before they become accepted (and before it becomes rational to accept them).  On the other hand, this also means that we're likely to be blinded to new possibilities by what we already accept that seems to work well enough, even though it may be an inaccurate description of the world that is merely predictively successful.  "Consensus science" at any given time probably includes lots of claims that aren't true.

My inference from this is that we need both visionaries and skeptics, and a division of cognitive labor that's largely conservative, but with tolerance for diversity and a few radicals generating the crazy hypotheses that may turn out to be true.  The critique of evidence-based medicine made by Kimball Atwood and Steven Novella--that it fails to consider the prior plausibility of hypotheses to be tested--is a good one that recognizes how unlikely radical hypotheses are to be correct, and thus that huge amounts of money shouldn't be spent to generate and test them.  (Their point is actually stronger than that, since most of the "radical hypotheses" in question are not really radical or novel, but are based on already discredited views of how the world works.)  But that critique shouldn't be taken to exclude anyone from generating and testing hypotheses that don't appear to have a plausible mechanism, because there is ample precedent for new phenomena being discovered before the mechanisms that explain them.
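To see the force of the prior-plausibility point, consider a toy Bayesian calculation (my own illustration with made-up numbers, not Atwood and Novella's): in odds form, Bayes' theorem multiplies the prior odds by the likelihood ratio of the evidence, so even a positive trial result that is twenty times more likely if the hypothesis is true leaves a wildly implausible hypothesis wildly implausible.

```python
# Bayes' theorem in odds form: posterior odds = prior odds * likelihood ratio.
# All numbers below are invented for illustration.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability given a prior and the likelihood ratio of the evidence."""
    prior_odds = prior / (1.0 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# A positive trial result, 20x more likely if the hypothesis is true than if false:
for prior in (0.5, 0.01, 1e-6):  # plausible, implausible, already-discredited
    print(f"prior={prior:g}  posterior={posterior(prior, 20.0):.5f}")
# prior=0.5    -> posterior ~ 0.95
# prior=0.01   -> posterior ~ 0.17
# prior=1e-6   -> posterior ~ 0.00002
```

This is why spending large sums testing hypotheses with near-zero priors is hard to justify: no single affordable study yields a likelihood ratio large enough to move them into contention.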

I think there's a tendency among skeptics to talk about science as though it's a unified discipline, with a singular methodology, that makes continuous progress, and where the consensus at any moment is the most appropriate thing to believe.  The history of science suggests, on the other hand, that it's composed of multiple disciplines with multiple methods, proceeds in fits and starts, has dead-ends, sometimes rediscovers correct-but-ignored past discoveries, and is both fallible and influenced by cultural context.  At any given time, some theories are not only well-established but unified well with others across disciplines, while others don't fit comfortably with the rest, or may be idealized models that have predictive efficacy but seem unlikely to be accurate descriptions of reality in their details.  To insist on an overly rationalistic and ahistorical model is not just out-of-date history and philosophy of science, it's a "coloring book" oversimplification.  While that may be useful for introducing ideas about science to children, it's not something we should continue to hold to as adults.

Friday, April 02, 2010

Scientific autonomy, objectivity, and the value-free ideal

It has been argued by many that science, politics, and religion are distinct subjects that should be kept separate, in at least one direction if not both.  Stephen Jay Gould argued that science and religion have non-overlapping areas of authority (NOMA, or non-overlapping magisteria), with the former concerned with how questions and the latter with why questions, and that conflicts between them won’t occur if they stick to their own domains.  Between science and politics, most have little problem with science informing politics, but a big problem with political manipulation of science.  Failure to properly maintain the boundaries leads to junk science, politicized science, scientism, science wars, and other objectionable consequences.

Heather E. Douglas, in Science, Policy, and the Value-Free Ideal, argues that notions of scientific autonomy and a scientific ideal of being isolated from questions of value (political or otherwise) are mistaken, and that this idea of science without regard to value questions (apart from epistemic virtues) is itself a contributing factor to such consequences.  She attributes blame for this value-free ideal of science to post-1940 philosophy of science, though the idea of scientific autonomy appears to me to have roots much further back, including in Galileo’s “Letter to Castelli” and “Letter to the Grand Duchess Christina” and John Tyndall’s 1874 Belfast Address, which were more concerned to argue that religion should not intrude into the domain of science than the reverse.  (As I noted in a previous post about Galileo, he did not carve out complete autonomy for natural philosophy from theology, only for those things which can be demonstrated or proven, which he argued that scripture could not contradict--and where it apparently does, scripture must be interpreted allegorically.)

Douglas describes a “topography of values” in the categories of cognitive, ethical, and social values, and distinguishes direct and indirect roles for them.  Within the “cognitive” category go values pertaining to our ability to understand evidence, such as simplicity, parsimony, fruitfulness, coherence, generality, and explanatory power, but excluding truth-linked epistemic virtues such as internal consistency and predictive competency or adequacy, which she identifies not as values but as minimal negative conditions that theories must necessarily meet.  Ethical values and social values are overlapping categories, the former concerned with what’s good or right and the latter with what a particular society values, such as “justice, privacy, freedom, social stability, or innovation” (Douglas, p. 92).  Her distinction between a direct and indirect role is that the former means that values can act directly as reasons for decisions, versus indirectly as a factor in decision-making where evidence is uncertain.

Douglas argues that values can legitimately play a direct role in certain phases of science, such as problem selection, selection of methodology, and in the policy-making arena, but should be restricted to an indirect role in phases such as data collection and analysis and drawing conclusions from evidence.  She identifies some exceptions, however--problem selection and method selection can’t legitimately be guided by values in a way that undermines the science by forcing a pre-determined conclusion (e.g., by selecting a method that is guaranteed to be misleading), and a direct role for ethical values can surface in later stages by discovering that research is causing harm.

Her picture of science is one where values cannot directly intrude between the collection of data and the inference of the facts from that data, but the space between evidence and fact claims is somewhat more complex than she describes.  There is the inference by a scientist of a fact from the evidence, the communication of that fact to other scientists, the publication of that fact in the scientific literature, and its communication to the general public and policy makers.  All but the first of these are not purely epistemic, but are also forms of conduct.  It seems to me that there is, in fact, a potential direct role for ethical values, at the very least, for each such type of conduct, in particular circumstances, which could merit withholding of the fact claim.  For example, a scientist in Nazi Germany could behave ethically by withholding information about how to build an atomic bomb.

Douglas argues that the motivation for the value-free ideal is as a mechanism for preserving scientific objectivity; she therefore gives an account of objectivity that comports with her account of science with values.  She identifies seven types of objectivity that are relevant in three different domains (plus one she rejects), all of which have to do with a shared ground for trust.  First, within the domain of human interactions with the world, are “manipulable objectivity,” or the ability to repeatably and reliably make interventions in nature that give the same result, and “convergent objectivity,” or having supporting evidence for a conclusion from multiple independent lines of evidence.  Second, in the realm of individual thought processes, she identifies “detached objectivity”--a scientific disinterest, freedom from bias, and eschewing the use of values in place of evidence.  There’s also “value-free objectivity,” the notion behind the value-free ideal, which she rejects.  And there’s “value-neutral objectivity,” or leaving personal views aside in, e.g., conducting a review of the literature in a field and identifying possible sets of explanations, or taking a "centrist" or "balanced" view of potentially relevant values.  Finally, in the domain of social processes, Douglas identifies “procedural objectivity,” where use of the same procedures produces the same results regardless of who engages in the procedure, and “intersubjectivity” in two senses--“concordant objectivity,” agreement in judgments between different people, and “interactive objectivity,” agreement as the result of argument and deliberation.

Douglas writes clearly and concisely, and makes a strong case for the significance of values within science as well as in its application to public policy.  Though she limits her discussion to natural science (and focuses on scientific discovery rather than fields of science that involve the production of new materials, an area where more direct use of values is likely appropriate), her account could likely be extended with the introduction of a bit more complexity.  While I don’t think she has identified all or even the primary causes of the “science wars,” which she discusses at the beginning of her book, I think her account is more useful in adjudicating the “sound science”/“junk science” debate that she also discusses, as well as identifying a number of ways in which science isn’t and shouldn’t be autonomous from other areas of society.

[A slightly different version of the above was written as a comment for my Human and Social Dimensions of Science and Technology core seminar. Thanks to Judd A. for his comments.]

Thursday, April 01, 2010

Galileo on the relation between science and religion

Galileo’s view of natural philosophy (science) is that it is the study of the “book of nature,” “written in mathematical language” (Finocchiaro 2008, p. 183), as contrasted with theology, the study of the book of Holy Scripture and revelation.  Galileo endorses the idea that theology is the “queen” of the “subordinate sciences” (Finocchiaro 2008, p. 124), by which he does not mean that theology trumps science in any and all matters.  He distinguishes two senses of theology being “preeminent and worthy of the title of queen”: (1) That “whatever is taught in all the other sciences is found explained and demonstrated in it [theology] by means of more excellent methods and of more sublime principles,” [Note added 12/14/2012: which he rejects] and (2) That theology deals with the most important issues, “the loftiest divine contemplations” about “the gaining of eternal bliss,” but “does not come down to the lower and humbler speculations of the inferior sciences ... it does not bother with them inasmuch as they are irrelevant to salvation” [Note added 12/14/2012: which he affirms] (quotations from Finocchiaro 2008, pp. 124-125).  Where Holy Scripture makes reference to facts about nature, they may be open to allegorical rather than literal interpretation, unless their literal truth is somehow necessary to the account of “the gaining of eternal bliss.”

Galileo further distinguishes two types of claims about science:  (1) “propositions about nature which are truly demonstrated” and (2) “others which are simply taught” (Finocchiaro 2008, p. 126).  The role of the theologian with regard to the former category is “to show that they are not contrary to Holy Scripture,” e.g., by providing an interpretation of Holy Scripture compatible with the proposition; with regard to the latter, if it contradicts Holy Scripture, it must be considered false and demonstrations of the same sought (Finocchiaro 2008, p. 126).  Presumably, if in the course of attempting to demonstrate that a proposition in the second category is false, it is instead demonstrated to be true, it then must be considered to be part of the former category.  Galileo’s discussion allows that theological condemnation of a physical proposition may be acceptable if it is shown not to be conclusively demonstrated (Finocchiaro 2008, p. 126), rather than a more stringent standard that it must be conclusively demonstrated to be false, which, given his own lack of conclusive evidence for heliocentrism, could be considered a loophole allowing him to be hoist with his own petard.

Galileo also distinguishes between what is apparent to experts vs. the layman (Finocchiaro 2008, p. 131), denying that popular consensus is a measure of truth, but regarding this distinction as what lies behind claims made in Holy Scripture about physical propositions that are not literally true.  With regard to the theological expertise of the Church Fathers, their consensus on a physical proposition is not sufficient to make it an article of faith unless such consensus is upon “conclusions which the Fathers discussed and inspected with great diligence and debated on both sides of the issue and for which they then all agreed to reject one side and hold the other” (Finocchiaro 2008, p. 133).  Or, in a contemporary (for Galileo) context, the theologians of the day could have a comparably weighted position on claims about nature if they “first hear the experiments, observations, reasons, and demonstrations of philosophers and astronomers on both sides of the question, and then they would be able to determine with certainty whatever divine inspiration will communicate to them” (Finocchiaro 2008, p. 135).

Galileo’s conception of science that leads him to take this position appears to be drawn from what Peter Dear (1990, p. 664), drawing upon Thomas Kuhn (1977), calls “the quantitative, ‘classical’ mathematical sciences” or the “mixed mathematical sciences,” identifying this as a predominantly Catholic conception of science, as contrasted with experimental science developed in Protestant England.  The former conception is one in which laws of nature can be recognized through idealized thought experiments based on limited (or no) actual observations, but demonstrated conclusively by means of rational argument.  This seems to be the general mode of Galileo’s work.  Dear argues that this notion of natural law allows for a conception of the “ordinary course of nature” which can be violated by an observed miraculous event, which comports with a Catholic view that miracles continue to occur in the world.

By contrast, the experimentalist views of Francis Bacon and Robert Boyle involve inductively inferring natural laws on the basis of observations, in which case observing something to occur makes it part of nature that must be accounted for in the generalized law--a view under which a miracle seems to be ruled out at the outset, which was not a problem for Protestants who considered the “age of miracles” to be over (Dear 1990, pp. 682-683).  Dear argues that for the British experimentalists, authentication of an experimental result was in some ways like the authentication of a miracle for the Catholics--requiring appropriately trustworthy observations--but that instead of verifying a violation of the “ordinary course of nature,” it verified what the “ordinary course of nature” itself was (Dear 1990, p. 680).  Where the Catholics like Galileo and Pascal derived conclusions about particulars from universal laws recognized by observation, reasoning, and mathematical demonstration, the Protestants like Bacon and Boyle constructed universal laws by inductive generalization from observations of particulars, and were notably critical of failing to perform a sufficient number of experiments before coming to conclusions (McMullin 1990, p. 821), and put forth standards for hypotheses and experimental method (McMullin 1990, p. 823; Shapin & Schaffer 1985, pp. 25ff & pp. 56-59).  The English experimentalist tradition, arising at a time of political and religious confusion after the English Civil War and the collapse of the English state church, was perhaps an attempt to establish an independent authority for science.  By the 19th century, there were explicit (and successful) attempts to separate science from religious authority and create a professionalized class of scientists (e.g., as Gieryn 1983, pp. 784-787 writes about John Tyndall).

The English experimentalists followed the medieval scholastics (Pasnau, forthcoming) in adopting a notion of “moral certainty” for “the highest degree of probabilistic assurance” for conclusions adopted from experiments (Shapin 1994, pp. 208-209).  This falls short of the Aristotelian conception of knowledge, yet is stronger than mere opinion.  They also placed importance on public demonstration in front of appropriately knowledgeable witnesses--with both the credibility of experimenter and witness being relevant to the credibility of the result.  Where on Galileo’s conception expertise appears to be primarily a function of possessing rational faculties and knowledge, on the experimentalist account there is importance to skill in application of method and to the moral trustworthiness of the participants as a factor in vouching for the observational results.  In the Galilean approach, trustworthiness appears to be less relevant as a consequence of actual observation being less relevant--though Galileo does, from time to time, make remarks about observations refuting Aristotle, e.g., in “Two New Sciences” where he criticizes Aristotle’s claims about falling bodies (Finocchiaro 2008, pp. 301, 303).

The classic Aristotelian picture of science is similar to the Galilean approach, in that observation and data collection is done for the purpose of recognizing first principles and deriving demonstrations by reason from those first principles.  What constitutes knowledge is what can be known conclusively from such first principles and what is derived by necessary connection from them; whatever doesn’t meet that standard is mere opinion (Posterior Analytics, Book I, Ch. 33; McKeon 1941, p. 156).  The Aristotelian picture doesn’t include any particular deference to theology; any discipline could potentially yield knowledge so long as there were recognizable first principles. The role of observation isn’t to come up with fallible inductive generalizations, but to recognize identifiable universal and necessary features from their particular instantiations (Lennox 2006).  This discussion is all about theoretical knowledge (episteme) rather than practical knowledge (techne), the latter of which is about contingent facts about everyday things that can change.  Richard Parry (2007) points out an apparent tension in Aristotle between knowledge of mathematics and knowledge of the natural world on account of his statement that “the minute accuracy of mathematics is not to be demanded in all cases, but only in the case of things which have no matter.  Hence its method is not that of natural science; for presumably the whole of nature has matter” (Metaphysics, Book II, Ch. 3, McKeon 1941, p. 715).

The Galilean picture differs from the Aristotelian in its greater use of mathematics (geometry)--McMullin writes that Galileo had “a mathematicism ... more radical than Plato’s” (1990, pp. 822-823)--and in its inclusion of the second book, that of revelation and Holy Scripture, as a source of knowledge.  But while the second book is one which can trump mere opinion--anything that isn’t conclusively demonstrated and thus fails to meet Aristotle’s understanding of knowledge--it must be held compatible with anything that does meet those standards.

References
  • Peter Dear (1990) “Miracles, Experiments, and the Ordinary Course of Nature,” ISIS 81:663-683.
  • Maurice A. Finocchiaro, editor/translator (2008) The Essential Galileo.  Indianapolis: Hackett Publishing Company.
  • Thomas Gieryn (1983) “Boundary Work and the Demarcation of Science from Non-Science: Strains and Interests in Professional Ideologies of Scientists,” American Sociological Review 48(6, December):781-795.
  • Thomas Kuhn (1957) The Copernican Revolution: Planetary Astronomy in the Development of Western Thought.  Cambridge, Mass.: Harvard University Press.
  • Thomas Kuhn (1977) The Essential Tension.  Chicago: The University of Chicago Press.
  • James Lennox (2006) “Aristotle’s Biology,” Stanford Encyclopedia of Philosophy, online at http://plato.stanford.edu/entries/aristotle-biology/, accessed March 18, 2010.
  • Richard McKeon (1941) The Basic Works of Aristotle. New York: Random House.
  • Ernan McMullin (1990) “The Development of Philosophy of Science 1600-1900,” in Olby et al. (1990), pp. 816-837.
  • R.C. Olby, G.N. Cantor, J.R.R. Christie, and M.J.S. Hodge (1990) Companion to the History of Science.  London: Routledge.
  • Richard Parry (2007) “Episteme and Techne,” Stanford Encyclopedia of Philosophy, online at http://plato.stanford.edu/entries/episteme-techne/, accessed March 18, 2010.
  • Robert Pasnau (forthcoming) “Medieval Social Epistemology: Scientia for Mere Mortals,” Episteme, forthcoming special issue on history of social epistemology.  Online at http://philpapers.org/rec/PASMSE, accessed March 18, 2010.
  • Steven Shapin and Simon Schaffer (1985) Leviathan and the Air Pump: Hobbes, Boyle, and the Experimental Life.  Princeton, N.J.: Princeton University Press.
  • Steven Shapin (1994) A Social History of Truth: Civility and Science in Seventeenth-Century England. Chicago: The University of Chicago Press.
[The above is slightly modified from one of my answers on a midterm exam.  My professor observed that another consideration on the difference between Catholic and Protestant natural philosophers is that theological voluntarism, more prevalent among Protestants, can suggest that laws of nature are opaque to human beings except through inductive experience.  NOTE ADDED 13 April 2010: After reading a couple of chapters of Margaret Osler's Divine Will and the Mechanical Philosophy: Gassendi and Descartes on Contingency and Necessity in the Created World (2005, Cambridge University Press), I'd add Pierre Gassendi to the experimentalist/inductivist side of the ledger, despite his being a Catholic--he was a theological voluntarist.]

Thursday, March 11, 2010

Representation, realism, and relativism

The popular view of the “science wars” of the 1990s is that it involved scientists and philosophers criticizing social scientists for making and accepting absurd claims as a result of an extreme relativistic view about scientific knowledge. Such claims included “the natural world in no way constrains what is believed to be,” “the natural world has a small or nonexistent role in the construction of scientific knowledge,” and “the natural world must be treated as though it did not affect our perception of it” (all due to Harry Collins, quoted in Yves Gingras’ scathingly critical review (PDF) of his book, Gravity’s Shadow: The Search for Gravitational Waves). Another example was Bruno Latour’s claim that it was impossible for Ramses II to have died of tuberculosis because the tuberculosis bacillus was not discovered until 1882. This critical popular view is right as far as it goes--those claims are absurd--but the popular view of science also tends toward an overly rationalistic and naively realistic conception of scientific knowledge that fails to account for social factors that influence science as actually practiced by scientists and scientific institutions. The natural world and our social context both play a role in the production of scientific knowledge.

Mark B. Brown’s Science in Democracy: Expertise, Institutions, and Representation tries to steer a middle course between extremes, but periodically veers too far in the relativist direction. Early on, in a brief discussion of the idea of scientific representations corresponding to reality, he writes (p. 6): “Emphasizing the practical dimensions of science need not impugn the truth of scientific representations, as critics of science studies often assume ...” But he almost immediately seems to retract this when he writes that “science is not a mirror of nature” (p. 7) and, in one of several unreferenced and unargued-for claims appealing to science studies that occur in the book, that “constructivist science studies does undermine the standard image of science as an objective mirror of nature” (p. 16). Perhaps he merely means that scientific representations are imperfect and fallible, for he does periodically make further attempts to steer a middle course, such as when he quotes Latour: “Either they went on being relativists even about the settled parts of science--which made them look ridiculous; or they continued being realists even about the warm uncertain parts--and they made fools of themselves” (p. 183). It’s surely reasonable to take an instrumentalist approach to scientific theories that aren’t well established, are somewhat isolated from the rest of our knowledge, or are highly theoretical, but also to take a realist approach to theories that are well established with evidence from multiple domains and have remained stable while being regularly put to the test. The evidence that we have today for a heliocentric solar system, for common ancestry of species, and for the position and basic functions of organs in the human body is of such strength that it is unlikely that we will see that knowledge completely overthrown in a future scientific revolution. But Brown favorably quotes Latour: “Even the shape of humans, our very body, is composed to a great extent of sociotechnical negotiations and artifacts.” (p. 171) Our bodies are not “composed” of “sociotechnical negotiations and artifacts”--this is either a mistaken use of the word “composed” (instead of perhaps “the consequence of”) or a use-mention error (referring to “our very body” instead of our idea of our body).

In Ch. 6, in a section titled “Realism and Relativism” that begins with a reference to the “science wars,” he follows the pragmatist philosopher John Dewey in order to “help resolve some of the misunderstandings and disagreements among today’s science warriors” such as that “STS scholars seem to endorse a radical form of relativism, according to which scientific accounts of reality are no more true than those of witchcraft, astrology, or common sense” (p. 156). Given that Brown has already followed Dewey’s understanding of scientific practice as continuous with common sense (pp. 151-152), it’s somewhat odd to see it listed with witchcraft and astrology in that list--though perhaps in this context it’s not meant as the sort of critical common sense Dewey described, but more like folk theories that are undermined or refuted by science.

Brown seems to endorse Dewey’s view that “reality is the world encountered through successful intervention” and favorably quotes philosopher Ian Hacking that “We shall count as real what we can use to intervene in the world to affect something else, or what the world can use to affect us” (pp. 156-157), but he subsequently drops the second half of Hacking’s statement when he writes “If science is understood in terms of the capacity to direct change, knowing cannot be conceived on the model of observation.” Such an understanding may capture experimental sciences, but not observational or historical sciences, an objection Brown attributes to Bertrand Russell, who “pointed out in his review of Dewey’s Logic that knowledge of a star could not be said to affect the star” (p. 158). Brown, however, follows Latour and maintains that “the work of representation ... always transforms what it represents” (p. 177). Brown defends this by engaging in a use-mention error, the failure to properly distinguish between the use of an expression and talking about the expression, when he writes that stars as objects of knowledge are newly created objects (p. 158, more below). Such an error is extremely easy to make when talking about social facts, where representations are themselves partly constitutive of the facts, such as in talk about knowledge or language.

Brown writes that “People today experience the star as known, differently than before ... The star as an object of knowledge is thus indeed a new object” (p. 158). But this is unnecessary given the second half of Hacking’s statement, since we can observe and measure stars--they have impact upon us. Brown does then talk about impact on us, but only by the representation, not the represented: “...this new object causes existential changes in the knower. With the advent of the star as a known object, people actually experience it differently. This knowledge should supplement and not displace whatever aesthetic or religious experiences people continue to have of the star, thus making their experiences richer and more fulfilling” (p. 158). There may certainly be augmented experience with additional knowledge, which may not change the perceptual component of the experience, but I wonder what Brown’s basis is for the normative claim that religious experiences in particular shouldn’t be displaced--if those religious experiences are based on claims that have been falsified, such as an Aristotelian conception of the universe, then why shouldn’t they be displaced? But perhaps here I’m making the use-mention error, and Brown doesn’t mean that religious interpretations shouldn’t be displaced, only experiences that are labeled as “religious” shouldn’t be displaced.

A few other quibbles:

Brown writes that “all thought relies on language” (p. 56). If this is the case, then nonhuman animals that have no language cannot have thoughts. (My commenter suggested that all sentient beings have language, and even included plants in that category. I think the proposal that sentience requires language is at least plausible, though I wouldn’t put many nonhuman animals or any plants into that category--perhaps chimps, whales, and dolphins. Some sorts of “language” extend beyond that category, such as the dance of honeybees that seems to code distance and direction information, but I interpreted Brown’s claim to refer to human language with syntax, semantics, generative capacity, etc., and to mean that one can’t have non-linguistic thoughts in the form of, say, pictorial imagery, without language. I.e., that even such thoughts require a “language of thought,” to use Jerry Fodor’s expression.)

Brown endorses Harry Collins’ idea of the “experimenter’s regress,” without noting that his evidence for the existence of such a phenomenon is disputed (Allan Franklin, “How to Avoid the Experimenters’ Regress,” Studies in History and Philosophy of Science 25(3, 1994): 463-491). (Franklin also discusses this in the entry on "Experiment in Physics" at the Stanford Encyclopedia of Philosophy.)

Brown contrasts Harry Collins and Robert Evans with Hobbes on the nature of expertise: The former see “expertise as a ‘real and substantive’ attribute of individuals” while “For Hobbes, in contrast, what matters is whether the claims of reason are accepted by the relevant audience.” (p. 116). Brown sides with Hobbes, but this is to make a mistake similar to the one Richard Rorty made when claiming that truth is what you can get away with, which is false by its own definition--since philosophers didn’t let him get away with it. This definition doesn’t allow for the existence of a successful fake expert or con artist, but we know that such persons exist from examples that have been exposed. Under this definition, such persons were experts until they were unmasked.

Brown’s application of Hobbes’ views on political representation to nature is less problematic when he discusses the political representation of environmental interests (pp. 128-131) than when he discusses scientific representations of nature (pp. 131-132). The whole discussion might have been clearer had it taken account of John Searle’s account of social facts (in The Construction of Social Reality).

Brown writes that “Just as recent work in science studies has shown that science is not made scientifically ...” (p. 140), without argument or reference.

He apparently endorses a version of Dewey’s distinction between public and private actions with private being “those interactions that do not affect anyone beyond those engaged in the interaction; interactions that have consequences beyond those so engaged he calls public” (p. 141). This distinction is probably not tenable since the indirect consequences of even actions that we’d consider private can ultimately affect others, such as a decision to have or not to have children.

On p. 159, Brown attributes the origin of the concept of evolution to “theories of culture, such as those of Vico and Comte” rather than Darwin, but neither of them had theories of evolution by natural selection comparable to Darwin’s innovation; concepts of evolutionary change go back at least to the pre-Socratic philosophers like the Epicureans and Stoics. (Darwin didn't invent natural selection, either, but he was the first to put all the pieces together and recognize that evolution by natural selection could serve a productive as well as a conservative role.)

[A slightly different version of the above was written as a comment for my Human and Social Dimensions of Science and Technology core seminar. Thanks to Brenda T. for her comments. It should be noted that the above really doesn't address the main arguments of the book, which are about the meaning of political representation and representation in science, and an argument about proper democratic representation in science policy.]

Wednesday, February 24, 2010

Science as performance

The success of science in the public sphere is determined not just by the quality of research but by the ability to persuade. Stephen Hilgartner’s Science on Stage: Expert Advice as Public Drama uses a theatrical metaphor, drawing on the work of Erving Goffman, to shed light on the outcomes of three successive reports on diet and nutrition issued by the National Academy of Sciences, one of which was widely criticized by scientists, one of which was criticized by food industry groups, and one of which was never published. They differed in “backstage” features such as how the committees coordinated their work and what sources they drew upon, in “onstage” features such as the composition of experts on the committees and how they communicated their results, and in how they responded to criticism.

The kinds of features and techniques that Hilgartner identifies as used to enhance perceptions of credibility--features of rhetoric and performance--are the sorts of features relied upon by con artists. If there is no way to distinguish such features as used by con artists from those used by genuine practitioners, if all purported experts are on equal footing and only the on-stage performances are visible, then we have a bit of a problem. All purported experts of comparable performing ability are on equal footing, and we may as well flip coins to distinguish between them. But part of a performance includes the propositional content of the performance--the arguments and evidence deployed--and these are evaluated not just on aesthetic grounds but with respect to logical coherence and compatibility with what the audience already knows. Further, the performance itself includes an interaction with the audience that strains the stage metaphor. Hilgartner describes this as members of the audience themselves taking the stage, yet audience members in his metaphor also interact with each other, individually and in groups, through complex webs of social relationships.

The problem of expert-layman interaction is that the layman in most cases lacks the interactional expertise to even be able to communicate about the details of the evidence supporting a scientific position, and must rely upon other markers of credibility which may be rhetorical flourishes. This is the problem of Plato’s “Charmides,” in which Socrates asserts that only a genuine doctor can distinguish a sufficiently persuasive quack from a genuine doctor. A similar position is endorsed by philosopher John Hardwig, in his paper “Epistemic Dependence,” (PDF) and by law professor Scott Brewer in “Scientific Expert Testimony and Intellectual Due Process,” which points out that the problem faces judges and juries. There are some features which enable successful distinctions between genuine and fake experts in at least the more extreme circumstances--examination of track records, credentials, evaluations by other experts or meta-experts (e.g., experts in methods used across multiple domains, such as logic and mathematics). Brewer enumerates four strategies of nonexperts in evaluating expert claims: (1) “substantive second-guessing,” (2) “using general canons of rational evidentiary support,” (3) “evaluating demeanor,” and (4) “evaluating credentials.” Of these, only (3) is an examination of the merely surface appearances of the performance (which is not to say that it can’t be a reliable, though fallible, mechanism). But when the evaluation is directed not at distinguishing genuine expert from fake, but conflicting claims between two genuine experts, the nonexpert may be stuck in a situation where none of these is effective and only time (if anything) will tell--but in some domains, such as the legal arena, a decision may need to be reached much more quickly than a resolution might become available.

One novel suggestion for institutionalizing a form of expertise that fits Hilgartner’s metaphor is philosopher Don Ihde’s proposal of “science critics”: individuals with at least interactional expertise in the domain they criticize, serving a role similar to that of art and literary critics in evaluating a performance--its content, and not just its rhetorical flourishes.

[A slightly different version of the above was written as a comment for my Human and Social Dimensions of Science and Technology core seminar. The Hardwig and Brewer articles are both reprinted in Evan Selinger and Robert P. Crease, editors, The Philosophy of Expertise. NY: Columbia University Press, 2006, along with an excellent paper I didn't mention above, Alvin I. Goldman's "Experts: Which Ones Should You Trust?" (PDF). The term "interactional expertise" comes from Harry M. Collins and Robert Evans, "The Third Wave of Science Studies: Studies of Expertise and Experience," also reprinted in the Selinger & Crease volume; a case study of such expertise is in Steven Epstein's Impure Science: AIDS, Activism, and the Politics of Knowledge, Berkeley: University of California Press, 1996. Thanks to Tim K. for his comments on the above.]

Monday, February 22, 2010

Is knowledge drowning in a flood of information?

There have long been worries that the mass media are producing a “dumbing down” of American political culture, reducing political understanding to sound bites and spin. The Internet has been blamed for information overload and, like MTV in prior decades, for a shrinking attention span, as the text-based web became the multimedia web and the cell phone became a common means of access. Similar worries have been expressed about public understanding of science. Nicholas Carr has asked the question, “Is Google Making Us Stupid?”

Yaron Ezrahi’s “Science and the political imagination in contemporary democracies” (a chapter in Sheila Jasanoff's States of Knowledge: The Co-Production of Science and Social Order) argues that the post-Enlightenment synthesis of scientific knowledge and politics in democratic societies is in decline, based on the transition of public discourse into easily consumed, bite-sized chunks of vividly depicted information that he calls “outformation.” Prior to the Enlightenment, authority had more of a religious basis, and the ideal for knowledge was “wisdom”--which Ezrahi sees as a mix of the “cognitive, moral, social, philosophical, and practical” that is privileged, unteachable, and a matter of faith. The Enlightenment brought systematized, scientific knowledge to the fore: formalized, objective, universal, impersonal, and teachable--with effort. When that scientific knowledge is made more widely usable, “stripped of its theoretical, formal, logical and mathematical layers” into a “thin knowledge” that is context-dependent and localized, it becomes “information.” And finally, when information is further stripped of its context and its design for a particular use, yet augmented with “rich and frequently intense” representations that include “cognitive, emotional, aesthetic, and other dimensions of experience,” it becomes “outformation.”

According to Ezrahi, such “outformations” mix references to objective and subjective reality and become “shared references in the context of public discourse and action.” They are taken to be legitimated and authoritative despite lacking any necessary grounding in “observations, experiments, and logic.” He describes this as a shift from a high-cost to a low-cost political reality, where “cost” measures the effort demanded of the recipient to consume it, rather than the consequences to the polity of its consumption and use as the basis for political participation. This shift, he says, “reflects the diminished propensity of contemporary publics to invest personal or group resources in understanding and shaping politics and the management of public affairs.”

But, I wonder, is this another case of reflecting on “good old days” that never existed? While new media have made new forms of communication possible, was there really a time when the general public was fully invested in “understanding and shaping politics” and not responding to simplifications and slogans? And is it really the case, as Ezrahi argues, that while information can be processed and reconstructed into knowledge, the same is not possible for outformations? Some of us do still read books, and for us, Google may not be “making us stupid,” but rather providing a supplement that allows us to quickly search a vast web of interconnected bits of information that can be assembled into knowledge, inspired by a piece of “outformation.”

[A slightly different version of the above was written as a comment on Ezrahi's article for my Human and Social Dimensions of Science and Technology core seminar. Although I wrote about new media, it is apparent that Ezrahi was writing primarily about television and radio, where "outformation" seems to be more prevalent than information. Thanks to Judd A. for his comments on the above.]

UPDATE (April 19, 2010): Part of the above is translated into Italian, with commentary from Ugo Bardi of the University of Florence, at his blog.

Saturday, February 20, 2010

Seeing like a slime mold

Land reforms instituted in Vietnam under French rule, in India under the British, and in rural czarist Russia introduced simplified rights of ownership and standardized measurements of size and shape primarily for the benefit of the state, e.g., for tax purposes. James C. Scott’s Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed gives these and numerous other examples of ways in which standardization and simplification have been used by the state to make the resources (and people) within its borders legible and controllable. He recounts how the imposition of such standardization often fails, or at least has unintended negative consequences, as when German scientific forestry introduced a monoculture of Norway spruce or Scotch pine designed to maximize lumber production, which led to die-offs a century later. (The monoculture problem of reduced resilience and increased vulnerability has been recognized in an information security context as well, e.g., in Dan Geer et al.'s paper on the Microsoft monoculture, which got him fired from @stake, and in his more recent work.)

Scott’s examples of state-imposed uniformity should not, however, be taken to imply that every case of uniformity is state-imposed, or that such regularities, even when state-imposed, lack underlying natural constraints. Formalized institutions of property registration and title have appeared in the crevices between states--for example, in the squatter community of Kowloon Walled City, which existed from 1947 to 1993 on a piece of the Kowloon peninsula claimed by both China and Britain yet governed by neither. While the institutions of Kowloon Walled City may have been patterned after those familiar to its residents from the outside world, they were imposed internally rather than by a state.

Patterns of highway network design present another apparent counterexample. Scott discusses the highways around Paris as having been designed by the state to intentionally route traffic through Paris, and to allow for military and law enforcement activity within the city to put down insurrections. But motorway patterns in the UK appear to have a more organic structure, as a recent experiment with slime molds oddly confirmed. Two researchers at the University of the West of England constructed a map of the UK out of agar, placing clumps of oat flakes at the locations of the nine most populous cities. They then introduced a slime mold colony, which extruded tendrils to feed on the oat flakes, in many cases creating patterns that aligned with the existing motorway design, with some variations. A similar experiment with a map of the cities around Tokyo duplicated the Tokyo railway network, slime-mold style. The similarity between transportation networks and evolved biological systems for transporting blood and sap may simply reflect that both are efficient and resilient solutions.
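
To make the efficiency-and-resilience point concrete, here is a minimal sketch in Python (using the networkx library) of the baseline such networks are measured against--my own illustration, not the researchers' method, with a city list and rough coordinates I've supplied for the purpose:

    # A minimal sketch, NOT the researchers' method: compute the cheapest
    # possible network (a minimum spanning tree) over nine large UK cities.
    # The city list and (longitude, latitude) coordinates are rough
    # approximations supplied for illustration only.
    import itertools
    import math

    import networkx as nx

    cities = {
        "London": (-0.13, 51.51), "Birmingham": (-1.90, 52.48),
        "Manchester": (-2.24, 53.48), "Leeds": (-1.55, 53.80),
        "Glasgow": (-4.25, 55.86), "Liverpool": (-2.98, 53.41),
        "Newcastle": (-1.61, 54.98), "Sheffield": (-1.47, 53.38),
        "Bristol": (-2.59, 51.45),
    }

    def dist(a, b):
        (x1, y1), (x2, y2) = cities[a], cities[b]
        return math.hypot(x1 - x2, y1 - y2)

    # Build the complete graph weighted by straight-line distance.
    G = nx.Graph()
    for a, b in itertools.combinations(cities, 2):
        G.add_edge(a, b, weight=dist(a, b))

    # The minimum spanning tree is the minimum-cost connected network,
    # but it has no redundancy: removing any one edge disconnects it.
    mst = nx.minimum_spanning_tree(G)
    print("MST total length:", round(mst.size(weight="weight"), 2))
    for edge in sorted(mst.edges()):
        print(edge)

A network that can survive any single-link failure has to put every link on a cycle, so real motorway networks (and well-fed slime molds) end up somewhere between this spanning tree and the complete graph, trading a modest premium in total length for resilience.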

These examples, while not refuting Scott’s point about frequent failures in the top-down imposition of order, suggest that states may be able to succeed in certain projects by facilitating the bottom-up development of ordered structures. The state often imposes an order that has already developed by other means--e.g., electrical standards were set up by industry bodies before being codified into law, and IETF standards for IP lack the force of law yet are implemented globally. In other cases, states may ratify an emerging order by, e.g., preempting a diversity of state rules with a set that has been demonstrated to be successful--though that runs the risk of turning into a case like those Scott describes, if there are local reasons for the diversity.

[A slightly different version of the above was written as a comment on the first two chapters of Scott's book for my Human and Social Dimensions of Science and Technology core seminar. I've ordered a copy of the book since I found the first two chapters to be both lucidly written and extremely interesting. Thanks to Gretchen G. for her comments that I've used to improve (I hope) the above.]

UPDATE (April 25, 2010): Nature 407:470 features "Intelligence: Maze-solving by an amoeboid organism."

Tuesday, February 09, 2010

Where is the global climate model without AGW?

One of the regular critics of creationism on the Usenet talk.origins newsgroup (where the wonderful Talk Origins Archive FAQs were originally developed) was a guy who posted under the name "Dr. Pepper." His posts would always include the same request--"Please state the scientific theory of creationism." It was a request that was rarely responded to, and never adequately answered, because there is no scientific theory of creationism.

A parallel question for those who are skeptical about anthropogenic climate change is to ask for a global climate model that more accurately reflects temperature changes over the last century than those used by the IPCC, without including the effect of human emissions of greenhouse gases. For comparison, here's a review of the 23 models which contributed to the IPCC AR4 assessment. While these models are clearly not perfect, shouldn't those who deny anthropogenic global warming be able to do better?
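
To be clear about what such a request involves: even the simplest climate models represent temperature as a response to forcings. Below is a toy zero-dimensional energy-balance model in Python--nowhere near a GCM, with parameter values and forcing series that are synthetic placeholders of my own, not observational data--just to show where the anthropogenic term enters and what removing it does:

    # Toy zero-dimensional energy-balance model: C * dT/dt = F(t) - lam * T.
    # Purely illustrative -- NOT a GCM. Parameters and forcing series are
    # synthetic placeholders, not observational data.
    import numpy as np

    C = 8.0     # effective heat capacity (W yr m^-2 K^-1), illustrative
    lam = 1.2   # climate feedback parameter (W m^-2 K^-1), illustrative
    years = np.arange(1900, 2001)

    # Placeholder forcings in W m^-2: a small oscillating "natural" term
    # (a toy 11-year solar cycle) plus a growing "anthropogenic" ramp.
    F_natural = 0.1 * np.sin(2 * np.pi * (years - 1900) / 11.0)
    F_anthro = np.linspace(0.0, 1.6, years.size)

    def integrate(F, dt=1.0):
        """Forward-Euler integration of C * dT/dt = F - lam * T."""
        T = np.zeros_like(F)
        for i in range(1, F.size):
            T[i] = T[i - 1] + dt * (F[i - 1] - lam * T[i - 1]) / C
        return T

    T_with = integrate(F_natural + F_anthro)
    T_without = integrate(F_natural)
    print(f"Change by 2000 with anthropogenic forcing:    {T_with[-1]:+.2f} K")
    print(f"Change by 2000 without anthropogenic forcing: {T_without[-1]:+.2f} K")

In this toy setup, dropping the anthropogenic term leaves essentially no century-scale warming trend; the challenge to the skeptic is to supply a model and forcings that reproduce the observed warming without it.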

Friday, November 27, 2009

Bad news for agnostics?

While past studies have shown religious believers to be happier than nonbelievers, some new analysis shows that it's not quite so simple. Luke Galen has found that the convinced non-religious are also quite happy, but people who are uncertain are the ones who are dissatisfied. Adam Okulicz-Kozaryn has analyzed data from the World Values Survey and found some more interesting details:
  • Religious people are both happier and unhappier. While a higher percentage of religious people than of convinced nonbelievers report themselves as extremely happy, a higher percentage of religious people also report themselves as extremely unhappy.
  • Those who attend religious services and belong to religious organizations tend to be happier, whether or not they believe--in fact, among that group, those with stronger belief tend to be unhappier. So it's the social aspect, not the doctrine, that promotes happiness. And this is further supported by:
  • The more religious a country is, the happier its believers are, and vice versa: in religious countries, believers are happier; in nonreligious countries, nonbelievers are happier.
See more at the Epiphenom blog.

(Cross-posted to the Secular Outpost.)

Thursday, November 26, 2009

Why not put Rom Houben's facilitated communication to the test?

I've posted comments about the reasons to be skeptical of Rom Houben's facilitated communication at a number of blogs, where the response of some seems to be that there is no point to such testing. The reasons offered for not testing have included: (1) the videos are a "straw man"; (2) criticisms from a stage magician and a philosopher/bioethicist are not worthy of attention; and (3) the testimony of Dr. Laureys, the facilitator Mrs. Wouters, and Houben's family is much stronger evidence than what we can see in the videos, Dr. Laureys says he already conducted a single-blind test showing that the communication came from Houben rather than the facilitator, and rejecting that is irrational hyper-skepticism that assumes they are lying.

The first argument makes no sense to me. The videos clearly show the facilitator rapidly typing away with Houben's finger even while he's looking away or has his eyes closed, which is by itself a very strong reason to be skeptical, especially in light of the past record of facilitated communication. The second argument is not only ad hominem, but further refuted by similar analysis by a neuroscientist. The last argument is a bit better, but wrongly assumes that the only alternative is that the doctor and family are lying. Facilitated communication isn't a matter of conscious fraud, it's a matter of self-deception of the facilitator (enhanced by the expectations and reactions of the family). Given the possibility of unconscious cuing of the facilitator by the doctor, as well as his own vested interest in a positive result, the test he described doing is still far from sufficient to overcome the evidence plainly displayed in the videos.

Unfortunately, there is a very strong incentive to believe on the part of the doctor, the facilitator, and the family. To find that the communications are coming from the facilitator would be emotionally devastating, and detrimental to the doctor's credibility. To test further is to risk a huge potential loss of what has apparently been gained, and I suspect it's unlikely that we'll see it happen.

But look at it from Houben's own perspective--further testing is absolutely in his own best interests. For if the facilitator is the one doing the communicating, not him, then he is being further exploited for the satisfaction of his doctor, facilitator, and family, not treated for his own benefit. He is being used merely as a means rather than respected as an end. If he is, in fact, minimally conscious, as the brain scans suggest, then speaking on his behalf without his genuine input does him even greater harm.

If you reject the idea that an hour or so of Houben's time should be used for a conclusive, double-blind test of whether the communications are coming from him or from the facilitator, is it because you want to believe, rather than to know? There is clear potential harm to Houben from not doing such a test. There is no harm to Houben from the test itself, though there is clearly the risk of painfully dissolving an illusion for the doctor, facilitator, and family. But Houben's interests should be placed above that risk.
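
For what it's worth, scoring such a test is statistically straightforward. Here is a sketch of the arithmetic in Python, with a wholly hypothetical protocol and numbers of my own devising (not Dr. Laureys's actual procedure): show Houben an object drawn from a known list of ten while the facilitator is out of the room, have the facilitated typing name it, and repeat.

    # Sketch of scoring a double-blind facilitated-communication test.
    # The protocol and numbers are hypothetical, for illustration only.
    # Each trial: Houben sees one object from a fixed list of 10 while the
    # facilitator is absent; the facilitated typing must then name it. If
    # the facilitator is the real author of the messages, each trial can
    # succeed only by chance, with probability 1/10.
    from scipy.stats import binomtest

    n_trials = 15     # hypothetical number of blinded trials
    n_correct = 2     # hypothetical number of correct identifications
    chance = 1 / 10   # probability of a lucky guess from a 10-object list

    result = binomtest(n_correct, n_trials, chance, alternative="greater")
    print(f"{n_correct}/{n_trials} correct; p-value vs. chance = {result.pvalue:.3f}")
    # Many correct answers (a small p-value) would show the communication
    # comes from Houben; chance-level results would show it comes from the
    # facilitator.

Either outcome is informative, which is exactly why the test is in Houben's interest.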

(Previously on Houben, a post with many links and references.)

UPDATE (February 15, 2010): Houben has been put to the test, and it turns out the communications were, in fact, coming from the facilitator.

UPDATE (February 20, 2010): David Gorski at the Science-Based Medicine blog has a bit more from the Belgian Skeptics, who were involved in the test.

Tuesday, November 24, 2009

What would be more horrifying than "locked-in" syndrome?

Numerous mass media outlets and blogs are reporting on the misdiagnosis of Rom Houben as comatose for 23 years when he was really conscious, according to Belgian neurologist Steven Laureys, who has claimed for years to be able to treat patients allegedly in a persistent vegetative state with electric shocks and to find that they were really in a minimally conscious state. Videos of Houben show him allegedly communicating via a keyboard pressed by a single finger of one hand--but his hand is being held by a facilitator, and he's not even looking at the keyboard. Some still photos show the facilitator looking intently at the keyboard while Houben's eyes are closed.

James Randi observes that this looks just like the self-deception of Facilitated Communication that was promoted as a way to communicate with severely autistic people, and Marshall Brain at How Stuff Works seconds that conclusion.

I think it's a bit too fast to conclude that Houben's not conscious--brain scans could indeed have provided good evidence that he is. But what would be worse than having "locked-in syndrome"? Having somebody else purporting to speak for you with ideomotor-driven Facilitated Communication, while you were helpless to do anything about it.

I'd like to see some double-blind tests of Houben, in which he's asked questions about events that occurred when the facilitator wasn't present, as well as fMRI results during the process of facilitation (there are differences in brain activation between active and passive movement, which have been used to study such things as the perception of involuntariness during hypnosis--hypnotic movement shows features of both). I'd also like to see further opinion on Laureys's methodology and diagnosis--he seems to have a significant self-interest in promoting this case.

UPDATE: Brandon Keim at Wired Science has finally asked the questions that those who have reported this in the mainstream media should have been asking.

Here's a 2001 review of the scientific literature on facilitated communication.

UPDATE: The video on this story shows the facilitator typing for him while his eyes are closed and he appears to be asleep.

UPDATE: A Times Online story claims that Houben's facilitator, Linda Wouters, spent the last three years working with Houben to learn to feel tiny muscle movements in his finger, and that Dr. Laureys did tests to validate the technique:

The spectacle is so incredible that even Steven Laureys, the neurologist who discovered Mr Houben’s potential, had doubts about its authenticity. He decided to put it to the test.

“I showed him objects when I was alone with him in the room and then, later, with his aide, he was able to give the right answers,” Professor Laureys said. “It is true.”

and

Mr Houben’s “rebirth” took many painstaking months. “We asked him to try and blink but he couldn’t; we asked him to move his cheek but he couldn’t; we asked him to move his hand and he couldn’t,” Mrs Wouters said.

“Eventually, someone noticed that when we talked to him he moved his toe so we started to try and communicate using his toe to press a button.”

It was a breakthrough but much more was to come when a fellow speech therapist discovered that it was possible to discern minuscule movements in his right forefinger.

Mrs Wouters, 42, was assigned to Mr Houben and they began to learn the communication technique that he is now using to write a book about his life and thoughts. “I thought it was a miracle — it actually worked,” she said.

The method involves taking Mr Houben by the elbow and the right hand while he is seated at a specially adapted computer and feeling for minute twitches in his forefinger as his hand is guided over the letters of the alphabet. Mrs Wouters said that she could feel him recoil slightly if the letter was wrong. After three years of practice the words now come tumbling out, she said.

This still seems hard to reconcile with the video footage of the typing occurring while he's apparently asleep. Mrs. Wouters admits the possibility of "tak[ing] over" for him:
“The tension increases and I feel he wants to go so I move his hand along the screen and if it is a mistake he pulls back. As a facilitator, you have to be very careful that you do not take over. You have to follow him.”
UPDATE (November 25, 2009): Neurologist Steven Novella has weighed in. He suggests that Houben may have recovered some brain function and be conscious, but that the facilitated communication in the videos is positively bogus.

I've noted on the discussion page of Dr. Steven Laureys' Wikipedia entry that the paper in BMC Neurology that purportedly included Houben as a subject claims that all patients in the study were in a minimally conscious state (MCS) but had been misdiagnosed as being in a persistent vegetative state (PVS). The study's criteria excluded those who had recovered and emerged from MCS, which seems at odds with claims that Houben's brain function is "almost normal." A story by Mike Hopkin in Nature 443, 132-133 (14 September 2006), "'Vegetative' patient shows signs of conscious thought," which quotes Laureys, is about a different patient, in a persistent vegetative state, who showed some signs of minimal consciousness. When asked to visualize herself playing tennis, for example, she showed corresponding brain activity. But, as that article noted, that kind of neural response isn't necessarily a sign of consciousness:

But what that 'awareness' means is still up for debate. For example, Paul Matthews, a clinical neuroscientist at Imperial College London, argues that the brain imaging technique used cannot evaluate conscious thought; fMRI lights up regions of brain activity by identifying hotspots of oxygen consumption by neurons. "It helps us identify regions associated with a task, but not which regions are necessary or sufficient for performing that task," he says.

Matthews argues that the patient's brain could have been responding automatically to the word 'tennis', rather than consciously imagining a game. He also points out that in many vegetative cases, the patient's motor system seems to be undamaged, so he questions why, if they are conscious, they do not respond physically. "They are simply not behaving as if they are conscious," he says.

Owens counters that an automatic response would be transient, lasting for perhaps a few seconds before fading. He says his patient's responses lasted for up to 30 seconds, until he asked her to stop. He believes this demonstrates strong motivation.

He does admit, however, that it is impossible to say whether the patient is fully conscious. Although in theory it might be possible to ask simple 'yes/no' questions using the technique, he says: "We just don't know what she's capable of. We can't get inside her head and see what the quality of her experience is like."

But then again, as someone who's been reading a lot of literature on automaticity and voluntary action lately, it appears to me likely that a lot of our normal actions are automatic, the product of unconsciously driven motor programs of routine behavior.

Laureys is quoted in the article with a note of skepticism:

"Family members should not think that any patient in a vegetative state is necessarily conscious and can play tennis," says co-author Steven Laureys of the University of Liège, Belgium. "It's an illustration of how the evaluation of consciousness, which is a subjective and personal thing, is very tricky, especially with someone who cannot communicate."

The article goes on to note that this woman, who is possibly somewhere between PVS and MCS, "seems to have been much less severely injured than the permanently vegetative Terri Schiavo" (as the report from her Guardian Ad Litem (PDF) made clear).

If Houben is in a minimally conscious state--which he apparently was in order to be included in Laureys' paper, which his Wikipedia page says published the Houben case in 2009--that appears to contradict news claims that Houben's brain function is "nearly normal," unless he has recovered further function since that paper was written.

UPDATE (November 26, 2009): This footage of Houben and Mrs. Wouters from Belgian (Dutch) state television seems to be the most extensive footage of the facilitation process, and while it starts out looking slightly more plausible, it also clearly shows fairly rapid typing while his eyes are closed (and the camera zooms in on his face).

UPDATE (November 28, 2009): Dr. Laureys and Dr. Novella have had some interaction, which demonstrates that Laureys doesn't get it.

UPDATE (February 15, 2010): Dr. Laureys almost gets it now, and has done additional tests, which have shown that the communications are coming from the facilitator, not Houben.

UPDATE (February 20, 2010): David Gorski at the Science-Based Medicine blog has a bit more from the Belgian Skeptics, who were involved in the test.

Sunday, November 08, 2009

Richard Carrier on the ancient creation/evolution debate

Richard Carrier, an independent scholar with a Ph.D. in Ancient History from Columbia University, gave a talk this morning to the Humanist Society of Greater Phoenix titled "Christianity and Science (Ancient and Modern)." He argued that there was a creation/evolution debate in ancient Rome that had interesting similarities and differences to the current creation/evolution debate.

He began with Michael Behe and a short description of his irreducible complexity argument regarding the bacterial flagellum--that since it fails to function if any piece is removed, and it's too complex to have originated by evolution in a single step, it must have been intelligently designed and created. He observed that 2,000 years ago, Galen made the same argument about the human hand and other aspects of human and animal anatomy. Galen wrote that "the mark of intelligent design is clear in those works in which the removal of any small component brings about the ruin of the whole."

Behe, Carrier said, hasn't done what you'd expect a scientist to do with respect to his theory. He hasn't, for example, looked at the genes that code for the flagellum and tried to identify corresponding genes in other microbes.

In the ancient context, the debate was between those who argued for natural selection operating on random arrangements of spontaneously generated features, such as Anaxagoras and atomists like Democritus and Epicurus, and those who argued for some kind of intelligent design, like Plato, Aristotle, Cicero, and Galen. Carrier set the stage by describing a particular debate between Asclepiades and Galen about the function of the kidneys. Asclepiades thought the kidneys were either superfluous, with urine forming directly in the bladder, or an accidental sieve. Galen set out to test this with a public experiment on an anesthetized pig that had been given water prior to the operation. He opened up the pig and ligated (tied off) its ureters, which began to balloon while the bladder stayed empty. Squeezing a ureter failed to reverse the flow back into the kidney; when one was cut, urine came out. Thus Galen demonstrated that the kidneys extract urine from the blood, and that it is transported to the bladder by the ureters. The failure of the flow to operate in reverse showed that the kidneys were not simple sieves but operated by some power that allowed flow in only one direction. This, argued Galen, was a demonstration of something too complex to have arisen by chance, and it refuted the specific claims of Asclepiades.

Galen's 14-volume De Usu Partium (On the Usefulness of Parts) made similar intelligent-design arguments about all aspects of human anatomy--the nerve transport system, the biomechanics of arm, hand, and leg movement, the precision of the vocal system, and so on. He also asked questions like "How does a fetus know how to build itself?" He allowed for the possibility of some kind of tiny instructions present in the "seed," on analogy with a mechanical puppet theater programmed with an arrangement of cogs, wheels, and ropes.

Galen also investigated why eyebrows and eyelashes grow to a fixed length and no longer, and found that they grow from a piece of cartilage, the tarsal plate. He concluded that while his evidence required an intelligent designer, it also entailed that God is limited and uses only available materials. Galen, a pagan, contrasted his view with that of the Christians. For Christians, a pile of ashes could become a horse, because God could will anything to be the case. But for Galen, the evidence supported a God subject to the laws of physics, invisibly present but physically interacting to make things happen--a God who realizes the best possible world within constraints.

Which intelligent design theory better explains facts like the growth of horses from fetuses, the fact that fetuses sometimes come out wrong, and why we have complex bodies at all, rather than just willing things into existence via magic? If God can do anything, why wouldn't he just make us as "simple homogenous soul bodies that realize functions by direct will" (or "expedient polymorphism," to use Carrier's term)?

The difference between Galen's views and those of the Christians was that Galen treated theology as a scientific theory that had to be adjusted according to the facts: facts about God are inferred from observations, and those facts entail either divine malice or a limited divinity. What we know about evolution today places even more limits on viable theories of divinity than in Galen's time. (Carrier gave a brief overview of evolution, and in particular a very brief account of the evolution of the bacterial flagellum.)

Galen's views allowed him to investigate, to conduct experiments testing his opponents' theories as well as his own, and to make contributions to human knowledge. He supported the scientific values of curiosity as a moral good, empiricism as the primary mode of discovery, and progress as both possible and valuable, while Christianity denigrated or opposed all of these. The views of the early church fathers were such that once Christianity gained power, it not only put a halt to scientific progress but caused significant losses of knowledge that had already been accumulated. (Carrier later gave many examples.)

Tertullian, a contemporary of Galen, asked, "What concern have I with the conceits of natural science?" and "Better not to know what God has not revealed than to know it from man."

Thales, from the 6th century B.C., was revered by pagans as the first natural scientist--he discovered the natural causes of eclipses, explained the universe as a system of natural causes, performed observations and developed geometry, made inquiries into useful methods, and subordinated theology to science. There was a story that he was so focused on studying the stars that he fell into a well. Tertullian wrote of this event that Thales had a "vain purpose" and that his fall into the well prefigured his fall into hell.

Lactantius, an early Christian writer and tutor of Constantine the Great, denied that the earth was round (as part of a minority faction of Christians at the time), said that only knowledge of good and evil is worthwhile, and argued that "natural science is superfluous, useless, and inane." This despite the overwhelming evidence of a round earth already accumulated (lighthouses sinking below the horizon as seen from departing ships, astronomical observations of lunar eclipses starting at different times in different locations, different stars being visible at different latitudes, and the shadow of the earth on the moon), in which Lactantius was simply uninterested.

Eusebius, the first historian of the Christian church, said that all are agreed that only scriptural knowledge is worthwhile, anything contrary to scripture is false, and pursuing scientific explanations is to risk damnation. Armchair speculation in support of scripture, however, is good.

Amid factors such as the failure of the pagan system, civil wars in the Roman empire, and a great economic depression, Christianity came to a position of dominance and scientific research came to a halt from about the 4th century to the 12th-14th centuries.

Carrier compared these Christian views to specific displays at the Answers in Genesis Creation Museum in Kentucky, which compared "human reason" to "God's word." One contrasted Rene Descartes saying "I think therefore I am" to God saying "I am that I am." Galen wouldn't have put those into opposition with each other.

Another display labeled "The First Attack--Question God's Word" told the story of Satan tempting Adam to eat from the fruit of the tree of knowledge of good and evil, which highlights the "questioning" of Satan for criticism, and argues that putting reason first is Satanic.

Another diagram comparing "human reason" to "God's Word" showed evolution as a 14-billion-year winding snake-like shape, compared to the short and straight arrow of a 6,000-year creation.

Carrier noted, "It doesn't have to be that way. Galen's faith didn't condemn fundamental scientific values; Galen's creationism was science-based."

He then gave numerous examples of knowledge lost or ignored by Christianity--Eratosthenes' calculation of the size of the earth (a case described in Carl Sagan's "Cosmos" series), Ptolemy's projection cartography and system of latitude and longitude, developments in optics, hydrostatics, medicine, harmonics and acoustics, pneumatics, tidal theory, cometary theory, the precession of the stars, mathematics, robotics (cuckoo clocks, coin-operated vending machines for holy water and soap), machinery (water mills, water-powered saws and hammers, a bread-kneading machine), and so on. He described the Antikythera mechanism, an analog computer similar to WWI-era artillery computers, which was referred to in various ancient texts but dismissed by historians as impossible until an actual instance was found in 1900.

Another example was the Archimedes Codex, where Christians scraped the ink from the text and wrote hymns on it, and threw the rest away. The underlying writing has now been partially recovered thanks to modern technology, revealing that Archimedes performed remarkably advanced calculations about areas, volumes, and centers of gravity.

Carrier has a forthcoming book on the subject of this ancient science, called The Scientist in the Early Roman Empire.

A few interesting questions came up in the Q&A. The first was about why early Christians didn't say anything about abortion. Carrier said it probably just wasn't on their radar, though abortion technology already existed, in the form of both mechanical devices for performing abortions and abortifacient drugs. He also observed that the ancients knew the importance of cleanliness and antiseptics in medicine, while Jesus said that washing before you eat is a pointless ritual (Mark 7:1-20). Carrier asked: if Jesus was God, shouldn't he have known about the germ theory of disease?

Another question was whether Christianity was really solely responsible for 1,000 years of stagnation. Carrier pointed out that there was a difference between Byzantine and Western Christianity, with the former preserving works like those of Ptolemy without condemning them, but also without building upon them. He said there are underlying cultural, social, and historical factors that explain the differences, so it's not just the religion. He also pointed out that there was a lost sect of Christianity that was pro-science, but we have nothing of what its members wrote, only references to them by Tertullian, criticizing them for supporting Thales, Galen, and so forth.

Another questioner asked how he accounts for Christians who have contributed to science, such as Kepler, Boyle, Newton, and Bacon. Carrier said, "Not all Christians have to be that way--there's no intrinsic reason Christianity has to be that way." But, he said, if you put fact before authority, scripture will likely end up unimpressive and contradicted by the evidence you find, and unless you completely retool Christianity, you'll likely abandon it. Opposition to scientific values is necessary to preserve Christianity as it is; putting weight on authority and scripture leads to the anti-science position as a means of preserving the dogma.

It was a wonderfully interesting and wide-ranging talk. He covered a lot more specifics than I've described here. If you find that Carrier is giving a talk in your area, I highly recommend that you go hear him speak.

You can find more information about Richard Carrier at his web site.