Thursday, April 01, 2010

Galileo on the relation between science and religion

Galileo’s view of natural philosophy (science) is that it is the study of the “book of nature,” “written in mathematical language” (Finocchiaro 2008, p. 183), as contrasted with theology, the study of the book of Holy Scripture and revelation.  Galileo endorses the idea that theology is the “queen” of the “subordinate sciences” (Finocchiaro 2008, p. 124), by which he does not mean that theology trumps science in any and all matters.  He distinguishes two senses of theology being “preeminent and worthy of the title of queen”: (1) that “whatever is taught in all the other sciences is found explained and demonstrated in it [theology] by means of more excellent methods and of more sublime principles” [Note added 12/14/2012: which he rejects], and (2) that theology deals with the most important issues, “the loftiest divine contemplations” about “the gaining of eternal bliss,” but “does not come down to the lower and humbler speculations of the inferior sciences ... it does not bother with them inasmuch as they are irrelevant to salvation” [Note added 12/14/2012: which he affirms] (quotations from Finocchiaro 2008, pp. 124-125).  Where Holy Scripture makes reference to facts about nature, they may be open to allegorical rather than literal interpretation, unless their literal truth is somehow necessary to the account of “the gaining of eternal bliss.”

Galileo further distinguishes two types of scientific claims: (1) “propositions about nature which are truly demonstrated” and (2) “others which are simply taught” (Finocchiaro 2008, p. 126).  The role of the theologian with regard to the former category is “to show that they are not contrary to Holy Scripture,” e.g., by providing an interpretation of Holy Scripture compatible with the proposition; with regard to the latter, if a proposition contradicts Holy Scripture, it must be considered false and demonstrations of its falsity sought (Finocchiaro 2008, p. 126).  Presumably, if in the course of attempting to demonstrate that a proposition in the second category is false it is instead demonstrated to be true, it must then be considered part of the former category.  Galileo’s discussion allows that theological condemnation of a physical proposition may be acceptable if it is shown not to be conclusively demonstrated (Finocchiaro 2008, p. 126), rather than requiring the more stringent standard that it be conclusively demonstrated to be false--which, given his own lack of conclusive evidence for heliocentrism, could be considered a loophole allowing him to be hoist with his own petard.

Galileo also distinguishes between what is apparent to experts and to laymen (Finocchiaro 2008, p. 131), denying that popular consensus is a measure of truth, and regarding this distinction as what lies behind claims made in Holy Scripture about physical propositions that are not literally true.  With regard to the theological expertise of the Church Fathers, their consensus on a physical proposition is not sufficient to make it an article of faith unless such consensus is upon “conclusions which the Fathers discussed and inspected with great diligence and debated on both sides of the issue and for which they then all agreed to reject one side and hold the other” (Finocchiaro 2008, p. 133).  Or, in a contemporary (for Galileo) context, the theologians of the day could have a comparably weighted position on claims about nature if they “first hear the experiments, observations, reasons, and demonstrations of philosophers and astronomers on both sides of the question, and then they would be able to determine with certainty whatever divine inspiration will communicate to them” (Finocchiaro 2008, p. 135).

Galileo’s conception of science that leads him to take this position appears to be drawn from what Peter Dear (1990, p. 664), drawing upon Thomas Kuhn (1977), calls “the quantitative, ‘classical’ mathematical sciences” or the “mixed mathematical sciences,” identifying this as a predominantly Catholic conception of science, as contrasted with experimental science developed in Protestant England.  The former conception is one in which laws of nature can be recognized through idealized thought experiments based on limited (or no) actual observations, but demonstrated conclusively by means of rational argument.  This seems to be the general mode of Galileo’s work.  Dear argues that this notion of natural law allows for a conception of the “ordinary course of nature” which can be violated by an observed miraculous event, which comports with a Catholic view that miracles continue to occur in the world.

By contrast, the experimentalist views of Francis Bacon and Robert Boyle involve inductively inferring natural laws on the basis of observations, in which case observing something to occur makes it part of nature that must be accounted for in the generalized law--a view under which a miracle seems to be ruled out at the outset, which was not a problem for Protestants who considered the “age of miracles” to be over (Dear 1990, pp. 682-683).  Dear argues that for the British experimentalists, authentication of an experimental result was in some ways like the authentication of a miracle for the Catholics--requiring appropriately trustworthy observations--but that instead of verifying a violation of the “ordinary course of nature,” it verified what the “ordinary course of nature” itself was (Dear 1990, p. 680).  Where the Catholics like Galileo and Pascal derived conclusions about particulars from universal laws recognized by observation, reasoning, and mathematical demonstration, the Protestants like Bacon and Boyle constructed universal laws by inductive generalization from observations of particulars, and were notably critical of failing to perform a sufficient number of experiments before coming to conclusions (McMullin 1990, p. 821), and put forth standards for hypotheses and experimental method (McMullin 1990, p. 823; Shapin & Schaffer 1985, pp. 25ff & pp. 56-59).  The English experimentalist tradition, arising at a time of political and religious confusion after the English Civil War and the collapse of the English state church, was perhaps an attempt to establish an independent authority for science.  By the 19th century, there were explicit (and successful) attempts to separate science from religious authority and create a professionalized class of scientists (e.g., as Gieryn 1983, pp. 784-787 writes about John Tyndall).

The English experimentalists followed the medieval scholastics (Pasnau, forthcoming) in adopting a notion of “moral certainty” for “the highest degree of probabilistic assurance” for conclusions adopted from experiments (Shapin 1994, pp. 208-209).  This falls short of the Aristotelian conception of knowledge, yet is stronger than mere opinion.  They also placed importance on public demonstration in front of appropriately knowledgeable witnesses--with the credibility of both experimenter and witness being relevant to the credibility of the result.  Where on Galileo’s conception expertise appears to be primarily a function of possessing rational faculties and knowledge, on the experimentalist account skill in the application of method and the moral trustworthiness of the participants are also important factors in vouching for the observational results.  In the Galilean approach, trustworthiness appears to be less relevant as a consequence of actual observation being less relevant--though Galileo does, from time to time, make remarks about observations refuting Aristotle, e.g., in “Two New Sciences,” where he criticizes Aristotle’s claims about falling bodies (Finocchiaro 2008, pp. 301, 303).

The classic Aristotelian picture of science is similar to the Galilean approach, in that observation and data collection are done for the purpose of recognizing first principles and deriving demonstrations by reason from those first principles.  What constitutes knowledge is what can be known conclusively from such first principles and what is derived by necessary connection from them; whatever doesn’t meet that standard is mere opinion (Posterior Analytics, Book I, Ch. 33; McKeon 1941, p. 156).  The Aristotelian picture doesn’t include any particular deference to theology; any discipline could potentially yield knowledge so long as there were recognizable first principles.  The role of observation isn’t to come up with fallible inductive generalizations, but to recognize identifiable universal and necessary features from their particular instantiations (Lennox 2006).  This discussion is all about theoretical knowledge (episteme) rather than practical knowledge (techne), the latter of which concerns contingent facts about everyday things that can change.  Richard Parry (2007) points out an apparent tension in Aristotle between knowledge of mathematics and knowledge of the natural world on account of his statement that “the minute accuracy of mathematics is not to be demanded in all cases, but only in the case of things which have no matter.  Hence its method is not that of natural science; for presumably the whole of nature has matter” (Metaphysics, Book II, Ch. 3, McKeon 1941, p. 715).

The Galilean picture differs from the Aristotelian in its greater use of mathematics (geometry)--McMullin writes that Galileo had “a mathematicism ... more radical than Plato’s” (1990, pp. 822-823)--and in its inclusion of a second book, that of revelation and Holy Scripture, as a source of knowledge.  But while the second book is one which can trump mere opinion--anything that isn’t conclusively demonstrated and thus fails to meet Aristotle’s standard for knowledge--it must be held compatible with anything that does meet that standard.

References
  • Peter Dear (1990) “Miracles, Experiments, and the Ordinary Course of Nature,” ISIS 81:663-683.
  • Maurice A. Finocchiaro, editor/translator (2008) The Essential Galileo.  Indianapolis: Hackett Publishing Company.
  • Thomas Gieryn (1983) “Boundary Work and the Demarcation of Science from Non-Science: Strains and Interests in Professional Ideologies of Scientists,” American Sociological Review 48(6, December):781-795.
  • Thomas Kuhn (1957) The Copernican Revolution: Planetary Astronomy in the Development of Western Thought.  Cambridge, Mass.: Harvard University Press.
  • Thomas Kuhn (1977) The Essential Tension.  Chicago: The University of Chicago Press.
  • James Lennox (2006) “Aristotle’s Biology,” Stanford Encyclopedia of Philosophy, online at http://plato.stanford.edu/entries/aristotle-biology/, accessed March 18, 2010.
  • Richard McKeon (1941) The Basic Works of Aristotle. New York: Random House.
  • Ernan McMullin (1990) “The Development of Philosophy of Science 1600-1900,” in Olby et al. (1990), pp. 816-837.
  • R.C. Olby, G.N. Cantor, J.R.R. Christie, and M.J.S. Hodge (1990) Companion to the History of Science.  London: Routledge.
  • Richard Parry (2007) “Episteme and Techne,” Stanford Encyclopedia of Philosophy, online at http://plato.stanford.edu/entries/episteme-techne/, accessed March 18, 2010.
  • Robert Pasnau (forthcoming) “Medieval Social Epistemology: Scientia for Mere Mortals,” Episteme, forthcoming special issue on history of social epistemology.  Online at http://philpapers.org/rec/PASMSE, accessed March 18, 2010.
  • Steven Shapin and Simon Schaffer (1985) Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life.  Princeton, N.J.: Princeton University Press.
  • Steven Shapin (1994) A Social History of Truth: Civility and Science in Seventeenth-Century England. Chicago: The University of Chicago Press.
[The above is slightly modified from one of my answers on a midterm exam.  My professor observed that another consideration on the difference between Catholic and Protestant natural philosophers is that theological voluntarism, more prevalent among Protestants, can suggest that laws of nature are opaque to human beings except through inductive experience.  NOTE ADDED 13 April 2010: After reading a couple of chapters of Margaret Osler's Divine Will and the Mechanical Philosophy: Gassendi and Descartes on Contingency and Necessity in the Created World (2005, Cambridge University Press), I'd add Pierre Gassendi to the experimentalist/inductivist side of the ledger, despite his being a Catholic--he was a theological voluntarist.]

Thursday, March 11, 2010

Representation, realism, and relativism

The popular view of the “science wars” of the 1990s is that it involved scientists and philosophers criticizing social scientists for making and accepting absurd claims as a result of an extreme relativistic view about scientific knowledge. Such absurd claims included claims like “the natural world in no way constrains what is believed to be,” “the natural world has a small or nonexistent role in the construction of scientific knowledge,” and “the natural world must be treated as though it did not affect our perception of it” (all due to Harry Collins, quoted in Yves Gingras’ scathingly critical review of his book (PDF), Gravity’s Shadow: The Search for Gravitational Waves). Another example was Bruno Latour’s claim that it was impossible for Ramses II to have died of tuberculosis because the tuberculosis bacillus was not discovered until 1882. This critical popular view is right as far as it goes--those claims are absurd--but the popular view of science also tends toward an overly rationalistic and naively realistic conception of scientific knowledge that fails to account for social factors that influence science as actually practiced by scientists and scientific institutions. The natural world and our social context both play a role in the production of scientific knowledge.

Mark B. Brown’s Science in Democracy: Expertise, Institutions, and Representation tries to steer a middle course between extremes, but periodically veers too far in the relativist direction. Early on, in a brief discussion of the idea of scientific representations corresponding to reality, he writes (p. 6): “Emphasizing the practical dimensions of science need not impugn the truth of scientific representations, as critics of science studies often assume ...” But he almost immediately seems to retract this when he writes that “science is not a mirror of nature” (p. 7) and, in one of several unreferenced and unargued-for claims appealing to science studies that occur in the book, that “constructivist science studies does undermine the standard image of science as an objective mirror of nature” (p. 16). Perhaps he merely means that scientific representations are imperfect and fallible, for he does periodically make further attempts to steer a middle course, such as when he quotes Latour: “Either they went on being relativists even about the settled parts of science--which made them look ridiculous; or they continued being realists even about the warm uncertain parts--and they made fools of themselves” (p. 183). It’s surely reasonable to take an instrumentalist approach to scientific theories that aren’t well established, are somewhat isolated from the rest of our knowledge, or are highly theoretical, but also to take a realist approach to theories that are well established with evidence from multiple domains and have remained stable while being regularly put to the test. The evidence that we have today for a heliocentric solar system, for common ancestry of species, and for the position and basic functions of organs in the human body is of such strength that it is unlikely that we will see that knowledge completely overthrown in a future scientific revolution. But Brown favorably quotes Latour: “Even the shape of humans, our very body, is composed to a great extent of sociotechnical negotiations and artifacts.” (p. 171) Our bodies are not “composed” of “sociotechnical negotiations and artifacts”--this is either a mistaken use of the word “composed” (instead of perhaps “the consequence of”) or a use-mention error (referring to “our very body” instead of our idea of our body).

In Ch. 6, in a section titled “Realism and Relativism” that begins with a reference to the “science wars,” he follows the pragmatist philosopher John Dewey in order to “help resolve some of the misunderstandings and disagreements among today’s science warriors” such as that “STS scholars seem to endorse a radical form of relativism, according to which scientific accounts of reality are no more true than those of witchcraft, astrology, or common sense” (p. 156). Given that Brown has already followed Dewey’s understanding of scientific practice as continuous with common sense (pp. 151-152), it’s somewhat odd to see common sense grouped in that list with witchcraft and astrology--though perhaps in this context it’s not meant as the sort of critical common sense Dewey described, but more like folk theories that are undermined or refuted by science.

Brown seems to endorse Dewey’s view that “reality is the world encountered through successful intervention” and favorably quotes philosopher Ian Hacking that “We shall count as real what we can use to intervene in the world to affect something else, or what the world can use to affect us” (pp. 156-157), but he subsequently drops the second half of Hacking’s statement when he writes “If science is understood in terms of the capacity to direct change, knowing cannot be conceived on the model of observation.” Such an understanding may capture experimental sciences, but not observational or historical sciences, an objection Brown attributes to Bertrand Russell, who “pointed out in his review of Dewey’s Logic that knowledge of a star could not be said to affect the star” (p. 158). Brown, however, follows Latour and maintains that “the work of representation ... always transforms what it represents” (p. 177). Brown defends this by engaging in a use-mention error, the failure to properly distinguish between the use of an expression and talking about the expression, when he writes that stars as objects of knowledge are newly created objects (p. 158, more below). Such an error is extremely easy to make when talking about social facts, where representations are themselves partly constitutive of the facts, such as in talk about knowledge or language.

Brown writes that “People today experience the star as known, differently than before ... The star as an object of knowledge is thus indeed a new object” (p. 158). But this move is unnecessary given the second half of Hacking’s statement, since we can observe and measure stars--they have an impact upon us. Brown does then talk about impact on us, but only by the representation, not the represented: “...this new object causes existential changes in the knower. With the advent of the star as a known object, people actually experience it differently. This knowledge should supplement and not displace whatever aesthetic or religious experiences people continue to have of the star, thus making their experiences richer and more fulfilling” (p. 158). There may certainly be augmented experience with additional knowledge, which may not change the perceptual component of the experience, but I wonder what Brown’s basis is for the normative claim that religious experiences in particular shouldn’t be displaced--if those religious experiences are based on claims that have been falsified, such as an Aristotelian conception of the universe, then why shouldn’t they be displaced? But perhaps here I’m making the use-mention error, and Brown doesn’t mean that religious interpretations shouldn’t be displaced, only that experiences labeled as “religious” shouldn’t be.

A few other quibbles:

Brown writes that “all thought relies on language” (p. 56). If this is the case, then nonhuman animals that have no language cannot have thoughts. (My commenter suggested that all sentient beings have language, and even included plants in that category. I think the proposal that sentience requires language is at least plausible, though I wouldn’t put many nonhuman animals or any plants into that category--perhaps chimps, whales, and dolphins. Some sorts of “language” extend beyond that category, such as the dance of honeybees that seems to code distance and direction information, but I interpreted Brown’s claim to refer to human language with syntax, semantics, generative capacity, etc., and to mean that one can’t have non-linguistic thoughts in the form of, say, pictorial imagery, without language. I.e., that even such thoughts require a “language of thought,” to use Jerry Fodor’s expression.)

Brown endorses Harry Collins’ idea of the “experimenter’s regress,” without noting that Collins’ evidence for the existence of such a phenomenon is disputed (Allan Franklin, “How to Avoid the Experimenters’ Regress,” Studies in History and Philosophy of Science 25(3, 1994): 463-491). (Franklin also discusses this in the entry on "Experiment in Physics" at the Stanford Encyclopedia of Philosophy.)

Brown contrasts Harry Collins and Robert Evans with Hobbes on the nature of expertise: the former see “expertise as a ‘real and substantive’ attribute of individuals,” while “For Hobbes, in contrast, what matters is whether the claims of reason are accepted by the relevant audience” (p. 116). Brown sides with Hobbes, but this is to make a mistake similar to the one Richard Rorty made in claiming that truth is what you can get away with--a claim that is self-refuting, since philosophers didn’t let him get away with it. This definition doesn’t allow for the existence of a successful fake expert or con artist, but we know that such persons exist from examples that have been exposed. Under this definition, such persons were experts until they were unmasked.

Brown’s application of Hobbes’ views on political representation to nature is less problematic when he discusses the political representation of environmental interests (pp. 128-131) than when he discusses scientific representations of nature (pp. 131-132). The whole discussion might have been clearer had it taken account of John Searle’s account of social facts (in The Construction of Social Reality).

Brown writes that “Just as recent work in science studies has shown that science is not made scientifically ...” (p. 140), without argument or reference.

He apparently endorses a version of Dewey’s distinction between public and private actions with private being “those interactions that do not affect anyone beyond those engaged in the interaction; interactions that have consequences beyond those so engaged he calls public” (p. 141). This distinction is probably not tenable since the indirect consequences of even actions that we’d consider private can ultimately affect others, such as a decision to have or not to have children.

On p. 159, Brown attributes the origin of the concept of evolution to “theories of culture, such as those of Vico and Comte” rather than Darwin, but neither of them had a theory of evolution by natural selection comparable to Darwin’s innovation; concepts of evolutionary change go back at least to ancient Greek philosophers such as the pre-Socratic Empedocles and, later, the Epicureans and Stoics. (Darwin didn't invent natural selection, either, but he was the first to put all the pieces together and recognize that evolution by natural selection could serve a productive as well as a conservative role.)

[A slightly different version of the above was written as a comment for my Human and Social Dimensions of Science and Technology core seminar. Thanks to Brenda T. for her comments. It should be noted that the above really doesn't address the main arguments of the book, which are about the meaning of political representation and representation in science, and an argument about proper democratic representation in science policy.]

Wednesday, February 24, 2010

Science as performance

The success of science in the public sphere is determined not just by the quality of research but by the ability to persuade. Stephen Hilgartner’s Science on Stage: Expert Advice as Public Drama uses a theatrical metaphor, drawing on the work of Erving Goffman, to shed light on and explain the outcomes associated with three successive reports on diet and nutrition issued by the National Academy of Sciences, one of which was widely criticized by scientists, one of which was criticized by food industry groups, and one of which was never published. They differed in “backstage” features such as how they coordinated their work and what sources they drew upon, in “onstage” features such as the composition of experts on their committees and how they communicated their results, and in how they responded to criticism.

The kinds of features and techniques that Hilgartner identifies as being used to enhance perceptions of credibility--features of rhetoric and performance--are the sorts of features relied upon by con artists. If there is no way to distinguish such features as used by con artists from those used by genuine practitioners--if only the on-stage performances are visible--then we have a problem: all purported experts of comparable performing ability are on equal footing, and we may as well flip coins to choose between them. But part of a performance is its propositional content--the arguments and evidence deployed--and these are evaluated not just on aesthetic grounds but with respect to logical coherence and compatibility with what the audience already knows. Further, the performance itself includes an interaction with the audience that strains the stage metaphor. Hilgartner describes this as members of the audience themselves taking the stage, yet audience members in his metaphor also interact with each other, individually and in groups, through complex webs of social relationships.

The problem of expert-layman interaction is that the layman in most cases lacks the interactional expertise even to communicate about the details of the evidence supporting a scientific position, and must rely upon other markers of credibility, which may be mere rhetorical flourishes. This is the problem of Plato’s “Charmides,” in which Socrates asserts that only a genuine doctor can distinguish a sufficiently persuasive quack from a genuine doctor. A similar position is endorsed by philosopher John Hardwig in his paper “Epistemic Dependence” (PDF), and by law professor Scott Brewer in “Scientific Expert Testimony and Intellectual Due Process,” which points out that the same problem faces judges and juries. There are some features which enable successful distinctions between genuine and fake experts, at least in the more extreme circumstances--examination of track records, credentials, and evaluations by other experts or meta-experts (e.g., experts in methods used across multiple domains, such as logic and mathematics). Brewer enumerates four strategies nonexperts use in evaluating expert claims: (1) “substantive second-guessing,” (2) “using general canons of rational evidentiary support,” (3) “evaluating demeanor,” and (4) “evaluating credentials.” Of these, only (3) is an examination of merely surface appearances of the performance (which is not to say that it can’t be a reliable, though fallible, mechanism). But when the evaluation is directed not at distinguishing genuine expert from fake but at adjudicating conflicting claims between two genuine experts, the nonexpert may be stuck in a situation where none of these is effective and only time (if anything) will tell--yet in some domains, such as the legal arena, a decision may need to be reached much sooner than a resolution becomes available.

One novel suggestion for institutionalizing a form of expertise that fits into Hilgartner’s metaphor is philosopher Don Ihde’s proposal of “science critics”, in which individuals with at least interactional expertise within the domain they criticize serve a role similar to art and literary critics in evaluating a performance, including its content and not just its rhetorical flourishes.

[A slightly different version of the above was written as a comment for my Human and Social Dimensions of Science and Technology core seminar. The Hardwig and Brewer articles are both reprinted in Evan Selinger and Robert P. Crease, editors, The Philosophy of Expertise. NY: Columbia University Press, 2006, along with an excellent paper I didn't mention above, Alvin I. Goldman's "Experts: Which Ones Should You Trust?" (PDF). The term "interactional expertise" comes from Harry M. Collins and Robert Evans, "The Third Wave of Science Studies: Studies of Expertise and Experience," also reprinted in the Selinger & Crease volume; a case study of such expertise is in Steven Epstein's Impure Science: AIDS, Activism, and the Politics of Knowledge, Berkeley: University of California Press, 1996. Thanks to Tim K. for his comments on the above.]

Monday, February 22, 2010

Is knowledge drowning in a flood of information?

There have long been worries that the mass media are producing a “dumbing down” of American political culture, reducing political understanding to sound bites and spin. The Internet has been blamed for information overload and, like MTV in prior decades, for a reduction in attention span, as the text-based web became the multimedia web and cell phones became a more common tool for using it. Similar worries have been expressed about public understanding of science. Nicholas Carr has asked the question, “Is Google Making Us Stupid?”

Yaron Ezrahi’s “Science and the political imagination in contemporary democracies” (a chapter in Sheila Jasanoff's States of Knowledge: The Co-Production of Science and Social Order) argues that the post-Enlightenment synthesis of scientific knowledge and politics in democratic societies is in decline, on the basis of a transition of public discourse into easily consumed, bite-sized chunks of vividly depicted information that he calls “outformation.” Where, prior to the Enlightenment, authority had more of a religious basis and the ideal for knowledge was “wisdom”--which Ezrahi sees as a mix of the “cognitive, moral, social, philosophical, and practical” that is privileged, unteachable, and a matter of faith--the Enlightenment brought systematized, scientific knowledge to the fore. Such knowledge was formalized, objective, universal, impersonal, and teachable--with effort. When that scientific knowledge is made more widely usable, “stripped of its theoretical, formal, logical and mathematical layers” into a “thin knowledge” that is context-dependent and localized, it becomes “information.” And finally, when information is further stripped of its context and design for use for a particular purpose, yet augmented with “rich and frequently intense” representations that include “cognitive, emotional, aesthetic, and other dimensions of experience,” it becomes “outformation.”

According to Ezrahi, such “outformations” mix references to objective and subjective reality, and they become “shared references in the context of public discourse and action.” They are taken to be legitimated and authoritative despite lacking any necessary grounding in “observations, experiments, and logic.” He describes this shift as one from a high-cost to a low-cost political reality, where “cost” refers to the effort the recipient must invest to consume it, rather than to the consequences to the polity of its consumption and use as the basis for political participation. This shift, he says, “reflects the diminished propensity of contemporary publics to invest personal or group resources in understanding and shaping politics and the management of public affairs.”

But, I wonder, is this another case of reflecting on “good old days” that never existed? While new media have made new forms of communication possible, was there really a time when the general public was fully invested in “understanding and shaping politics” and not responding to simplifications and slogans? And is it really the case, as Ezrahi argues, that while information can be processed and reconstructed into knowledge, the same is not possible for outformations? Some of us do still read books, and for us, Google may not be “making us stupid,” but rather providing a supplement that allows us to quickly search a vast web of interconnected bits of information that can be assembled into knowledge, inspired by a piece of “outformation.”

[A slightly different version of the above was written as a comment on Ezrahi's article for my Human and Social Dimensions of Science and Technology core seminar. Although I wrote about new media, it is apparent that Ezrahi was writing primarily about television and radio, where "outformation" seems to be more prevalent than information. Thanks to Judd A. for his comments on the above.]

UPDATE (April 19, 2010): Part of the above is translated into Italian, with commentary from Ugo Bardi of the University of Florence, at his blog.

Saturday, February 20, 2010

Seeing like a slime mold

Land reforms instituted in Vietnam under French rule, in India under the British, and in rural czarist Russia introduced simplified rights of ownership and standardized measurements of size and shape that were primarily for the benefit of the state, e.g., for tax purposes. James C. Scott’s Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed gives these and numerous other examples of ways in which standardization and simplification have been used by the state to make legible and control resources (and people) within its borders. He recounts cases in which the imposition of such standardization failed or at least had unintended negative consequences, such as German scientific forestry’s introduction of a monoculture of Norway spruce or Scotch pine designed to maximize lumber production, which led to die-offs a century later. (The monoculture problem of reduced resilience and increased vulnerability has been recognized in an information security context as well, e.g., in Dan Geer et al.'s paper on the Microsoft monoculture that got him fired from @stake and in his more recent work.)

Scott’s examples of state-imposed uniformity should not, however, be taken to imply that every case of uniformity is state-imposed, or that such regularities, even if state-imposed, don't have underlying natural constraints. Formalized institutions of property registration and title have appeared in the crevices between states, for example in the squatter community of Kowloon Walled City, which existed from 1947 to 1993 on a piece of the Kowloon peninsula that was claimed by both China and Britain yet governed by neither. While the institutions of Kowloon Walled City may have been patterned after those familiar to its residents from the outside world, they were imposed internally rather than by a state.

Patterns of highway network design present another apparent counterexample. Scott describes the highways around Paris as having been designed by the state to intentionally route traffic through the city, as well as to allow for military and law enforcement activity within it in order to put down insurrections. But motorway patterns in the UK appear to have a more organic structure, as a recent experiment with slime molds oddly confirmed. Two researchers at the University of the West of England constructed a map of the UK out of agar, putting clumps of oat flakes at the locations of the nine most populous cities. They then introduced a slime mold colony, and in many cases it extruded tendrils to feed on the oat flakes, creating patterns that aligned with the existing motorway design, with some variations. A similar experiment with a map of cities around Tokyo duplicated the Tokyo railway network, slime-mold style. The similarity between transportation networks and evolved biological systems for transporting blood and sap may simply be because both are efficient and resilient solutions to similar problems.
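
To give a concrete sense of what "efficient" means here, the short sketch below (my own illustration, not anything from Scott's book or the slime-mold study) computes a minimum spanning tree--the shortest total length of links that still connects every city--over a handful of made-up city coordinates, using Kruskal's algorithm:

    # A minimal sketch: Kruskal's algorithm builds the minimum spanning tree over
    # hypothetical city coordinates, as a crude stand-in for the shortest network
    # that connects every city. City names and positions are invented for illustration.
    from itertools import combinations
    import math

    cities = {"A": (0, 0), "B": (4, 1), "C": (2, 5), "D": (6, 4), "E": (5, 8)}

    def dist(a, b):
        (x1, y1), (x2, y2) = cities[a], cities[b]
        return math.hypot(x1 - x2, y1 - y2)

    # Consider edges shortest-first, skipping any edge that would create a cycle.
    edges = sorted(combinations(cities, 2), key=lambda e: dist(*e))
    parent = {c: c for c in cities}  # union-find structure tracking connected components

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]  # path compression
            c = parent[c]
        return c

    tree = []
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:  # endpoints are in different components: no cycle is created
            parent[ra] = rb
            tree.append((a, b, round(dist(a, b), 2)))

    print(tree)  # the minimum-total-length set of links connecting all five "cities"

Such a tree minimizes total link length but fails completely if any single link is cut; real transport networks, and apparently the slime mold as well, accept somewhat longer totals in exchange for redundant routes--the "resilient" half of the tradeoff.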

These examples, while not refuting Scott’s point about frequent failures in the top-down imposition of order, suggest that it may be possible for states to achieve success in certain projects by facilitating the bottom-up development of ordered structures. The state often imposes an order that has already developed by some other means--e.g., electrical standards that were set up by industry bodies before being codified, or the IETF standards for IP, which lack the force of law yet are implemented globally. In other cases, states may ratify an emerging order by, for example, preempting a diversity of state rules with a set that has been demonstrated to be successful--though that runs the risk of turning into the kind of case Scott describes, if there are local reasons for the diversity.

[A slightly different version of the above was written as a comment on the first two chapters of Scott's book for my Human and Social Dimensions of Science and Technology core seminar. I've ordered a copy of the book since I found the first two chapters to be both lucidly written and extremely interesting. Thanks to Gretchen G. for her comments that I've used to improve (I hope) the above.]

UPDATE (April 25, 2010): Nature 407:470 features "Intelligence: Maze-solving by an amoeboid organism."

Rom Houben not communicating; blogger suppresses the evidence

It has now been demonstrated, as no surprise to skeptics, that Rom Houben was not communicating via facilitated communication, a discredited method by which facilitators have typed for autistic children. A proper test was conducted by Dr. Steven Laureys with the help of the Belgian Skeptics, and it was found that the communications were coming from the facilitator, not from Houben.

A blogger who was a vociferous critic of James Randi and Arthur Caplan for pointing out that facilitated communication is a bogus technique and who had attempted to use Houben's case to argue that Terri Schiavo also may have been conscious is not only unwilling to admit he was wrong, but is deleting comments that point to the results of this new test. I had posted a comment along the lines of "Dr. Laureys performed additional tests with Houben and the facilitator and found that, in fact, the communications were coming from the facilitator, not Houben" with a link to the Neurologica blog; this blogger called that "spam" (on the basis of my posting a similar comment on another blog, perhaps) and "highly misleading" (on the basis of nothing).

As I've said all along, this doesn't mean that Houben isn't "locked in" and conscious, but facilitated communication provides no evidence that he is.

(Previously, previously.)

Friday, February 19, 2010

Another lottery tragedy

From CNN:
A Florida woman has been charged with first-degree murder in connection with the death of a lottery millionaire whose body was found buried under fresh concrete, authorities said.

Dorice Donegan Moore, 37, was arrested last week on charges of accessory after the fact regarding a first-degree murder in the death of Abraham Shakespeare, 43, said Hillsborough County Sheriff David Gee. She remains in the Hillsborough County Jail, he said.

Moore befriended Shakespeare after he won a $31 million Florida lottery prize in 2006 and was named a person of interest in the case after Shakespeare disappeared, authorities said.

Tuesday, February 09, 2010

Where is the global climate model without AGW?

One of the regular critics of creationism on the Usenet talk.origins newsgroup (where the wonderful Talk Origins Archive FAQs were originally developed) was a guy who posted under the name "Dr. Pepper." His posts would always include the same request--"Please state the scientific theory of creationism." It was a request that was rarely responded to, and never adequately answered, because there is no scientific theory of creationism.

A parallel question for those who are skeptical about anthropogenic climate change is to ask for a global climate model that more accurately reflects temperature changes over the last century than those used by the IPCC, without including the effect of human emissions of greenhouse gases. For comparison, here's a review of the 23 models which contributed to the IPCC AR4 assessment. While these models are clearly not perfect, shouldn't those who deny anthropogenic global warming be able to do better?

Friday, January 29, 2010

ApostAZ podcast #19

After a multi-month hiatus, the ApostAZ podcast returns:
Episode 019 Atheism and Spooky Bullshite in Phoenix! Go to meetup.com/phoenix-atheists for group events! Intro- Joe Rogan "Noah's Ark (George Carlin Remix)". Paranormal Activity, Chick Tracts and Ugandan Love.
The guy whose name you couldn't think of around 16:22-16:30--of the Stop Sylvia Browne website--is Robert Lancaster.

Wednesday, January 06, 2010

A few comments on the nature and scope of skepticism

Of late there has been a lot of debate about the nature, scope, and role of skepticism. Does skepticism imply atheism? Are "climate change skeptics" skeptics? Must skeptics defer to scientific consensus or experts? Should skepticism as a movement or skeptical organizations restrict themselves to paranormal claims, or avoid religious or political claims?

I think "skepticism" can refer to multiple different things, and my answers to the above questions differ in some cases depending on how the term is being used. It can refer to philosophical skepticism, to scientific skepticism, to "skeptical inquiry," to "doubt" broadly speaking, to the "skeptical movement," to skeptical organizations, and to members of the class of people who identify themselves as skeptics.

My quick answers to the above questions, then, are:

Does skepticism imply atheism? No, regardless of which definition you choose. It is reasonable to argue that proper application of philosophical skepticism should lead to atheism, and to argue that scientific skepticism should include methodological naturalism, but I prefer to identify skepticism with a commitment to a methodology rather than its outputs. That still involves a set of beliefs--which are themselves subject to reflection, criticism, and evaluation--but it is both a more minimal set than the outputs of skepticism and involves commitment to values as well as what is scientifically testable. My main opposition to defining skepticism by its outputs is that that is a set of beliefs that can change over time with access to new and better information, and shouldn't be held dogmatically.

Are "climate change skeptics" skeptics? I would say that some are, and some aren't--some are outright "deniers" who are allowing ideology to trump science and failing to dig into the evidence. Others are digging into the evidence and just coming to (in my opinion) erroneous conclusions, but that doesn't preclude them from being skeptics so long as they're still willing to engage and look at contrary evidence, as well as admit to mistakes and errors when they make them--like relying on organizations and individuals who are demonstrably not reliable. As you'll see below, I agree we should to try to save the term "skeptic" from being equated with denial.

Must skeptics defer to scientific consensus or experts? I think skeptical organizations and their leaders should defer to experts on topics outside of their own fields of expertise on pragmatic and ethical grounds, but individual skeptics need not necessarily do so.

Should skepticism as a movement or skeptical organizations restrict themselves to paranormal claims, or avoid religious or political claims? I think skepticism as a movement, broadly speaking, is centered on organizations that promote scientific skepticism and focus on paranormal claims, but also promote science and critical thinking, including with some overlap with religious and public policy claims, where the scientific evidence is relevant. At its fringes, though, it also includes some atheist and rationalist groups that take a broader view of skeptical inquiry. I think those central groups (like CSI, JREF, and the Skeptics Society) should keep their focus, but not as narrowly as Daniel Loxton suggests in his "Where Do We Go From Here?" (PDF) essay.

Here are a few of my comments, on these same topics, from other blogs.

Comment on Michael De Dora, "Why Skeptics Should be Atheists," at the Gotham Skeptic blog:

Scientific skepticism (as opposed to philosophical skepticism) no more necessitates atheism than it does amoralism. Your argument would seem to suggest that skeptics shouldn’t hold any positions that can’t be established by empirical science, which would seem to limit skeptics to descriptive, rather than normative, positions on morality and basic (as opposed to instrumental) values.

“Skepticism” does have the sort of inherent ambiguity that “science” does, in that it can refer to process, product, or institution. I favor a methodological view of skepticism as a process, rather than defining it by its outputs. Organizations, however, seem to coalesce around sets of agreed-upon beliefs that are outputs of methodology, not just beliefs about appropriate/effective methodology; historically that set of agreed-upon beliefs has been that there is no good scientific support for paranormal and fringe science claims. As the scope of skeptical inquiry that skeptical organizations address has broadened, that leads to more conflict over issues in the sphere of politics and religion, where empirical science yields less conclusive results.

I’d rather see skeptical organizations share some basic epistemic and ethical values that are supportive of the use of science than a commitment to a set of beliefs about the outputs of skeptical methodology. The latter seems more likely to result in dogmatism.

Comment on Daniel Loxton, "What, If Anything, Can Skeptics Say About Science?" at SkepticBlog:

While I think the picture Daniel presents offers some good heuristics, I can’t help but note that this is really proffered normative advice about the proper relationship between the layman and the expert, which is a question that is itself a subject of research in multiple domains of expertise including philosophy of science, science and technology studies, and the law. A picture much like the one argued for here is defended by some, such as philosopher John Hardwig (“Epistemic Dependence,” Journal of Philosophy 82(1985):335-349), but criticized by others, such as philosopher Don Ihde (“Why Not Science Critics?”, International Studies in Philosophy 29(1997):45-54). There are epistemological, ethical, and political issues regarding deference to experts that are sidestepped by the above discussion. Not only is there a possibility of meta-expertise about evaluating experts, there are cases of what Harry Collins and Robert Evans call “interactional expertise” (“The Third Wave of Science Studies: Studies of Expertise and Experience,” Social Studies of Science 32:2(2002):235-296) where non-certified experts attain sufficient knowledge to interact at a deep level with certified experts, and challenge their practices and results (this is discussed in Evan Selinger and John Mix, “On Interactional Expertise: Pragmatic and Ontological Considerations,” Phenomenology and the Cognitive Sciences 3:2(2004):145-163); Steven Epstein’s book Impure Science: AIDS, Activism, and the Politics of Knowledge, 1996, Berkeley: Univ. of California Press, discusses how AIDS activists developed such expertise and successfully made changes to AIDS drug research and approval processes.

The above also doesn’t address context--are these proposed normative rules for skeptics in any circumstance, or only for those speaking on behalf of skeptical organizations? I don’t think it’s reasonable to suggest that skeptics, speaking for themselves, should be limited in questioning anything. The legal system is an example of a domain where experts should be challenged and questioned--it’s a responsibility of the judge, under both the Frye and Daubert rules, to make judgments about the relevance and admissibility of expert testimony, and of laymen on the jury to decide who is more credible. (This itself raises enormous issues, which are discussed at some length by philosopher and law professor Scott Brewer, “Scientific Expert Testimony and Intellectual Due Process,” The Yale Law Journal vol. 107, 1535-1681.) Similar considerations apply to the realm of politics in a democratic society (cf. Ihde’s article).

All of the papers I’ve cited are reprinted in the volume The Philosophy of Expertise, edited by Evan Selinger and Robert P. Crease, 2006, N.Y.: Columbia University Press.

Comment on jdc325's "The Trouble With Skeptics" at the Stuff And Nonsense blog:

@AndyD I’d say that it’s possible for a skeptic to believe individual items on your list (though not the ones phrased like “the entirety of CAM”), so long as they do so because they have legitimately studied them in some depth and think that the weight of the scientific evidence supports them, or if they admit that it’s something they buy into irrationally, perhaps for the entertainment it brings or to be part of a social group. If, however, they believe in a whole bunch of such things, that’s probably evidence that they’re not quite getting the point of critical thinking and skepticism somewhere. Being a skeptic doesn’t mean that you’re always correct (as per the above comment on Skeptic Fail #7), and I don’t think it necessarily means you’re always in accord with mainstream science, either.

Skeptic fail #6 is a pretty common one. For example, I don’t think most skeptics have sufficient knowledge of the parapsychology literature to offer a qualified opinion, as opposed to simply repeating the positions of some of the few skeptics (like Ray Hyman and Susan Blackmore) who do.

Comments (one and two) on "Open Thread #17" at Tamino's Open Mind blog:

Ray Ladbury: I think you’re in a similar position as those who want to preserve “hacker” for those who aren’t engaged in criminal activity. I understand and appreciate the sentiment, but I think “skeptic” already has (and, unlike “hacker,” has actually always had) common currency in a much broader sense as one who doubts, for whatever reason.

I also think that there are many skeptics involved in the organized and disorganized skeptical movement in the U.S. (the one started by CSICOP) who don’t meet your criteria of “sufficiently knowledgeable about the evidence and theory to render an educated opinion” even with respect to many paranormal and pseudoscience claims, let alone with respect to climate science. There’s an unfortunately large subset of “skeptics” in the CSICOP/JREF/Skeptics Society sense who are also climate change skeptics or deniers, as can be seen from the comments on James Randi’s brief-but-retracted semi-endorsement of the Oregon Petition Project at the JREF Swift Blog and on the posts about climate science at SkepticBlog.org.

Ray: You make a persuasive argument for attempting to preserve “skeptic.” Since I’ve just been defending against the colloquial misuse of “begs the question,” I think I can likewise endorse a defense of “skeptic” against “pseudoskeptic.” However, I think I will continue to be about as reserved in my use of “denier” as I am in my use of “liar.” I don’t make accusations of lying unless I have evidence not just that a person is uttering falsehoods, but that they’ve been presented with good evidence that they are uttering falsehoods, and continue to do so anyway.

On another subject, I’d love to see an equivalent of the Talk Origins Archive (http://www.talkorigins.org/), and in particular Mark Isaak’s “Index to Creationist Claims” (http://www.talkorigins.org/indexcc/list.html) for climate science (and its denial). Do they already exist?

Some previous posts at this blog on this subject may be found under the "skepticism" label, including:

"Massimo Pigliucci on the scope of skeptical inquiry"
(October 21, 2009)
"Skepticism, belief revision, and science" (October 21, 2009)

Also, back in 1993 I wrote a post to the sci.skeptic Usenet group that gave a somewhat oversimplified view of "the proper role of skeptical organizations" which was subsequently summarized in Michael Epstein's "The Skeptical Viewpoint," Journal of Scientific Exploration, vol. 7, no. 3, Fall 1993, pp. 311-315.

UPDATE (January 7, 2010): Skepdude has taken issue with a couple of points above, and offers his contrary arguments at his blog. First, he says that skeptics need to defer to scientific consensus with the "possible exception" of cases where "the person is also an expert on said field." I think that case is a definite, rather than a possible, exception, but would go farther--it's possible to be an expert (or even just a well-informed amateur) in a field that has direct bearing on premises or inferences used by experts in another field where one is not expert. That can give a foothold for challenging a consensus in a field where one is not expert. For example, philosophers, mathematicians, and statisticians can spot errors of conceptual confusion, fallacious reasoning, invalid inferences, mathematical errors, and misuse of statistics. It's possible for an entire field to have an erroneous consensus, such as that rocks cannot fall from the sky or continents cannot move. I suspect an argument can be made that erroneous consensus is more likely to occur in a field with a high degree of specialization that doesn't have good input from generalists and related fields.

I also am uncomfortable with talk of "deference" to experts without scope or context, as it can be taken to imply the illegitimacy of questioning or demanding evidence and explanation in support of the consensus, which to my mind should always be legitimate.

The second point is one which Skepdude and I have gone back and forth on before, both at his blog (here, here, and here -- I could have used these comments as well in the above post) and via Twitter, which is about whether skepticism implies (or inevitably leads) to atheism. It's a position which I addressed above in my comments on Michael De Dora and on the "Stuff and Nonsense" blog, though he doesn't directly respond to those. He writes:
I fail to see the distinction between skepticism implying atheism and proper application of skepticism leading to atheism. I regard the two as saying the same thing, that skepticism, if consistently applied should lead to atheism. I am not sure what Jim means by philosophical skepticism, and maybe that’s where he draws the difference, but I refrain from using qualifiers in front of the word skepticism, be it philosophical or scientific. Skepticism is skepticism, we evaluate if a given claim is supported by the evidence.
There is most definitely a distinction between "skepticism implies atheism" and "proper application of skepticism leads to atheism." The former is a logical claim that says atheism is derivable from skepticism, or that it's necessarily the case that the use of skepticism (regardless of inputs?) yields atheism. The latter is a contingent claim that's dependent upon the inputs and the result of the inquiry. If skepticism is defined as a method, the former claim would mean in essence that the game is rigged to produce a particular result for an existence claim necessarily, which would seem to me to be a serious flaw in the method, unless you thought that atheism was logically necessary. But I'm not aware of any atheists who hold that, and I know that Skepdude doesn't, since he prefers to define atheism as mere lack of belief and has argued that there is no case to be made for positive atheism/strong atheism.

If we take skepticism defined as a product, as a set of output beliefs, there's the question of which output beliefs we use. Some idealized set of beliefs that would be output from the application of skeptical processes? If so, based on which set of inputs? In what historical context? The sets of inputs, the methods, and the outputs all have changed over time, and there is also disagreement about what counts as appropriately well-established inputs and the scope of the methods. The advocate of scientific skepticism is going to place more constraints on what is available as input to the process and the scope of what the process can deal with (in such a way that the process cannot be used even to fully evaluate reasons for being a skeptic, which likely involve values and commitments that are axiomatic or a priori). Methodological naturalism is likely to be part of the definition of the process, which means that theism cannot be an output belief--I think this is probably what Skepdude means when he says that atheism defined as a lack of belief is a product of skepticism. But note that the set of output beliefs from this process is a subset of what it is reasonable to believe, unless the advocate of this view wants to assert that the commitment to skepticism itself is not reasonable to believe--in virtue of the fact that it is not subject to a complete evaluation by the process. (As an aside, I think that it is possible for the process of skepticism thus defined to yield a conclusion of its own inadequacy to address certain questions, and in fact, that if we were to observe certain things, to yield the conclusion that methodological naturalism should be rejected.)

If we look at skepticism more broadly, where philosophical arguments more generally are acceptable as input or method, atheism (in the positive or strong form) then becomes a possible output. As an atheist, I think that use of the best available evidence and arguments and the best available methodology does lead to a conclusion of atheism (and 69.7% of philosophy faculty and Ph.D.s agree), but that still doesn't mean that everyone's going to get there (as 69.3% of philosophy faculty and Ph.D.s specializing in philosophy of religion don't) or that anyone who doesn't has necessarily done anything irrational in the process--though for a different reason than in the prior case. That reason is that we don't function by embodying this skeptical process, taking all of our input data, running it through the process, and believing only what comes out the other side. That's not consistent with how we engage in initial learning or how we can practically proceed in our daily lives. Rather, we have a vast web of beliefs that we accumulate over our lifetimes, and selectively focus our attention and use skeptical processes on subsets of our beliefs. The practical demands of our daily lives, of our professions, of our social communities, and so forth place constraints on us (see my answers to questions in "Skepticism, belief revision, and science"). And even with unlimited resources, I think there are reasons that we wouldn't want everyone to apply skeptical methods to everything they believed--there is value to false belief in generating new hypotheses, avoiding Type I errors, keeping true beliefs from becoming "dead dogma," and so forth (which I discussed in my SkeptiCamp Phoenix presentation last year, "Positive side-effects of misinformation").

UPDATE (January 16, 2010): Skepdude responds here.