
Thursday, February 29, 2024

If embryos are babies, then in-vitro fertilization is immoral

Alabama and the GOP are discovering what this blog pointed out 15 years ago--if you're going to adopt a policy that embryos are full bearers of moral personhood, then you can't allow in-vitro fertilization (IVF). From my five-part debate with Vocab Malone about abortion in 2009:

Once the zygote becomes a blastocyst, it forms into an outer layer of cells, which later becomes the placenta, and an inner cell mass of pluripotent embryonic stem cells, each of which is capable of differentiating into any kind of human cell. Only after this stage does the blastocyst implant in the wall of the uterus, about a week after fertilization, and begin taking nutrients directly from the blood of the mother--a dependency that can itself be of moral significance, as Judith Jarvis Thomson's violinist argument shows. As already mentioned above, a great many fertilized ova do not reach this stage. Further, the percentages of implant failure are higher for in vitro fertilization (IVF), a procedure which Vocab's criteria would have to declare unethical, even though it is the only way that many couples can have their own biological offspring.

I made the same point earlier in a comment on a podcast interview with atheist anti-abortion advocate Jen Roth (comments are no longer present but I reiterated it in response to Malone):

Was Jen Roth ultimately arguing that personhood is something that a human organism has for its entire lifecycle? At what starting point? Conception, implantation, or something else?

I find it completely implausible that an organism at a life stage with no capacity for perception, let alone reason, counts as a person. Nor that a particular genetic code is either necessary or sufficient for personhood.

I think every point that she made was brought up in a debate I had with a Christian blogger on the topic of abortion, who similarly argued for an equation between personhood and human organism. I wonder if she has any better rejoinders. Does she think that IVF and therapeutic cloning are immoral? IUDs?

The naive anti-abortion position is philosophically and scientifically unsupportable and leads to bad public policy, and today's GOP consists of a majority struggling to avoid it and a minority that is full-steam ahead and prepared to ban IVF and contraception.

The full debate between Vocab Malone and myself was spread across our respective blogs.  My contributions were:

Vocab Malone on abortion and personhood, part 1 (December 11, 2009)

Vocab Malone on abortion and personhood, part 2 (December 13, 2009)

Vocab Malone on abortion and personhood, part 3 (December 16, 2009)

Vocab Malone on abortion and personhood, part 4 (December 18, 2009)

Vocab Malone on abortion and personhood, part 5 (December 19, 2009)


And finally, perhaps most relevant to the current situation, there was this exchange from the following year:

Does Vocab Malone understand the implications of his own position? (November 15, 2010)

Vocab's response is that he does think IVF is immoral, except perhaps for some hypothetical version he doesn't fully describe--one that would involve adopting all the "snowflake babies" and removing excess embryos from multiple pregnancies for reimplantation in surrogates. (But that still doesn't address the implantation failure rate!)

Wednesday, September 28, 2011

Skeptics and Bayesian epistemology

A few prominent skeptics have been arguing that science and medicine should rely upon Bayesian epistemology.  Massimo Pigliucci, in his book Nonsense on Stilts, on the Rationally Speaking podcast, and in his column in the Skeptical Inquirer, has suggested that scientists would do best to proceed with a Bayesian approach to updating their beliefs.  Steven Novella and Kimball Atwood at the Science-Based Medicine blog (and at the Science-Based Medicine workshops at The Amazing Meeting) have similarly argued that what distinguishes Science-Based Medicine from Evidence-Based Medicine is the use of a Bayesian approach that accounts for the prior plausibility of theories, rather than simply relying upon the outcomes of randomized controlled trials to determine what's a reasonable medical treatment.  And, in the atheist community, Richard Carrier has argued for a Bayesian approach to history, and in particular for assessing claims of Christianity (though in the linked-to case, this turned out to be problematic and error-ridden).

It's worth observing that Bayesian epistemology has some serious unresolved problems, among them the problem of prior probabilities and the problem of treating new evidence as having a probability of 1 [in simple conditionalization].  The former problem is that the prior assessment of the probability of a hypothesis plays a huge role in whether the hypothesis ends up being accepted, and whether that prior probability is based on subjective probability, "gut feel," old evidence, or is arbitrarily set to 0.5 can produce different outcomes and doesn't necessarily lead to convergence even given substantial agreement about the evidence. So, for example, Stephen Unwin has argued using Bayes' theorem for the existence of God (starting with a prior probability of 0.5), and a lengthy debate between William Jefferys and York Dobyns in the Journal of Scientific Exploration about what a Bayesian approach yields regarding the reality of psi failed to produce agreement. The latter problem, of new evidence, is that simple conditionalization treats newly learned evidence as certain (probability 1), but evidence can itself be uncertain.
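
For concreteness, here is a minimal sketch in Python (my own illustration, with made-up numbers, not drawn from any of the authors discussed) of the two problems just mentioned: how strongly the posterior depends on the choice of prior given identical evidence, and Jeffrey conditionalization, one standard response to evidence that is itself uncertain.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Simple conditionalization: P(H|E) by Bayes' theorem."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# The same evidence (four times as likely if H is true as if it is false)
# yields very different posteriors depending on the prior:
for prior in (0.5, 0.1, 0.01):
    print(f"prior={prior:<5} posterior={posterior(prior, 0.8, 0.2):.3f}")
# prior=0.5   posterior=0.800
# prior=0.1   posterior=0.308
# prior=0.01  posterior=0.039

def jeffrey_update(p_h_given_e, p_h_given_not_e, new_p_e):
    """Jeffrey conditionalization: the evidence proposition E is only shifted
    to probability new_p_e rather than being learned with certainty."""
    return p_h_given_e * new_p_e + p_h_given_not_e * (1 - new_p_e)

# With prior P(H)=0.1 and the likelihoods above, P(H|E)=0.308 and P(H|~E)=0.027;
# if the observation only raises P(E) to 0.9, the updated P(H) is about 0.280,
# not the 0.308 that simple conditionalization would give.
print(jeffrey_update(0.308, 0.027, 0.9))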

And there are other problems as well--a Bayesian approach to epistemology seems to give special privilege to classical logic, to fail to properly account for old evidence [(or its reduction in probability due to new evidence)] or the introduction of new theories, and to be an unsuitable standard for judging rational belief change in human beings for the same reason that on-the-spot act utilitarian calculations aren't a proper standard for human moral decision making--it's not a method that is practically psychologically realizable.

The Bayesian approach has certainly been historically useful, as Desiree Schell's interview with Sharon Bertsch McGrayne, author of The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy, demonstrates.  But before concluding that Bayesianism is the objectively rational way for individuals or groups to determine what's true, it's worth taking a look at the problems philosophers have pointed out with making it the central thesis of epistemology.  (Also see John L. Pollock and Joseph Cruz, Contemporary Theories of Knowledge, 2nd edition, Rowman & Littlefield, 1999, which includes a critique of Bayesian epistemology.)

UPDATE (August 6, 2013): Just came across this paper by Brandon Fitelson (PDF) defending Bayesian epistemology against some of Pollock's critiques (in Pollock's Nomic Probability book, which I've read, and in his later Thinking About Acting, which I've not read).  A critique of how Bayesianism (and not really Bayesian epistemology in the sense defended by Fitelson) is being used by skeptics is here.

Saturday, November 20, 2010

What to think vs. how to think

While listening to a recent Token Skeptic podcast of a Dragon*Con panel on Skepticism and Education moderated by D.J. Grothe of the James Randi Educational Foundation, I was struck by his repeated references to Skepticism as a worldview (which I put in uppercase to distinguish it from skepticism as a set of methods of inquiry, an attitude or approach).  I wrote the following email to the podcast:
I am sufficiently irritated by D.J. Grothe's repeated reference to skepticism as a "worldview" that I will probably be motivated to write a blog post about it.
There is a growing ambiguity caused by overloading of the term "skepticism" on different things--attitudes, methods and processes, accumulated bodies of knowledge, a movement.  To date, there hasn't really been a capital-S Skepticism as a worldview since the Pyrrhonean philosophical variety.  A worldview is an all-encompassing view of the world which addresses how one should believe, how one should act, what kinds of things exist, and so forth.  It includes presuppositions not only about factual matters, but about values. 
The skepticisms worth promoting are attitudes, methods and processes, and accumulated bodies of knowledge that are consistent with a wide variety of world views.  The methods are contextual, applied against a background of social institutions and relationships that are based on trust.  There is room in the broader skeptical movement for pluralism, a diversity of approaches that set the skepticisms in different contexts for different purposes--educational, political, philosophical, religious.  An unrestricted skepticism is corrosive and undermines all knowledge, for there is no good epistemological response to philosophical skepticism that doesn't make some assumptions.
Trying to turn skepticism into a capital-S Skeptical worldview strikes me as misguided.
To my mind, what's most important and useful about skepticism is that it drives the adoption of the best available tools for answering questions, providing more guidance on how to think than on what to think, and on how to recognize trustworthy sources and people to rely upon.  There's not a completely sharp line between these--knowledge about methods and their accuracy is dependent upon factual knowledge, of course.

I think the recent exchanges about the Missouri Skepticon conference really being an atheist conference may partly have this issue behind them, though I think there are further issues there as well about the traditional scope of "scientific skepticism" being restricted to "testable claims" and the notion of methodological naturalism that I don't entirely agree with.  Skepticism is about critical thinking, inquiry, investigation, and using the best methods available to find reliable answers to questions (and promoting broader use of those tools), while atheism is about holding a particular position on a particular issue, that no gods exist.  The broader skeptical movement produces greater social benefits by promoting more critical thinking in the general public than does the narrower group of skeptical atheists who primarily argue against religion and especially the smaller subset who are so obsessed that they are immediately dismissed by the broader public as monomaniacal cranks.  The organized skeptical groups with decades of history have mainly taken pains to avoid being represented by or identified with the latter, and as a result have been represented by skeptics of a variety of religious views in events of lasting consequence. Think, for example, of the audience for Carl Sagan's "Cosmos" and his subsequent works, or of the outcome of the Kitzmiller v. Dover trial.

In my opinion, the distinction between skepticism and atheism is an important one, and I think Skepticon does blur and confuse that distinction by using the "skeptic" name and having a single focus on religion. This doesn't mean that most of the atheists participating in that conference don't qualify as skeptics, or even that atheist groups promoting rationality on religious subjects don't count as part of the broader skeptical movement.  It just means that there is a genuine distinction to be drawn.

(BTW, I don't think atheism is a worldview, either--it's a single feature of a worldview, and one that is less important to my mind than skepticism.)

Previous posts on related subjects:
"A few comments on the nature and scope of skepticism"
"Skepticism, belief revision, and science"
"Massimo Pigliucci on the scope of skeptical inquiry"

Also related, a 1999 letter to the editor of Skeptical Inquirer from the leaders of many local skeptical groups (Daniel Barnett, North Texas Skeptics, Dallas, TX; David Bloomberg, Rational Examination Association of Lincoln Land, Springfield, IL; Tim Holmes, Taiwan Skeptics, Tanzu, Taiwan; Peter Huston, Inquiring Skeptics of Upper New York, Schenectady, NY; Paul Jaffe, National Capitol Area Skeptics, Washington, D.C.; Eric Krieg, Philadelphia Association for Critical Thinking, Philadelphia, PA; Scott Lilienfeld, Georgia Skeptics, Atlanta, GA; Jim Lippard, Phoenix Skeptics and Tucson Skeptical Society, Tucson, AZ; Rebecca Long, Georgia Skeptics, Atlanta, GA; Lori Marino, Georgia Skeptics, Atlanta, GA; Rick Moen, Bay Area Skeptics, Menlo Park, CA; Steven Novella, New England Skeptical Society, New Haven, CT; Bela Scheiber, Rocky Mountain Skeptics, Denver, CO; and Michael Sofka, Inquiring Skeptics of Upper New York, Troy, NY).

UPDATE (December 1, 2010): D.J. Grothe states in the most recent (Nov. 26) Point of Inquiry podcast (Karen Stollznow interviews James Randi and D.J. Grothe), at about 36:50, that he has been misunderstood in his references to skepticism as a "worldview."  This suggests to me that he has in mind a narrower meaning, as Barbara Drescher has interpreted him in the comments below.  My apologies to D.J. for misconstruing his meaning.

Monday, November 15, 2010

Does Vocab Malone understand the implications of his own position?

Vocab Malone, with whom I had a blog debate about abortion and personhood last year, recently came across this comment of mine on the Point of Inquiry podcast with Jen Roth, an atheist who argues for the immorality of abortion:
Was Jen Roth ultimately arguing that personhood is something that a human organism has for its entire lifecycle? At what starting point? Conception, implantation, or something else?

I find it completely implausible that an organism at a life stage with no capacity for perception, let alone reason, counts as a person. Nor that a particular genetic code is either necessary or sufficient for personhood.

I think every point that she made was brought up in a debate I had with a Christian blogger on the topic of abortion, who similarly argued for an equation between personhood and human organism. I wonder if she has any better rejoinders. Does she think that IVF and therapeutic cloning are immoral? IUDs?
Vocab claimed that my argument was a "Chewbacca argument," a smoke screen, or a slippery slope argument, but in fact it is none of these.  I posted the following comment in response to him:
Vocab:
The argument I made is not a slippery slope argument, it's a reductio ad absurdum.  Your position is that the human organism is a person and has a right to life from fertilization to death (and presumably beyond), so you've already gone down the "slippery slope" and must of necessity say that IVF, therapeutic cloning, and IUDs are immoral because they result in the destruction and death of fertilized ova.  My position is that it is absurd to think that these things are immoral, and if you were to avoid the slippery slope by agreeing with me, you would have contradicted a logical consequence of your own position--thus, a reductio ad absurdum by being committed to a proposition and its negation.
A slippery slope argument is an argument that says your position is committed to some consequence because there is no criterion you can use to draw a line and avoid it.  For example, it would be a slippery slope argument if I argued that your position committed you to giving a right to life to all animals and so required you to be a vegetarian, or that it committed you to giving a right to life to every organism with DNA and so required you to hold, like the Jain religion, that all killing is wrong.
As it happens, you never did supply an account of just what it is about the human organism that gives it a right to life or personhood--you offered no constitutive account of what properties entail a right to life or personhood, other than a genetic one.  I made the case near the end of our debate that you are probably implicitly assuming that personhood comes from a soul, and that souls are connected to human organisms at the point of fertilization, but there's clearly no evidence for that position, scientific, philosophical, or theological.
BTW, my argument is also clearly not a Chewbacca argument or smoke screen, which is a simple non sequitur.  To think that, you would have to fail to understand that the items I identified all result in the destruction of fertilized human ova.
It's important to note that not all slippery slope arguments are fallacious--if there really is no criterion to stop the fall down the slope, the argument is valid.  As Vocab never did explain what it is about human organisms that makes them rights-bearers, I think he does face the slippery slope argument I presented unless he can offer some criterion for distinguishing human organisms from other organisms with respect to having a right to life.

Saturday, June 05, 2010

Abe Heward's new blog on software testing

Veteran software tester Abe Heward has started up a blog on software testing, which I'm sure will also include many items of epistemological, economic, and skeptical interest.  He's already got posts on how the post hoc ergo propter hoc fallacy is relevant to software testing, why good testers aren't robots (and the flaws in one company's attempt to treat them as if they were), and on opportunity cost and testing automation.

Check it out at www.abeheward.com.

Saturday, May 22, 2010

Martin Gardner, RIP

The prominent skeptic Martin Gardner, mathematician, philosopher, magician, and writer, died today at the age of 95 (b. October 21, 1914, d. May 22, 2010).  He was one of the founders of the Committee for the Scientific Investigation of Claims of the Paranormal (now Committee for Skeptical Inquiry), and had been part of the earlier Resources for the Scientific Evaluation of the Paranormal along with CSICOP founding members Ray Hyman, James Randi, and Marcello Truzzi.  Long before that, he wrote one of the classic texts debunking pseudoscience, Fads and Fallacies in the Name of Science (the Dover 2nd edition was published in 1957).  For many years (1956-1981) he was the author of the Scientific American column, "Mathematical Games" (taken over by Douglas Hofstadter and retitled "Metamagical Themas"), and he wrote a regular "Notes of a Psi-Watcher" column for the Skeptical Inquirer right up to the present.  His 70+ books included a semi-autobiographical novel, The Flight of Peter Fromm, a book explaining his philosophical positions including why he wasn't an atheist, The Whys of a Philosophical Scrivener, and an annotated version of Lewis Carroll's Alice in Wonderland works, The Annotated Alice.

He had been scheduled to appear by video link at the upcoming The Amazing Meeting 8 in Las Vegas, where a number of other skeptical old timers will be appearing on discussion panels.  His death is a great loss.

I never met Gardner, but was first introduced to his work reading his "Mathematical Games" column in the late 70's, and then his Fads and Fallacies and Skeptical Inquirer columns.  Gardner, Isaac Asimov, Carl Sagan, and James Randi were the first major figures I identified as skeptical role models.  One of the great honors of my life was receiving the Martin Gardner Award for Best Skeptical Critic from the Skeptics Society in 1996.

A Martin Gardner documentary that is part of "The Nature of Things" may be found online, and Scientific American has republished online its December 1995 profile of Gardner.  Here's a transcript of a February 1979 telephone interview between Martin Gardner and five mathematicians (thanks to Anthony Barcellos for transcribing it and bringing it to my attention in the comments below).

Various tributes:
UPDATE (June 11, 2011): An interesting chapter on Martin Gardner from George Hansen's book, The Trickster and the Paranormal, is available online as a PDF.

Thursday, May 06, 2010

Chinese astronomy and scientific anti-realism

On the last day of my class on Scientific Revolutions and the law, one of the students in the class, Lijing Jiang, gave a presentation titled "To Consider the Heavens: The Incorporation of Jesuit Astronomy in the Seventeenth Century Chinese Court."

Her presentation was about how Jesuit missionaries in China brought western astronomy with them, and how it was received.  This added a very interesting complement to the course, as much of the early part of the semester was about the Copernican revolution (using Kuhn's book of the same name).  Part of what happened early on in astronomy was a division between cosmology and positional astronomy, with the former being about the actual nature of the heavens, and the latter being about creating mathematical models for prediction, to be used for navigation and calendar-setting that incorporated features not intended to represent reality (like epicycles).  These two types of astronomy didn't really get reconnected (aside from the occasional realist depiction of epicycles in crystalline spheres) until Galileo argued for a realist interpretation of the Copernican model.  And that didn't fully catch on until Newton.

In China, calendar reform was very important as they used a combination of a lunar month (based on phases of the moon) and tropical year that had to be synchronized annually, and an unpredicted eclipse was considered to be a bad omen.  The Chinese had gone through many calendar reforms as a result of these requirements, and they considered that theories needed to be revised about every 300 years (in other realms as well, not just astronomy).
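
As a rough aside (my own arithmetic, using standard astronomical values rather than figures from the presentation), the scale of the synchronization problem is easy to see:

# Rough arithmetic with standard astronomical values (mine, not from the post),
# showing why a calendar built from lunar months and a tropical year needs
# constant correction:
synodic_month = 29.530589    # days from new moon to new moon
tropical_year = 365.242190   # days for the seasons to repeat

print(12 * synodic_month)                   # ~354.4 days: twelve lunar months
print(tropical_year - 12 * synodic_month)   # ~10.9 days of drift every year
print(13 * synodic_month)                   # ~383.9 days: thirteen months overshoot
# So an extra (intercalary) month has to be inserted roughly every two to three
# years to keep the months aligned with the seasons.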

The Jesuits happened to bring Copernican astronomy to China in the late 16th/early 17th century, with a goal of impressing and converting the Emperor.  They got their big chance to make a splash in 1610, when the Chinese court astronomers mispredicted a solar eclipse by one day--an eclipse the Jesuits had predicted correctly in advance.  But this turned out in a way to be poorly timed, as the Counter-Reformation decided to start cracking down on Copernican heliocentrism after 1610, making it a formal doctrinal issue in 1616.  The Jesuits in China thus switched to the Tychonic system, which was geometrically equivalent to the Copernican model but geocentric.

Multiple factors persuaded the Chinese to maintain a relativistic, anti-realist understanding of positional astronomy beyond the Scientific Revolution.  In addition to Taoist and Buddhist views of life involving constant change and their past experience with calendars suggesting revisions every 300 years, the Jesuits presented another example of apparent arbitrariness in cosmological model selection, and they continued to stick with the Tychonic model as the western world switched to heliocentrism.

You can read Lijing Jiang's blogging at Science in a Mirror, where she may post something about her presentation in the future.

Thursday, April 22, 2010

Haven't we already been nonmodern?

Being modern, argues Bruno Latour in We Have Never Been Modern (1993, Harvard Univ. Press), involves drawing a sharp distinction between “nature” and “culture,” through a process of “purification” that separates everything into one or the other of these categories. It also involves breaking with the past: “Modernization consists in continually exiting from an obscure age that mingled the needs of society with scientific truth, in order to enter into a new age that will finally distinguish clearly what belongs to atemporal nature and what comes from humans, what depends on things and what belongs to signs” (p. 71).

But hold on a moment--who actually advocates that kind of a sharp division between nature and culture, without acknowledging that human beings and their cultures are themselves a part of the natural order of things? As the 1991 Love and Rockets song, “No New Tale to Tell,” said: “You cannot go against nature / because when you do / go against nature / it’s part of nature, too.” Trying to divide the contents of the universe into a sharp dichotomy often yields a fuzzy edge, if not outright paradox. While Latour is right to object to such a sharp distinction (or separation) and to argue for a recognition that much of the world consists of “hybrids” that include natural and cultural aspects (true of both material objects and ideas), I’m not convinced that he’s correctly diagnosed a genuine malady when he writes that “Moderns ... refuse to conceptualize quasi-objects as such. In their eyes, hybrids present the horror that must be avoided at all costs by a ceaseless, even maniacal purification” (p. 112).

Latour writes that anthropologists do not study modern cultures in the manner that they study premodern cultures. For premoderns, an ethnographer will generate “a single narrative that weaves together the way people regard the heavens and their ancestors, the way they build houses and the way they grow yams or manioc or rice, the way they construct their government and their cosmology,” but that this is not done for modern societies because “our fabric is no longer seamless” (p. 7). True, but the real problem for such ethnography is not that we don’t have such a unified picture of the world (and we don’t) but that we have massive complexity and specialization--a complexity which Latour implicitly recognizes (pp. 100-101) but doesn’t draw out as a reason.

The argument that Latour makes in the book builds upon this initial division of nature and culture by the process of “purification” with a second division between “works of purification” and “works of translation,” “translation” being a four-step process of his advocated framework of actor-network theory that he actually doesn’t discuss much in this book. He proposes that the “modern constitution” contains “works of translation”--networks of hybrid quasi-objects--as a hidden and unrecognized layer that needs to be made explicit in order to be “nonmodern” (p. 138) or “amodern” (p. 90) and avoid the paradoxes of modernity (or other problems of anti-modernity, pre-modernity, and post-modernity).

His attempt to draw the big picture is interesting and often frustrating, as when he makes unargued-for claims that appear to be false, e.g., “as concepts, ‘local’ and ‘global’ work well for surfaces and geometry, but very badly for networks and topology” (p. 119); “the West may believe that universal gravitation is universal even in the absence of any instrument, any calculation, any decoding, any laboratory ... but these are respectable beliefs that comparative anthropology is no longer obliged to share” (p. 120; also p. 24); speaking of “time” being reversible where he apparently means “change” or perhaps “progress” (p. 73); his putting “universality” and “rationality” on a list of values of moderns to be rejected (p. 135). I’m not sure how it makes sense to deny the possibility of universal generalizations while putting forth a proposed framework for the understanding of everything.

My favorite parts of the book were his recounting of Steven Shapin and Simon Schaffer’s Leviathan and the Air Pump (pp. 15-29) and his critique of that project, and his summary of objections to postmodernism (p. 90). Latour is correct, I think, in his critique that those who try to explain the results of science solely in terms of social factors are making a mistake that privileges “social” over “natural” in the same way that attempting to explain them without any regard to social factors privileges “natural” over “social.” He writes to the postmodernists (p. 90):

“Are you not fed up at finding yourselves forever locked into language alone, or imprisoned in social representations alone, as so many social scientists would like you to be? We want to gain access to things themselves, not only their phenomena. The real is not remote; rather, it is accessible in all the objects mobilized throughout the world. Doesn’t external reality abound right here among us?”

In a commentary on this post, Gretchen G. observed that we do regularly engage in the process of "purification" about our concepts and attitudes towards propositions in order to make day-to-day decisions--and I think she's right.  We do regard things as scientific or not scientific, plausible or not plausible, true or false, even while we recognize that there may be fuzzy edges and indeterminate cases.  And we tend not to like the fuzzy cases, and to want to put them into one category or the other.  In some cases, this may be merely an epistemological problem of our human (and Humean) predicament where there is a fact of the matter; in others, our very categories may themselves be fuzzy and not fit reality ("carve nature at its joints").

[A slightly different version of the above was written for my Human and Social Dimensions of Science and Technology core seminar. Thanks to Gretchen G. for her comments.  An entertaining critique of Latour's earlier book Science in Action is Olga Amsterdamska's "Surely You're Joking, Monsieur Latour!", Science, Technology, and Human Values vol. 15, no. 4 (1990): 495-504.]

Tuesday, April 20, 2010

Translating local knowledge into state-legible science

James Scott’s Seeing Like a State (about which I've blogged previously) talks about how the state imposes standards in order to make features legible, countable, regulatable, and taxable. J. Stephen Lansing’s Perfect Order: Recognizing Complexity in Bali describes a case where the reverse happened. When Bali tried to impose a top-down system of scientifically designed order--a system of water management--on Balinese rice farmers, in the name of modernization in the early 1970s, the result was a brief increase in productivity followed by disaster. Rather than lead to more efficient use of water and continued improved crop yields, it produced pest outbreaks which destroyed crops. An investment of $55 million in Romijn gates to control water flow in irrigation canals had the opposite of the intended effect. Farmers removed the gates or lifted them out of the water and left them to rust, upsetting the consultants and officials behind the project. Pesticides delivered to farmers resulted in brown leafhoppers becoming resistant to pesticides, and supplied fertilizers washed into the rivers and killed coral reefs at the mouths of the rivers.

Lansing was part of a team sponsored by the National Science Foundation in 1983 that evaluated the Balinese farmers’ traditional water management system to understand how it worked. The farmers of each village belong to subaks, or organizations that manage rice terraces and irrigation systems, which are referred to in Balinese writings going back at least a thousand years. Lansing notes that “Between them, the village and subak assemblies govern most aspects of a farmer’s social, economic, and spiritual life.”

Lansing’s team found that the Balinese system of water temples, religious ritual, and irrigation managed by the subaks would synchronize fallow periods of contiguous segments of terraces, so that long segments could be kept flooded after harvest, killing pests by depriving them of habitat. But their attempt and that of the farmers to persuade the government to allow the traditional system to continue fell upon deaf ears, and the modernization scheme continued to be pushed.

In 1987, Lansing worked with James Kremer to develop a computer model of the Balinese water temple system, and ran a simulation using historical rainfall data. This translation of the traditional system into scientific explanation showed that the traditional system was more effective than the modernized system, and government officials were persuaded to allow and encourage a return to the traditional system.
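
Lansing and Kremer's actual model isn't described here, but a toy sketch (my own drastic simplification, purely for illustration) conveys the basic idea: local imitation under pest and water constraints can produce coordinated cropping schedules without any central planner.

import random

# A toy sketch loosely inspired by the idea described above (my own
# simplification, not the actual Lansing-Kremer model): subaks along a river
# each choose a cropping schedule; pest damage is lower when neighbors share a
# schedule (synchronized fallowing starves the pests), while water stress rises
# as more subaks draw water on the same schedule.  Each season every subak
# imitates whichever of its neighbors (or itself) harvested the most.

N_SUBAKS, N_SCHEDULES, SEASONS = 40, 4, 50
schedules = [random.randrange(N_SCHEDULES) for _ in range(N_SUBAKS)]

def harvest(i, sched):
    """Yield for subak i under the current assignment of schedules."""
    neighbors = [sched[j] for j in (i - 1, i + 1) if 0 <= j < N_SUBAKS]
    pest_loss = sum(1 for s in neighbors if s != sched[i])   # unsynchronized neighbors
    water_loss = sched.count(sched[i]) / N_SUBAKS            # crowding on shared water
    return 10 - 3 * pest_loss - 5 * water_loss

for _ in range(SEASONS):
    yields = [harvest(i, schedules) for i in range(N_SUBAKS)]
    new_schedules = []
    for i in range(N_SUBAKS):
        candidates = [j for j in (i - 1, i, i + 1) if 0 <= j < N_SUBAKS]
        best = max(candidates, key=lambda j: yields[j])
        new_schedules.append(schedules[best])
    schedules = new_schedules

print(schedules)  # typically settles into contiguous synchronized blocks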

The Balinese system of farming is an example of how local knowledge can develop and become embedded in a “premodern” society by mechanisms other than conscious and intentional scientific investigation (in this case, probably more like a form of evolution), and be invisible to the state until it is specifically studied. It’s also a case where the religious aspects of the traditional system may have contributed to its dismissal by the modern experts.

What I find of particular interest here is to what extent the local knowledge was simply embedded into the practices, and not known by any of the participants--were they just doing what they've "always" done (with practices that have evolved over the last 1,000 years), in a circumstance where the system as a whole "knows," but no individual had an understanding until Lansing and Kremer built and tested a model of what they were doing?

[A slightly different version of the above was written for my Human and Social Dimensions of Science and Technology core seminar. Thanks to Brenda T. for her comments.  More on Lansing's work in Bali may be found online here.]

Tuesday, April 06, 2010

Against "coloring book" history of science

It's a bad misconception about evolution that it proceeds in a linear progression of one successively evolving species after another displacing its immediate ancestors.  Such a conception of human history is equally mistaken, and is often criticized with terms such as "Whiggish history" or "determinism" with a variety of adjectives (technological, social, cultural, historical).  That includes the history of science, where the first version we often hear is one that has been rationally reconstructed by looking back at the successes and putting them into a linear narrative.  Oh, there are usually a few errors thrown in, but they're typically fit into the linear narrative as challenges that are overcome by the improvement of theories.

The reality is a lot messier, and getting into the details makes it clear that not only is a Whiggish history of science mistaken, but that science doesn't proceed through the algorithmic application of "the scientific method," and in fact that there is no such thing as "the scientific method."  Rather, there is a diverse set of methods that are themselves evolving in various ways, and sometimes not only do methods which are fully endorsed as rational and scientific produce erroneous results, sometimes methods which have no such endorsement and are even demonstrably irrational fortuitously produce correct results.  For example, Johannes Kepler was a neo-pythagorean number mystic who correctly produced his second law of planetary motion by taking an incorrect version of the law based on his intuitions and deriving the correct version from it by way of a mathematical argument that contained an error.  Although he fortuitously got the right answer and receives credit for devising it, he was not justified in believing it to be true on the basis of his erroneous proof.  With his first law, by contrast, he followed an almost perfectly textbook version of the hypothetico-deductive model of scientific method of formulating hypotheses and testing them against Tycho Brahe's data.

The history of the scientific revolution includes numerous instances of new developments occurring piecemeal, with many prior erroneous notions being retained.  Copernicus retained not only perfectly circular orbits and celestial spheres, but still needed to add epicycles to get his theory anywhere close to the predictive accuracy of the Ptolemaic models in use.  Galileo insisted on retaining perfect circles and on treating circular motion as natural motion, refusing to consider Kepler's elliptical orbits.  There seems to be a good case for "path dependence" in science.  Even the most revolutionary changes are actually building on bits and pieces that have come before--and sometimes rediscovering work that had already been done before, like Galileo's derivation of the uniform acceleration of falling bodies, which had already been worked out by Nicole Oresme and the Oxford calculators.  And the social and cultural environment--not just the scientific history--has an effect on what kinds of hypotheses are considered and accepted.

This conservativity of scientific change is a double-edged sword.  On the one hand, it suggests that we're not likely to see claims that purport to radically overthrow existing theory (that "everything we know is wrong") succeed--even if they happen to be correct.  And given that there are many more ways to go wrong than to go right, such radical revisions are very likely not to be correct.  Even where new theories are correct in some of their more radical claims (e.g., like Copernicus' heliocentric model, or Wegener's continental drift), it often requires other pieces to fall into place before they become accepted (and before it becomes rational to accept them).  On the other hand, this also means that we're likely to be blinded to new possibilities by what we already accept that seems to work well enough, even though it may be an inaccurate description of the world that is merely predictively successful.  "Consensus science" at any given time probably includes lots of claims that aren't true.

My inference from this is that we need both visionaries and skeptics, and a division of cognitive labor that's largely conservative, but with tolerance for diversity and a few radicals generating the crazy hypotheses that may turn out to be true.  The critique of evidence-based medicine made by Kimball Atwood and Steven Novella--that it fails to consider prior plausibility of hypotheses to be tested--is a good one that recognizes the unlikelihood of radical hypotheses to be correct, and thus that huge amounts of money shouldn't be spent to generate and test them.  (Their point is actually stronger than that, since most of the "radical hypotheses" in question are not really radical or novel, but are based on already discredited views of how the world works.)  But that critique shouldn't be taken to exclude anyone from engaging in the generation and test of hypotheses that don't appear to have a plausible mechanism, because there is ample precedent for new phenomena being discovered before the mechanisms that explain them.

I think there's a tendency among skeptics to talk about science as though it's a unified discipline, with a singular methodology, that makes continuous progress, and where the consensus at any moment is the most appropriate thing to believe.  The history of science suggests, on the other hand, that it's composed of multiple disciplines, with multiple methods, that proceeds in fits and starts, that has dead-ends, that sometimes rediscovers correct-but-ignored past discoveries, and is both fallible and influenced by cultural context.  At any given time, some theories are not only well-established but unified well with others across disciplines, while others don't fit comfortably with the rest, or may be idealized models that have predictive efficacy but seem unlikely to be accurate descriptions of reality in their details.  To insist on an overly rationalistic and ahistorical model is not just out-of-date history and philosophy of science, it's a "coloring book" oversimplification.  While that may be useful for introducing ideas about science to children, it's not something we should continue to hold to as adults.

Friday, April 02, 2010

Scientific autonomy, objectivity, and the value-free ideal

It has been argued by many that science, politics, and religion are distinct subjects that should be kept separate, in at least one direction if not both.  Stephen Jay Gould argued that science and religion have non-overlapping areas of authority (NOMA, or non-overlapping magisteria), with the former concerned about how questions and the latter with why questions, and that conflicts between them won’t occur if they stick to their own domain.  Between science and politics, most have little problem with science informing politics, but a big problem with political manipulation of science.  Failure to properly maintain the boundaries leads to junk science, politicized science, scientism, science wars, and other objectionable consequences.

Heather E. Douglas, in Science, Policy, and the Value-Free Ideal argues that notions of scientific autonomy and a scientific ideal of being isolated from questions of value (political or otherwise) are mistaken, and that this idea of science without regard to value questions (apart from epistemic virtues) is itself a contributing factor to such consequences.  She attributes blame for this value-free ideal of science to post-1940 philosophy of science, though the idea of scientific autonomy appears to me to have roots much further back, including in Galileo’s “Letter to Castelli” and "Letter to the Grand Duchess Christina" and John Tyndall’s 1874 Belfast Address, which were more concerned to argue that religion should not intrude into the domain of science rather than the reverse.  (As I noted in a previous post about Galileo, he did not carve out complete autonomy for natural philosophy from theology, only for those things which can be demonstrated or proven, which he argued that scripture could not contradict--and where it apparently does, scripture must be interpreted allegorically.)

Douglas describes a “topography of values” in the categories of cognitive, ethical, and social values, and distinguishes direct and indirect roles for them.  Within the “cognitive” category go values pertaining to our ability to understand evidence, such as simplicity, parsimony, fruitfulness, coherence, generality, and explanatory power, but excluding truth-linked epistemic virtues such as internal consistency and predictive competency or adequacy, which she identifies not as values but as minimal negative conditions that theories must necessarily meet.  Ethical values and social values are overlapping categories, the former concerned with what’s good or right and the latter with what a particular society values, such as “justice, privacy, freedom, social stability, or innovation” (Douglas, p. 92).  Her distinction between a direct and indirect role is that the former means that values can act directly as reasons for decisions, versus indirectly as a factor in decision-making where evidence is uncertain.

Douglas argues that values can legitimately play a direct role in certain phases of science, such as problem selection, selection of methodology, and in the policy-making arena, but should be restricted to an indirect role in phases such as data collection and analysis and drawing conclusions from evidence.  She identifies some exceptions, however--problem selection and method selection can’t legitimately be guided by values in a way that undermines the science by forcing a pre-determined conclusion (e.g., by selecting a method that is guaranteed to be misleading), and a direct role for ethical values can surface in later stages by discovering that research is causing harm.

Her picture of science is one where values cannot directly intrude between the collection of data and the inference of the facts from that data, but the space between evidence and fact claims is somewhat more complex than she describes.  There is the inference by a scientist of a fact from the evidence, the communication of that fact to other scientists, the publication of that fact in the scientific literature, and its communication to the general public and policy makers.  All but the first of these are not purely epistemic, but are also forms of conduct.  It seems to me that there is, in fact, a potential direct role for ethical values, at the very least, for each such type of conduct, in particular circumstances, which could merit withholding of the fact claim.  For example, a scientist in Nazi Germany could behave ethically by withholding information about how to build an atomic bomb.

Douglas argues that the motivation for the value-free ideal is as a mechanism for preserving scientific objectivity; she therefore gives an account of objectivity that comports with her account of science with values.  She identifies seven types of objectivity that are relevant in three different domains (plus one she rejects), all of which have to do with a shared ground for trust.  First, within the domain of human interactions with the world, are “manipulable objectivity,” or the ability to repeatably and reliably make interventions in nature that give the same result, and “convergent objectivity,” or having supporting evidence for a conclusion from multiple independent lines of evidence.  Second, in the realm of individual thought processes, she identifies “detached objectivity”--a scientific disinterest, freedom from bias, and eschewing the use of values in place of evidence.  There’s also “value-free objectivity,” the notion behind the value-free ideal, which she rejects.  And there’s “value-neutral objectivity,” or leaving personal views aside in, e.g., conducting a review of the literature in a field and identifying possible sets of explanations, or taking a "centrist" or "balanced" view of potentially relevant values.  Finally, in the domain of social processes, Douglas identifies “procedural objectivity,” where use of the same procedures produces the same results regardless of who engages in the procedure, and “intersubjectivity” in two senses--“concordant objectivity,” agreement in judgments between different people, and “interactive objectivity,” agreement as the result of argument and deliberation.

Douglas writes clearly and concisely, and makes a strong case for the significance of values within science as well as in its application to public policy.  Though she limits her discussion to natural science (and focuses on scientific discovery rather than fields of science that involve the production of new materials, an area where more direct use of values is likely appropriate), her account could likely be extended with the introduction of a bit more complexity.  While I don’t think she has identified all or even the primary causes of the “science wars,” which she discusses at the beginning of her book, I think her account is more useful in adjudicating the “sound science”/“junk science” debate that she also discusses, as well as identifying a number of ways in which science isn’t and shouldn’t be autonomous from other areas of society.

[A slightly different version of the above was written as a comment for my Human and Social Dimensions of Science and Technology core seminar. Thanks to Judd A. for his comments.]

Thursday, April 01, 2010

Galileo on the relation between science and religion

Galileo’s view of natural philosophy (science) is that it is the study of “the book of nature,” “written in mathematical language” (Finocchiaro 2008, p. 183), as contrasted with theology, the study of the book of Holy Scripture and revelation.  Galileo endorses the idea that theology is the “queen” of the “subordinate sciences” (Finocchiaro 2008, p. 124), by which he means not that theology trumps science in any and all matters.  He distinguishes two senses of theology being “preeminent and worthy of the title of queen”: (1) That “whatever is taught in all the other sciences is found explained and demonstrated in it [theology] by means of more excellent methods and of more sublime principles,” [Note added 12/14/2012: which he rejects] and (2) That theology deals with the most important issues, “the loftiest divine contemplations” about “the gaining of eternal bliss,” but “does not come down to the lower and humbler speculations of the inferior sciences ... it does not bother with them inasmuch as they are irrelevant to salvation” [Note added 12/14/2012: which he affirms] (quotations from Finocchiaro 2008, pp. 124-125).  Where Holy Scripture makes reference to facts about nature, they may be open to allegorical interpretation rather than literal interpretation, unless their literal truth is somehow necessary to the account of “the gaining of eternal bliss.”

Galileo further distinguishes two types of claims about science:  (1) “propositions about nature which are truly demonstrated” and (2) “others which are simply taught” (Finocchiaro 2008, p. 126).  The role of the theologian with regard to the former category is “to show that they are not contrary to Holy Scripture,” e.g., by providing an interpretation of Holy Scripture compatible with the proposition; with regard to the latter, if it contradicts Holy Scripture, it must be considered false and demonstrations of the same sought (Finocchiaro 2008, p. 126).  Presumably, if in the course of attempting to demonstrate that a proposition in the second category is false, it is instead demonstrated to be true, it then must be considered to be part of the former category.  Galileo’s discussion allows that theological condemnation of a physical proposition may be acceptable if it is shown not to be conclusively demonstrated (Finocchiaro 2008, p. 126), rather than a more stringent standard that it must be conclusively demonstrated to be false, which, given his own lack of conclusive evidence for heliocentrism, could be considered a loophole allowing him to be hoist with his own petard.

Galileo also distinguishes between what is apparent to experts vs. the layman (Finocchiaro 2008, p. 131), denying that popular consensus is a measure of truth, but regarding this distinction as what lies behind claims made in Holy Scripture about physical propositions that are not literally true.  With regard to the theological expertise of the Church Fathers, their consensus on a physical proposition is not sufficient to make it an article of faith unless such consensus is upon “conclusions which the Fathers discussed and inspected with great diligence and debated on both sides of the issue and for which they then all agreed to reject one side and hold the other” (Finocchiaro 2008, p. 133).  Or, in a contemporary (for Galileo) context, the theologians of the day could have a comparably weighted position on claims about nature if they “first hear the experiments, observations, reasons, and demonstrations of philosophers and astronomers on both sides of the question, and then they would be able to determine with certainty whatever divine inspiration will communicate to them” (Finocchiaro 2008, p. 135).

Galileo’s conception of science that leads him to take this position appears to be drawn from what Peter Dear (1990, p. 664), drawing upon Thomas Kuhn (1977), calls “the quantitative, ‘classical’ mathematical sciences” or the “mixed mathematical sciences,” identifying this as a predominantly Catholic conception of science, as contrasted with experimental science developed in Protestant England.  The former conception is one in which laws of nature can be recognized through idealized thought experiments based on limited (or no) actual observations, but demonstrated conclusively by means of rational argument.  This seems to be the general mode of Galileo’s work.  Dear argues that this notion of natural law allows for a conception of the “ordinary course of nature” which can be violated by an observed miraculous event, which comports with a Catholic view that miracles continue to occur in the world.

By contrast, the experimentalist views of Francis Bacon and Robert Boyle involve inductively inferring natural laws on the basis of observations, in which case observing something to occur makes it part of nature that must be accounted for in the generalized law--a view under which a miracle seems to be ruled out at the outset, which was not a problem for Protestants who considered the “age of miracles” to be over (Dear 1990, pp. 682-683).  Dear argues that for the British experimentalists, authentication of an experimental result was in some ways like the authentication of a miracle for the Catholics--requiring appropriately trustworthy observations--but that instead of verifying a violation of the “ordinary course of nature,” it verified what the “ordinary course of nature” itself was (Dear 1990, p. 680).  Where the Catholics like Galileo and Pascal derived conclusions about particulars from universal laws recognized by observation, reasoning, and mathematical demonstration, the Protestants like Bacon and Boyle constructed universal laws by inductive generalization from observations of particulars, and were notably critical of failing to perform a sufficient number of experiments before coming to conclusions (McMullin 1990, p. 821), and put forth standards for hypotheses and experimental method (McMullin 1990, p. 823; Shapin & Schaffer 1985, pp. 25ff & pp. 56-59).  The English experimentalist tradition, arising at a time of political and religious confusion after the English Civil War and the collapse of the English state church, was perhaps an attempt to establish an independent authority for science.  By the 19th century, there were explicit (and successful) attempts to separate science from religious authority and create a professionalized class of scientists (e.g., as Gieryn 1983, pp. 784-787 writes about John Tyndall).

The English experimentalists followed the medieval scholastics (Pasnau, forthcoming) in adopting a notion of “moral certainty” for “the highest degree of probabilistic assurance” for conclusions adopted from experiments (Shapin 1994, pp. 208-209).  This falls short of the Aristotelian conception of knowledge, yet is stronger than mere opinion.  They also placed importance on public demonstration in front of appropriately knowledgeable witnesses--with both the credibility of experimenter and witness being relevant to the credibility of the result.  Where on Galileo’s conception expertise appears to be primarily a function of possessing rational faculties and knowledge, on the experimentalist account there is importance to skill in application of method and to the moral trustworthiness of the participants as a factor in vouching for the observational results.  In the Galilean approach, trustworthiness appears to be less relevant as a consequence of actual observation being less relevant--though Galileo does, from time to time, make remarks about observations refuting Aristotle, e.g., in “Two New Sciences” where he criticizes Aristotle’s claims about falling bodies (Finocchiaro 2008, pp. 301, 303).

The classic Aristotelian picture of science is similar to the Galilean approach, in that observation and data collection is done for the purpose of recognizing first principles and deriving demonstrations by reason from those first principles.  What constitutes knowledge is what can be known conclusively from such first principles and what is derived by necessary connection from them; whatever doesn’t meet that standard is mere opinion (Posterior Analytics, Book I, Ch. 33; McKeon 1941, p. 156).  The Aristotelian picture doesn’t include any particular deference to theology; any discipline could potentially yield knowledge so long as there were recognizable first principles. The role of observation isn’t to come up with fallible inductive generalizations, but to recognize identifiable universal and necessary features from their particular instantiations (Lennox 2006).  This discussion is all about theoretical knowledge (episteme) rather than practical knowledge (techne), the latter of which is about contingent facts about everyday things that can change.  Richard Parry (2007) points out an apparent tension in Aristotle between knowledge of mathematics and knowledge of the natural world on account of his statement that “the minute accuracy of mathematics is not to be demanded in all cases, but only in the case of things which have no matter.  Hence its method is not that of natural science; for presumably the whole of nature has matter” (Metaphysics, Book II, Ch. 3, McKeon 1941, p. 715).

The Galilean picture differs from the Aristotelian in its greater use of mathematics (geometry)--McMullin writes that Galileo had “a mathematicism ... more radical than Plato’s” (1990, pp. 822-823) and by its inclusion of the second book, that of revelation and Holy Scripture, as a source of knowledge.  But while the second book is one which can trump mere opinion--anything that isn’t conclusively demonstrated and thus fails to meet Aristotle’s understanding of knowledge--it must be held compatible with anything that does meet those standards.

References
  • Peter Dear (1990) “Miracles, Experiments, and the Ordinary Course of Nature,” ISIS 81:663-683.
  • Maurice A. Finocchiaro, editor/translator (2008) The Essential Galileo.  Indianapolis: Hackett Publishing Company.
  • Thomas Gieryn (1983) “Boundary Work and the Demarcation of Science from Non-Science: Strains and Interests in Professional Ideologies of Scientists,” American Sociological Review 48(6, December):781-795.
  • Thomas Kuhn (1957) The Copernican Revolution: Planetary Astronomy in the Development of Western Thought.  Cambridge, Mass.: Harvard University Press.
  • Thomas Kuhn (1977) The Essential Tension.  Chicago: The University of Chicago Press.
  • James Lennox (2006) “Aristotle’s Biology,” Stanford Encyclopedia of Philosophy, online at http://plato.stanford.edu/entries/aristotle-biology/, accessed March 18, 2010.
  • Richard McKeon (1941) The Basic Works of Aristotle. New York: Random House.
  • Ernan McMullin (1990) “The Development of Philosophy of Science 1600-1900,” in Olby et al. (1990), pp. 816-837.
  • R.C. Olby, G.N. Cantor, J.R.R. Christie, and M.J.S. Hodge (1990) Companion to the History of Modern Science.  London: Routledge.
  • Richard Parry (2007) “Episteme and Techne,” Stanford Encyclopedia of Philosophy, online at http://plato.stanford.edu/entries/episteme-techne/, accessed March 18, 2010.
  • Robert Pasnau (forthcoming) “Medieval Social Epistemology: Scientia for Mere Mortals,” Episteme, forthcoming special issue on history of social epistemology.  Online at http://philpapers.org/rec/PASMSE, accessed March 18, 2010.
  • Steven Shapin and Simon Schaffer (1985) Leviathan and the Air Pump: Hobbes, Boyle, and the Experimental Life.  Princeton, N.J.: Princeton University Press.
  • Steven Shapin (1994) A Social History of Truth: Civility and Science in Seventeenth-Century England. Chicago: The University of Chicago Press.
[The above is slightly modified from one of my answers on a midterm exam.  My professor observed that another consideration on the difference between Catholic and Protestant natural philosophers is that theological voluntarism, more prevalent among Protestants, can suggest that laws of nature are opaque to human beings except through inductive experience.  NOTE ADDED 13 April 2010: After reading a couple of chapters of Margaret Osler's Divine Will and the Mechanical Philosophy: Gassendi and Descartes on Contingency and Necessity in the Created World (2005, Cambridge University Press), I'd add Pierre Gassendi to the experimentalist/inductivist side of the ledger, despite his being a Catholic--he was a theological voluntarist.]

Thursday, March 11, 2010

Representation, realism, and relativism

The popular view of the “science wars” of the 1990s is that it involved scientists and philosophers criticizing social scientists for making and accepting absurd claims as a result of an extreme relativistic view about scientific knowledge. Such claims included assertions that “the natural world in no way constrains what is believed to be,” that “the natural world has a small or nonexistent role in the construction of scientific knowledge,” and that “the natural world must be treated as though it did not affect our perception of it” (all due to Harry Collins, quoted in Yves Gingras’ scathingly critical review (PDF) of his book Gravity’s Shadow: The Search for Gravitational Waves). Another example was Bruno Latour’s claim that it was impossible for Ramses II to have died of tuberculosis because the tuberculosis bacillus was not discovered until 1882. This critical popular view is right as far as it goes--those claims are absurd--but the popular view of science also tends toward an overly rationalistic and naively realistic conception of scientific knowledge that fails to account for social factors that influence science as actually practiced by scientists and scientific institutions. The natural world and our social context both play a role in the production of scientific knowledge.

Mark B. Brown’s Science in Democracy: Expertise, Institutions, and Representation tries to steer a middle course between extremes, but periodically veers too far in the relativist direction. Early on, in a brief discussion of the idea of scientific representations corresponding to reality, he writes (p. 6): “Emphasizing the practical dimensions of science need not impugn the truth of scientific representations, as critics of science studies often assume ...” But he almost immediately seems to retract this when he writes that “science is not a mirror of nature” (p. 7) and, in one of several unreferenced and unargued-for claims appealing to science studies that occur in the book, that “constructivist science studies does undermine the standard image of science as an objective mirror of nature” (p. 16). Perhaps he merely means that scientific representations are imperfect and fallible, for he does periodically make further attempts to steer a middle course, such as when he quotes Latour: “Either they went on being relativists even about the settled parts of science--which made them look ridiculous; or they continued being realists even about the warm uncertain parts--and they made fools of themselves” (p. 183). It’s surely reasonable to take an instrumentalist approach to scientific theories that aren’t well established, are somewhat isolated from the rest of our knowledge, or are highly theoretical, but also to take a realist approach to theories that are well established with evidence from multiple domains and have remained stable while being regularly put to the test. The evidence that we have today for a heliocentric solar system, for common ancestry of species, and for the position and basic functions of organs in the human body is of such strength that it is unlikely that we will see that knowledge completely overthrown in a future scientific revolution. But Brown favorably quotes Latour: “Even the shape of humans, our very body, is composed to a great extent of sociotechnical negotiations and artifacts.” (p. 171) Our bodies are not “composed” of “sociotechnical negotiations and artifacts”--this is either a mistaken use of the word “composed” (instead of perhaps “the consequence of”) or a use-mention error (referring to “our very body” instead of our idea of our body).

In Ch. 6, in a section titled “Realism and Relativism” that begins with a reference to the “science wars,” he follows the pragmatist philosopher John Dewey in order to “help resolve some of the misunderstandings and disagreements among today’s science warriors” such as the perception that “STS scholars seem to endorse a radical form of relativism, according to which scientific accounts of reality are no more true than those of witchcraft, astrology, or common sense” (p. 156). Given that Brown has already followed Dewey’s understanding of scientific practice as continuous with common sense (pp. 151-152), it’s somewhat odd to see common sense grouped with witchcraft and astrology--though perhaps in this context it’s not meant as the sort of critical common sense Dewey described, but more like folk theories that are undermined or refuted by science.

Brown seems to endorse Dewey’s view that “reality is the world encountered through successful intervention” and favorably quotes philosopher Ian Hacking that “We shall count as real what we can use to intervene in the world to affect something else, or what the world can use to affect us” (pp. 156-157), but he subsequently drops the second half of Hacking’s statement when he writes “If science is understood in terms of the capacity to direct change, knowing cannot be conceived on the model of observation.” Such an understanding may capture experimental sciences, but not observational or historical sciences, an objection Brown attributes to Bertrand Russell, who “pointed out in his review of Dewey’s Logic that knowledge of a star could not be said to affect the star” (p. 158). Brown, however, follows Latour and maintains that “the work of representation ... always transforms what it represents” (p. 177). Brown defends this by engaging in a use-mention error, the failure to properly distinguish between the use of an expression and talking about the expression, when he writes that stars as objects of knowledge are newly created objects (p. 158, more below). Such an error is extremely easy to make when talking about social facts, where representations are themselves partly constitutive of the facts, such as in talk about knowledge or language.

Brown writes that “People today experience the star as known, differently than before ... The star as an object of knowledge is thus indeed a new object” (p. 158). But this is unnecessary given the second half of Hacking’s statement, since we can observe and measure stars--they have impact upon us. Brown does then talk about impact on us, but only by the representation, not the represented: “...this new object causes existential changes in the knower. With the advent of the star as a known object, people actually experience it differently. This knowledge should supplement and not displace whatever aesthetic or religious experiences people continue to have of the star, thus making their experiences richer and more fulfilling” (p. 158). There may certainly be augmented experience with additional knowledge, which may not change the perceptual component of the experience, but I wonder what Brown’s basis is for the normative claim that religious experiences in particular shouldn’t be displaced--if those religious experiences are based on claims that have been falsified, such as an Aristotelian conception of the universe, then why shouldn’t they be displaced? But perhaps here I’m making the use-mention error, and Brown doesn’t mean that religious interpretations shouldn’t be displaced, only that experiences labeled as “religious” shouldn’t be displaced.

A few other quibbles:

Brown writes that “all thought relies on language” (p. 56). If this is the case, then nonhuman animals that have no language cannot have thoughts. (My commenter suggested that all sentient beings have language, and even included plants in that category. I think the proposal that sentience requires language is at least plausible, though I wouldn’t put many nonhuman animals or any plants into that category--perhaps chimps, whales, and dolphins. Some sorts of “language” extend beyond that category, such as the dance of honeybees that seems to code distance and direction information, but I interpreted Brown’s claim to refer to human language with syntax, semantics, generative capacity, etc., and to mean that one can’t have non-linguistic thoughts in the form of, say, pictorial imagery, without language. I.e., that even such thoughts require a “language of thought,” to use Jerry Fodor’s expression.)

Brown endorses Harry Collins’ idea of the “experimenter’s regress,” without noting that his evidence for the existence of such a phenomenon is disputed (Allan Franklin, “How to Avoid the Experimenters’ Regress,” Studies in History and Philosophy of Science 25(3, 1994): 463-491). (Franklin also discusses this in the entry on "Experiment in Physics" at the Stanford Encyclopedia of Philosophy.)

Brown contrasts Harry Collins and Robert Evans with Hobbes on the nature of expertise: the former see “expertise as a ‘real and substantive’ attribute of individuals,” while “For Hobbes, in contrast, what matters is whether the claims of reason are accepted by the relevant audience” (p. 116). Brown sides with Hobbes, but this is to make a mistake similar to the one Richard Rorty made in claiming that truth is what you can get away with--a claim that is false by its own definition, since philosophers didn’t let him get away with it. This definition doesn’t allow for the existence of a successful fake expert or con artist, yet we know from exposed examples that such persons exist. Under this definition, such persons were experts until they were unmasked.

Brown’s application of Hobbes’ views on political representation to nature is less problematic when he discusses the political representation of environmental interests (pp. 128-131) than when he discusses scientific representations of nature (pp. 131-132). The whole discussion might have been clearer had it taken account of John Searle’s account of social facts (in The Construction of Social Reality).

Brown writes that “Just as recent work in science studies has shown that science is not made scientifically ...” (p. 140), without argument or reference.

He apparently endorses a version of Dewey’s distinction between public and private actions, with private actions being “those interactions that do not affect anyone beyond those engaged in the interaction; interactions that have consequences beyond those so engaged he calls public” (p. 141). This distinction is probably not tenable, since the indirect consequences of even actions we’d consider private can ultimately affect others--consider, for instance, a decision to have or not to have children.

On p. 159, Brown attributes the origin of the concept of evolution to “theories of culture, such as those of Vico and Comte” rather than to Darwin, but neither of them had a theory of evolution by natural selection comparable to Darwin’s innovation; concepts of evolutionary change go back at least to ancient Greek philosophy--to pre-Socratics such as Anaximander and Empedocles, and later to the Epicureans and Stoics. (Darwin didn't invent natural selection, either, but he was the first to put all the pieces together and recognize that evolution by natural selection could serve a productive as well as a conservative role.)

[A slightly different version of the above was written as a comment for my Human and Social Dimensions of Science and Technology core seminar. Thanks to Brenda T. for her comments. It should be noted that the above really doesn't address the main arguments of the book, which are about the meaning of political representation and representation in science, and an argument about proper democratic representation in science policy.]

Wednesday, February 24, 2010

Science as performance

The success of science in the public sphere is determined not just by the quality of research but by the ability to persuade. Stephen Hilgartner’s Science on Stage: Expert Advice as Public Drama uses a theatrical metaphor, drawing on the work of Erving Goffman, to explain the different outcomes of three successive reports on diet and nutrition issued by the National Academy of Sciences--one of which was widely criticized by scientists, one of which was criticized by food industry groups, and one of which was never published. The reports differed in “backstage” features such as how the committees coordinated their work and what sources they drew upon, in “onstage” features such as the composition of experts on their committees and how they communicated their results, and in how they responded to criticism.

The kinds of features and techniques that Hilgartner identifies as used to enhance perceptions of credibility--features of rhetoric and performance--are the sorts of features relied upon by con artists. If there is no way to distinguish such features as used by con artists from those used by genuine practitioners, if all purported experts are on equal footing and only the on-stage performances are visible, then we have a bit of a problem. All purported experts of comparable performing ability are on equal footing, and we may as well flip coins to distinguish between them. But part of a performance includes the propositional content of the performance--the arguments and evidence deployed--and these are evaluated not just on aesthetic grounds but with respect to logical coherence and compatibility with what the audience already knows. Further, the performance itself includes an interaction with the audience that strains the stage metaphor. Hilgartner describes this as members of the audience themselves taking the stage, yet audience members in his metaphor also interact with each other, individually and in groups, through complex webs of social relationships.

The problem of expert-layman interaction is that the layman in most cases lacks the interactional expertise even to communicate about the details of the evidence supporting a scientific position, and must rely upon other markers of credibility, which may be rhetorical flourishes. This is the problem of Plato’s “Charmides,” in which Socrates asserts that only a genuine doctor can distinguish a sufficiently persuasive quack from a genuine doctor. A similar position is endorsed by philosopher John Hardwig in his paper “Epistemic Dependence” (PDF), and by law professor Scott Brewer in “Scientific Expert Testimony and Intellectual Due Process,” which points out that the problem faces judges and juries. There are some features which enable successful distinctions between genuine and fake experts, at least in the more extreme circumstances--examination of track records, credentials, and evaluations by other experts or meta-experts (e.g., experts in methods used across multiple domains, such as logic and mathematics). Brewer enumerates four strategies of nonexperts in evaluating expert claims: (1) “substantive second-guessing,” (2) “using general canons of rational evidentiary support,” (3) “evaluating demeanor,” and (4) “evaluating credentials.” Of these, only (3) is an examination of merely the surface appearances of the performance (which is not to say that it can’t be a reliable, though fallible, mechanism). But when the evaluation is directed not at distinguishing genuine expert from fake, but at adjudicating conflicting claims between two genuine experts, the nonexpert may be stuck in a situation where none of these is effective and only time (if anything) will tell--yet in some domains, such as the legal arena, a decision may need to be reached much more quickly than a resolution might become available.

One novel suggestion for institutionalizing a form of expertise that fits into Hilgartner’s metaphor is philosopher Don Ihde’s proposal of “science critics”, in which individuals with at least interactional expertise within the domain they criticize serve a role similar to art and literary critics in evaluating a performance, including its content and not just its rhetorical flourishes.

[A slightly different version of the above was written as a comment for my Human and Social Dimensions of Science and Technology core seminar. The Hardwig and Brewer articles are both reprinted in Evan Selinger and Robert P. Crease, editors, The Philosophy of Expertise. NY: Columbia University Press, 2006, along with an excellent paper I didn't mention above, Alvin I. Goldman's "Experts: Which Ones Should You Trust?" (PDF). The term "interactional expertise" comes from Harry M. Collins and Robert Evans, "The Third Wave of Science Studies: Studies of Expertise and Experience," also reprinted in the Selinger & Crease volume; a case study of such expertise is in Steven Epstein's Impure Science: AIDS, Activism, and the Politics of Knowledge, Berkeley: University of California Press, 1996. Thanks to Tim K. for his comments on the above.]

Monday, February 22, 2010

Is knowledge drowning in a flood of information?

There have long been worries that the mass media are producing a “dumbing down” of American political culture, reducing political understanding to sound bites and spin. The Internet has been blamed for information overload and, like MTV in prior decades, for a reduction in attention span, as the text-based web became the multimedia web and cell phones became a more common means of accessing it. Similar worries have been expressed about public understanding of science. Nicholas Carr has asked the question, “Is Google Making Us Stupid?”

Yaron Ezrahi’s “Science and the political imagination in contemporary democracies” (a chapter in Sheila Jasanoff's States of Knowledge: The Co-Production of Science and Social Order) argues that the post-Enlightenment synthesis of scientific knowledge and politics in democratic societies is in decline, on the basis of a transition of public discourse into easily consumed, bite-sized chunks of vividly depicted information that he calls “outformation.” Prior to the Enlightenment, authority had more of a religious basis and the ideal for knowledge was “wisdom”--which Ezrahi sees as a mix of the “cognitive, moral, social, philosophical, and practical” that is privileged, unteachable, and a matter of faith. The Enlightenment brought systematized, scientific knowledge to the fore: knowledge that was formalized, objective, universal, impersonal, and teachable--with effort. When that scientific knowledge is made more widely usable, “stripped of its theoretical, formal, logical and mathematical layers” into a “thin knowledge” that is context-dependent and localized, it becomes “information.” And finally, when information is further stripped of its context and design for use for a particular purpose, yet augmented with “rich and frequently intense” representations that include “cognitive, emotional, aesthetic, and other dimensions of experience,” it becomes “outformation.”

According to Ezrahi, such “outformations” mix references to objective and subjective reality, and they become “shared references in the context of public discourse and action.” They are taken to be legitimated and authoritative despite lacking any necessary grounding in “observations, experiments, and logic.” He describes this shift as one from a high-cost political reality to a low-cost political reality, where “cost” refers to the effort demanded of the recipient to consume it, rather than to the consequences to the polity of its consumption and use as the basis for political participation. This shift, he says, “reflects the diminished propensity of contemporary publics to invest personal or group resources in understanding and shaping politics and the management of public affairs.”

But, I wonder, is this another case of reflecting on “good old days” that never existed? While new media have made new forms of communication possible, was there really a time when the general public was fully invested in “understanding and shaping politics” and not responding to simplifications and slogans? And is it really the case, as Ezrahi argues, that while information can be processed and reconstructed into knowledge, the same is not possible for outformations? Some of us do still read books, and for us, Google may not be “making us stupid,” but rather providing a supplement that allows us to quickly search a vast web of interconnected bits of information that can be assembled into knowledge, inspired by a piece of “outformation.”

[A slightly different version of the above was written as a comment on Ezrahi's article for my Human and Social Dimensions of Science and Technology core seminar. Although I wrote about new media, it is apparent that Ezrahi was writing primarily about television and radio, where "outformation" seems to be more prevalent than information. Thanks to Judd A. for his comments on the above.]

UPDATE (April 19, 2010): Part of the above is translated into Italian, with commentary from Ugo Bardi of the University of Florence, at his blog.

Saturday, February 20, 2010

Seeing like a slime mold

Land reforms instituted in Vietnam under French rule, in India under the British, and in rural czarist Russia introduced simplified rights of ownership and standardized measurements of size and shape that were primarily for the benefit of the state, e.g., for tax purposes. James C. Scott’s Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed gives these and numerous other examples of ways in which standardization and simplification have been used by the state to make legible and control resources (and people) within its borders. He recounts cases in which the imposition of such standardization fails or at least has unintended negative consequences, such as German scientific forestry’s introduction of a monoculture of Norway spruce or Scotch pine designed to maximize lumber production, which led to die-offs a century later. (The monoculture problem of reduced resilience and increased vulnerability has been recognized in an information security context as well, e.g., in Dan Geer et al.'s paper on the Microsoft monoculture that got him fired from @stake, and in his more recent work.)

Scott’s examples of state-imposed uniformity should not, however, be taken to imply that every case of uniformity is state-imposed, or that such regularities, even when state-imposed, lack underlying natural constraints. Formalized institutions of property registration and title have appeared in the crevices between states--for example, in the squatter community of Kowloon Walled City, which existed from 1947 to 1993 on a piece of the Kowloon peninsula that was claimed by both China and Britain yet governed by neither. While the institutions of Kowloon Walled City may have been patterned after those familiar to its residents from the outside world, they were imposed internally rather than by a state.

Patterns of highway network design present another apparent counterexample. Scott discusses how the highways around Paris were designed by the state to intentionally route traffic through the city, as well as to allow for military and law enforcement activity within it in order to put down insurrections. But motorway patterns in the UK appear to have a more organic structure, as a recent experiment with slime molds oddly confirmed. Two researchers at the University of the West of England constructed a map of the UK out of agar, putting clumps of oat flakes at the locations of the nine most populous cities. They then introduced a slime mold colony, and in many cases it extruded tendrils to feed on the oat flakes, creating patterns that aligned with the existing motorway design, with some variations. A similar experiment with a map of cities around Tokyo duplicated the Tokyo railway network, slime-mold style. The similarity between transportation networks and evolved biological systems for transporting blood and sap may simply reflect the fact that both are efficient and resilient solutions to similar problems.
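
To make the "efficient and resilient" comparison concrete, here is a minimal sketch--not the researchers' method, and using made-up coordinates and city labels--of the kind of benchmark such networks are often compared against: the minimum spanning tree, the cheapest network that connects all the cities, plus a couple of redundant links traded off for resilience. The coordinates, labels, and choice of the networkx library are illustrative assumptions only.

```python
# Sketch: "efficient" vs. "resilient" networks over hypothetical city positions.
import itertools
import math
import networkx as nx

# Hypothetical (x, y) coordinates in arbitrary units -- not real city locations.
cities = {
    "A": (0, 0), "B": (2, 1), "C": (3, 4),
    "D": (5, 2), "E": (6, 5), "F": (1, 5),
}

# Build a complete graph with Euclidean distances as edge weights.
G = nx.Graph()
for (u, pu), (v, pv) in itertools.combinations(cities.items(), 2):
    G.add_edge(u, v, weight=math.dist(pu, pv))

# Minimum spanning tree: the cheapest way to connect every city.
mst = nx.minimum_spanning_tree(G)

def total_length(graph):
    return sum(d["weight"] for _, _, d in graph.edges(data=True))

print("MST edges:", sorted(mst.edges()))
print("MST total length: %.2f" % total_length(mst))

# Add the two shortest non-tree edges: a little extra length buys
# alternate routes, i.e., resilience to the loss of any single link.
extras = sorted(
    (d["weight"], u, v)
    for u, v, d in G.edges(data=True)
    if not mst.has_edge(u, v)
)[:2]
resilient = mst.copy()
for w, u, v in extras:
    resilient.add_edge(u, v, weight=w)
print("With 2 redundant links: %.2f" % total_length(resilient))
```

The point of the sketch is only that connecting the same set of points can be done more cheaply (a tree) or more robustly (extra links), and both biological and engineered transport networks appear to negotiate that same trade-off.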

These examples, while not refuting Scott’s point about frequent failures in the top-down imposition of order, suggest that it may be possible for states to achieve success in certain projects by facilitating bottom-up development of ordered structures. The state often codifies an order that has already been developed by some other means--e.g., electrical standards that were set up by industry bodies before being written into law--and some orders, such as the IETF's standards for IP, lack the force of law yet are implemented globally. In other cases, states may ratify an emerging order by, e.g., preempting a diversity of state rules with a set that has been demonstrated to be successful, though that runs the risk of turning into a case like those Scott describes, if there are local reasons for the diversity.

[A slightly different version of the above was written as a comment on the first two chapters of Scott's book for my Human and Social Dimensions of Science and Technology core seminar. I've ordered a copy of the book since I found the first two chapters to be both lucidly written and extremely interesting. Thanks to Gretchen G. for her comments that I've used to improve (I hope) the above.]

UPDATE (April 25, 2010): Nature 407:470 (2000) features "Intelligence: Maze-solving by an amoeboid organism."

Wednesday, January 06, 2010

A few comments on the nature and scope of skepticism

Of late there has been a lot of debate about the nature, scope, and role of skepticism. Does skepticism imply atheism? Are "climate change skeptics" skeptics? Must skeptics defer to scientific consensus or experts? Should skepticism as a movement or skeptical organizations restrict themselves to paranormal claims, or avoid religious or political claims?

I think "skepticism" can refer to multiple different things, and my answers to the above questions differ in some cases depending on how the term is being used. It can refer to philosophical skepticism, to scientific skepticism, to "skeptical inquiry," to "doubt" broadly speaking, to the "skeptical movement," to skeptical organizations, and to members of the class of people who identify themselves as skeptics.

My quick answers to the above questions, then, are:

Does skepticism imply atheism? No, regardless of which definition you choose. It is reasonable to argue that proper application of philosophical skepticism should lead to atheism, and to argue that scientific skepticism should include methodological naturalism, but I prefer to identify skepticism with a commitment to a methodology rather than with its outputs. That still involves a set of beliefs--which are themselves subject to reflection, criticism, and evaluation--but it is both a more minimal set than the outputs of skepticism and one that involves commitment to values as well as to what is scientifically testable. My main objection to defining skepticism by its outputs is that those outputs form a set of beliefs that can change over time with access to new and better information, and shouldn't be held dogmatically.

Are "climate change skeptics" skeptics? I would say that some are, and some aren't--some are outright "deniers" who are allowing ideology to trump science and failing to dig into the evidence. Others are digging into the evidence and just coming to (in my opinion) erroneous conclusions, but that doesn't preclude them from being skeptics so long as they're still willing to engage and look at contrary evidence, as well as admit to mistakes and errors when they make them--like relying on organizations and individuals who are demonstrably not reliable. As you'll see below, I agree we should to try to save the term "skeptic" from being equated with denial.

Must skeptics defer to scientific consensus or experts? I think skeptical organizations and their leaders should defer to experts on topics outside of their own fields of expertise on pragmatic and ethical grounds, but individual skeptics need not necessarily do so.

Should skepticism as a movement or skeptical organizations restrict themselves to paranormal claims, or avoid religious or political claims? I think skepticism as a movement, broadly speaking, is centered on organizations that promote scientific skepticism and focus on paranormal claims, but that also promote science and critical thinking, with some overlap into religious and public policy claims where the scientific evidence is relevant. At its fringes, though, the movement also includes some atheist and rationalist groups that take a broader view of skeptical inquiry. I think those central groups (like CSI, JREF, and the Skeptics Society) should keep their focus, but not as narrowly as Daniel Loxton suggests in his "Where Do We Go From Here?" (PDF) essay.

Here are a few of my comments, on these same topics, from other blogs.

Comment on Michael De Dora, "Why Skeptics Should be Atheists," at the Gotham Skeptic blog:

Scientific skepticism (as opposed to philosophical skepticism) no more necessitates atheism than it does amoralism. Your argument would seem to suggest that skeptics shouldn’t hold any positions that can’t be established by empirical science, which would seem to limit skeptics to descriptive, rather than normative, positions on morality and basic (as opposed to instrumental) values.

“Skepticism” does have the sort of inherent ambiguity that “science” does, in that it can refer to process, product, or institution. I favor a methodological view of skepticism as a process, rather than defining it by its outputs. Organizations, however, seem to coalesce around sets of agreed-upon beliefs that are outputs of methodology, not just beliefs about appropriate/effective methodology; historically that set of agreed-upon beliefs has been that there is no good scientific support for paranormal and fringe science claims. As the scope of skeptical inquiry that skeptical organizations address has broadened, that leads to more conflict over issues in the sphere of politics and religion, where empirical science yields less conclusive results.

I’d rather see skeptical organizations share some basic epistemic and ethical values that are supportive of the use of science than a commitment to a set of beliefs about the outputs of skeptical methodology. The latter seems more likely to result in dogmatism.

Comment on Daniel Loxton, "What, If Anything, Can Skeptics Say About Science?" at SkepticBlog:

While I think the picture Daniel presents offers some good heuristics, I can't help but note that this is really proffered normative advice about the proper relationship between the layman and the expert, which is a question that is itself a subject of research in multiple domains of expertise including philosophy of science, science and technology studies, and the law. A picture much like the one argued for here is defended by some, such as philosopher John Hardwig ("Epistemic Dependence," Journal of Philosophy 82(1985):335-349), but criticized by others, such as philosopher Don Ihde ("Why Not Science Critics?", International Studies in Philosophy 29(1997):45-54). There are epistemological, ethical, and political issues regarding deference to experts that are sidestepped by the above discussion. Not only is there a possibility of meta-expertise about evaluating experts, there are cases of what Harry Collins and Robert Evans call "interactional expertise" ("The Third Wave of Science Studies: Studies of Expertise and Experience," Social Studies of Science 32:2(2002):235-296) where non-certified experts attain sufficient knowledge to interact at a deep level with certified experts, and challenge their practices and results (this is discussed in Evan Selinger and John Mix, "On Interactional Expertise: Pragmatic and Ontological Considerations," Phenomenology and the Cognitive Sciences 3:2(2004):145-163); Steven Epstein's book Impure Science: AIDS, Activism, and the Politics of Knowledge, 1996, Berkeley: Univ. of California Press, discusses how AIDS activists developed such expertise and successfully made changes to AIDS drug research and approval processes.

The above discussion also doesn't address context--are these proposed normative rules for skeptics in any circumstance, or only for those speaking on behalf of skeptical organizations? I don't think it's reasonable to suggest that skeptics, speaking for themselves, should be limited about questioning anything. The legal system is an example of a case where experts should be challenged and questioned--it's a responsibility of the judge, under both the Frye and Daubert rules, to make judgments about the relevance and admissibility of expert testimony, and of laymen on the jury to decide who is more credible. (This itself raises enormous issues, which are discussed at some length by philosopher and law professor Scott Brewer, "Scientific Expert Testimony and Intellectual Due Process," The Yale Law Journal vol. 107, 1535-1681.) Similar considerations apply to the realm of politics in a democratic society (cf. Ihde's article).

All of the papers I’ve cited are reprinted in the volume The Philosophy of Expertise, edited by Evan Selinger and Robert P. Crease, 2006, N.Y.: Columbia University Press.

Comment on jdc325's "The Trouble With Skeptics" at the Stuff And Nonsense blog:

@AndyD I'd say that it's possible for a skeptic to believe individual items on your list (though not the ones phrased like "the entirety of CAM"), so long as they do so because they have legitimately studied them in some depth and think that the weight of the scientific evidence supports them, or if they admit that it's something they buy into irrationally, perhaps for the entertainment it brings or to be part of a social group. If, however, they believe in a whole bunch of such things, that's probably evidence that they're not quite getting the point of critical thinking and skepticism somewhere. Being a skeptic doesn't mean that you're always correct (as per the above comment on Skeptic Fail #7), and I don't think it necessarily means you're always in accord with mainstream science, either.

Skeptic fail #6 is a pretty common one. For example, I don't think most skeptics have a sufficient knowledge of the parapsychology literature to offer a qualified opinion, as opposed to simply repeating the positions of some of the few skeptics (like Ray Hyman and Susan Blackmore) who do.

Comments (one and two) on "Open Thread #17" at Tamino's Open Mind blog:

Ray Ladbury: I think you're in a similar position to those who want to preserve "hacker" for those who aren't engaged in criminal activity. I understand and appreciate the sentiment, but I think "skeptic" already has (and, unlike "hacker," has actually always had) common currency in a much broader sense, as one who doubts, for whatever reason.

I also think that there are many skeptics involved in the organized and disorganized skeptical movement in the U.S. (the one started by CSICOP) who don’t meet your criteria of “sufficiently knowledgeable about the evidence and theory to render an educated opinion” even with respect to many paranormal and pseudoscience claims, let alone with respect to climate science. There’s an unfortunately large subset of “skeptics” in the CSICOP/JREF/Skeptics Society sense who are also climate change skeptics or deniers, as can be seen from the comments on James Randi’s brief-but-retracted semi-endorsement of the Oregon Petition Project at the JREF Swift Blog and on the posts about climate science at SkepticBlog.org.

Ray: You make a persuasive argument for attempting to preserve “skeptic.” Since I’ve just been defending against the colloquial misuse of “begs the question,” I think I can likewise endorse a defense of “skeptic” against “pseudoskeptic.” However, I think I will continue to be about as reserved in my use of “denier” as I am in my use of “liar.” I don’t make accusations of lying unless I have evidence not just that a person is uttering falsehoods, but that they’ve been presented with good evidence that they are uttering falsehoods, and continue to do so anyway.

On another subject, I’d love to see an equivalent of the Talk Origins Archive (http://www.talkorigins.org/), and in particular Mark Isaak’s “Index to Creationist Claims” (http://www.talkorigins.org/indexcc/list.html) for climate science (and its denial). Do they already exist?

Some previous posts at this blog on this subject may be found under the "skepticism" label, including:

"Massimo Pigliucci on the scope of skeptical inquiry"
(October 21, 2009)
"Skepticism, belief revision, and science" (October 21, 2009)

Also, back in 1993 I wrote a post to the sci.skeptic Usenet group that gave a somewhat oversimplified view of "the proper role of skeptical organizations" which was subsequently summarized in Michael Epstein's "The Skeptical Viewpoint," Journal of Scientific Exploration, vol. 7, no. 3, Fall 1993, pp. 311-315.

UPDATE (January 7, 2010): Skepdude has taken issue with a couple of points above, and offers his contrary arguments at his blog. First, he says that skeptics need to defer to scientific consensus with the "possible exception" of cases where "the person is also an expert on said field." I think that case is a definite, rather than a possible, exception, but would go farther--it's possible to be an expert (or even just a well-informed amateur) in a field that has direct bearing on premises or inferences used by experts in another field where one is not expert. That can give a foothold for challenging a consensus in a field where one is not expert. For example, philosophers, mathematicians, and statisticians can spot errors of conceptual confusion, fallacious reasoning, invalid inferences, mathematical errors, and misuse of statistics. It's possible for an entire field to have an erroneous consensus, such as the beliefs that rocks cannot fall from the sky or that continents cannot move. I suspect an argument can be made that an erroneous consensus is more likely to occur in a field with a high degree of specialization that doesn't have good input from generalists and related fields.

I also am uncomfortable with talk of "deference" to experts without scope or context, as it can be taken to imply the illegitimacy of questioning or demanding evidence and explanation in support of the consensus, which to my mind should always be legitimate.

The second point is one which Skepdude and I have gone back and forth on before, both at his blog (here, here, and here--I could have used these comments as well in the above post) and via Twitter: whether skepticism implies (or inevitably leads to) atheism. It's a position which I addressed above in my comments on Michael De Dora and on the "Stuff and Nonsense" blog, though he doesn't directly respond to those. He writes:
I fail to see the distinction between skepticism implying atheism and proper application of skepticism leading to atheism. I regard the two as saying the same thing, that skepticism, if consistently applied should lead to atheism. I am not sure what Jim means by philosophical skepticism, and maybe that’s where he draws the difference, but I refrain from using qualifiers in front of the word skepticism, be it philosophical or scientific. Skepticism is skepticism, we evaluate if a given claim is supported by the evidence.
There is most definitely a distinction between "skepticism implies atheism" and "proper application of skepticism leads to atheism." The former is a logical claim that says atheism is derivable from skepticism, or that it's necessarily the case that the use of skepticism (regardless of inputs?) yields atheism. The latter is a contingent claim that's dependent upon the inputs and the result of the inquiry. If skepticism is defined as a method, the former claim would mean in essence that the game is rigged to produce a particular result for an existence claim necessarily, which would seem to me to be a serious flaw in the method, unless you thought that atheism was logically necessary. But I'm not aware of any atheists who hold that, and I know that Skepdude doesn't, since he prefers to define atheism as mere lack of belief and has argued that there is no case to be made for positive atheism/strong atheism.

If we take skepticism defined as a product, as a set of output beliefs, there's the question of which output beliefs we use. Some idealized set of beliefs that would be output from the application of skeptical processes? If so, based on which set of inputs? In what historical context? The sets of inputs, the methods, and the outputs all have changed over time, and there is also disagreement about what counts as appropriately well-established inputs and the scope of the methods. The advocate of scientific skepticism is going to place more constraints on what is available as input to the process and the scope of what the process can deal with (in such a way that the process cannot be used even to fully evaluate reasons for being a skeptic, which likely involve values and commitments that are axiomatic or a priori). Methodological naturalism is likely to be part of the definition of the process, which means that theism cannot be an output belief--I think this is probably what Skepdude means when he says that atheism defined as a lack of belief is a product of skepticism. But note that the set of output beliefs from this process is a subset of what it is reasonable to believe, unless the advocate of this view wants to assert that the commitment to skepticism itself is not reasonable to believe--in virtue of the fact that it is not subject to a complete evaluation by the process. (As an aside, I think that it is possible for the process of skepticism thus defined to yield a conclusion of its own inadequacy to address certain questions, and in fact, that if we were to observe certain things, to yield the conclusion that methodological naturalism should be rejected.)

If we look at skepticism more broadly, where philosophical arguments more generally are acceptable as input or method, atheism (in the positive or strong form) then becomes a possible output. As an atheist, I think that use of the best available evidence and arguments and the best available methodology does lead to a conclusion of atheism (and 69.7% of philosophy faculty and Ph.D.s agree). That still doesn't mean that everyone's going to get there (as 69.3% of philosophy faculty and Ph.D.s specializing in philosophy of religion don't), or that anyone who doesn't has necessarily done anything irrational in the process--but for a different reason than in the prior case. That reason is that we don't function by embodying this skeptical process, taking all of our input data, running it through the process, and believing only what comes out the other side. That's not consistent with how we engage in initial learning or how we can practically proceed in our daily lives. Rather, we have a vast web of beliefs that we accumulate over our lifetimes, and we selectively focus our attention and use skeptical processes on subsets of our beliefs. The practical demands of our daily lives, of our professions, of our social communities, and so forth place constraints on us (see my answers to questions in "Skepticism, belief revision, and science"). And even with unlimited resources, I think there are reasons that we wouldn't want everyone to apply skeptical methods to everything they believed--there is value to false belief in generating new hypotheses, avoiding Type I errors, keeping true beliefs from becoming "dead dogma," and so forth (which I discussed in my SkeptiCamp Phoenix presentation last year, "Positive side-effects of misinformation").

UPDATE (January 16, 2010): Skepdude responds here.