
Wednesday, January 06, 2010

Definitions of atheism and agnosticism

I recently posted this at the Phoenix Atheists Meetup group's discussion forum in a thread titled "atheism v. agnosticism," and thought it might be worth reposting here:

There are lots of ways to define these terms, to the extent that you can't be sure how people are using them unless you ask.

The general population of English speakers understands atheism to be equivalent to what Michael Martin calls "positive atheism" and what used to be commonly known among Internet atheists as "strong atheism"--an active disbelief in the existence of gods. That's a position which does carry a burden of proof, unlike mere nonbelief, also known as weak atheism or negative atheism. George H. Smith made the same distinction using the terms explicit vs. implicit atheism. Richard Dawkins complicated matters by redefining "strong atheism" as absolute certainty that there is no God (position 7 on his scale). I wish he had chosen a different term, as I think it's a mistake to associate positive atheism/strong atheism with certainty, proof, or even knowledge.

I used to like this distinction, but am less enamored with it because "weak atheism" or "negative atheism" or atheism as mere lack of belief in gods has a few logical problems as a basis for anything. A lack of belief is not a position; it cannot motivate action or serve as a premise for inferences. Those who say that they are only atheists in the weak sense, however, do join groups and appear to draw inferences and conclusions as though they are using the nonexistence of gods as a premise, which means either that they are really implicitly using strong atheism as a position, or that they are drawing those inferences from other meta-beliefs.

The advantage of equating atheism with weak atheism is that theism and atheism then become contradictories which cover the entire space of logical possibilities--you either have a belief in one or more gods, or you don't. Under that definition, there's no space for agnosticism except as a subset of one or both of atheism and theism.

The definition of "agnosticism" that was given earlier in this thread as pertaining to the possibility of knowledge about the existence or nonexistence of gods then gives you two dimensions, on which you can have agnostic atheists (I don't believe there are gods, and it's not possible to know), agnostic theists (I believe in at least one god, but it's not possible to know), gnostic atheists (I don't believe there are gods, and it is possible to know there aren't), and gnostic theists (I believe there's at least one god, and it's possible to know). Of those positions, I think agnostic theism is difficult to make a case for with respect to most conceptions of God, except for deism and other forms of uninvolved gods.

But most people who call themselves agnostics aren't using that definition; they're using a notion that is a particular form of weak atheism, one that holds something like that there is parity between the arguments for and against the existence of gods, or that there is no way to effectively compare their evidential weight, or something similar. They might agree with agnosticism regarding the possibility of knowledge of the existence or nonexistence of gods, but they go further and say that there is also some parity in the case for mere belief in either direction.

I'm generally in favor of allowing people to choose their own self-identifying terms and define them as they see fit, so long as they can give a legitimate reason for their classification and it's not completely at odds with ordinary usage. One example that goes beyond ordinary usage, and I think just indicates some kind of confusion, is the 21% of self-identified atheists in a Pew survey reported last October who said that they believe in God. Sorry, but that's not a definition of atheist that I think can get off the ground.

My own position is strong atheism/positive atheism with respect to most traditional conceptions of God, and weak atheism/agnosticism (or igtheism) with respect to certain rarefied/unempirical notions of God. I'm comfortable calling myself an atheist in general, and dispute claims that it's impossible to have knowledge that at least most gods do not exist. "You can't prove a negative" is a widely expressed canard, which I argue against here:

http://www.discord.org/~lippard/debiak.html

That page also contains links to a few other essays which make the same point, probably more clearly, including one by Jeff Lowder which argues for the possibility of disproofs of God's existence.

UPDATE (November 22, 2010): Also see the Internet Infidels' Atheist Web definition page.  I now suspect that "empirical agnosticism" and "weak atheism" are indistinguishable.

UPDATE (November 24, 2011): Also see Jeff Lowder's January 4, 2006 post at Naturalistic Atheism, "Disagreement Among Self-Described Atheists about the Meaning of 'Atheism'" and Ted Drange's 1998 article at the Secular Web, "Atheism, Agnosticism, and Noncognitivism." Drange's distinctions seem to me to be well worth using.  Maverick Philosopher's "Against Terminological Mischief: 'Negative Atheism' and 'Negative Nominalism'" is also good.

UPDATE (January 20, 2012): Jeff Lowder has written further on this subject at the Secular Outpost, in "The Definition of Atheism, the Anal-Retentive Defense of Etymological Purism, and Linguistic Relativism."

Saturday, December 19, 2009

Vocab Malone on abortion and personhood, part 5

Vocab has put up the fifth and final part of his essay on abortion and personhood at his blog, devoted to Thomson's violinist argument. I don't really have much to say about it--we didn't coordinate our posts in advance, and I've already discussed Thomson's argument myself in my response to part 4. I disagree with Vocab's claim that Thomson's argument proves too much and would allow infanticide--her argument only addresses a physically dependent fetus. And, as I already pointed out in my prior response, the argument doesn't prove as much as it purports to. The violinist case isn't exactly analogous to pregnancy and abortion in a number of ways, and Vocab is right to point out the differences. I agree that if a pregnancy is allowed to go to term (or even to some earlier point at which there is plausible evidence for personhood on my standard), then that entails at least tacit consent and a moral duty of care. I would still argue, however, that abortion would be legitimate beyond that point for medically justifiable reasons (e.g., the endangered health or life of the mother). This position--like the current position of the courts, which I think is approximately correct despite being based on viability--shows that there are more than two polar-opposite positions in this debate.

In Vocab's final part, he talks a bit about the work that he and his wife do in caring for foster children. I commend him for that work, which is all-too-rare among opponents of abortion.

Thanks, Vocab, for the debate--and I still would like to hear a response from you in the comments on some of the issues that have been left hanging (e.g., in the comments on part 3).

UPDATE: It would probably be better to end this discussion with a summary that I already made in the comments on part 3:
We don't disagree that there is continuity of organism (just as there is continuity of a population of organisms over time)--all life on this planet is connected in that way. But just as we don't count every species as human, even in our own genetic lineage, we don't count every life stage of individual human organisms as persons. There's a sense in which "I" was once a zygote that had my same DNA, but at that stage there was no "me" there yet--there was nothing that it was like to be a zygote, to use Thomas Nagel's expression. In that same sense that "I" was a zygote, "I" will be a dead body in the future, even though there will at that point be nothing that it is like to be me, and the person that I am will be gone from the world though my body will briefly remain.

I think we understand each other's positions. You think that being a human organism is the same thing as being a person, while I think personhood is a feature that comes into existence and persists for a subset of the life of an organism, and that it requires capacities of sentience or self-awareness.

But I think I can give reasons to support why my view makes moral, legal, and practical sense, and why human cultures and practices are more consistent with my view than yours. I don't think you can give such reasons, other than the brute assertion that human organisms are persons from start to finish. Your view has no need of the notion of person, yet it seems to me that there are all sorts of practical, moral, and legal reasons why we do need and use such a notion.

Friday, December 18, 2009

Vocab Malone on abortion and personhood, part 4

Vocab Malone has posted the fourth part of his essay on abortion and personhood, addressing the arguments from viability and wantedness. These are two more arguments that I don't place a whole lot of stock in, though perhaps some commenters will want to say more about them.

The viability criterion is significant in that it's the basis of current federal case law on abortion since Roe v. Wade, but Vocab correctly notes that viability changes with the availability of technology, and that doesn't seem like a feature that should be relevant to whether one is a person. On the other hand, it is relevant to the notion of dependence--pre-viability is a time when, if you do grant that a fetus is a person, it's a person that is dependent for its existence upon another person. This raises questions of when it is morally permissible for a person upon whom another is dependent for their life to sever that dependence. Judith Jarvis Thomson's argument on abortion, which I referred to earlier in my response to part 1 of Vocab's essay, presents the following scenario:
You wake up in the morning and find yourself back to back in bed with an unconscious violinist. A famous unconscious violinist. He has been found to have a fatal kidney ailment, and the Society of Music Lovers has canvassed all the available medical records and found that you alone have the right blood type to help. They have therefore kidnapped you, and last night the violinist's circulatory system was plugged into yours, so that your kidneys can be used to extract poisons from his blood as well as your own. The director of the hospital now tells you, "Look, we're sorry the Society of Music Lovers did this to you--we would never have permitted it if we had known. But still, they did it, and the violinist is now plugged into you. To unplug you would be to kill him. But never mind, it's only for nine months. By then he will have recovered from his ailment, and can safely be unplugged from you." Is it morally incumbent on you to accede to this situation? No doubt it would be very nice of you if you did, a great kindness. But do you have to accede to it? What if it were not nine months, but nine years? Or longer still? What if the director of the hospital says, "Tough luck, I agree, but now you've got to stay in bed, with the violinist plugged into you, for the rest of your life. Because remember this. All persons have a right to life, and violinists are persons. Granted you have a right to decide what happens in and to your body, but a person's right to life outweighs your right to decide what happens in and to your body. So you cannot ever be unplugged from him."
My intuition is that in this scenario, it is morally supererogatory to remain connected to the violinist--it is not a moral requirement. The problem with this scenario is that it isn't quite analogous to pregnancy except in the case of rape. If one gave voluntary consent to be connected to the violinist to save his life, it seems that one would have a moral duty to see it through. That raises the question of what constitutes "voluntary consent" with respect to pregnancy, which may occur accidentally or unintentionally despite use of contraception, for example. And note again that this scenario only applies in the case where personhood is taken as given, which I've been arguing is definitely not the case in the early stages of a pregnancy.

The argument from wantedness, like the argument from viability, doesn't appear to offer a criterion of personhood, but it is of course relevant to the overall abortion debate. Bringing into being persons who are not wanted and aren't going to be cared for is something that should be avoided, since the odds are not good for children in such circumstances. A controversial argument in Steven Levitt and Stephen Dubner's book Freakonomics is that there's a correlation between abortion rates and declining crime rates--i.e., the authors argued that a consequence of the unavailability of abortion is more unwanted children who become criminals. If that argument is correct (and I personally wouldn't bet on it), that's a form of evidence in favor of the availability of legal abortion, though I don't think it trumps a personhood argument. [NOTE (added Nov. 24, 2012): Levitt and Dubner's argument is thoroughly debunked in chapter 3 of Steven Pinker's The Better Angels of Our Nature: Why Violence Has Declined (pp. 119-121).  Freakonomics in general is found to be filled with errors in a review in the American Scientist by Andrew Gelman and Kaiser Fung.]

Vocab quotes from a book by abortion doctor Suzanne Poppema about her own abortion, in which she says to her embryo, "I’m very sorry that this is happening to you but there’s just no way that you can come into existence right now." He identifies this as "confused logic," since clearly the embryo already exists. I agree with Vocab that she has written this statement in an apparently confused way, but it could be made coherent if she had written of the embryo developing into a person or of a person coming into existence, which is probably what she meant to imply.

Continue to part five.

Wednesday, December 16, 2009

Vocab Malone on abortion and personhood, part 3

Vocab Malone has posted the third part of his argument against abortion at his blog, focusing on what he calls "the argument from size." As I don't think there's any plausibility to this argument, I won't spend any time with it, but there are still a few things in his post that I think demand response. The first is the assertion Vocab quotes from "prolific pro-life trainer and speaker Scott Klusendorf" that he always encounters this argument when he speaks at Christian schools. I find this assertion very difficult to believe--I don't think I've ever encountered this argument anywhere, and I suspect that Klusendorf is either intentionally or unintentionally misconstruing some other argument as this argument. (Would he consider Randy Newman's song, "Short People," to be an instance of the argument, given its lyric, "short people got no reason to live"?)

The instance of the argument Vocab suggests is nothing of the sort, though at least he admits that it is an argument about another subject. Here's the quote as Vocab presents it:
From the other end of things, a recent New York Times article featured a similar argument (although his piece was on a broader topic than abortion):
Look at your loved ones. Do you see a hunk of cells or do you see something else? … We do not see cells, simple or complex – we see people, human life. That thing in a petri dish is something else. [2]
The quote is from a New York Times editorial by neuroscientist Michael Gazzaniga about the difference between reproductive and therapeutic cloning. Here's the quotation in context; it's the ending of the piece:

In his State of the Union speech, President Bush went on to observe that "human life is a gift from our creator — and that gift should never be discarded, devalued or put up for sale." Putting aside the belief in a "creator," the vast majority of the world's population takes a similar stance on valuing human life. What is at issue, rather, is how we are to define "human life." Look around you. Look at your loved ones. Do you see a hunk of cells or do you see something else?

Most humans practice a kind of dualism, seeing a distinction between mind and body. We all automatically confer a higher order to a developed biological entity like a human brain. We do not see cells, simple or complex — we see people, human life. That thing in a petri dish is something else. It doesn't yet have the memories and loves and hopes that accumulate over the years. Until this is understood by our politicians, the gallant efforts of so many biomedical scientists, as good as they are, will remain only stopgap measures.

Vocab has removed a critical piece of what Gazzaniga wrote--he's not making anything like an argument from size, but rather an argument much more like my position, as seen in what Vocab omitted with his ellipsis and immediately following what he quoted. The piece as a whole is taking issue with the conflation of reproductive and therapeutic cloning, with the idea that the latter involves creating cloned people, and Gazzaniga's position seems to be that this confusion occurs because people are thinking of and talking about undifferentiated cells as though they are people--the same thing that is occurring in this very debate. (BTW, the President's Council on Bioethics, of which Gazzaniga was a member, argued that therapeutic, but not reproductive, cloning should be permissible. My view is that while there are currently issues of knowledge and technology that could result in harm to cloned people, in the long run I don't see any ethical difference between reproductive cloning and natural reproduction, so long as the products of each get equal treatment on the same standard of personhood.)

Vocab suggests it would have been better to call this the "just a bunch of cells" argument, but that's really not an argument based on size, but rather an argument based on structure, function, and capacity--which is a good argument! I suspect that this is, in fact, the sort of argument that Klusendorf is misconstruing.

Next Vocab gives an argument from essences:

can any living being become anything else besides what it already is? How can something become a person unless its essence is already personhood? If the color blue is only blue and not the color red in the same way at the same time, its very essence – its fundamental property – must be blue and not red. Another example is that of the tadpole and frog. The tadpole is simply a name for a specific stage during a frog’s development. If one were to terminate a certain tadpole, then a certain frog would be terminated and no longer exist. This means you did not come from a fetus--you once were a fetus.

The answer to the first question is clearly yes--there are all kinds of metamorphoses that occur in living things while they are alive, including changes of shape, color, size, and sex. And when they die, they can become parts of other things--just as other things become part of them when they come into existence, develop, and change. The second question is, I think, flawed. First, I don't think it's correct to regard personhood as a fixed, unchanging property. Douglas Hofstadter's book, I Am A Strange Loop, argues that self concepts not only develop over time, but can be shared across persons. Second, the question implies that anything that is a person is always and eternally a person and cannot be constructed out of something else. But on everybody's views, human beings are biological organisms, which come into and go out of existence in virtue of the states of their underlying components. Both the view Vocab has been defending and mine say that there are biological components which are not persons, which through some change of state subsequently become persons. If Vocab wants to hold a view by which personhood is an essential property of a simple substance, then he can do that by holding a dualistic view of an eternal soul which is a person that attaches at some point to a human as a biological animal. But if that's his view, then that's the argument we should be having, rather than one in which Vocab is defending a view like animalism.

Vocab makes a subsequent statement that I think vividly illustrates the error in his view:
One way to think about the idea of probability (or potentiality) is that every adult was once an unborn person, just as every oak tree was once an acorn. An acorn is simply a mini-oak tree, just as a microscopic person is a mini-human.
But that last sentence is just false. Acorns are not miniature oak trees and zygotes are not miniature people. That's precisely the error that Gazzaniga is warning against in his article.

Vocab subsequently makes a point about skulls being crushed in an abortion procedure, and on that point he's correct--embryos do develop into fetuses, they do develop identifiable distinct parts and functions, and at some point they do become miniature people, but they don't pop into existence as such.

Continue to part four.

Friday, December 11, 2009

Vocab Malone on abortion and personhood, part 1

Vocab Malone has put up his first post arguing for the position that "the unborn human embryo is a full person at the moment of conception and should be afforded the full rights due human beings by their very essence."

Criteria of Personhood or Humanity
He starts by looking at the question of what it is to be human or to be a person, citing a few historical references to individual characteristics--being rational, being "in relationship," and "the capacity for self-objectification." He expresses doubt that any single characteristic is appropriate, on the grounds that human beings undergo changes of state such as being asleep or being drugged, or not thinking. I agree with him that the characteristics he has listed won't do the trick, and I also agree with him that features that go away when we sleep are inadequate. But it doesn't follow that there is no single feature that can do the trick--if the feature is a capacity that we have, for example, that capacity doesn't cease to exist when it's not being used.

He goes on to note that lack of personhood doesn't entail that any treatment is morally permissible, pointing out animals as examples of nonpersons that deserve humane treatment. Again, I agree with him--and observe the converse, that possession of personhood doesn't mean that there are no cases where it can be moral to kill a person--cases of self-defense, euthanasia, capital punishment, or war come to mind as possibilities. But what makes animals deserve humane treatment is that they have certain capacities and interests, such as an inner mental life that includes at the very least the ability to feel sensations--and note that humane treatment doesn't necessarily entail a right to life on the part of an animal, or a duty on our part not to kill them.

Vocab appears to want to lay the groundwork for rejecting the use of a criterion of personhood in favor of a criterion of humanity as his standard for arguing against abortion, but here he only offers a promissory note and doesn't provide an argument to that effect. I think this is a mistake, however, because ethical distinctions should be based on morally relevant features, and I don't believe species membership is any more relevant in and of itself to being the holder of rights or of being the object of duties than is race or gender. If a member of an intelligent alien species capable of language were to make contact with us, my intuition is that we would attribute personhood to that entity and give it the same consideration as a human being. Likewise if we manage to build artificially intelligent, self-directed machines with beliefs, desires, and intentions, though the intuition is not as strong there unless I imagine them to have mental lives similar to our own.

Conception: Fertilization
Even though Vocab hasn't yet given a reason to reject a personhood criterion in favor of a human being criterion, the rest of his case is solely about human life rather than personhood, which I think is the wrong issue for the reasons I just gave. He argues that human life begins at conception, and clarifies that he means fertilization rather than implantation. This choice means that 30-50% of human lives are spontaneously aborted due to the failure of the fertilized ova to implant in the uterine wall. If Vocab thinks that this loss of human life is the loss of beings with rights and interests to whom we owe a duty to enable them to live out normal lives, then he has some explaining to do. First of all, why would a loving God create a human reproductive system that resulted in such a Holocaust of lives lost before they get a chance to start? Second, why has no one considered this to be a serious ethical problem that we need to urgently devote medical resources to address? We can call this the problem of natural abortion, which has both a natural evil and human evil component that requires justification.

Complete at Fertilization?
Vocab says that at conception (by which he means fertilization), "every human is complete and alive." I agree that a fertilized human ovum is alive--as life is a continuous process, arising from living components, at least until synthetic biology gets to the point of creating life from entirely nonliving components. Sperm and ova are also alive. But it is certainly not complete--zygotes have no brains, no central nervous systems, no organs, no body parts other than undifferentiated, identical cells.

An Individual at Fertilization?
Vocab also says that at fertilization and pre-implantation, "it is not merely a collection of cells lumped together but an actual individual." This also need not be the case. At fertilization, a zygote is an undifferentiated cell that undergoes a process of division without changing size for several days, to become a blastocyst by about the fifth day. During this period each of its cells is totipotent, meaning that each individual cell has the potential to become a full human being. Sometimes more than one of the cells does become a separate human being, as in the case of identical twins. In the case of identical twins, if they don't split completely, they may become conjoined twins or parasitic twins, or one twin may be completely absorbed into the other or otherwise fail to develop and become a vanishing twin. Where a vanishing twin occurs with fraternal twins, the resulting individual can be a chimera, with two sets of DNA. Should we also grieve for those potential twins who never develop, whether because the cells fail to split off or because one twin is absorbed or vanishes?

The science fiction scenarios of teleportation that create interesting philosophical puzzles for the notion of personal identity are real puzzles for a view that attributes personhood to zygotes, though without the additional problem of memories and experiences, since zygotes are undifferentiated cells.

Blastocysts
Once the zygote becomes a blastocyst, it forms into an outer layer of cells, which later becomes the placenta, and an inner cell mass of pluripotent embryonic stem cells, each of which is capable of differentiating into any kind of human cell. Only after this stage does the blastocyst implant in the wall of the uterus, about a week after fertilization, and begin taking nutrients directly from the blood of the mother--a dependency that can itself be of moral significance, as Judith Jarvis Thomson's violinist argument shows. As already mentioned above, a great many fertilized ova do not reach this stage. Further, the percentages of implant failure are higher for in vitro fertilization (IVF), a procedure which Vocab's criteria would have to declare unethical, even though it is the only way that many couples can have their own biological offspring.

It should also be noted that the process of therapeutic cloning involves taking a female ovum (which Vocab doesn't seem to indicate he considers to be a bearer of rights on its own), removing its haploid DNA, inserting the nucleus from a (diploid) human somatic cell (this is called somatic cell nuclear transfer), and giving it a shock to cause it to start dividing just like a fertilized egg. This occurs without fertilization by a human sperm. Once it reaches the blastocyst stage, its inner cell mass is harvested for embryonic stem cells, which destroys the blastocyst in the process. The natural process of fertilization never takes place, but there's little doubt that reproductive human cloning is possible via this process. Vocab's choice of fertilization as key suggests that there is no moral issue with this process, even though it also has some potential to become a human being. Further, if fertilization is a necessary, not just a sufficient, condition for rights, Vocab's view suggests that human clones would have no rights.

Fully Programmed?
Vocab goes on to say that "the embryo is already 'fully programmed' (to use computer language). This means the pre-implanted embryo needs no more information input at any further point in its development." While this was formerly believed to be the case about the individual embryo's biology, we now know that the environment of development can play a role in the characteristics that will come to be exhibited, such as maternal mRNA that influences the embryo's early development after fertilization. But in any case, I would maintain that it's not our cellular biology that gives us moral value, as opposed to our capacities to have interests, desires, intentions, plans, sensations, and so forth--all capacities that zygotes lack.

Vocab ends this piece with some anthropomorphizing of zygotes, which appears to me to be a highly misleading form of argument--his analogies cannot be taken literally, since zygotes have no mental processes.

Human and Living = Human Being?
I agree with Vocab that a fertilized human ovum is living, that it's human, and that, if all goes well, it will become one (or more) individual human beings. I don't agree that it's yet a person or a "human being," since it lacks the requisite parts and capacities.

To sum up:
  1. Vocab hasn't given a reason to favor a criterion of "being human" over personhood for determining when it's legitimate to attribute rights or incur duties on our part.
  2. His choice of fertilization as the point at which rights begin is not when life begins (as it is continuous) and implies that a large percentage of rights-bearing entities die without any apparent concern from God or those who share Vocab's views, an inconsistency requiring justification and explanation.
  3. A zygote has the potential to be not just one person, but multiple. The same lack of concern over non-actualized multiples that could have been born requires explanation.
  4. Vocab's view suggests that IVF, which similarly loses even more zygotes or blastocysts (not even counting the embryos that are left frozen or discarded), is unethical.
  5. Vocab's view so far gives no reason to classify human therapeutic or reproductive cloning as unethical--but might even entail that human clones have no rights, since there's no fertilization by a human sperm, if he thinks that fertilization is both a necessary and sufficient condition for rights.
  6. In the stages of life described so far, we've gone from completely undifferentiated totipotent cells to a differentiation between two types of cell, the outer wall of the blastocyst (which we both agree is neither a person nor a human being, but what becomes a placenta) and an inner cell mass of embryonic stem cells. Vocab hasn't given a reason why we should attribute rights or moral value to that.
  7. At this stage, the embryo is dependent upon the mother for its existence; Vocab will need to give an account of how the mother's rights are weighed against the embryo's in light of arguments like Judith Jarvis Thomson's violinist example.
  8. Vocab calls a zygote a "complete" human being and implies that it has everything it needs to determine its future state, but this is neither the case biologically (given maternal effects on development, for example) nor regarding features that we consider quite important for human value, such as those that develop as a result of acquisition of language, ideas, experiences, and so forth.
  9. Vocab has used some anthropomorphic language in describing the implantation process which is misleading since zygotes have no mental processes.
Continue to part two.

UPDATE (December 12, 2009): Added the sentence on chimeras.

UPDATE (December 13, 2009): Vocab has posted a brief rebuttal to this post.

Thursday, December 10, 2009

Discussion on abortion and personhood w/Vocab Malone

Local Christian hip-hop artist and slam poet Vocab Malone, who I've interacted with online and met when Daniel Dennett spoke at ASU early this year, asked me in January for my thoughts on abortion and personhood. He's now written a paper on the subject which he's asked me to critique, and we thought it would be interesting to see how it would work out to do it in a public manner via our respective blogs. The plan is that he will post successive sections of his paper on his blog, and I'll respond here, with cross-links to share some traffic and discussion. Both of us allow blog comments; it probably makes the most sense to post your comments at the blog for the person you'd like to see a response from.

Vocab has posted an introduction and the comments that I originally sent to him on the subject at his blog, Backpack Apologetics. He's taking a position that I think is very difficult to justify, that full personhood and human rights are acquired at the moment of conception--we'll have to see which definition of conception he chooses, fertilization or implantation.

Just to throw out a little issue I raised this semester in one of my classes--some have argued that climate change raises the ethical issue of a duty to future generations. If we can have moral duties now to people who don't exist at all yet, what does that imply about duties to embryos?

Monday, November 16, 2009

Daniel Dennett, The Evolution of Confusion

Daniel Dennett's talk from the 2009 Atheist Alliance International convention (link is to my summary) is now online:

Sunday, November 08, 2009

Philosophy Bites podcast

I've been listening to past episodes of the Philosophy Bites podcast, and I highly recommend it--they are short (roughly 15-minute) discussions with prominent philosophers about specific philosophical topics and questions. I've found them to be consistently high-quality and interesting, even in the one case where I think the philosophical argument was complete nonsense (Robert Rowland Smith on Derrida on forgiveness). Even there, the interviewers asked the right questions.

I particularly have enjoyed listening to topics that are outside the areas of philosophy I've studied, like Alain de Botton on the aesthetics of architecture. Other particularly good ones have been Hugh Mellor on time, David Papineau on physicalism, A.C. Grayling on Descartes' Meditations, and Peter Millican on the significance of Hume. I've still got a bunch more past episodes to listen to; I'm going to be somewhat disappointed when I catch up.

Wednesday, November 04, 2009

Where is the academic literature on skepticism as a social movement?

Here's all I've been able to find so far, independent of self-descriptions from within the movement (and excluding history and philosophy of Pyrrhonism, Academic Skepticism, the Carvaka, the Enlightenment, British Empiricism, and lots of work on the development of the enterprise of science):
  • George Hansen, "CSICOP and the Skeptics: An Overview," The Journal of the American Society for Psychical Research vol. 86, no. 1, January 1992, pp. 19-63. I've not seen a more detailed history of contemporary skepticism elsewhere.
  • Stephanie A. Hall, "Folklore and the Rise of Moderation Among Organized Skeptics," New Directions in Folklore vol. 4, no. 1, March 2000.
  • David J. Hess, Science in the New Age: The Paranormal, Its Defenders and Debunkers, and American Culture, 1993, The University of Wisconsin Press.
I note that Paul Kurtz's The New Skepticism: Inquiry and Reliable Knowledge (1992, Prometheus Books) puts contemporary skepticism in the lineage of several of the other forms of philosophical skepticism I mentioned above, identifying his form of skepticism as a descendant of pragmatism in the C.S. Peirce/John Dewey/Sidney Hook tradition (and not the Richard Rorty style of pragmatism). But I think that says more about Kurtz than about the skeptical movement, which also draws upon other epistemological traditions and probably doesn't really have a sophisticated epistemological framework to call its own.

There's a lot of literature on parallel social movements of various sorts, including much about advocates of some of the subject matter that skeptics criticize, and some of that touches upon skeptics. For example:
  • Harry Collins and Trevor Pinch, "The Construction of the Paranormal: Nothing Unscientific is Happening," in Roy Wallis, editor, On the Margins of Science: The Social Construction of Rejected Knowledge, 1979, University of Keele Press, pp. 237-270.
  • Harry Collins and Trevor Pinch, Frames of Meaning: The Social Construction of Extraordinary Science, 1982, Taylor & Francis.
  • Ronald L. Numbers, The Creationists: From Scientific Creationism to Intelligent Design, 2nd edition, 2006, Harvard University Press.
  • Christopher P. Toumey, God's Own Scientists: Creationists in a Secular World, 1994, Rutgers University Press.
The Toumey book doesn't really have anything about skeptics, but is an anthropological study of creationists in the United States which describes the connection between "creationism as a national movement" and "creationism as a local experience" that seems intriguingly similar to the skeptical movement, especially in light of the fact (as I mentioned in my previous post) that national skeptical organizations are independent of established institutions of science, provide the key literature of the movement, and at least implicitly assume that the average layman can develop the ability to discern truth from falsehood, at least within a particular domain, from that literature.

In some ways, the skeptical movement also resembles a sort of layman's version of the activist element in the field of science and technology studies, based on positivist views of science that are the "vulgar skepticism" dismissed in this article.
I think if contemporary skepticism wants to achieve academic respectability, it will need to develop a more sophisticated view of science that comes to terms with post-Popper philosophy of science and post-Merton sociology of science; my recommendation for skeptics who are interested in that subject is to read, as a start:
  • Philip Kitcher, The Advancement of Science: Science Without Legend, Objectivity Without Illusions, 1995, Oxford University Press.
There's an enormous relevant literature on those topics; an interesting broad overview is:
  • R.C. Olby, G.N. Cantor, J.R.R. Christie, and M.J.S. Hodge, Companion to the History of Modern Science, 1990, Routledge.
I welcome any pointers to relevant sources that I've missed, particularly if there is other academic work specifically addressing the history, philosophy, sociology, and anthropology of the contemporary skeptical movement--three sources ain't much.

UPDATE (September 27, 2014): Some additional works I recommend for skeptics:

  • Harry Collins, Are We All Scientific Experts Now?, 2014, Polity Press.  A very brief and quick overview of science studies with respect to expertise.
  • Massimo Pigliucci, Nonsense On Stilts: How to Tell Science from Bunk, 2010, University of Chicago Press. A good corrective to the overuse of Popper, easy read.
  • Massimo Pigliucci and Maarten Boudry, Philosophy of Pseudoscience: Reconsidering the Demarcation Problem, 2013, University of Chicago Press. Good collection of essays reopening the debate many thought closed by Larry Laudan on whether there can be philosophical criteria for distinguishing the boundary between science and pseudoscience.

Wednesday, October 21, 2009

Skepticism, belief revision, and science

In the comments of Massimo Pigliucci's blog post about the scope of skepticism (which I've already discussed here), Skepdude pointed to a couple of blog posts he had written on similar topics some time ago, about what atheists have in common and skepticism and atheism. He argues that skeptics must be atheists and cannot be agnostics or theists, a position I disagree with. In an attempt to get to the bottom of our disagreement after a few exchanges in comments on his blog, I wrote the following set of questions which I first answered myself, so we can see how his answers differ.

Do we have voluntary control over what we believe?

In general, no. The credence we place in various propositions--our belief or rejection of them--is largely out of our voluntary control and dependent upon our perceptual experiences, memories, other beliefs, and established habits and methods of belief formation and revision. We can indirectly cause our beliefs to change by engaging in actions which change our habits--seeking out contrary information, learning new methods like forms of mathematics and logic, scientific methods, reading books, listening to others, etc.

How does someone become a skeptic?

People aren't born as skeptics--they learn about skepticism and how it has been applied in various cases (only after learning a whole lot of other things that are necessary preconditions--like language and reasoning). If skepticism coheres with their other beliefs, established habits and methods of belief formation and revision, and/or they are persuaded by arguments in favor of it, either self-generated or from external sources, they accept it and, to some degree or another, apply it subsequently.

When someone becomes a skeptic, what happens to all of the other beliefs they already have?

They are initially retained, but may be revised and rejected as they are examined through the application of skeptical methods and other retained habits and methods of belief formation and revision. Levels of trust in some sources will likely be reduced, either within particular domains or in general, if they are discovered to be unreliable. It's probably not possible to start from a clean slate, as Descartes tried to do in his Meditations.

Is everything a skeptic believes something which is a conclusion reached by scientific methods?

No. Much of what we believe, we believe on the basis of testimony from other people who we trust, including our knowledge of our own names and date and place of birth, parts of our childhood history, the history of our communities and culture, and knowledge of places we haven't visited. We also have various beliefs that are not scientifically testable, such as that there is an external world that persists independently of our experience of it, that there are other minds having experiences, that certain experiences and outcomes are intrinsically or instrumentally valuable, that the future will continue to resemble the past in various predictable ways, etc. If you did believe that skeptics should only believe conclusions which are reached by scientific methods, that would be a belief that is not reached by scientific methods.

Tuesday, September 22, 2009

Mirror neurons and the study of science

Tony Barnhart was kind enough to invite me to a psychology seminar yesterday afternoon that was a discussion of mirror neurons, at least partly inspired by (or inflamed by) Marco Iacoboni's August 27 talk which I attended and summarized.

I found the discussion particularly interesting in light of my current studies, as it touched repeatedly on issues of what's appropriate in science--what does and does not conform to the norms of good science.

The discussion leaders began with quotes from V.S. Ramachandran and Marco Iacoboni:
"The discovery of mirror neurons in the frontal lobes of monkeys, and their potential relevance to human brain evolution…is the single most important ‘unreported’ (or at least, unpublicized) story of the decade. I predict that mirror neurons will do for psychology what DNA did for biology: they will provide a unifying framework and help explain a host of mental abilities that have hitherto remained mysterious and inaccessible to experiments." (Ramachandran, 2001)
and
"We achieve our very subtle understanding of other people thanks to certain collections of special cells in the brain called mirror neurons. These are the tiny miracles that get us through the day. They are at the heart of how we navigate through our lives. They bind us with each other, mentally and emotionally." (Iacoboni, Mirroring People, p. 4)
The immediate objections were to the trumpeting of the importance of mirror neurons prior to the discovery of supporting evidence, as well as to the use of the word "miracle" to describe something that's supposed to be science. These objections ran through the seminar, much of which confronted the issue of whether or not mirror neuron claims are scientific at all.

This first objection is closely related to the first red flag of Robert Park's list of "Seven Warning Signs of Bogus Science" (2003)--that a claim is pitched directly to the media (or general public) rather than to scientists, and that specific objection was raised about Iacoboni's talk, that he was making grandiose claims to a general (or "naive") audience. This has been a common issue raised in defining the boundary between science and non-science, traceable at least back to the debate between anatomists and phrenologists in Scotland in the early 19th century, where "anatomists accused phrenologists of relying on popular opinion to validate their theories while ignoring opinions of scientific 'experts'" (to quote sociologist of science Thomas Gieryn's 1983 paper on "Boundary-work and the demarcation of science from non-science," p. 789). While it wasn't stated in this case that mirror neuron advocates are appealing to the general public to the exclusion of scientists, they were explicitly criticized for their appeals to the public in order to raise interest in their work, make it easier to get funding, and so forth, and, in the case of Iacoboni's book, for using language aimed at a popular audience that eliminated qualifiers and wasn't appropriately skeptical.

In my opinion, Iacoboni shouldn't be faulted for popularizing his work or for generating excitement and funding from public interest--that criticism seems a bit like sour grapes--but only for any cases where he presents arguments without proper supporting evidence or fails to identify theoretical speculation as such. What should be significant is not the mere fact of public appeal, but the extent of the gap between the scientific evidence and the public description. Note that there will always be a gap between evidence and any scientific theory, even where a theory is firmly established, since scientific theories are always subject to further revision--they're not logical proofs. "Tiny miracles," though--I have to agree that's over the top.

Another objection raised to mirror neurons is the wide variety of human behavior that they've been proposed to explain (from the presenters' slides):
"Since their discovery, mirror neurons have been invoked to explain imitation, speech perception, empathy, autism, morality, the appeal of porn, sports team activities, social cognition, self-awareness, yawning, mind reading, action understanding, altruism, etc."
A list of neuroimaging studies purported to provide evidence for a human mirror neuron system was shown, and the question was asked: how many of these studies looked at both observation of an action and execution of an action? The answer was very few, likely because observation is much easier to test in an fMRI machine than execution. Of those, which found evidence of activation for both observation and execution? The answer was only a single study (Gazzola, et al., 2007).

Further questions raised for discussion (from slide):
  • Is there any conceivable way to falsify MN theories?
  • As Iacoboni claimed, MNs are not anatomically-defined, and can fire in response to the same, similar, and opposite observations/actions. The whole brain, therefore, comprises the MN system. How is that useful?
  • Many researchers have moved away from hypothesizing about “mirror neurons” to “mirror systems.” Must mirror systems necessarily be composed of mirror neurons?
  • If not, are mirror neurons the most parsimonious explanation for [insert favorite behavior here]?
  • Can you generalize findings from one species to the next when one of the species possesses cognitive capabilities that have never been demonstrated in the original species? Yes, this is a “monkeys don’t have language, nor do they imitate”-based question.
  • Can individual neuron activity logically be used as an explanation for higher-order cognitive abilities?
  • How do mirror neurons handle sarcasm?
And, though not on the slide, the following claim was noted:
Similarly, a baseball pitcher’s windup is chock full of similar kinetic clues that can activate the batter's mirror neurons and help him predict the kind of pitch he will get. "This may help explain the fact that a great pitcher, Babe Ruth, was also one of the greatest home run hitters of all time," writes John Milton in Your Brain on Cubs.
and the question was asked--if mirror neuron activation is involved in imitation, rather than a complementary activity in this case, why wouldn't the mirror neuron activation interfere with Babe Ruth's ability to hit, rather than improve it? (The answer, it would seem to me, would be a suggestion that his pitching knowledge would allow him to recognize cues about the type of pitch before it arrived, which would produce a benefit in hitting performance--but this is a more abstract description that doesn't necessarily require a mirror neuron explanation--another common theme of the discussion.)

This led to a lively discussion, and it seemed to me that the following were some of the most significant arguments, with my commentary on them:

1. It seems highly implausible that single cells are involved in mediating or controlling this behavior, and neither transcranial magnetic stimulation (TMS) nor fMRI is capable of isolating individual neurons. It is particularly implausible that single cells are implicated regarding a relationship to an action that is similar in that it's directed to the same goal (i.e., a semantic property). I agree, but I'm not sure why mirror neuron advocates should be taken as insisting that single cells are involved, as opposed to the "mirror neuron systems" described in Iacoboni's talk.

2. If assemblies of cells are involved instead of individual cells, how is this a distinctive or interesting theory? Doesn't it then just become a restatement of "neurons work together in the brain to make things happen?" Several people (including the person who asked that question) noted that it's still potentially interesting if these assemblies participate in both observation and action, and may provide support for theories that implicate motor programs of speech generation in speech perception.

3. There is some other evidence for mirror neurons from TMS experiments on speech production. Two parts of the speech motor cortex, one active when people produce labial phonemes, and another part active when they produce dentals, were stimulated with TMS in the form of a double-pulse, which tends to provide a stimulative effect similar to priming. The result was that double-pulsing the region associated with labials facilitated the perception of labials, and double-pulsing the region associated with dentals facilitated the perception of dentals.

4. The inference to mirror neurons from fMRI evidence amounts to choosing a single possible explanation without sufficient discriminatory evidence to exclude other explanations, such as priming. This seems like a quite reasonable objection, but one which doesn't preclude further research both within a mirror neuron framework and from outside--a battle between camps is probably a good way to provoke fruitful experimentation and mutual criticism until discriminatory evidence or arguments are obtained. There was some disagreement in the discussion about whether such discriminatory evidence could ever be obtained, but I'm inclined to think that someone will come along and provide some strong reasons to prefer going down one path rather than another.

5. In the cases where only a single or very few cells are measured, isn't that "a colossal sampling error"? One response was that the single-neuron measurement studies may have recorded from as many as 200 neurons, of which 75 showed mirror properties, of which 2/3 showed mirror properties in general (i.e., they matched individual actions and related actions directed at the same goal) and 1/3 only showed activation in response to the same exact action. I think this still presents a significant sampling issue in that there are likely hundreds of thousands of connections implicated for each neuron; I'm also a bit wary of the claims of mirror properties for related actions, where the relations may be semantic rather than simple associations, as there seems to be a potential for creative interpretation in determining what counts or doesn't count as related. That's independent of the implausibility of such properties at the individual neuron level.

6. The mirror neuron evidence and arguments seem to be like a cartoon version of science being presented to scientists and to the public (a criticism that explicitly excluded the original monkey studies). The use of the term "mirror neuron" seems like "a romantic notion that's taken on a life of its own," even though it is descriptive--you see someone else performing the same action, and it's like looking in a mirror.

7. This is an unusual case in which, rather than psychology observing a behavior and theorizing neurological activity, the concept has been derived from observed neurophysiological behavior and "pyramided up," presenting challenges for theory comparison. Other competing theories don't have neural-level predictions. Are mirror neuron theories even falsifiable?

The seminar was closed with another quote from Iacoboni's book, from the end: "Mirroring People also ends on a hopeful note, the hope that science and scientific thinking may play an important role in our society."

I found it a fascinating discussion to observe, especially as issues came up pertaining to the norms of science and the demarcation between science and non-science, where scientists often appeal to criteria such as Karl Popper's falsifiability criterion. Most philosophers of science today agree that there is no sharp boundary between science and non-science (though there are certainly things that are clearly science or clearly not science), that the falsifiability criterion doesn't provide such a demarcation (and isn't strictly feasible given the nature of background assumptions and clusters of propositions involved in theory testing), and that the Mertonian norms of science are more of an ideal than reality. Science is a bit messier than that, and it seemed that some of the social aspects of "boundary-work" were in play in the discussion.

UPDATE: I should note that there were two papers of recommended reading for this discussion, which were:

Gregory Hickok, "Eight Problems for the Mirror Neuron Theory of Action Understanding in Monkeys and Humans," Journal of Cognitive Neuroscience 21:7, pp. 1229-1243 (2008).

Giovanni Buccino, Ferdinand Binkofski, and Lucia Riggio, "The mirror neuron system and action recognition," Brain and Language 89 (2004) 370-376.

I didn't get a chance to read those before the seminar, but may update this post with further comments after I do.

UPDATE (September 5, 2013): Alison Gopnik piece on "Cells That Read Minds? What the myth of mirror neurons gets wrong about the brain" on Slate.

Friday, August 07, 2009

Investigating Atheism

The Divinity faculties at the University of Cambridge and the University of Oxford have put together a website on "Investigating Atheism." Although it's ironic that a bunch of theologians have done this, in my brief perusal of the site I haven't found anything objectionable--it does a good job of putting current atheist arguments and personalities in historical context.

(Via the Secular Outpost.)

UPDATE: Well, they do have an article from well-known net kook John A. Davison. That's a bit of an odd choice.

Thursday, August 06, 2009

The Amazing Meeting 7: SGU, Shermer, Savage

This is part four of my summary of TAM7, now up to Saturday, July 11. Part 1 is here, part 2 is here, part 3 is here, and my coverage of the Science-based Medicine conference begins here.

Skeptics Guide to the Universe
Both Friday and Saturday morning began with live recording sessions for the Skeptics Guide to the Universe podcast, for which I didn't bother to take notes, since it was being recorded (it's Skeptics Guide podcast episode #208 and may be found on the website archive or via the iTunes store). The Saturday morning event began with a satirical ghost hunter video by Jay Novella, "The G Hunters" (part one, part two). But the real surprise came during the listener Q&A session, when Sid Rodrigues asked a question "maybe for Rebecca," which turned out to be "Will you marry me?" A seemingly impromptu, but carefully planned wedding followed immediately, though there wasn't enough cake for everyone, nor a champagne toast. All present did receive after-the-fact invitations as a nice memento, and there was a first dance for those who wanted to participate.

Michael Shermer
Michael Shermer prefaced his talk with an overview of the Skeptics Society and Skeptic magazine that bore some resemblance to the introduction of his TED Talk of 2006. His talk, titled "Rise Above--Towards a Type I Civilization," argued that we should work to rise above our tribal instincts, our evolutionary heritage, and the left-right political spectrum. He began by noting that most of our decisions are judgments made under uncertainty (a reference to the classic book Judgment Under Uncertainty: Heuristics and Biases, edited by Daniel Kahneman, Paul Slovic, and Amos Tversky), made emotionally with intuitive leaps which are then followed by rationalization to provide reasons to justify what we've already decided to do. He observed that when the amygdala is damaged, this leads not only to loss of emotional capacity, but also to an inability to make decisions. We don't fall into categories of good and evil, but good and evil run through each person, he said, referencing Joseph Conrad's Heart of Darkness and Aleksandr Solzhenitsyn's The Gulag Archipelago. An individual's expanding circles of concern are based on genetic relationships and kin selection, he said, and reciprocal altruism operates within kin/kind/community. We're good to members of our in-group, but skeptical and cautious about other groups.

He spoke briefly about the left-right political spectrum, arguing instead for the two-dimensional Nolan chart, which libertarians use, along with a misleading questionnaire, as a recruiting tool. While I agree with Shermer that the left-right spectrum has serious weaknesses, I don't think the Nolan chart is much of an improvement, especially when the coordinates on the chart are determined by a limited set of questions that are worded in a way that glosses over details. Better, I think, is to recognize that the space of political positions really encompasses far more dimensions. Shermer asked the audience how many considered themselves to be left of center, right of center, or libertarian, and the answers were about 1-2 people right of center, 15-20% libertarian, and the rest self-described liberal. He put up a couple of slides containing exaggerated stereotypical descriptions of how conservatives view liberals and vice versa, which produced cheers for both. He put up the political map of red and blue states based on the last presidential election results, and pointed out that the map is misleading, because if you look at it on a more granular level the country is really a mass of purple. (Though he didn't mention or address the thesis of Bill Bishop's The Big Sort.) He noted that his speaking out about his libertarianism has raised more ire than his views on religion (theism), and stated that it's fine to disagree, but that political topics should be open to discussion. This was probably the most controversial talk of the conference, and it, along with Shermer's recent interview on the Point of Inquiry podcast, has led some to argue that skepticism should be apolitical. Shermer said that he's been told that he should be apolitical, "like Carl Sagan," to which he (correctly) responded that Sagan was not apolitical, as he argued for a number of liberal causes, including nuclear disarmament (a cause for which he was twice arrested during protests).

He then turned to some more interesting research, Jonathan Haidt's work on how people make moral judgments. Haidt has hypothesized that we make moral judgments based on five scales, which Shermer compared to "a five-channel moral equalizer":
  1. care: Protection from harm.
  2. fairness: Justice, equality.
  3. loyalty: Family, group, nation.
  4. authority: Respect for law, tradition, and traditional institutions.
  5. purity: Rules about sexual conduct, recognition of sacredness.
Liberals tend to emphasize the first two items, which focus on individual rights, while conservatives weight all five about equally; the last three are oriented toward group cohesiveness. These tendencies seem to hold up across cultures.

Shermer apparently argued that all five of these scales are important, saying that "since 9/11, things have changed," and noting that group loyalty is now getting some emphasis from left-atheists like Richard Dawkins and Sam Harris. Shermer argued that religious extremists are dangerous, and are assisted by religious moderates. I think this is actually a badly mistaken inference to draw. Sure, there are extremists who are out to harm the U.S., but terrorism is a strategy of the militarily weak against the strong, and the right way to combat it is not by doing things like launching an invasion and occupying a country that had nothing to do with 9/11 (Iraq), engaging in torture and abuse, and causing religious moderates to join with the extremists, but rather by a divide-and-conquer strategy that isolates the extremists from the moderates and maintains the moral high ground. (Skeptic and physicist Taner Edis, from Turkey, has criticized Sam Harris for his misunderstanding of Islam, as has Chris Hedges who, despite his sometimes annoying attitude, made some good points on the subject in his Point of Inquiry interview.)

To support his point, Shermer showed a clip from the film "A Few Good Men" in which Jack Nicholson's character defends his decision to order a "Code Red"--self-enforcement within the military ranks to punish a slacker--as an ugly and unpleasant necessity.

Shermer then turned to the Kardashev scale referenced in his title, which classifies civilizations into Type 0 (energy produced from dead plants and animals), Type I (planetary civilizations controlling the energy of an entire planet), Type II (stellar civilizations controlling the energy of an entire sun), and Type III (a civilization controlling all of the energy in an entire galaxy). Shermer gave an ordering from Type 0 to Type II, with tribal communities at 0.3, liberal democracies at 0.8, and then described Type I civilizations as including a global wireless (why wireless?) communication system (the Internet), a global language (English, most likely), a global culture (why not diverse cultures?), and global free trade, which breaks down tribal barriers. He didn't really provide an argument for the details of the how and why, apart from that short defense of global free trade and a little more he said later, pointing to the work of Frédéric Bastiat (Bastiat's axiom: where goods cross frontiers, armies will not), which he augmented with the "Starbucks theory of war" (two nations with Starbucks won't fight each other) and the "Google Theory of Peace" (where information and knowledge cross frontiers, armies will not).

He then cited the work of Rudy Rummel on democracy and war, stating that between 1860 and 2005 there have been 371 wars, of which 205 were between non-democratic nations, 166 were between democracies and non-democracies, and 0 were between democracies. He said that some have challenged the details of the classifications, but that in general, democracies seem to be less likely to engage in war as a means of resolving disputes.

He concluded by saying that rising above tribal instincts is hard, and quoted Katharine Hepburn's line from "The African Queen": "Nature, Mr. Allnut, is what we must rise above."

I didn't get a chance to ask my question in the Q&A, but I went up to Shermer afterward and suggested that the size of the tribal in-group seems to reflect a biological/mathematical limitation of our memories and processing capabilities with respect to the number of combinations of relationships we can track. Anthropologist Robin Dunbar's work on this topic has led to what is called the "Dunbar number" or a "Dunbar circle," which is the number of people you can keep track of and who make up your in-group--about 150. Studies of Facebook users show that even those with thousands of friends still engage in most of their interactions with a group of 150 or fewer. So my question was, in light of that limitation, how can we rise above tribal membership? Shermer's answer was the same one I would have given, which is that although we may still be limited to that number of relationships, today they don't have to be limited by geography, and so the way to "rise above" is to have lots of these small groups. Shermer suggested that we need to avoid any such groups having a political monopoly, but the real concern is how those small groups build coalitions which obtain and exercise political power, and what they try to do with it. I'm not sure there's any getting around the problem of having political institutions which govern vastly larger numbers of people.

My own opinion on whether "skepticism" should be apolitical and avoid religious topics is that skeptical organizations should avoid taking positions on those topics, except where there are clear empirically testable hypotheses. (For example, it should be perfectly legitimate for a skeptical organization to publish an examination of the social and psychological factors that cause people to give credence to crackpots like Orly Taitz and Philip Berg, and their respective bogus Kenyan and Canadian Obama birth certificates--as well as to examine the facts around topics like the "birther" controversy.) Individual skeptics, however, should feel free to argue for whatever positions they hold, while being cognizant of what is within the realm of the empirical and what is more philosophical. I don't think Shermer's talk should have been ruled inappropriate for TAM, though I would have liked to have seen a bit more science and argument in the talk, and I wouldn't want to see a whole bunch of talks that all touched on politics or religion, especially if they all came from a single viewpoint.

(UPDATE: I recently came across something I wrote relevant to this point about ten years ago on Usenet, which I still agree with today:

"The skeptic's position should be, on any issue where there isn't conclusive evidence one way or another, either agnosticism or tentative acceptance of the view that seems to be best supported--but with tolerance for those who accept other views which are also inconclusively supported by the evidence. In other words, there is no and should be no official skeptic's position.

Further, there shouldn't be an official skeptic's position on subjects which are matters of political ideology, religious faith, or metaphysical views on which empirical science is silent.")

Adam Savage
Adam Savage of Mythbusters gave a talk not directly related to skepticism, but to which everyone could relate--a talk about personal failure. He said that he is often asked how he attained his success; his answer is that he didn't follow a straight path and that he had a lot of failures along the way. He began by referring to Aaron Sorkin's "Sports Night," which he called the best 26 hours of television. In an episode of the second season, a billionaire who's going to buy the show says, "Dana. I'm what the world considers to be a phenomenally successful man. And I've failed much more than I've succeeded. And each time I fail, I get my people together, and I say, 'Where are we going?' And it starts to get better. And that's what you should do."

Savage said that he wanted to present the details of how spectacular and painful some of his failures have been. He said that he's been fired from a production assistant job, he's been divorced, and he's yelled at his kids. All of our lives are two steps forward, one step back. He got a job at Industrial Light and Magic, working with his heroes, a job he'd wanted since he was 11. In the SFX industry, everybody is freelance, working on jobs for a time, and always looking for the next. But at ILM, there is no selling required--your resume is just four words: Industrial Light and Magic. And he would also take extra outside jobs.

His friend Ben called him with a job that he couldn't take because of the short turn-around time. A department store wanted, within five days, a window display depicting a ballpark fence, with baseballs automatically and continuously being pitched over the fence.

Savage bid his day rate, $300-$500/day, plus a market-rate rush fee. It was a really fat paycheck for five days' work. He got pitching machines and a ball feeder, built the rig, and watched it work 70 times in a row and then fail. He figured this was a solvable problem. He stayed up all night Friday and Saturday morning trying to get it to work--it was originally supposed to be ready by Saturday, and needed on Monday (?)--and brought it to the store to assemble. It turned out that the size of the display area was different from what he had been told, and in the new setup it was down to 30-40 balls in a row before failure, so it would fail every 3 hours. He observed that there's a reason the displays in airports with balls moving around on tracks use fixed rails rather than tubes like he was using--rails lead to balls moving in a predictable amount of time, while the air resistance in a tube makes the timing unpredictable. So he added an air blower to force the balls down the tubes.

The next problem was that when one pitching machine pitches, it takes more power, which causes the other two machines to slow down, increasing the failure rate. He had relatives coming into town at 6 p.m. on Saturday and it still wasn't working. He came to the conclusion that no amount of effort was going to make it work, and told his employer that in 30 minutes he would present three alternatives and have whichever one they chose implemented by 8 a.m. the next morning. He came up with a new solution using a monofilament chain connected to the balls, simulating the motion of a pitched ball--no pitching machines. He stayed up all night and visited Home Depot repeatedly, and finally got it working with 10 baseballs.

The National Head of Display came to look at the display, and said, "it looks great, but I don't like the balls--get rid of them."

Savage's second story of failure was from earlier in his career, when he "pretended to attend NYU for a year" and then worked with his film student friends on their films. He worked on a friend's film, shot at the Alexis Theater, which ended up winning the NYU Film Festival's best art direction prize. So he thought about becoming an art director, and put his name out. He was asked to work on his friend Gabby's film, which had an $850 budget. He needed to build a set of a room with a glass door and an ATM in it, which he figured he could do with wood frames and canvas for the walls, a computer shell as the ATM, and a plexiglass door.

He never asked for help.

He worked Wednesday through Saturday morning, without sleep for 60 hours, and wasn't close. The screen on the ATM cracked--he figured, it's supposed to be an urban environment, it will be fine. He didn't pre-prep the canvas, so it all became horribly wrinkled. For the floor, he put linoleum down over the carpet of the apartment where the set was being built. At some point, a member of the crew asked him, "Do you even know what you're doing?" He responded with what he thought was a clever line from Raiders of the Lost Ark, "I don't know, I'm making this up as I go along." The response from the crew member: "Go home." So he did.

The following Monday, he went to the set to pick up his toolbox, and it wasn't there. There was a note that said "We have your toolbox. Call me. Gabby." He called her, and she said, "What did you do to me? You screwed me. You pissed away the money. If you could do anything to destroy our friendship, this is it. I want you to account for every penny." He cried and called his father, who told him, "All you can do is move forward." He went and met with Gabby, and accounted for every penny that he had spent. She then said, "The crew is next door, and they want to talk to you."

He went to the room next door, and found a dark room with a chair in the middle, with a spotlight focused on it. He sat in it, and the director read from a pad of paper all of the things that Adam had said he would do, but didn't. This litany of offenses was periodically interrupted by a member of the crew adding something, like the fact that the linoleum he put down ruined the carpet in the apartment. There was also one point during the work where Savage was across town having sex instead of working on the set, and somehow the crew knew about that, too, and brought it up.

Finally, they asked him what he had to say for himself. He simply agreed--"You're absolutely right. I screwed up. I'm sorry." He added four meta-levels of sorry, and said that he knows it doesn't mean or help anything. At that point, the director said, after a long pause--"look, we're not trying to bring you down or anything."

Savage then quoted, from memory, from Ian McEwan's Enduring Love, which begins with four people in a public park running towards a balloon accident. In the opening, he writes something like "running towards a catastrophe, a kind of furnace in which our characters would be buckled into new shapes."

He said that he doesn't trust working with people who don't know or understand failure--failure builds character. And whatever you think now (about anything?), you're probably wrong.

He ended first by reading from Rilke's Letters to a Young Poet, which went something like this: "We find our moments of sadness terrifying because we are standing in a place we cannot stand. It's important to be lonely and attentive when one is sad, because that is when you learn." And then by saying that his favorite fictional character is Raymond Chandler's Philip Marlowe, because Chandler so clearly describes his flaws and foibles. He said that if the world were full of people like Marlowe, the world would be a safer place, but not boring.

There followed a Q&A; most of the questions were about Mythbusters, except for one about Rilke's hatred of Rodin (and Rilke's writing "what is fame but a collection of misunderstandings about a name?") and another which Savage answered by describing his "boyhood dream" of winning an Ig Nobel Prize for writing a taxonomy of nonsense words for large and small numbers.

(Savage gave a similar talk at Defcon 17, available online.)

(Click on the link to continue to a summary of the rest of the Saturday sessions at TAM7--a panel on the ethics of deception, the Skeptical Citizen Award, a Jerry Andrus video, Stephen Bauer's talk on Jerry Andrus and his estate, a panel on skepticism and the media, Phil Plait on Doomsday 2012, and a JREF update.)

Wednesday, June 24, 2009

John Wilkins on atheism and agnosticism

John Wilkins has written a blog post on definitions of atheism and agnosticism, in which he suggests that the definition of atheism has been shifting of late (and encroaching upon agnosticism's territory). His discussion and that which follows in the comments is well worth reading.

Tuesday, June 09, 2009

A code of conduct for effective rational discussion

John Wilkins sets out "a code of conduct for effective rational discussion," a list of principles for debate and discussion that aims at approaching truth rather than winning a rhetorical battle, at the new location of his Evolving Thoughts blog.

The list of proposed principles is:
  1. The Fallibility Principle
  2. The Truth-Seeking Principle
  3. The Clarity Principle
  4. The Burden of Proof Principle
  5. The Principle of Charity
  6. The Relevance Principle
  7. The Acceptability Principle
  8. The Sufficiency Principle
  9. The Rebuttal Principle
  10. The Resolution Principle
  11. The Suspension of Judgement Principle
  12. The Reconsideration Principle
  13. Fleck’s Addendum
Check out Evolving Thoughts for discussion of each of these principles.

Thursday, February 19, 2009

Daniel Dennett at ASU


Last night, Daniel Dennett gave the 2009 Beyond Center lecture, a talk appropriate for the bicentennial of Charles Darwin's birth, titled "Darwin's 'Strange Inversion of Reasoning.'" While not quite drawing the crowd that last year's lecture by Richard Dawkins did (3000 people at Gammage Auditorium), Dennett filled the 485-seat Galvin Playhouse, and an overflow room was set up with a video link. The Phoenix Atheists Meetup group alone had about 57 members in attendance.

The talk was videotaped by the Beyond Center, and what may be an unauthorized video has been made available on YouTube.

Skyhooks and Cranes
The content of Dennett's talk was largely drawn from his book, Darwin's Dangerous Idea, and centered on the idea that Darwin brought about a change from thinking of the world as the product of top-down design to a recognition of apparent design as the result of bottom-up processes. Dennett referred to the former as the "trickle-down theory of creation" and the latter as the "bubble-up theory of creation," and used his "intuition pump" of skyhooks vs. cranes to make the point.

"Skyhooks" are explanations of design in terms of miraculous intervention by an entity which itself has no explanation, a deus ex machina. Dennett illustrated that with the drawing above, a Guy Billout illustration titled "Deus ex Machina," from the May 1999 issue of The Atlantic Monthly. By contrast, "cranes" are built up from the ground to provide scaffolding for constructing new things. The dome of the Florence Cathedral (Santa Maria del Fiore), depicted in Billout's illustration, was a marvel of engineering by Filippo Brunelleschi, which used some innovative construction techniques to build something that many thought was not possible.

Darwin's "Strange Inversion of Reasoning"
The title of Dennett's talk came from an 1868 critique of Darwin's theory of natural selection by Robert Beverley MacKenzie, who wrote (as quoted by Dennett in DDI, p. 65):
In the theory with which we have to deal, Absolute Ignorance is the artificer; so that we may enunciate as the fundamental principle of the whole system that, IN ORDER TO MAKE A PERFECT AND BEAUTIFUL MACHINE, IT IS NOT REQUISITE TO KNOW HOW TO MAKE IT. This proposition will be found, on careful examination, to express, in condensed form, the essential purport of the Theory, and to express in a few words all Mr. Darwin's meaning; who, by a strange inversion of reasoning, seems to think Absolute Ignorance fully qualified to take the place of Absolute Wisdom in all the achievements of creative skill.
To which Dennett's response was: "Exactly!" He illustrated the point with an example that is now somewhat commonplace, the computer. Dennett observed that prior to Alan Turing, "computers" referred to people who were hired to perform tasks that today are performed by mechanical devices with the same name. In order to perform these functions, people had to understand arithmetic. Dennett cited Turing's 1936 paper, "On computable numbers, with an application to the Entscheidungsproblem" (PDF), a demonstration that arithmetic computation is a specific case where, in fact, understanding is not required to perform the action--another example of the same kind of "strange inversion of reasoning." Dennett quotes Turing: "The behaviour of the computer [meaning a person] at any moment is determined by the symbols which he is observing and his 'state of mind' at that moment," noting that "state of mind" is in quotes because Turing's showing a method by which no mental activity or understanding is actually required. Substituting into MacKenzie's argument, we get "IN ORDER TO BE A PERFECT AND BEAUTIFUL COMPUTING MACHINE, IT IS NOT REQUISITE TO KNOW WHAT ARITHMETIC IS."
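To make the "competence without comprehension" point concrete, here is a minimal sketch of my own (not anything presented in Dennett's talk or in Turing's paper): a tiny table-driven, Turing-style machine, written in Python, that adds two numbers written in unary. It does nothing but look up (state, symbol) pairs and follow whatever rule it finds; nothing in it "knows" what addition is.

  # Hypothetical illustration: a table-driven machine that adds unary numbers,
  # e.g., "111+11" -> "11111". Each rule maps (state, symbol read) to
  # (new state, symbol to write, head movement).
  RULES = {
      ("scan", "1"): ("scan", "1", +1),   # skip over the 1s
      ("scan", "+"): ("scan", "1", +1),   # overwrite the plus sign with a 1
      ("scan", "_"): ("erase", "_", -1),  # hit the blank at the end; back up
      ("erase", "1"): ("done", "_", 0),   # erase one trailing 1 to fix the count
  }

  def run(tape):
      tape = list(tape) + ["_"]           # blank-terminated tape
      state, head = "scan", 0
      while state != "done":
          state, tape[head], move = RULES[(state, tape[head])]
          head += move
      return "".join(tape).strip("_")

  print(run("111+11"))                    # prints 11111, i.e., 3 + 2 = 5

The machine gets the right answer for the same reason Turing's human "computer" can be replaced by a machine: the rules alone suffice, and no understanding of arithmetic is anywhere in the loop.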

Creationists and Mind-Creationists
Dennett observed that many people cannot abide Darwin, and we call them creationists. There are also people who can't abide Turing, and he suggests we call them mind-creationists. (Steve Novella's presentation at last year's The Amazing Meeting, on "Dualism and Creationism," drew this same analogy.) Dennett said that there are some people who can't abide either--including both Jerry Fodor and Thomas Nagel, referring to Nagel's paper "Public Education and Intelligent Design" in Philosophy and Public Affairs vol. 36, no. 2. I think Dennett mischaracterizes Nagel's position here--Nagel is an atheist who thinks that we don't have the full account of evolutionary theory, and who also thinks that if a god exists, there's no reason to think science couldn't study such a being and its effects. I agree with Nagel about that--methodological naturalism could potentially find its own limits and suggest the existence of entities that operate independently of the laws of physics we've discovered. I think we'd end up just modifying our understanding of those laws and continuing to call the result "natural." Jake Young, at the Pure Pedantry ScienceBlog, argues otherwise, defending Stephen Jay Gould's "Nonoverlapping Magisteria" (NOMA), the view that science and religion are completely distinct subjects with no intersection, a view I find implausible unless religion is restricted to matters that are completely unobservable and have no causal consequences in the empirical world--which is not the case for any actual religion that I'm aware of.

A few of the "mind-creationists" Dennett pointed out were Jerry Fodor and John Searle. Another is Victor Reppert, author of C.S. Lewis's Dangerous Idea: In Defense of the Argument from Reason, the main argument of which I criticized in a short paper ("Historical But Indistinguishable Differences: Some Notes on Victor Reppert's Paper," Philo vol. 2, no. 1, 1999, pp. 45-47). Reppert's position is that Turing machines don't actually do arithmetic, because they have no semantics, only syntax, and that you only get meaning through original intentionality of the sort that John Searle argues is an irreducible feature of the world. Computers only have semantics when we impute it to them. My argument was that if you have two possible worlds that are exactly alike, except that one was created by a top-down designer and one evolved, there's no reason to say that one has semantics and the other one doesn't--how they got to the point at which they have creatures with internal representations that stand in the right causal relationships to the external world doesn't make a difference to whether or not those representations actually refer and have meaning. [UPDATE (March 3, 2009): Victor Reppert says I've misdescribed his position and elaborates a bit at his own blog.]

Hunting for Skyhooks
Dennett observed that people's issues with bubble-up theories of creation and design center around the fact that some designs seem to be too remarkable to have evolved. Michael Behe's notion of "irreducible complexity" is the idea that some structures require all of their parts in place to function at all, and cannot evolve step-by-step from a previous structure that doesn't also have all of those parts. (The mistake there is that the previous structure may have some other function.) So those arguing for intelligent design have gone "hunting for skyhooks," to try to find examples of design in nature that require a top-down designer's intervening hand to bring into existence. Dennett observed that all of the hunting for skyhooks has failed to come up with any actual examples, but instead has resulted in multiple new discoveries of cranes. This is certainly true for the main examples of "irreducible complexity," blood clotting systems and bacterial flagella. This has led to the quip, "evolution is cleverer than you are," which Dennett discussed in the Q&A as "Orgel's Second Rule."

Another example Dennett gave was the discovery of motor proteins, which he showed using a clip from the film "The Inner Life of the Cell," produced by XVIVO for Harvard University. Dennett didn't mention that this film was the subject of a controversy regarding the film "Expelled," pre-release versions of which used XVIVO footage without permission. Earlier still, intelligent design advocate William Dembski used an overdubbed version of their film in his lectures.

The Bubble-up Path
"We are made of trillions of mindless little robots," Dennett said, "but not a one of them knows who you are or cares." But we do know, and we do care. How is that possible? The bubble-up view has to provide an explanation. Dennett provided some examples of how certain evolutionary changes in the past have created entirely new ways for evolution to proceed. His first example was one that was championed for years by Lynn Margulis to much resistance, but which has now become mainstream, which is the idea of a symbiotic origin for eukaryotes.

For the first 2.5 billion years of life, everything was prokaryotic--single-celled organisms without a nucleus. But then one form of single-celled organism invaded another, neither destroyed the other, and the two came to evolve together, forming eukaryotic life. Each of our cells has not only its own genome in the cell nucleus, but also a separate genome in its mitochondria, which is inherited only from our mothers. This development allowed cells to become more complex and versatile, and allowed a division of labor that made multicellular life possible.

The Need-to-Know Principle
Dennett showed a video clip about the cuckoo (the link is to a different but similar one). The mother bird lays her egg in the nest of another bird, and removes one of the other bird's eggs. The other bird is then surprised to find that one of its eggs--the cuckoo's egg--hatches first, and the hatchling pushes the other eggs out of the nest. It seems evil, Dennett said, but "don't worry, the cuckoo chick doesn't know what it does. It doesn't need to know."

A principle something like the CIA's need-to-know principle applies in evolution as a matter of thrift, but matters are often confused because biologists, when explaining a feature of living things, tend to attribute more understanding than actually exists. This, Dennett says, is partly a linguistic matter, because we don't have a word for a "semi-understood quasi-representation" or a "hemi-semi-demi-understood quasi-representation." But Turing does give us models of competence without comprehension.

He then showed a video of a New Caledonian crow trying to use a bit of metal wire to get a worm out of a glass beaker. The crow bends the wire around the glass to make it into a hook, then uses it to fish the worm out of the beaker. This was an example of a creature that goes a step beyond the cuckoo chick. Dennett cited the work of Ruth Millikan, noting that the crow is an example of an animal that represents its goals in the same system in which it represents its facts--but not its reasons for those goals, which are produced by evolution and not represented within the organism.

The MacCready Explosion and Memes
Dennett observed that it has been about 3.5 billion years since the start of the whole tree of life, and only about 6 million years since the divergence of humans from chimps and bonobos, our closest hominid relatives. But a mere 10,000 years ago, as Paul MacCready pointed out, the total human population plus livestock and pets composed about a tenth of one percent of the terrestrial vertebrate biomass. Today, however, we make up about 98% of it (most of which is cattle).

The Cambrian "explosion" in which multicellular life became dramatically more diversified took place over millions of years, while the "MacCready explosion" took place over a mere 500 generations, and the explanation is science and technology, communicated from parents to children not by biological evolution but through culture.

Here Dennett gave an introduction to memes by analogy--the cultural highway of transmission of ideas, once it exists, can be invaded by "rogue cultural variants," or "memes," as Richard Dawkins originally called them. They are vehicles of information, like viruses, that invade our brains.

He then paused for a "skeptical interlude" to address the question of what evidence there is that memes even exist. He asked, "do you believe that words exist?" If so, then those are examples of a subset of memes, those that can be pronounced. (I'm not sure of the practical benefit of talk of memes as opposed to ideas, concepts, and language, but I'll save commentary on that until I read the meme chapters in DDI.)

So, said Dennett, we are apes with "infected" brains, or, on analogy to prokaryotes/eukaryotes, we are "euprimates." We carry with us virtual machines that give us new powers and versatility to bring organization of the world up another level.

Mind Tools
Dennett quoted one of his own students, Bo Dahlbom, who wrote, "Just as you cannot do very much carpentry with your bare hands, there is not much thinking you can do with your bare brain." We have conceptual tools and methods. At the very simplest level, there are words as tools, such as passwords or labels. Douglas Hofstadter's I Am a Strange Loop identifies a bunch of phrases that are frequently used as tools for analogies, such as "wild goose chases," "tackiness," "loose cannons," "feet of clay," "feedback," "slamdunks," "lip service," and "elbow grease." Dennett compared these to Java applets for the mind--collections of information transmitted from one person to another that allow them to do something more.

Long division is a more complex example. With a sufficiently well developed English (or other language) "virtual machine," you can "download" the procedure in the form of mathematical instruction or from a book, to be able to perform the process. Cost-benefit analysis is a bigger, more complex set of tools learned in the same way.
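As a rough illustration of the kind of thing that gets "downloaded" (my own sketch, not an example from Dennett's talk), the grade-school long division procedure is just a short list of mechanical steps that can be transmitted in symbols and then followed without any insight into why they work:

  # Hypothetical sketch of long division as a transmissible procedure:
  # bring down a digit, ask how many times the divisor goes in,
  # carry the remainder forward, repeat.
  def long_division(dividend, divisor):
      quotient_digits = []
      remainder = 0
      for digit in str(dividend):                            # bring down the next digit
          remainder = remainder * 10 + int(digit)
          quotient_digits.append(str(remainder // divisor))  # how many times it goes in
          remainder = remainder % divisor                    # carry the remainder forward
      return int("".join(quotient_digits)), remainder

  print(long_division(7325, 6))                              # prints (1220, 5): 6 * 1220 + 5 = 7325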

While some such tools have distinct authors, others have evolved. Language itself, money, and tonal music are examples of such mental tools that were not created at once by individual authors, but have evolved over time.

What this implies for who we are is that we are not Cartesian egos with original intentionality, but "an alliance of hemi-semi-demi-understood virtual machines."

Darwin's Trio
Darwin proposed three types of selection. The first two are types in which the selective force is human beings: 1. Methodical selection, or intentional artificial selection, where humans deliberately breed creatures for particular characteristics, and 2. Unconscious selection, where humans simply prefer certain organisms to others and help those reproduce--as in farming and raising domestic animals. To those, Darwin added 3. Natural selection.

Now we've also added 4. Genetic engineering.

And the same categories can be applied to memes. There are original, synanthropic memes, those which live with us but are not domesticated, such as superstitions; these are analogous to memes created by natural selection. There are memes replicated by unconscious selection, such as differential replication of tunes based on how catchy they are. Dennett noted that the Germans call tunes that get stuck in your head "earworms." And then there is methodical selection of domesticated memes, which would include science, literature, and calculus. Dennett compared calculus to laying hens, for which broodiness has been selected out--you have to work hard to get it to reproduce.

And to these categories we can add memetic engineering--spin-doctoring, marketing, propaganda, etc.

Bootstrapping
Dennett asked, how do you draw a straight line? We use a straight edge. And how do we make straight edges? By drawing a line along a piece of metal with a straight edge, and cutting it. How do we get the first straight edge? He pointed to a book on the history of straight edges, and observed that over time we have gradually improved our technology for making straight edges, and can now measure far more precisely how we fall short in reaching the unattainable goal of a perfectly straight line. We can represent our goal, our reasons for achieving the goal, and the imperfections and errors in reaching that goal.

He suggested that the Platonic "form of the true" has a similar history, and that in science "memes have been selected for veridicality."

At this point, we really do have the capacity for genuine top-down design.

Dennett concluded his talk (apart from the next section, which seemed more like an afterword) by stating that "What makes us human is not our genetic children, but our brainchildren. We've finally reached genuine intelligent design."

Darwin Fish
Dennett concluded his prepared lecture by pointing out that he was wearing a Darwin fish lapel pin. The physicist Murray Gell-Mann observed to Dennett that this was patterned after the Jesus fish, a fish symbol containing the Greek word for fish--apparently the first acronym. The Greek letters ΙΧΘΥΣ stand for the Greek words for Jesus Christ, God's Son, the Savior, said Gell-Mann. But what does "DARWIN" stand for?

Dennett took that as a challenge, and came up with a Latin expansion for "DARUUIN" (since there is no letter "W" in Latin):

Delere
Auctorem
Rerum
Ut Universum
Infinitum
Noscas

This translates into English as

Destroy
the author
of things
in order to understand
the infinite
universe

I'm not too fond of this--it confirms anti-evolutionists' worst fears of evolution, and refers to an "author of things" to be destroyed, as though there is one that exists, rather than a myth not to be believed. It's clever, though.

UPDATE (February 20, 2009):

Dennett then answered a few brief questions, and then signed a bunch of books. The first question (and the only one I'll note) was what it was like to work with W.V. Quine, his mentor. Dennett said that he transferred to Harvard University as an undergraduate specifically to work with Quine, and that two of the most significant influences from Quine were the view that science and philosophy are significantly overlapping and parts of the same larger project, and that the quality of Quine's writing (in contrast to his lecture style) was something to aspire to.

He's well-spoken, entertaining, and thought-provoking, and I encourage you to hear him speak if you have the opportunity. I think that his view, like Richard Dawkins's, is that science, and evolution in particular, either implies atheism or at least coheres better with it and provides evidence for it. I don't think there is a logical implication, and I'm not sure Dennett and Dawkins think so, either--the claim of implication is something that anti-evolutionist lawyer Phillip Johnson has argued, which I've critiqued at the talkorigins.org website, and which the views of Christian evolutionists like Kenneth Miller, Glenn Morton, and Mike Beidler contradict by their very existence. On the other hand, I'm not sure Miller's position is coherent (I really should get around to writing a summary of last year's Skeptics Society conference), and I reject the NOMA view that there is no overlap between the domains of religion and science; I agree with Dennett's and Quine's views that there is significant overlap between science and philosophy (and history, for that matter).

The National Center for Science Education and many scientists argue for a sharp divide between science and philosophy, and between science and religion, and find cases like those made by Dawkins and Dennett (and P.Z. Myers) to be problematic, especially when it comes to the legal arena and the goal of keeping intelligent design and creationism out of the public schools (though public universities have more freedom). I think that this is ultimately due to a tension between the principles of separation of church and state, public education, and academic freedom, given that there is no sharp divide between the domains of science and religion (or science and philosophy). In my view, in any case where a religion makes an empirical claim, if there's scientific evidence against that claim, it should be legitimate to discuss that scientific evidence in a public school classroom even if that has the primary effect of inhibiting (or promoting) religion (violating the second prong of the Lemon Test for measuring whether a government action is a violation of the Constitution's establishment clause). I consider it a flaw in the Lemon Test that people can always create new religions which attempt to turn secular ideas into religious content with the specific intent of turning government actions into church-state violations (e.g., creating a doctrine that paying taxes is a sin), as well as the fact that it provides an unwarranted immunity to criticism in the classroom for religious claims, even if they are empirically falsified or conceptually incoherent. (See the comments of this Ed Brayton post at Dispatches from the Culture Wars on the Summum monument case for some legal puzzles. BTW, Justice O'Connor argued for a different test in Lynch v. Donnelly, the "endorsement test," which asks whether a reasonable person would conclude from the action that the government is endorsing or disapproving of religion. This has sometimes been interpreted as a complement to the Lemon Test, and sometimes as a substitute for it. Judge Jones in the Dover case applied both the endorsement test and the Lemon Test, and argued that the Dover school district violated both, including all three prongs of Lemon.)

Another resolution is to finesse the issue by getting government out of the business of being a direct provider of education, and instead meeting the goal of free public education by providing government funding and standards that include mandatory curriculum requirements, which any school could exceed with content that expresses particular religious viewpoints. If a fixed amount of per-pupil funding is tied to a mandatory minimum curriculum that doesn't include religious content, then anything beyond that curriculum would be considered to be provided at the school's own expense, and thus not a church-state violation. In my view, more discussion and debate of religious claims at a younger age will yield better-educated adults (and probably more atheists). Ironically, it is western democracies without a strong history of separation of church and state where religion is weakest and acceptance of evolution is strongest.

Without finessing the problem like that or modifying the Lemon Test, views like those of Dennett and Dawkins must be excluded from public school classrooms along with creationism for the same reasons (to the extent that they express a religious viewpoint), and I think that the "exploring evolution" or "academic freedom" strategies of the creationists for getting critiques of evolution into the public school classrooms will eventually succeed in passing constitutional muster. Ultimately, the reason their arguments should be excluded from science classrooms is not that they are religious, but that they are bad arguments, and there's no constitutional provision prohibiting the establishment of bad arguments.