Friday, October 30, 2009
Robert Balling on climate change
He began his talk by saying that in 1957, measurements of CO2 began to be made at Mauna Loa (by Charles David Keeling), which established that CO2 is increasing in our atmosphere, largely because of human activity--fossil fuel emissions. It's approaching 390 parts per million (ppm). Last weekend, the "A" on A Mountain near the university was painted green by a bunch of people wearing shirts that say "350" on them, because they want atmospheric CO2 to be stabilized at 350 ppm, roughly the level in 1990, the benchmark year for the Kyoto Protocol.
But this isn't remotely feasible, he said, citing the Intergovernmental Panel on Climate Change (IPCC). Even the most optimistic scenario in the IPCC Report has atmospheric carbon continuing to rise until 2100, hitting about 600 ppm. If we reduced emissions to 0, the best case would end up with stabilization at around 450 ppm. Our lifetime will see increasing CO2 levels, no matter what we do. (In other words, the Kyoto benchmark sets a standard for emissions levels to return to, not for a level of atmospheric carbon to return to.)
If you look at the earth's history on a longer scale, atmospheric carbon has been much higher in the past--it was at about 2500 ppm during the age of the dinosaurs. During the last 600,000 years, however, it has been much lower, and fell below 200 ppm in the last glacial period. This, Balling said, points to what he would identify as a dangerous level of CO2--falling below 160 ppm, at which point plants die.
Humans are also producing other greenhouse gases besides CO2 that have an effect, such as methane and nitrous oxide (N2O), he said.
At this point, he said the greenhouse effect is real--CO2 doubling causes warming--and this has been known for 120 years and "nobody is denying that."
There are climate models, which he said he has great respect for--it's basic physics plus fantastic computing and applied math. Climate modelers, he said, are their own worst critics. Problems for climate models include clouds, water vapor, rain, and the ocean, but lots of things are modeled correctly and the results are generally pretty good. Clouds, he said, are the biggest area of debate. The IPCC models say that clouds amplify warming, but satellite-based measurements suggest that clouds dampen (but don't eliminate) warming. Thus, he concluded, IPCC may be predicting more warming than will actually occur.
He next discussed empirical support for warming, and pointed out that the official plot of global temperatures has no error bars, and that the reported numbers come from sensors that don't cover the entire world. A global average can be computed in different ways, and the different methods produce different results. You can take grid cells, average by latitudinal bands, get two hemispheric averages, and average them together. Or you can simply average all of the data we have. He said that Roger Pielke Sr. questions the use of average temperatures and suggests looking at afternoon high temperatures instead. Looking at the older end of the chart, he asked, "where were the sensors in 1900? Why no error bars?"
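To make concrete why the choice of averaging method matters, here is a minimal sketch--not Balling's or any agency's actual procedure--using made-up gridded anomalies. Grid cells near the poles cover much less surface area than cells near the equator, so a naive average over cells weights high latitudes too heavily compared with an area-weighted or hemispheric average.

```python
import numpy as np

# Illustrative only: a made-up "temperature anomaly" field on a 10-degree
# grid. Real products (GISTEMP, HadCRUT, the satellite analyses) start from
# station and ocean data and do far more; this just shows that the choice
# of averaging scheme changes the answer.
rng = np.random.default_rng(0)
lats = np.arange(-85, 90, 10)                    # grid-cell center latitudes
lons = np.arange(-175, 180, 10)                  # grid-cell center longitudes
anomalies = rng.normal(0.3, 0.5, size=(lats.size, lons.size))
anomalies += 0.02 * np.abs(lats)[:, None]        # pretend high latitudes warmed more

naive_mean = anomalies.mean()                    # every cell counted equally

# Weight each cell by cos(latitude), proportional to its surface area.
weights = np.cos(np.radians(lats))[:, None] * np.ones_like(anomalies)
area_weighted_mean = np.average(anomalies, weights=weights)

# A third convention: average each hemisphere, then average the two.
nh = np.average(anomalies[lats > 0], weights=weights[lats > 0])
sh = np.average(anomalies[lats < 0], weights=weights[lats < 0])
hemispheric_mean = (nh + sh) / 2

print(f"naive cell average:    {naive_mean:.3f} C")
print(f"area-weighted average: {area_weighted_mean:.3f} C")
print(f"hemispheric average:   {hemispheric_mean:.3f} C")
```

With the exaggerated polar warming built into the fake data, the naive cell average comes out higher than the area-weighted one, which is the kind of method-dependent spread Balling was pointing to.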
He asked, "Is the earth warming," and said "right now the earth is not warming. I expect it to keep going up, but over the last decade there's been essentially none." He pointed to a recent article in Science magazine, "What happened to global warming?" Many are writing about this, he said, and there could be "1001 different things including sun and oceanic processes." (I don't believe this is correct unless you measure from 1998, which was an El Nino year. Most of the top 10 warmest years in history are post-1998.)
"Scientists are questioning the data," he said, showing photos from Anthony Watts' blog of poorly situated weather stations. "The albedo of the shelter in my backyard has changed as it has decayed," and caused it to report warmer temperatures. He said that people are having a field day taking photos of poor official sites. (What Balling didn't say is that what's important in the data is not absolute temperature but the temperature trends, and the good sites and bad sites both show the same trends.)
He pointed out that there are corrections to the temperature record based on time of measurement, urban heat islands, instruments used, etc. If you look at the raw data for the U.S. from 1979-2000, you see 0.25 degrees Celsius of warming. Sonde data shows 0.24 degrees, MSU's measurements show 0.28, IPCC shows 0.28, and FILNET shows 0.33. He suggested that these corrections to the official data may be inflating the temperature (again, see my previous comment on trends vs. absolute temperature). Sky Harbor Airport produces the official temperature readings for Phoenix, maximizing the urban heat island effect. Many of the city records are from the worst sites, and he suggested that looking at rural temperatures might give a different result.
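A small sketch of the trend point made above: a station with a constant warm bias reports the wrong absolute temperatures but the same least-squares trend. The numbers and stations below are invented for illustration; this is not any agency's adjustment procedure.

```python
import numpy as np

# Illustrative only: a "good" station and a "bad" station that reads a
# constant 2 C too warm (say, poor siting). The absolute readings differ,
# but the least-squares warming trend is identical.
years = np.arange(1979, 2001)
true_slope = 0.25 / (years[-1] - years[0])       # ~0.25 C over 1979-2000
noise = np.random.default_rng(1).normal(0, 0.1, years.size)
good_station = true_slope * (years - years[0]) + noise
bad_station = good_station + 2.0                 # constant warm bias

def trend_per_decade(t, y):
    return np.polyfit(t, y, 1)[0] * 10           # slope in degrees per decade

print(f"good station trend: {trend_per_decade(years, good_station):.3f} C/decade")
print(f"bad station trend:  {trend_per_decade(years, bad_station):.3f} C/decade")

# A bias that changes over time (a decaying shelter, a station move, a shift
# in time of observation) is what actually distorts the trend, and that is
# what the adjustments listed above are meant to correct.
```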
Another factor is stratospheric turbidity from volcanic eruptions, and he showed a plot of orbital temperatures from satellites vs. stratospheric turbidity. He said that volcanism accounts for about 30% of the trend variability.
The big player in the game, he said, is the sun. Solar irradiance measures showed a significant decline in solar output in 1980, but earth temperature continued upward--he said he mentioned this because he thought it would be used as an objection. In response, he said that "the sun doesn't increase or decrease output over the entire spectrum and there are interactions with stratospheric clouds." He said that there are astrophysicists who argue that this is the major cause of global warming. In the Q&A, he said that there's one group that thinks cosmic ray flux is the major factor in global temperature because it stimulates cloud formation, while another group says that cosmic ray flux is little more than a trivial effect. He also said that this debate takes place in journals that "I find very difficult to read."
There are other confounding variables like El Nino and La Nina, but he said there has "definitely been warming over the last three decades with a discernable human contribution."
He put up a graph of the Vostok reconstruction of temperature based on ice core data, on a chart showing temperature changes from -10 to +4 degrees Celsius (most of them in the negative direction), and said we've seen periodic rapid changes up and down without any human contribution.
He talked about the IPCC "hockey stick" graph from 2001, which led to a huge debate about the possibility of bad statistical methods that guaranteed the hockey stick shape. He observed that 1,000 years ago it was as warm as or warmer than today--the Medieval Warm Period--which was missing from the hockey stick graph. A "Little Ice Age" was also missing from the graph. He said the IPCC has backed away from the hockey stick, and its most recent report includes clear Medieval Warm Period and Little Ice Age periods in its graphs.
He showed a photo of a microwave sounding unit for temperature measurement, and the polar satellite record from 1978 to the present, which showed a big peak in 1998 from El Nino. He said that when he wrote his book in 1992 he said there was no warming, and at the time that was true. After 1998, the temperature came back down quickly, and, he said, the satellite record, like the Science article, hasn't seen warming. He then corrected himself to say, "well, some warming, but not consistent with the IPCC models."
He said there has been high latitude warming, and the difference between winter and summer warming has supported the numeric models. "But a problem has evolved, which is the most powerful argument of the skeptics." The models predict warming at the surface that increases with altitude up into the atmosphere. "There should be very strong warming in the middle of the atmosphere, but it's not in the data." This is the main anti-global warming argument of Joanne Nova's "The Skeptics Handbook," which has been distributed to churches throughout the U.S. by the Heartland Institute (an organization supported by the oil industry that has sometimes gotten into trouble due to its carelessness).
At this point, Balling started asking various questions and answering them by quoting from the IPCC reports:
More hurricanes? The IPCC doesn't say this. He cited the 1990, 1996, 2001 (executive summary, p. 5), and 2007 (p. 6) reports, all of which say that there's no indication or no clear trend of increase or decrease of frequency or intensity of hurricanes or tropical cyclones as a result of warming.
The southwestern United States may become drier? Here, he answered affirmatively, pointing out that an ASU professor has an article that just came out in Science on this topic. Atmospheric circulation is decreasing, and soil moisture measures show the southwest is becoming drier. On this, he said, there's "evidence everywhere," and the Colorado River basin in particular is being hit hard. And this is consistent with IPCC predictions. He cited Roy Spencer to say that "extraordinary predictions require extraordinary evidence." (This actually comes from Carl Sagan, who said "extraordinary claims require extraordinary evidence" in Cosmos.)
Frequency of tornadoes? It's down, not up, and IPCC 2007 p. 308 says there is no evidence to draw general conclusions.
Ice caps are melting? Balling said Arctic, yes; Antarctic, no. He cited the IPCC 2007 p. 6 regarding Antarctic sea ice extent indicating a lack of warming, and p. 13 that it's too cold for widespread surface melting. He contrasted this with a slide of a homeless penguin used to argue for action on global warming. The Arctic ice cap "has its problems," he said, and its extent has declined though has "rebounded a bit" recently. (In the Q&A, he said that half of the loss in the last six years has been recovered.) He said that experts in sea ice extent identify relative temperature, ocean currents, and wind as more important than temperature--"it's not a thermometer of the planet." In the past, northern sea ice has dropped as southern sea ice has increased, with the overall global extent of sea ice relatively unchanged. In the Q&A, he made it clear that he wasn't saying that temperature plays no role, but that global average temperature is not what drives sea ice extent, and that temperature matters less than the other factors he identified.
Sea levels changing? He said there's no doubt about this, but the important question is whether the rise is accelerating. He cited Church et al., J. Climate 2004, p. 2624 for a claim of "no detectible secular increase" in the rate of sea level rise, but noted that another article this week says that there is. IPCC 2007, p. 5 says it is unclear whether the increase reflects a longer-term trend. The average has been 1.8 mm/year, but with variable rates of change. IPCC 2007, p. 9 says that 125,000 years ago sea levels were likely 4-6m higher than in the 20th century, due to retreat of polar ice.
He said that ice melting on Kilimanjaro has been a "poster child" for global warming, but that the mountain's ice "started its retreat over 100 years ago," at the end of the Little Ice Age (1600-1850), and that the decline is related to deforestation and ocean patterns in the Indian Ocean rather than to global warming. It's not in an area where significant warming is expected by climate models, and local temperatures don't show it.
He then talked about a few factors that cause temperature forcing in a negative direction (i.e., cooling)--SO2, which makes clouds last longer, increased dust, and ozone thinning. He said that his entry into the IPCC was his work at the U.S./Mexico border where he found that overgrazed land on the Mexican side caused warming, and it was much cooler on the U.S. side of the border. The dust, however, had a global cooling effect.
The 2001 IPCC report lists global radiative forcings in the negative direction: stratospheric ozone, sulfate, biomass burning, and mineral aerosols. Forcings in the positive direction include CO2 and solar irradiance. The 2007 report adds many more, including contrails from aircraft. A chart from the report lists the level of scientific understanding for each factor, and he observed that it's "low" for solar irradiance.
He cited a quote from James Hansen (Proc. Nat. Acad. Sci. p. 12,753, 1998) saying that we can't predict the long term, and said he agrees.
He observed that the Pew Foundation poll for Sep. 30-Oct. 4 asked Americans if they think there is evidence of global warming being caused by humans and only 36% said yes--he said he's one of those 36%.
He concluded by observing that if you look at the difference between doing nothing at all and stabilizing emissions at 1990 levels starting in 1990, it amounts to only a few hundredths of a degree of temperature in 2050--so no matter what we do, "we won't live long enough to see any difference."
In the Q&A session, Prof. Billie Turner said that "our academy is about to issue a statement that we are 97% sure that we will not be at a 1-degree Celsius increase but a 2-degree Celsius increase by 2050" (or about double what Balling's final slide showed). He objected that Balling's talk began with the "lunatic green fringe" and contrasted it with the IPCC, which he said would be like him beginning a talk with Dick Cheney's views before giving his own. He said this may be an effective format, but it "gives a slant on the problem that isn't real in the expert community." Turner also pointed out that on the subject of mitigation, if you are going to make a calculation in economic terms you have to use a discount rate. The Stern Review used a high discount rate, and concluded that it is worth spending a lot of money now on mitigation; William Nordhaus and Partha Dasgupta, on the other hand, used a low discount rate and concluded that it's not worth spending money on now.
Balling said that he gets email from "lefties" that ask him to "please keep criticizing" because "this [global warming] is just an excuse to keep the developing world from catching up." In conversation with a small group afterward, Balling made it clear that he thinks people shouldn't be listening to Limbaugh and Hannity on climate change, and in answer to my question about what sources the educated layman should read and rely upon, he answered unequivocally "the IPCC," at least the scientific portions authored by scientists. He had some criticisms for the way that the technical summaries are negotiated by politicians, however, and said that S. Fred Singer has made hay out of comparing the summaries to the actual science sections and pointing out contradictions. He also said that Richard Lindzen at MIT, who he said may be the best climate scientist around, thinks the whole IPCC process is flawed, and that John Christy, lead author of the 2001 IPCC report, thinks the IPCC process should allow an "alternative views" statement by qualified scientists who disagree.
In a very brief discussion afterward with the climate modeling grad student in my climate change class, the student said that the biggest weakness of the talk was that Balling didn't talk about ocean temperatures, which are being measured by the Argo float project. Those measurements had shown some recent cooling (within a long-term warming trend), but after discovering an error, Joshua Willis of NASA's Jet Propulsion Laboratory found that warming has continued.
Balling supports the science, but he still leans toward minimizing the negative effects, and uses some apparently bad arguments to do so. His position clearly advocates a "wait and see" approach, arguing that we needn't be in a hurry to mitigate since nothing we do will have any noticeable effect in our lifetimes--even though what we do now could have an enormous effect on what mitigation and adaptation will require of future generations.
Posted by Lippard at 10/30/2009 03:01:00 PM 2 comments
Labels: Arizona, climate change, science
Thursday, October 29, 2009
State Press defends Ravi Zacharias
My letter to the editor, below, didn't get published, but another critic's letter did get published.
Here's mine:
Catherine Smith quotes Ravi Zacharias as stating that "irreligion and atheism have killed infinitely more than all religious wars of any kind cumulatively put together." This statement not only demonstrates Zacharias' innumeracy, it shows that he continues to make the mistake of attributing killing in the name of political ideologies like Stalinism and communism to atheism. I agree that Stalin, Mao, and Pol Pot killed more than religious wars, but it wasn't their atheism that caused that killing. Those killed by religious wars, the Inquisition, and witch trials, however, were killed in the name of religion. Out of fairness, there were no doubt political issues involved in many wars over religion as well, but if you take claims of religiously motivated killing at face value, the death tolls for those killed in the name of religion far exceed the death tolls for those killed in the name of irreligion.

I first heard of Zacharias back around 1991, when I sat behind someone on an airplane flight who was reading his book (reviewed by Krueger, linked above), A Shattered Visage. The parts I read were truly awful, about the quality of M. Scott Huse's arguments against evolution (a step below Kent Hovind and Ken Ham). I didn't bother to attend, but would be interested in hearing any reports of how it went.
Zacharias has a history of attacking atheism with misrepresentations in his books, as documented in Jeff Lowder's "An Emotional Tirade Against Atheism" and Doug Krueger's "That Colossal Wreck," both of which may be found on the Internet as part of the Secular Web (http://www.infidels.org/).
UPDATE (November 24, 2017): Steve Baughman has published an exposé of Zacharias' claims to have credentials he does not possess and to have had academic appointments that did not exist.
The woman googled “Ravi Zacharias sex scandal” and found the blog RaviWatch, run by Steve Baughman, an atheist who had been tracking and reporting on Zacharias’s “fishy claims” since 2015. Baughman blogged on Zacharias’s false statements about academic credentials, the sexting allegations, and the subsequent lawsuit. When the woman read about what happened to Lori Anne Thompson, she recognized what had happened to that woman was what had happened to her.
As far as she could tell, this atheist blogger was the only one who cared that Zacharias had sexually abused people and gotten away with it. She reached out to Baughman and then eventually spoke to Christianity Today about Zacharias’s spas, the women who worked there, and the abuse that happened behind closed doors.
Posted by Lippard at 10/29/2009 08:33:00 PM 18 comments
Labels: Arizona, atheism, creationism, ethics, history, Ravi Zacharias, religion
Wednesday, October 28, 2009
Teaching the Bible in public schools
The letter below was written in response to Will Munsil's "Putting the Bible back in public schools," which was published on October 14.
I disagree with Will Munsil's assertion that the Bible is the foundation of American political thought. On the contrary, the American form of government was rooted in the work of enlightenment philosophers such as Locke, Montesquieu, and Rousseau. The U.S. Constitution's form of government has more resemblance to Caribbean pirate codes than to the Ten Commandments.

I suspect this letter wasn't excluded by reason of content, but because they had already printed a couple of letters critical of Will Munsil's op-ed by the time I submitted this on October 16. Perhaps I should have mentioned that I'm an atheist, which makes the extent of my agreement with Munsil more interesting. Of course, my view is contrary to Munsil's in that I think Bible literacy is likely to decrease, rather than increase, religious belief. But it wouldn't surprise me if the NCBCPS curriculum is the one that Will Munsil had in mind.
That said, however, I agree with Munsil that knowledge of the Bible is worthwhile and should be taught in public schools for the purpose of cultural literacy, so long as it is done without endorsing Christianity or Judaism. The Bible Literacy Project's curriculum might be one way to do it. One way not to do it is to use the National Council on Bible Curriculum in Public Schools' curriculum--it takes a sectarian perspective, is full of errors, and has failed legal challenges in Texas and Florida for being unconstitutional.
I should point out that I think it should probably be taught as part of a world religions class that covers more than just Christianity--kids should not only get information about the Bible that they won't get in Sunday School, they should be informed about other religions, as well as the fact that history has been full of doubters of religion, as documented in Jennifer Michael Hecht's excellent Doubt: A History.
You can find out more about the NCBCPS curriculum that failed legal challenge in Texas here.
Munsil cited Stephen Prothero, whose op-ed piece, "We live in a land of biblical idiots," I wrote about at the Secular Web in early 2007.
Posted by Lippard at 10/28/2009 08:54:00 AM 4 comments
Monday, October 26, 2009
Hitler orders DMCA notices for "Downfall" parody videos
UPDATE (April 20, 2010): This video has been taken down from YouTube after a complaint from Constantin Films, which Brad Templeton has protested. The video is now available at Vimeo.
Posted by Lippard at 10/26/2009 07:54:00 PM 4 comments
Paul Haggis leaves Scientology
One of Haggis' main complaints is the Church's homophobia. Was Haggis really in Scientology for three and a half decades without realizing that homosexuality is 1.1 on the "tone scale"? Good for him for leaving, but he must have had blinders on regarding everything he complains about.
Posted by Lippard at 10/26/2009 06:12:00 PM 1 comments
Labels: ethics, religion, Scientology
Richard Carrier to speak in Phoenix
Posted by Lippard at 10/26/2009 09:44:00 AM 0 comments
Labels: Arizona, atheism, creationism, history, religion, science
Saturday, October 24, 2009
Personalized medicine research forum
The forum's speakers covered both the promise of and the problems raised by the developing field of personalized medicine, which involves the use of molecular and genetic information in medical diagnosis and treatment. A few highlights:
Introduction (Dr. LaBaer)
Dr. LaBaer pointed out that these new diagnostics cost a great deal of money to develop, but they have the potential for cost savings, for instance, if they can be used to identify forms of disease that will not benefit from very expensive treatments. He gave the example of Genomic Health, which has developed a test for early stage breast cancer to determine whether women will or won't benefit from adjuvant therapy (chemotherapy to prevent recurrence). A test that costs even a few thousand dollars to perform is something insurers will be willing to pay for if it has the potential of saving tens of thousands of dollars of expense on chemotherapy that will not provide any benefit. On the other hand, the mere promise of early detection of susceptibility for disease has the potential for overtreatment and an increase in healthcare expenses. This problem was discussed by a number of speakers, with particularly bad potential consequences in the legal realm.
Personalized Diagnostics (Dr. LaBaer)
Dr. LaBaer talked briefly about his own lab's work in biomarker discovery and cell-based studies. In biomarker discovery, his lab is working in functional proteomics, using cloned copies of genes to produce proteins and building tests that allow examination of thousands of proteins at a time. His lab, formerly at Harvard and now at ASU, has 10,000 copies of human genes and 50,000 copies of genes from other animals, which are made available to other researchers. (There's more information at the DNASU website.)
The goal of biomarker discovery is to greatly improve the ability to find markers of human health using the human immune system, by identifying antigens that are markers for disease. The immune system generates antibodies not just in response to infectious disease, but against other proteins when we have cancer. Tumor antigens get into the bloodstream, though they may only appear in 10-15% of those who have the disease. Rather than testing one protein at a time, as is done with ELISA assays, LaBaer's lab is building protein microarrays with thousands of proteins, tested at once with blood serum. Unlike old array technology that purifies proteins and puts them into spots on arrays, where the proteins may degrade and lose function, their method involves printing the DNA that encodes the gene on the arrays, then capturing proteins in situ on the array at the time the experimental test is performed.
LaBaer's lab's cell-based work involves trying to identify how proteins behave in cells when they are altered, in order to find out which pathways contribute to consequences such as drug resistance in women with breast cancer, as occurs with Tamoxifen. If you can find the genes that make cancer cells resistant, you can then knock them out and cause those cells to die. They tested 500 human kinases (5/7 of the total) and found 30 enzymes that consistently make the cancer cells resistant. Women with a high level of those enzymes who take Tamoxifen have quicker relapses of cancer.
Complex Adaptive Systems Initiative (George Poste)
George Poste, former director of ASU's Biodesign Institute and former Chief Scientist and Technology Officer at SmithKline Beecham, talked about the need to replace thinking about costs in the healthcare debate with thinking about value. The value proposition of personalized medicine is early detection, rational therapeutics where treatment is made based on the right subtype of disease being treated, and integrative care management where there's better monitoring of the efficacy of treatments. He said that the first benefits will come from targeted therapy and this will then overlap with individualized therapy, as we learn how our genome affects such things as drug interactions. He was critical of companies like 23andme, which he called "celebrity spit" companies, which do little more than give people a needless sense of anxiety about predispositions to disease that they currently can do nothing about except eat right and exercise.
Poste also had criticisms for physicians, pointing out that it takes 15-20 years for new innovations to become routinely adopted, and many physicians don't use treatment algorithms at all. Oncologists, he said, make money from distributing treatments empirically (that is, figuring out whether it's effective by using the treatment on the entire population with the disease) rather than screening first, even where tests exist to determine who the treatment is likely to work on. He said that $604 million/year in health care costs could be saved by the use of a single colon cancer screening test, and not proceeding with treatment where it isn't going to work. Today, where 12-40% of people are aided by treatments that cost tens of thousands of dollars, 60-88% of that spending is being wasted. With the aging population, he said that Humana will in the next several years see all profits disappear, spent on expensive treatments of people who don't respond to them.
Pharmaceutical companies are beginning to do diagnostic test development alongside drug development now, and insurers will push for these tests to be done. Poste suggested that we will see the emergence of "no cure, no pay" systems, and noted that Johnson & Johnson has a drug that has been introduced for use in the UK under the condition that the company will reimburse the national health care system for every case in which it is used but doesn't work. Merck's Januvia drug for type II diabetes similarly offers some kind of discount based on performance.
Poste pointed out another area for potential cost savings, related to drug safety. With some 3.1 billion prescriptions made per year, there are 1.5-3 million people hospitalized from drug interactions, 100,000 deaths, and $30 billion in healthcare costs, though he noted this latter figure includes caregiver error and patient noncompliance.
He bemoaned the "delusion of zero risk propagated by lawyers, legislatures, and the media," and pointed out that the FDA is in a no-win situation. (This is a topic that's been recently covered in two of my classes, my core program seminar and my law, science, and technology class with Prof. Gary Marchant. If the FDA allows unsafe drugs to be sold, then it comes under fire for not requiring sufficient evidence of safety. If, on the other hand, it delays the sale of effective drugs, it comes under fire for causing preventable deaths. The latter occurred during the 1980s with AIDS activists protesting against being denied treatments, described in books such as Randy Shilts' And the Band Played On and Steven Epstein's Impure Science. This led to PDUFA, the Prescription Drug User Fee Act of 1992, under which drug companies started funding FDA drug reviewer positions through application fees to help speed approval. That has been blamed for cases of the former, with the weight-loss drugs Pondimin and Redux being approved despite evidence that they caused heart problems. That story is told in the PBS Frontline episode "Dangerous Prescription" from November 2003.)
Poste pointed out that there have been 450,000 papers published claiming to find disease biomarkers, of which the FDA has approved only five. But he didn't blame the FDA for delay in this case, because the literature consists of a mass of bad studies which he characterized as "wasteful small studies" with insufficient statistical power. In the Q&A session, he argued that NIH needs to start dictating clear and strong standards for disease research, and that it has abdicated its role in doing good science. He said that "not a single national cancer study with sufficient statistical power" has been done in the last 20 years; instead research is fragmented across academic silos. He called for "go[ing] beyond R01 grant mentality" and building the large, expensive studies with 2,500 cases and 2,500 controls that need to be done.
He also raised challenges about the "very complex statistical analysis required" in order to do "multiplex tests" of the sort Dr. LaBaer is trying to develop. And he pointed out the challenge that personalized medicine presents for clinicians, in that "only about six medical schools have embraced molecular medicine and engineering-based medicine." Those that don't use these new techniques as they become available, he said, "will open themselves up to malpractice suits."
Science and Policy (David Guston)
David Guston, co-director of ASU's Consortium for Science, Policy, and Outcomes (CSPO) and director of ASU's Center for Nanotechnology in Society (CNS), spoke about "cognate challenges in social science" and how CNS has been trying to develop a notion of "anticipatory governance of emerging technology" and devising ways to build such a capacity into university research labs as well as broader society, to allow policy decisions to be made in advance of the technology's emergence in society at large. He described three capacities of anticipatory governance--foresight, public engagement, and integration--and described how these have been used at ASU.
Foresight: Rather than looking at future consequences as a linear extrapolation, CNS has used scenario development and a process of structured discussions based on those scenarios with scientists, potential users, and other potential stakeholders, about social and technical events that may be subsequent consequences of the scenarios. This method has been tested with Stephen Johnston's "Doc-in-a-Box" project at ASU's Center for Innovations in Medicine, which Guston said led to some changes in the conceptualization of the technology.
Public Engagement: The "scope and inclusion of public values is important for success," Guston said, and gave as an example the "national citizens technology forum" that CNS conducted in six locations to look at speculative scenarios about nanotechnology used for human enhancement. These were essentially very large focus groups whose participants engaged in "informed deliberation" over the course of a weekend, after having read a 61-page background document and spending the prior month engaging in Internet-based interaction.
Integration: Guston described the "embedding of social scientists in science and engineering labs," to develop productive relationships that help lab scientists identify broader implications of their work while it's still in the lab rather than after it's introduced to the general public.
Guston suggested that there might be other ways of implementing "anticipatory governance" in the form of legislative requirements or standards and priorities set by program officers at funding organizations, but that the lab setting is "the best point of leverage at a university" and can set an example for others to follow.
Clinical Perspective (Larry Miller)
Larry Miller, Research Director at the Mayo Clinic in Scottsdale, spoke about the healthcare provider's approach to personalized medicine. He said that Mayo is committed to individualized care, and that now that we are beginning to understand the power of human variation, these new developments have "to be transformational for providers or they won't survive." He suggested that the future of medicine will move from reactive and probabilistic treatment to more deterministic selection of treatments based on diagnoses. He emphasized the need for education for doctors, and pointed out that "standards of care will become outmoded," which is "disruptive to law and [insurance] coverage." He said that Mayo sees a big challenge of complexity, where what was one disease (breast cancer) is now at least ten different subdiseases. Doctors need to make their treatment decisions at that level of detail, to predict how the disease will behave and choose the best drugs possible based on safety, effectiveness, and cost-effectiveness.
Miller pointed out that this requires interdisciplinary work, and said that Mayo in Arizona has a huge advantage with its relationship with ASU, where so much of this work is going on. While Mayo has scientific expertise in a number of areas, these new technologies draw on expertise from beyond medicine, in particular informatics and computational resources needed to build an effective decision support system that will become essential for doctors to use in a clinical setting.
He talked about Mayo's program for individualized medicine, which involves not just incorporating new developments in diagnostics and therapeutics, but in regenerative medicine for repair, renewal, and regeneration of deficits.
Mayo has had electronic medical records for the last 15 years, on 6 million people, but these are kept in multiple incompatible systems and were not built with research in mind. They hope to improve their systems so that they can be used in an iterative process to learn more about the efficacy of therapies, and so therapies can be combined with "companion diagnostics for monitoring progression, recurrences, and response to therapy."
Like Poste, he raised objections to the companies that market gene sequencing directly to individuals, which just "scare people inappropriately," but identified learning about disease predispositions as an important part of these developing technologies. We need to develop methods of risk analysis that can help people correctly understand what these predispositions mean.
He sees the future as having three waves--the first wave will be the new diagnostics, the second wave improvements in clinical practice and therapy, and the third wave embedding the new technology into the healthcare system, with significant changes to policy and education.
Health Informatics (Diana Petitti)
Diana Petitti, former CDC epidemiologist and former director of research for Kaiser Permanente, where she built a 20-year longitudinal data repository for its 35 million members, spoke about the importance of health informatics. (She is now a professor in ASU's Department of Biomedical Informatics.) Dr. Petitti raised concerns about how in the United States we are "loath to deny anyone anything" in terms of medical treatments, but in fact "we do deny lots of people lots of things." She worried that personalized medicine has the potential to lead to greater maldistribution of healthcare, with the "haves" getting more and better treatment and the "have nots" getting less and worse treatment, unless we plan carefully. She advocated evidence-based medicine and assessing the value of treatments to be deployed to the general population.
Dr. Petitti brought up as an example the fact that oral contraceptives result in a 2x-10x increase in the likelihood of a venous thrombotic event, and that the Factor V Leiden gene is predictive of susceptibility to that outcome, yet no screening is done for it. Why not? Because the test only predicts 5% of those who will have the event, it's a very expensive test, and we don't have good alternatives to oral contraceptives. These kinds of issues, she suggested, will recur with multiplex diagnostics.
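Some rough arithmetic makes Petitti's cost-benefit point concrete. Only the 2x-10x relative risk and the 5% figure come from the talk; the baseline event rate and test cost below are illustrative assumptions I've picked.

```python
# Back-of-the-envelope sketch of the screening question Petitti raised.
# Only the relative-risk range and the 5% figure come from the talk; the
# baseline VTE rate and test cost are illustrative assumptions.
women_screened = 100_000
baseline_vte_per_year = 1 / 10_000      # assumed annual VTE rate off the pill
oc_relative_risk = 4                    # within the 2x-10x range cited
fraction_of_events_flagged = 0.05       # "the test only predicts 5%"
test_cost = 200.0                       # assumed cost per test, in dollars

# Excess events attributable to oral contraceptives in this cohort, and the
# share of them the screen could plausibly head off if carriers chose a
# different contraceptive.
excess_events = women_screened * baseline_vte_per_year * (oc_relative_risk - 1)
events_avoided = excess_events * fraction_of_events_flagged
total_cost = women_screened * test_cost

print(f"excess VTEs per year due to the pill: {excess_events:.0f}")
print(f"events screening could avoid:         {events_avoided:.1f}")
print(f"screening cost: ${total_cost:,.0f} (~${total_cost / events_avoided:,.0f} per event avoided)")
```

Under those assumptions you spend on the order of $13 million per thrombotic event avoided, which is the shape of the argument against screening even when a genuinely predictive marker exists.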
She explicitly worried that "we have dramatically oversold preventive medicine" and doesn't think it's likely that savings from prevention will allow coverage for more extensive treatment. She advocated that everyone in the field see the film "Gattaca," and stated that ASU provides "unique opportunities to train people to think about these issues" using "quantitative reasoning and probabilistic thought." She concluded by saying that we need to "work towards rational delivery of healthcare that optimizes public health."
Law (Gary Marchant)
Prof. Gary Marchant of the Sandra Day O'Connor School of Law at ASU, who has a Ph.D. in genetics and is the executive director of ASU's Center for the Study of Law, Science, and Innovation (formerly Center for the Study of Law, Science, and Technology), spoke about legal issues. First he listed the many programs available at ASU in the area, beginning with the genetics and law program that has been here for 10 years and was the reason he first came to ASU. Others include a new personalized medicine and law program at the Center for Law, Science, and Innovation, a planned center on ethical and policy issues regarding personalized medicine in conjunction with the Biodesign Institute, CSPO, TGEN, Mayo, etc., and research clusters at the law school on breast cancer, warfarin, and personalized medicine. He also gave a plug for an upcoming conference March 8-9, 2010 at the Arizona Biltmore sponsored by AAAS and Mayo, which also has a great deal of corporate support.
Prof. Marchant indicated that liability is the biggest issue regarding personalized medicine, and he sees doctors as "sitting ducks," facing huge risks. If a doctor prescribes a treatment without doing a corresponding new diagnostic test, and the treatment has complications, he can be sued. If he does the diagnostic test, it shows a very low likelihood of disease recurrence, and he advises against the treatment, and then the patient ends up being one of the rare people who has the recurrence, the doctor can be sued. The doctor is really in a damned-if-you-do, damned-if-you-don't situation. The insurers and pharmaceutical companies are at less risk, since they have already developed enormous resources for dealing with the lawsuits that are a regular part of their existence. In a short discussion after the forum, I asked Prof. Marchant whether doctors would be liable if they performed a diagnostic test, found that it showed a low likelihood of recurrence or benefit from a treatment, and then recommended the treatment anyway, knowing the insurance company would refuse to pay for it--would that shift the liability to the insurance company? He thought it might, though it would be unethical for a doctor to recommend treatment that he didn't actually think was necessary, and there's still the potential for liability if the insurance company pays for the treatment and the treatment itself produces complications. It seems that this problem really needs a legislative or regulatory fix of some sort, so that doctors have some limitation of liability in cases where they have made a recommendation that everyone would agree was the right course of action but a low-probability negative consequence occurs anyway.
Prof. Marchant observed that the liability issues are particularly problematic in states like Arizona, where each side in the suit is limited to a single expert witness. He said there is "no clear guidance or defense for doctors," and the use of clinical guidelines in a defense has not been effective in court, in part because doctors don't use them.
Q&A
A few additional points of interest from the Q&A sessions (some of which has already been combined into the above summaries):
Dr. LaBaer pointed out that most markers for diseases don't seem to have any role in causing the disease, such as CA-125 and ovarian cancer. So his lab is looking not just for biomarkers, but for those that will affect clinical decisions. Four out of five positive results in mammography for breast cancer are cases where nothing is actually wrong and the woman will not end up getting breast cancer, but some procedure ends up being performed anyway, with no value. So he wants to find a companion test that can tell which are the four that don't need further treatment.
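That four-out-of-five figure is what you expect whenever a test with imperfect specificity is applied to a population where the disease is rare. A quick Bayes calculation, with sensitivity, specificity, and prevalence chosen as illustrative assumptions (not numbers from the talk), shows how it falls out.

```python
# Why four out of five positives can be false alarms: a base-rate sketch.
# Sensitivity, specificity, and prevalence are illustrative assumptions,
# chosen only to land near the 4-in-5 figure cited in the talk.
prevalence = 0.005     # assumed fraction of screened women with cancer
sensitivity = 0.85     # assumed P(positive test | cancer)
specificity = 0.98     # assumed P(negative test | no cancer)

p_positive = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
ppv = prevalence * sensitivity / p_positive      # P(cancer | positive), by Bayes

print(f"P(positive result)             = {p_positive:.4f}")
print(f"P(cancer | positive), the PPV  = {ppv:.2f}")
print(f"false positives per positive   = {1 - ppv:.2f}")
```

With those inputs about 82% of positive results are false alarms, which is why a specific companion test applied only to the positives could spare most of those women further procedures.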
George Poste pointed out that baby boomers are going to bankrupt the system as they reach the end of their lives, and that about 70% of the $2.3 trillion in healthcare spending is spent in the last 2-3 years of life, with many treatments costing $60K-$100K per treatment cycle for drugs that add 2-3 weeks of life. The UK's National Institute for Health and Clinical Excellence (NICE) has been making what are, in effect, rationing decisions by turning down all of the new cancer drugs that have come along, because they have such great cost and such minimal benefit. He asked, "how much money could you save with a 90% accurate test of who's going to die no matter what you do?"
Prof. Marchant said more about legal issues involving specimen repositories, including a case at ASU. The developer of the prostate-specific antigen (PSA) test, William Catalona, had a specimen repository with 30,000 tissue samples at Washington University, that he wished to take with him to Northwestern University when he took a new position there. He began asking patients for permission to move the samples, and 6,000 gave permission. But Washington University sued him, claiming that the samples were property of the university. Patients pointed out that their consent agreement gave them the right to withdraw their samples from future research and they had only consented to research on prostate cancer, but federal judge Stephen Limbaugh ruled in favor of the university and that patients had no property rights in their tissue. This ruling has reduced incentives for patients to consent to give specimens for research.
A current lawsuit against ASU by the Havasupai Indian tribe involves blood samples that were given for a study of diabetes by researchers who are no longer at ASU. They wanted to take the samples with them, and samples had also been given to other researchers for use in studies of schizophrenia and the historical origins of the tribe, even though informed consent was apparently only given for the diabetes research. Although this case was originally dismissed, it was recently reinstated.
Other cases involve patent protection of genetic information. About 25% of the human genome is patented, including Myriad Genetics' patents on the BRCA1 and BRCA2 genes, which are predictive of breast cancer and can only legally be tested for by Myriad. This case is likely to go to the U.S. Supreme Court on the issue of whether human genes can be patented. The courts so far have ruled that a gene in isolation outside the human body is patentable, even though (in my opinion) this seems at odds with the requirement that patents be limited to inventions, not discoveries. There has already been a legislative limitation of patent protection for surgical procedures in the clinical context, so that doctors can't be sued for patent infringement for performing a surgery that saves someone's life; it's possible that a similar limitation will be applied to gene patents in a clinical context, if they aren't overturned completely by the courts.
These gene patents create a further problem for multiplex tests, since such tests inevitably include many patented genes. Prof. Marchant observed that someone from Affymetrix stood up at an ASU seminar and said they were building their GeneChip DNA microarrays to test for the presence of thousands of genes and were ignoring gene patents. They were subsequently sued. Dr. LaBaer stated that his lab is doing the same thing with cloned genes--they're cloning everything and giving the clones away, without regard to patents.
The session was videotaped and will be made available to the public online. I will add a link to this posting when it becomes available.
If you've read this far, you may also be interested in my summary of Dr. Fintan Steele's talk at this year's The Amazing Meeting 7, titled "Personalized Medicine or Personalized Mysticism?", in my summary of the Science-Based Medicine conference that took place just prior to TAM7, and in my short summary of Dr. Martin Pera's talk on regenerative medicine and embryonic stem cells at the Atheist Alliance International convention that took place earlier this month.
Posted by Lippard at 10/24/2009 01:54:00 PM 2 comments
Friday, October 23, 2009
Atheist Alliance International Convention summary in Arabic
Posted by Lippard at 10/23/2009 08:01:00 AM 0 comments
Labels: atheism
Wednesday, October 21, 2009
Skepticism, belief revision, and science
Do we have voluntary control over what we believe?
In general, no. The credence we place in various propositions--our belief or rejection of them--is largely out of our voluntary control and dependent upon our perceptual experiences, memories, other beliefs, and established habits and methods of belief formation and revision. We can indirectly cause our beliefs to change by engaging in actions which change our habits--seeking out contrary information, learning new methods like forms of mathematics and logic, scientific methods, reading books, listening to others, etc.
How does someone become a skeptic?
People aren't born as skeptics--they learn about skepticism and how it has been applied in various cases (only after learning a whole lot of other things that are necessary preconditions--like language and reasoning). If skepticism coheres with their other beliefs, established habits and methods of belief formation and revision, and/or they are persuaded by arguments in favor of it, either self-generated or from external sources, they accept it and, to some degree or another, apply it subsequently.
When someone becomes a skeptic, what happens to all of the other beliefs they already have?
They are initially retained, but may be revised and rejected as they are examined through the application of skeptical methods and other retained habits and methods of belief formation and revision. Levels of trust in some sources will likely be reduced, either within particular domains or in general, if they are discovered to be unreliable. It's probably not possible to start from a clean slate, as Descartes tried to do in his Meditations.
Is everything a skeptic believes something which is a conclusion reached by scientific methods?
No. Much of what we believe, we believe on the basis of testimony from other people who we trust, including our knowledge of our own names and date and place of birth, parts of our childhood history, the history of our communities and culture, and knowledge of places we haven't visited. We also have various beliefs that are not scientifically testable, such as that there is an external world that persists independently of our experience of it, that there are other minds having experiences, that certain experiences and outcomes are intrinsically or instrumentally valuable, that the future will continue to resemble the past in various predictable ways, etc. If you did believe that skeptics should only believe conclusions which are reached by scientific methods, that would be a belief that is not reached by scientific methods.
Posted by Lippard at 10/21/2009 10:57:00 PM 25 comments
Labels: atheism, mind and brain, philosophy, rationality, science, skepticism
Massimo Pigliucci on the scope of skeptical inquiry
He ends up drawing a Venn-style diagram which has an outer circle labeled with "critical thinking" and "rational analysis," within which is a series of three overlapping circles labeled "atheism," "skeptical inquiry," and "political philosophy." He argues that skeptical inquiry only overlaps with atheism where religions make empirical claims that are subject to scientific investigation, and likewise for political philosophy.
I offered a few critical comments at his blog, noting that it is odd that "atheism" is the only label on his diagram which is the name of a specific position rather than a method or discipline, and suggesting that it be labeled something like "views on religion." I also suggested that that circle extend beyond the scope of the "critical thinking" and "rational analysis" circle, though that's presupposing his diagram is descriptive rather than normative. [Note added 1:31 p.m.: If his diagram is understood as a diagram of what is appropriate subject matter for critical thinking, rational analysis, and skeptical inquiry with respect to atheism and political philosophy, then those two circles should arguably not extend outside the border of critical thinking/rational analysis.] Similar considerations should apply to the "political philosophy" circle. People hold religious and political views for reasons other than those produced as a result of critical thinking and rational analysis.
I also took issue with his identifying "skeptical inquiry" with scientific skepticism. Skeptics have always used philosophical tools as well as scientific ones, but I would find his diagram more accurate if the middle circle was labeled "scientific skepticism" or even "scientific inquiry."
I also have some skepticism about this taxonomic enterprise in general, which is arguably both philosophical and political itself--Pigliucci is not using scientific methods to set up this framework, it's philosophy, and there are political and pragmatic reasons for wanting us to accept it--to issue in a ruling that certain domains are off-limits for skepticism, namely the examination of religious and political claims that are not subject to empirical investigation.
I think there are good pragmatic reasons for skeptical organizations to restrict themselves in such a way--the methods of skepticism can be used by anyone, regardless of their political or religious views, and organized skepticism has tried to appeal to a broad audience to focus critical attention on paranormal claims where scientific methodology can be brought to bear. But I'm skeptical of this as a general picture of the applicable domain of the methods of skepticism or skeptical inquiry. (I should note that I don't think that atheism implies skepticism--thus the reason for extending a circle with that name outside the boundaries of critical thinking and rational analysis--nor that skepticism implies atheism. Skepticism is about the methods used, not the conclusions reached. An atheist might think that any consistent application of skepticism will lead to atheism, but that presumes both that atheism is true and that consistent application of skepticism is a guarantee of truth, which it is not.)
I agree with commenter Maarten that the boundaries of these circles are fuzzy--just as the boundary between science and non-science doesn't admit of a bright-line demarcation. People can conceptualize the boundaries differently, even granting Pigliucci's conception of "empirically investigatable" as the domain of skeptical inquiry or scientific skepticism. The boundaries between scientific disciplines are themselves fuzzy, and the disciplines use different methodologies, with huge differences between experimental and historical sciences, for example.
Finally, I agree with commenter Scott (Scott Hurst), who observes that religious believers do make very specific claims "about the nature of the universe, how it works, and its history (including our own)," and specifically noting belief in the power of prayer. These things are empirically testable and do make at least some common (one could say "vulgar") conceptions of God and religion refutable by science. The fact that a more sophisticated believer or theologian can construct a view that uses the same words yet withdraws from the realm of the empirical doesn't mean that the vulgar conception hasn't been refuted. This is perhaps more obvious with modern religions such as Mormonism and Scientology, where in the former case historical evidence and DNA evidence falsifies some key claims, and in the latter case where scientific evidence falsifies a great number of its claims. Hubbard's cosmology, for example, includes the idea that Xenu dropped thetans into a volcano on Hawaii 75 million years ago, but Hawaii didn't exist 75 million years ago. His book History of Man includes Piltdown Man in the human lineage, even though that fossil was discovered to be a hoax shortly after the book was published. And so forth.
It's fine for Pigliucci to define and use the terms the way he wants, but I don't think he's given strong reasons for the rest of us to accept the specifics of his formulation.
UPDATE (October 24, 2009): Russell Blackford has written "Pigliucci on science and the scope of skeptical inquiry" at the Sentient Developments blog, which comes to similar conclusions with a somewhat more comprehensive argument.
Posted by Lippard at 10/21/2009 12:07:00 PM 7 comments
Labels: atheism, religion, science, skepticism
Tuesday, October 20, 2009
Vote for RESCUE!!
Voting for the first round ends October 25th. If they win this round, they receive $1,000 and advance to the next round. Please vote today and ask others to vote!
Posted by Kat Lippard at 10/20/2009 01:59:00 PM 0 comments
Labels: animal rescue, animals, Fred, RESCUE
No God on Twitter
The topic is generating lots of hilarity, as Attempts at Rational Behavior (@rationalbehavio) has pointed out in a couple of blog posts, with some people trying to start "Yes God" as an alternative topic--but including the words "No God" in their tweets!
UPDATE (4:41 p.m., Arizona time): Twitter has decided to censor its "Trending Topics" list, and has merged tweets matching either "No God" or "Know God" into a topic labeled "Know God." If you actually click on that link to see the matching tweets (it explicitly does a search for either string), there are still a lot more that match "No God" than "Know God."
UPDATE (10:10 p.m., Arizona time): Benjamin Black offers this entertaining commentary on what almost happened, which provides a better explanation of "Trending Topics" for those unfamiliar with Twitter.
Posted by Lippard at 10/20/2009 11:37:00 AM 1 comments
Labels: atheism, religion, technology
Monday, October 05, 2009
Atheist Alliance International conference, quick version
The Atheist Alliance International convention took place over the weekend, October 2-4, 2009, at the Burbank Airport Marriott hotel, and I took my usual level of notes for the talks I attended. But rather than (or perhaps temporarily in lieu of) giving detailed summaries of each talk over the next several weeks, this will be one post with brief comments on each. If there's demand, I can follow this up with more detailed posts on individual talks of interest.
There were over 700 attendees at the conference, and I believe I heard that last year's conference had about 450. It's not as big as The Amazing Meeting, but if that rate of growth isn't an artifact of, say, the fact that this conference was co-sponsored by the Richard Dawkins Foundation for Reason and Science and featured an unbelievable set of high-powered speakers, then they'll catch up quickly. The AAI conference participation seemed to be more diverse than TAM's, with a higher proportion of women and minorities, though it's still not close to representative of the population--there's still a white male dominance.
The conference talks were divided into "tracks"--really rough categories rather than a system of tracks that could be followed--labeled Science, Advocacy, Heritage, and Development. Events that weren't talks included an optional pre-conference event of attending a live studio taping of a TV show ("100 Questions"), an optional post-conference event of an L.A. bus tour and visit to the La Brea tar pits, a live viewing of "Real Time with Bill Maher" featuring Richard Dawkins (shortly before they both showed up in person), entertainment by Mr. Deity (including a live performance and a few of the shows, as well as some personal background from Brian Keith Dalton), a live recording of the Dogma Free America podcast with a panel of speakers, a standup comedy showcase hosted by Comedy Jesus Troy Conrad, a "taste of Camp Quest" for kids, and an Atheist Nexus live music party.
Friday
I arrived a bit later than planned--my expected driving time of just under six hours turned out to take over seven due to a few traffic issues along the way--and I missed three things I had wanted to attend. Those were Rich Orman's panel discussion for his Dogma Free America podcast, with P.Z. Myers, William B. Davis, and Sunsara Taylor; Alpharabius' talk on atheism in the Arab world; and Russell Blackford's talk on attempts to regulate against "defamation of religion." Fortunately, Alpharabius gave me a capsule summary of his talk and I had a few chances to chat with Russell Blackford and Rich Orman, so that partly made up for it.
P.Z. Myers gave an entertaining talk on "Design v. Chance" that began with a parody of a typical intelligent design creationist presentation and argued that ID arguments are at root an "over-extended metaphor" of design accompanied by misrepresentations of science. He showed how the ID claim that Darwin thought cells were mere "balls of protoplasm" is false, and presented evidence that various features thought to be characteristic of multicellular life have been found to be present in choanoflagellate protists. He ended by sharing a couple of useful words, "kipple" (from Philip K. Dick's "Do Androids Dream of Electric Sheep?", meaning accumulated useless objects) and "granfalloon" (from Kurt Vonnegut, Jr.'s Cat's Cradle, meaning a label on a group that doesn't really have anything significant in common, like "Hoosier"). For "granfalloon," he quoted the statement from Bokonon in Vonnegut's book, "if you wish to study a granfalloon, just remove the skin of a toy balloon." Isn't "atheist" a good example of a granfalloon, if all we share is lack of a belief in God? (This ended up being relevant to Brian Parra's talk at the end of the conference.)
After a cocktail and socializing session, the main ballroom showed "Real Time with Bill Maher" on a big screen, featuring his guests Janeane Garofalo, Rep. Marcy Kaptur, and Thomas Friedman, then joined by Richard Dawkins. Maher demonstrated the witty and incisive criticism of religion that won him the Dawkins award, though he also made some comments about environmental causes of cancer that have raised controversy about his receiving an award with "science" in its name when he has pseudoscientific opinions about matters such as medicine (as Orac has forcefully argued in a series of posts at his Respectful Insolence blog: one, two, three, four). This was followed by entertainment from Mr. Deity in the form of both live performance and videos, along with some personal history from Brian Keith Dalton. Then Bill Maher and Richard Dawkins entered the room. Dawkins recounted the highlights of Maher's "Religulous" as the reasons for the award, and Maher accepted the award, noting (accurately) that Dawkins' summary was better than the movie itself, followed by his routine of reading from Rick Warren's The Purpose Driven Life, which you can find on YouTube. Maher was the least approachable celebrity at the entire conference; even those sitting at his VIP table were unable to ask him questions, as P.Z. Myers reported firsthand.
Saturday
Ed Buckner gave a talk about how atheist and freethought organizations are learning to cooperate, which I live-tweeted comments on.
Lawrence Krauss gave a talk on "Our Miserable Future," or "Life, the Universe, and Nothing: The Future of Life and Science in an Expanding Universe," in which he argued that the best evidence shows that we are in a flat, rather than an open or closed, universe, which means that it will continue expanding towards some limit. He gave a history of cosmology from the discovery of the expansion of the universe to dark matter, and pointed out that we are fortunate to live in a time when the energy densities of dark energy and matter in the universe are approximately the same, and the expanding universe is at a point where other galaxies are still visible. The upshot of this is that we are fortunate to live in a time when we have the evidence of Hubble expansion and the Big Bang. Intelligent civilizations of the distant future will be unable to see any galaxies other than their own, or any evidence of the Big Bang, and will conclude that they are in a static and eternal universe based on the best evidence that they have. Such people will be "lonely and ignorant, but dominant," which Krauss said those of us here in the U.S. are already used to. They will have an irreparably wrong picture of the universe, an epistemological blindness due to the state of the universe around them. (What similar blindness do we suffer from due to our current place in the universe and observational abilities?)
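For readers unfamiliar with the flat/open/closed terminology, the standard textbook bookkeeping (my gloss, not anything from Krauss's slides) is the Friedmann equation, in which the curvature term k determines the geometry:

    \[
      \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k c^2}{a^2},
      \qquad
      \Omega \equiv \frac{\rho}{\rho_{\mathrm{crit}}},
      \qquad
      \rho_{\mathrm{crit}} = \frac{3H^2}{8\pi G}.
    \]

A flat universe is the k = 0 (Omega = 1) case; k > 0 (Omega > 1) is closed and k < 0 (Omega < 1) is open. The "best evidence" presumably refers to measurements, such as those of the cosmic microwave background, that put Omega very close to 1.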
Carolyn Porco gave a talk which began as a celebration of Galileo's steps toward a scientific method, which she said couldn't be applied to God because no experiment relevant to God is possible. (This strikes me as erroneous in a number of ways: claims about God usually have empirical consequences, it's possible to make philosophical arguments which draw upon scientific data, and her picture of science seemed to be based on an overly simplified Popperian philosophy of science.) She argued that it is "very difficult to prove a negative" (as if "proof" is what science cares about--but at least she didn't make the mistake of saying it's impossible). She claimed that science and religion are "completely different" and are not only not equivalent (certainly true) but are "not intersecting"--apparently advocating something like Stephen Jay Gould's "non-overlapping magisteria" view, which is falsified by the fact that religions do make empirical claims. She complained about Hollywood's depiction of scientists in a negative light and blamed it for deterring young people from going into science, though she supplied no scientific evidence for this beyond a reference to a survey of science-related films from 1920-1994 by two researchers, which concluded that depictions were overwhelmingly negative. I think it's unlikely that such depictions have much of a negative effect at all, since polls in the U.S. and other countries about which professions are most trusted put doctors, teachers, and scientists very high compared to most other professions. Businessmen are similarly victims of negative portrayals in Hollywood, and are also less trusted, but that doesn't seem to translate into fewer undergraduates choosing business majors. I suspect a better explanation of any reduction in science enrollments (if that's actually happening) would be found in elementary and secondary education, along with the fact that people find science and math difficult. She concluded with a series of fantastic photographs of Saturn and its moon Enceladus from the Cassini mission.
Martin Pera spoke about embryonic stem cells, science, and policy, arguing that stem cell research will revolutionize medicine by allowing restoration of cell loss through transplantation as well as the development of new research methods using stem cells. He pointed out various challenges to "regenerative medicine," including rejection, tumor formation, and implanted stem cells developing the same pathologies they're designed to treat, but observed that these also present new opportunities to learn. On the public policy side, he argued that scientists need to engage more with the public, that patient advocacy plays a key role in policy discussions, and that careful and thoughtful regulation is preferable to "premature prescriptive regulation." (This ties into a lot of the subject matter in law, science, and technology I'm studying this semester, and Pera's talk had considerable overlap with a talk on embryonic stem cells by Prof. Jane Maienschein of ASU that I attended earlier this year at the Humanist Society of Greater Phoenix. If I write up a more detailed summary of this talk, I'll bring some of that into it.)
Jerry Coyne gave a summary of his book, Why Evolution Is True. He defined five constituents of the theory of evolution and pointed out predictions, retrodictions, and evidence supporting each of them from a variety of scientific disciplines. He book-ended his discussion with the famous chart of acceptance rates of evolution by country (from a study co-authored by Eugenie Scott) at the start, and, at the end, a suggestion as to why that pattern of acceptance holds (appealing to Greg Paul's evidence that belief in God is correlated with social dysfunction). He concluded that the real way to increase the effectiveness of teaching of evolution is to build a better, more just society. I'm skeptical--I think there are likely other causes behind the correlation, and that the strength of religious belief in the U.S. may be the result of religious competition due to the lack of an official state religion.
Daniel Dennett gave what I thought was the most interesting talk of the conference, titled "The Evolution of Confusion." His initial premise is that you reverse engineer things by trying to break them, and to reverse engineer religion, you can look for "experiments of nature" in the same way neuroscientists reverse engineer the brain by looking for cases of humans with particular brain lesions or damage due to accidents, and compare them to those without. In the case of religion, the form of pathology he chose to study is preachers who are atheists. Not former preachers who are atheists, but those who are still in the pulpit and in the closet, yet don't believe in God. Working with Linda LaScola, he's found six cases of such preachers (who themselves think there are many others), ages 37-72, one female and five male, three in liberal denominations and three in literalist/fundamentalist denominations. These people have fallen into what he called "the not so tender trap" where they have financial dependence upon their jobs, have lost opportunities for other training, and find it "difficult to say to the rest of the world I have wasted the last 40 years of my life." Half of them, though, he thinks will go public in the near future, while two will probably never do so, because they feel like they will do less harm by living a lie than by coming clean.
Dennett compared these closeted atheist clergy to homosexuals in the 1950's, either having no "gaydar" or being afraid to test it. They'll occasionally resort to the age-old subterfuge of saying things like "I have an uncle who thinks X, what do you think of that" to their colleagues to try to identify other possible atheists by expressing their doubts with a thin veil of plausible deniability.
Each traces the roots of their problems to seminary, because professors of Bible studies tend to tell the truth about the evidence, and the evidence isn't good for the Bible. But they do so with a theological spin that is an attempt to use clever ways of speaking to glide over problems and provide ministers with answers to "What can I say to the parishioners?" which have the features of not being a bare-faced lie, relieving skepticism without arousing curiosity, and seeming to be profound. Dennett introduced the concept of a "deepity"--propositions that seem to be profound, because they are actually logically ill-formed, having one meaning that is trivially true and another which is false but would be earth-shattering if true. A familiar deepity is "Love is just a word." On one reading, it's true--"love" is just a word. On another, it's false--"whatever love is, it isn't a word," he observed, and noted "You can't find love in a dictionary--that's almost a deepity itself." This is an elementary logical mistake, failing to distinguish the use of a word (in the latter case) from a mention of a word (in the former case). If you quote a word to talk about the word itself, that's a mention; if you use the word to convey its meaning, in order to refer to the things described by the word, that's a use. This is a common error in undergraduate philosophy papers, so common that many graders identify it as "UME" -- "use-mention error."
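A rough programming analogy (mine, not Dennett's) may help here: quotation marks in code mark a mention, while an unquoted name is a use. A minimal Python sketch of the distinction:

    # Illustration only: the use-mention distinction in code.
    # Quoting "love" talks about the word itself (a mention);
    # using the name `love` refers to whatever it stands for (a use).
    love = "a complex emotional attachment"  # stand-in referent, purely for illustration

    word = "love"          # mention: a four-letter string
    referent = love        # use: whatever the name refers to

    print(len(word))       # 4 -- a fact about the word "love"
    print(referent)        # a fact about the thing the word names
    # "Love is just a word" trades on conflating these two levels.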
Dennett gave examples of such errors in statements by Karen Armstrong, including in the title of her book, A History of God. It's not a history of God, it's a history of the concept of God. Similarly for Robert Wright's recent The Evolution of God. And he provided some further examples from sociologist of religion Rodney Stark (who seems to me to be using the "symmetry principle" just as sociologists of science do) and from Karen Armstrong, including this answer from the latter in response to the question "Do you believe God exists?" from Terry Gross on NPR: "That's the wrong question. It presupposes that God is the sort of being that could exist or not exist. God is no being at all. God is being itself. God is the God beyond God." Dennett observed that "God is no being at all" is sophisticated theology, while "No being at all is God" is crude atheism, yet those are logically equivalent statements. Theology, Dennett argued, is "like a magician doing a trick where you can see the card up his sleeve."
At dinner, we watched a three-minute promotional video for the Richard Dawkins Foundation that featured Michael Shermer, P.Z. Myers, and Brian Greene, among others, talking about what science is. Richard Dawkins then spoke, summarizing the last chapter of his latest book, The Greatest Show on Earth, which has chapter sections related to and titled after the words of the last paragraph of Darwin's Origin of Species. One of the more memorable sections was about "The Four Memories" we have--the memory of past successes encoded into our biology and preserved by natural selection, the immune system's memory of diseases we've experienced during our lifetime, the memories accumulated by our brains, and the collective memory of transmitted culture.
Dawkins spent a very, very long time signing books, and looked exhausted when he signed my copy of The Blind Watchmaker near the end of the line.
The evening ended with a live music and karaoke party put on by Atheist Nexus.
Sunday
The first talk of the morning I attended (and live-tweeted) was by Gerardo Romero of Ateismo desde Mexico, about atheism in Mexico. His group has been around for about 10 years. It started on MSN forums but migrated to its own website and forums, and has now begun to move into the real world with two atheist marches. The First World Atheist March occurred on September 28, 2008 in Mexico City and Guadalajara in Mexico, as well as in Italy, Spain, Peru, and Colombia. They received newspaper coverage in the Excelsior, a major Mexico City newspaper. A second march was held on September 27, 2009 (Spain did theirs on a different day due to a holiday conflict), with participation also from ArgAtea in Argentina and Ateos from Peru. He talked about ADM's plans for further activism to promote science and critical thinking, separation of church and state, and distribution of condoms. ADM has a podcast, Masa Critica, as well as an electronic magazine, Hidra, published on their website.
Jonathan Kirsch spoke about the "Inquisitorial toolbox," first in the context of the history of the Inquisition and then as applied to more recent events. The main tools he described were the use of torture as punishment for wrong belief (as opposed to wrong action), calling this torture by a different name to conceal the real purpose behind the act, and requiring the "naming of names" as an act of contrition to show the sincerity of a recantation. In practice, this was used to eliminate competition and accumulate wealth, as well as to combat heresy (a word that derives from the Greek word for a free choice). He described the beginnings of the Inquisition as a tool to root out and eliminate the Cathars or Albigensians, whose heresy was to disbelieve in the newly-introduced 13th century doctrine of transubstantiation. The Cathars reasoned that this doctrine was the opposite of holy belief--if we believe it, we must believe that "when we go to the privy we will piss out the blood of our savior, and excrete the body of our savior." A crusade against them failed to wipe them out, and so the Inquisition was invented to root them out by using informants, the threat and actuality of torture, and the collection of names. The Inquisition was subsequently used to wipe out the Templars and seize their wealth--forfeiture was also a key tool in the toolbox, making the victims pay for the privilege of being tortured.
Kirsch gave more modern examples, including the use of "spectral evidence" in the Salem witch trials, the show trials of Stalinist Russia, and Hitler's forcing Jews to wear identifying badges and the identification of Jews in terms of bloodline as elements consciously copied from the Spanish Inquisition. And although the last victim of the Inquisition was executed in 1826 (garroted and placed in a barrel with flames painted on it as a reminder of the glory days of burnings at the stake) and the Inquisition was formally ended in 1834, the Holy Office that was created to run it still exists to this day under a different name ("Congregation for the Doctrine of the Faith"), headed by the Pope. I was reminded of how the Church of Scientology, after being prosecuted for criminal activity associated with its "Guardian Office," claimed to reform by changing the name of that unit to the "Office of Special Affairs."
Kirsch also observed that the McCarthy Era and the "global war on terror" have drawn on tools from the Inquisitor's toolbox. I think he could have also pointed out uses of the toolbox in the war on drugs (especially the use of civil forfeiture and "naming of names").
Eugenie Scott gave a talk about intelligent design which focused primarily on the strategies that have been used to try to get it into the public schools. While the direct approach failed in Kitzmiller v. Dover, the latest approach has been with "academic freedom" and "explore alternative evidence" bills and attempts to change state educational standards. She recounted recent events in Texas regarding attempts to put "teach the controversy"-style wording into the Texas Essential Knowledge and Skills standards, which started as a requirement to teach "strengths and weaknesses" across all domains, and ended with "all sides of scientific reasoning." She then looked at some cases from the 1990s and 2000 where individual teachers tried to teach creationism and were slapped down (Ray Webster, John Peloza, and Rodney LeVake), and noted that the "academic freedom" bills are essentially an attempt to legislate against such further slapdowns. Such a bill has passed in Louisiana, which allows teachers to bring in supplemental materials to critique biological evolution, global warming, and human cloning. She pointed to a phylogeny of these bills constructed by Anton Mates that showed how they have evolved.
These bills are constructed to try to avoid the possibility of legal challenge. They avoid any mention of religion to avoid establishment clause violations. They stress free speech and academic freedom. They are phrased as protective of a teacher's right to teach alternatives. And they are formulated as permissive rather than directive bills, which means that they are insulated from a facial challenge--a judge isn't likely to grant an injunction based on the vague language of the bill alone, but only on the basis of an "as applied" challenge, if there's a particular case where a teacher following the bill engages in activity that violates the Constitution and a parent and student with standing can be found to sue.
The final talk of the day I attended was Brian Parra's talk, "All Together Now: Strategies for Growing the Freethought Community." He distinguished identity vs. beliefs, pointing out that the Pew polls on belief are structured by first asking how a person self-identifies, then asking them a series of questions about belief. Only 1.6% of the U.S. population self-identifies as atheist, and it can be daunting to look at 1.6% vs. 98.4% of everybody else. But if you add agnostics, you get another 2.6%, a total of 4.2%, which is a group larger than Jews and Mormons put together. If you add "none"'s, you get another 12.1%, and a total of 16.3%--about the size of the black community. If you add in the "don't know" answerers, and adherents of nontheistic religions, you get up to 18.5%. If you look at not-monotheists, you get 20.1%. And if you look at not-evangelical-Christians, you get 74.3%.
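To make the running totals explicit, here's a minimal sketch (my own bookkeeping, using only the percentages quoted above from Parra's talk, and assuming the categories are disjoint) of how the cumulative figures add up:

    # Cumulative size of the potential coalition, per the figures quoted
    # from Brian Parra's talk.
    groups = [
        ("self-identified atheists", 1.6),
        ("agnostics", 2.6),                            # cumulative: 4.2%
        ('"none"s', 12.1),                             # cumulative: 16.3%
        ("don't knows + nontheistic religions", 2.2),  # inferred from the 18.5% total
    ]

    total = 0.0
    for name, pct in groups:
        total += pct
        print(f"{name}: +{pct}% -> {total:.1f}% cumulative")

    # The 20.1% (not-monotheists) and 74.3% (not-evangelical-Christians)
    # figures are separate cuts of the data, not further additions here.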
He further noted that if you look at how self-identified Catholics answered the question "Do you believe in God?", you'll find that 25% of them said no. (By the same token, though, if you look at how self-identified atheists answered that question, you'll find that 21% of them said yes.)
He suggested that we define positive aspects of atheism and create coalitions based on common ground, and drew squiggly circles on a diagram that grouped people by their answers to several questions: whether they don't believe in God, believe in a physical universe and natural causes (I believe he meant believe *only* in those, i.e., rejection of the supernatural), support secular government, support humanistic ethics, and have confidence in science and reason. These he identified as the "Big Five" for creating an atheist worldview. Afterward, I asked him what's the difference between his "Big Five" and secular humanism, to which he answered "Nothing." If it is different, it is only in being somewhat more concisely (and vaguely) formulated.
He concluded by saying that a possible model for success is church minus the theology--it's just a community that plans varied activities aimed at different age groups and interests, not just about atheism but in the name of atheism, which stays in touch with constituents via various media, which brings new people into positions of responsibility, and which seeks out "public displays of atheism, not merely for protest and activism, but also to demonstrate that atheists exist and are nice people."
I'm not sure I'm optimistic about that approach. Not only is it already being done by the humanists (including both CFI and AHA), but while his initial remarks were about ways to increase the scope and size of coalitions, his "Big Five," by looking at the intersection of those "squigglies" rather than the union, inherently shrinks them. And by far, the one that cuts down the group the most is the first one, nonbelief in gods. I think skeptics have an advantage over atheists here, in that they put the emphasis on the last item on his list, support for science and critical thinking, rather than the first. I'm inclined to think that the last three of the "Big Five" are far more important things to share in a civil society than the first two.
All in all, it was a great conference, despite a few glitches involving errors in room assignment, last-minute schedule changes, and technology. The most appealing aspects for me were the top-notch speakers on science and the chance to socialize and engage in discussion with many like-minded, intelligent people, even if they are part of a granfalloon.
UPDATE (October 11, 2009): Relevant to Brian Parra's talk is Luke Galen's sociological study of nonbelievers, the Non-Religious Identification Survey, as well as Bruce E. Hunsberger and Bob Altemeyer's book Atheists: A Groundbreaking Study of America's Nonbelievers, which I just read about in the presentation slides of a talk by Taner Edis of the Secular Outpost.
UPDATE (October 23, 2009): You can find a translation of this summary into Arabic at the Arab Atheists Network website.
UPDATE (November 16, 2009): Daniel Dennett's talk from the AAI conference is online here.
UPDATE (December 27, 2009): Lawrence Krauss's talk, Jerry Coyne's talk, Andy Thomson's talk, Richard Dawkins' talk, P.Z. Myers' talk, and Carolyn Porco's talk are all on YouTube as well.
Other Blogs on the AAI Convention
P.Z. Myers wrote about Russell Blackford's talk on defamation of religion, Toni Marano, Robert Richert's talk on Vietnam, Maurice Bisheff's apparently kooky talk on Thomas Paine, the Dogma Free America panel, the Maher/Dawkins Award ceremony, an exhibit on Evolutionary Genealogy, a gift of a bottle of wine supplied while having dinner with Daniel Dennett, acting in a forthcoming Mr. Deity episode, other gifts of wine and Surly-Ramics jewelry, proof of meeting Mr. Deity and Lucy supplied by yours truly, and a challenge regarding the Atheist Nexus.
Paul Fidalgo wrote summaries of the Dogma Free America panel, Lawrence Krauss's talk, Carolyn Porco's talk, and the Bill Maher award.
John Crippen describes his AAI convention experience in three posts: one, two, and three.
Surly Amy offers her observations on why "You Don't Have to Be a Skeptic to Be an Atheist," which nicely complements P.Z. Myers' review of Maurice Bisheff's talk. I agree with her, and also note that you don't have to be an atheist to be a skeptic. These two posts illustrate why I prefer to self-identify with skeptics.
Rich Orman interviewed a number of the speakers for his Dogma Free America podcast, including P.Z. Myers, Ed Buckner, Stuart Bechman, Sean Faircloth, Alpharabius, and Brother Richard of AtheistNexus.
If anyone comes across other summaries worthy of mention, note them in the comments or in email and I'll append them here.
(Photo of UFO sighting in Marriott lobby by Reed Esau.)
Posted by Lippard at 10/05/2009 02:02:00 PM 8 comments
Labels: atheism, Bill Maher, Daniel Dennett, Lawrence Krauss, religion, Richard Dawkins, science