Friday, October 30, 2009

Robert Balling on climate change

This afternoon I went to hear ASU Prof. Robert Balling, former head of ASU's Office of Climatology and current head of the Geographic Information Systems program, speak about climate change in a talk advertised as "Global Warming Became Climate Change: And the Story Continues," though I didn't notice whether he had a title slide for his presentation.

He began his talk by saying that in 1957, measurements of CO2 began to be made at Mauna Loa (by Charles David Keeling), which established that CO2 is increasing in our atmosphere, largely because of human activity--fossil fuel emissions in particular. It's approaching 390 parts per million (ppm). Last weekend, the "A" on A Mountain near the university was painted green by a bunch of people wearing shirts that say "350" on them, because they want atmospheric CO2 to be stabilized at 350 ppm, the approximate level in 1990, the benchmark year for the Kyoto Protocol.

But this isn't remotely feasible, he said, citing the Intergovernmental Panel on Climate Change (IPCC). Even the most optimistic scenario in the IPCC report has atmospheric carbon continuing to rise until 2100, hitting about 600 ppm. If we reduced emissions to zero, the best case would be stabilization at around 450 ppm. We will see increasing CO2 levels in our lifetimes, no matter what we do. (In other words, the Kyoto benchmark sets a standard for emissions levels to return to, not for a level of atmospheric carbon to return to.)

If you look at the earth's history on a longer scale, atmospheric carbon has been much higher in the past--about 2500 ppm during the age of the dinosaurs. During the last 600,000 years, however, it has been much lower, falling below 200 ppm in the last glacial period. This, Balling said, points to what he would identify as a dangerous level of CO2: a drop below 160 ppm, at which plants begin to die.

Humans are also producing other greenhouse gases besides CO2 that have an effect, such as methane and nitrous oxide (N2O), he said.

At this point, he said the greenhouse effect is real--CO2 doubling causes warming--and this has been known for 120 years and "nobody is denying that."

There are climate models, which he said he has great respect for--it's basic physics plus fantastic computing and applied math. Climate modelers, he said, are their own worst critics. Problems for climate models include clouds, water vapor, rain, and the ocean, but lots of things are modeled correctly and the results are generally pretty good. Clouds, he said, are the biggest area of debate. The IPCC models say that clouds amplify warming, but satellite-based measurements suggest that clouds dampen (but don't eliminate) warming. Thus, he concluded, IPCC may be predicting more warming than will actually occur.

He next discussed empirical support for warming, and pointed out that the official plot of global temperatures has no error bars, and that the reported numbers come from sensors that don't cover the entire world. A global average can be computed in different ways, and the different methods produce different results. You can take grid cells, average by latitudinal bands, compute two hemispheric averages, and average them together. Or you can simply average all of the data we have. He said that Roger Pielke Sr. questions the use of average temperatures and suggests looking at afternoon high temperatures instead. Looking at the older end of the chart, he asked, "where were the sensors in 1900? Why no error bars?"
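
To see why the choice of method matters, here's a minimal sketch in Python with invented station readings (the numbers and station layout are assumptions for illustration only): a flat average of all stations and an average of hemispheric means disagree whenever coverage is uneven, as it was in the early instrumental record.

```python
# Sketch: two ways to compute a "global average" from uneven station coverage.
# All readings are invented for illustration; only the arithmetic matters.

# (latitude, temperature anomaly in degrees C) -- far more stations in the
# northern hemisphere, as was true of the early instrumental record.
stations = [(60, 0.9), (55, 0.8), (50, 0.7), (45, 0.6), (40, 0.7),
            (-30, 0.1), (-45, 0.2)]

# Method 1: average every station with equal weight.
flat_mean = sum(t for _, t in stations) / len(stations)

# Method 2: average each hemisphere separately, then average the two means.
north = [t for lat, t in stations if lat >= 0]
south = [t for lat, t in stations if lat < 0]
hemi_mean = (sum(north) / len(north) + sum(south) / len(south)) / 2

print(f"flat average:        {flat_mean:+.2f} C")  # ~+0.57, dominated by the north
print(f"hemispheric average: {hemi_mean:+.2f} C")  # ~+0.45, weights each half equally
```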

He asked, "Is the earth warming?" and said, "right now the earth is not warming. I expect it to keep going up, but over the last decade there's been essentially none." He pointed to a recent article in Science magazine, "What happened to global warming?" Many are writing about this, he said, and there could be "1001 different things including sun and oceanic processes." (I don't believe this is correct unless you measure from 1998, which was an El Nino year. Most of the top 10 warmest years on record are post-1998.)

"Scientists are questioning the data," he said, showing photos from Anthony Watts' blog of poorly situated weather stations. "The albedo of the shelter in my backyard has changed as it has decayed," and caused it to report warmer temperatures. He said that people are having a field day taking photos of poor official sites. (What Balling didn't say is that what's important in the data is not absolute temperature but the temperature trends, and the good sites and bad sites both show the same trends.)

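To make the parenthetical point concrete, here's a minimal sketch (synthetic data, with an assumed constant 0.5-degree warm bias): a fixed siting bias shifts a station's absolute readings but leaves the fitted trend untouched. A bias that drifts over time, like Balling's decaying shelter, would alter the trend, which is exactly why comparing trends at good and bad sites is the relevant check.

```python
import numpy as np

years = np.arange(1979, 2001)
true_temps = 14.0 + 0.02 * (years - 1979)  # assumed 0.02 C/yr underlying trend
biased_temps = true_temps + 0.5            # constant warm bias from poor siting

true_slope = np.polyfit(years, true_temps, 1)[0]
biased_slope = np.polyfit(years, biased_temps, 1)[0]

print(f"trend without bias: {true_slope:.3f} C/yr")
print(f"trend with bias:    {biased_slope:.3f} C/yr")  # identical: a constant offset cancels in the slope
```
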
He pointed out that there are corrections to the temperature record based on time of measurement, urban heat islands, instruments used, etc. If you look at the raw data for the U.S. from 1979-2000, you see 0.25 degrees Celsius of warming. Sonde data shows 0.24 degrees, satellite microwave sounding unit (MSU) measurements show 0.28, IPCC shows 0.28, and FILNET shows 0.33. He suggested that these corrections to the official data may be inflating the temperature (again, see my previous comment on trends vs. absolute temperature). Sky Harbor Airport produces the official temperature readings for Phoenix, maximizing the urban heat island effect. Many of the city records come from the worst sites, and he suggested that looking at rural temperatures might give a different result.

Another factor is stratospheric turbidity from volcanic eruptions, and he showed a plot of orbital temperatures from satellites vs. stratospheric turbidity. He said that volcanism accounts for about 30% of the trend variability.

The big player in the game, he said, is the sun. Solar irradiance measurements showed a significant decline in solar output in 1980, but earth temperature continued upward--he said he mentioned this because he thought it would be raised as an objection. In response, he said that "the sun doesn't increase or decrease output over the entire spectrum and there are interactions with stratospheric clouds." He said that there are astrophysicists who argue that the sun is the major cause of global warming. In the Q&A, he said that one group thinks cosmic ray flux is the major factor in global temperature because it stimulates cloud formation, while another group says that cosmic ray flux is little more than a trivial effect. He also said that this debate takes place in journals that "I find very difficult to read."

There are other confounding variables like El Nino and La Nina, but he said there has "definitely been warming over the last three decades with a discernible human contribution."

He put up a graph of the Vostok temperature reconstruction based on ice core data, on a chart whose scale ran from -10 to +4 degrees Celsius of temperature change (mostly in the negative direction), and said we've seen periodic rapid changes up and down without any human contribution.

He talked about the IPCC "hockey stick" graph from 2001, which led to a huge debate about the possibility that bad statistical methods guaranteed the hockey stick shape. He observed that 1000 years ago it was as warm as or warmer than today--the Medieval Warm Period--which was missing from the hockey stick graph. There was also a "Little Ice Age," likewise missing from the graph. He said the IPCC has backed away from the hockey stick, and its most recent report includes clear Medieval Warm Period and Little Ice Age periods in its graphs.

He showed a photo of a microwave sounding unit for temperature measurement, and the polar satellite record from 1978 to the present, which showed a big peak in 1998 from El Nino. He said that when he wrote his book in 1992, it was true that there had been no warming. After 1998, the temperature came back down quickly, and, he said, the satellite record, like the Science article, hasn't seen warming since. He then corrected himself: "well, some warming, but not consistent with the IPCC models."

He said there has been high latitude warming, and the difference between winter and summer warming has supported the numeric models. "But a problem has evolved, which is the most powerful argument of the skeptics." The models predict warming at the surface that increases with altitude: "There should be very strong warming in the middle of the atmosphere, but it's not in the data." This is the main anti-global warming argument of Joanne Nova's "The Skeptics Handbook," which has been distributed to churches throughout the U.S. by the Heartland Institute (an organization supported by the oil industry that has sometimes gotten into trouble due to its carelessness).

At this point, Balling started asking various questions and answering them by quoting from the IPCC reports:

More hurricanes? The IPCC doesn't say this. He cited the 1990, 1996, 2001 (executive summary, p. 5), and 2007 (p. 6) reports, all of which say that there's no indication or no clear trend of increase or decrease of frequency or intensity of hurricanes or tropical cyclones as a result of warming.

The southwestern United States may become drier? Here, he answered affirmatively, pointing out that an ASU professor has an article that just came out in Science on this topic. Atmospheric circulation is decreasing, and soil moisture measures show the southwest is becoming drier. On this, he said, there's "evidence everywhere," and the Colorado River basin in particular is being hit hard. And this is consistent with IPCC predictions. He cited Roy Spencer to say that "extraordinary predictions require extraordinary evidence." (This actually derives from Carl Sagan, who said "extraordinary claims require extraordinary evidence" in Cosmos.)

Frequency of tornadoes? It's down, not up, and IPCC 2007 p. 308 says there is no evidence to draw general conclusions.

Ice caps are melting? Balling said Arctic yes, Antarctic no. He cited IPCC 2007 p. 6 regarding Antarctic sea ice extent indicating a lack of warming, and p. 13 saying that it's too cold there for widespread surface melting. He contrasted this with a slide of a homeless penguin used to argue for action on global warming. The Arctic ice cap "has its problems," he said, and its extent has declined, though it has "rebounded a bit" recently. (In the Q&A, he said that half of the loss in the last six years has been recovered.) He said that experts in sea ice extent identify relative temperature, ocean currents, and wind as more important than temperature--"it's not a thermometer of the planet." In the past, northern sea ice has dropped as southern sea ice has increased, with the overall global extent of sea ice relatively unchanged. In the Q&A, he made it clear that he wasn't saying temperature plays no role, but that global average temperature is not the driver, and that temperature matters less than the other factors he identified.

Sea levels changing? He said there's no doubt about this, but the important question is whether the rise is accelerating. He cited Church et al., J. Climate 2004, p. 2624 for a claim of "no detectable secular increase" in the rate of sea level rise, but noted that another article this week says that there is one. IPCC 2007, p. 5 says it is unclear whether the increase reflects a longer-term trend. The average has been 1.8 mm/year, but with variable rates of change. IPCC 2007, p. 9 says that 125,000 years ago sea levels were likely 4-6m higher than in the 20th century, due to retreat of polar ice.

He said that melting ice on Kilimanjaro has been a "poster child" for global warming, but the ice cap "started its retreat over 100 years ago," at the end of the Little Ice Age (1600-1850), and the retreat is related to deforestation and to ocean patterns in the Indian Ocean rather than to global warming. It's not in an area where significant warming is expected by climate models, and local temperatures don't show warming.

He then talked about a few factors that cause temperature forcing in a negative direction (i.e., cooling)--SO2, which makes clouds last longer, increased dust, and ozone thinning. He said that his entry into the IPCC was his work at the U.S./Mexico border where he found that overgrazed land on the Mexican side caused warming, and it was much cooler on the U.S. side of the border. The dust, however, had a global cooling effect.

The 2001 IPCC report lists global radiative forcings in the negative direction: stratospheric ozone, sulfate, biomass burning, and mineral aerosols. Forcings in the positive direction include CO2 and solar irradiance. The 2007 report adds many more, including contrails from aircraft. A chart from the report lists the level of scientific understanding for each factor, and he observed that it's "low" for solar irradiance.

He cited a quote from James Hansen (Proc. Nat. Acad. Sci. p. 12,753, 1998) saying that we can't predict the long term, and said he agrees.

He observed that a Pew Research Center poll conducted Sep. 30-Oct. 4 asked Americans whether they think there is evidence of global warming being caused by humans, and only 36% said yes--he noted that he's one of those 36%.

He concluded by observing that the difference between doing nothing at all and having stabilized emissions at 1990 levels starting in 1990 amounts to only a few hundredths of a degree of temperature in 2050--so no matter what we do, "we won't live long enough to see any difference."

In the Q&A session, Prof. Billie Turner said that "our academy is about to issue a statement that we are 97% sure that we will not be at a 1-degree Celsius increase but a 2-degree Celsius increase by 2050" (or about double what Balling's final slide showed). He objected that Balling's talk began with the "lunatic green fringe" and contrasted it with the IPCC, which he said would be like him beginning a talk with Dick Cheney's views before giving his own. He said this may be an effective format, but it "gives a slant on the problem that isn't real in the expert community." Turner also pointed out that on the subject of mitigation, if you are going to make a calculation in economic terms you have to use a discount rate. The Stern Review used a low discount rate, and concluded that it is worth spending a lot of money on mitigation now; William Nordhaus and Partha Dasgupta, on the other hand, used higher discount rates and concluded that it's not worth spending much money on now.
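
The discount-rate point is pure arithmetic, and a toy calculation shows how completely the chosen rate drives the conclusion. In this sketch the damage figure and the two rates are illustrative assumptions (roughly Stern-like vs. Nordhaus-like), not numbers from either analysis:

```python
def present_value(future_cost, rate, years):
    """Standard exponential discounting: PV = FV / (1 + r)**years."""
    return future_cost / (1 + rate) ** years

damage = 1_000_000_000_000      # assumed $1 trillion of climate damage in 100 years
for rate in (0.014, 0.055):     # assumed low (Stern-like) vs. high (Nordhaus-like) rate
    pv = present_value(damage, rate, 100)
    print(f"discount rate {rate:.1%}: worth up to ${pv / 1e9:,.0f}B in mitigation today")
```

With the low rate, avoiding the assumed damage justifies spending on the order of $250 billion today; with the high rate, under $5 billion. Same future, opposite policy implications.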

Balling said that he gets email from "lefties" that ask him to "please keep criticizing" because "this [global warming] is just an excuse to keep the developing world from catching up." In conversation with a small group afterward, Balling made it clear that he thinks people shouldn't be listening to Limbaugh and Hannity on climate change, and in answer to my question about what sources the educated layman should read and rely upon, he answered unequivocally "the IPCC," at least the scientific portions authored by scientists. He had some criticisms for the way that the technical summaries are negotiated by politicians, however, and said that S. Fred Singer has made hay out of comparing the summaries to the actual science sections and pointing out contradictions. He also said that Richard Lindzen at MIT, who he said may be the best climate scientist around, thinks the whole IPCC process is flawed, and that John Christy, lead author of the 2001 IPCC report, thinks the IPCC process should allow an "alternative views" statement by qualified scientists who disagree.

In a very brief discussion I had afterward with the climate modeling grad student in my climate change class, he said that the biggest weakness of the talk was that Balling didn't discuss ocean temperatures, which are being measured by the Argo project of NASA's Jet Propulsion Laboratory. These measurements had shown some recent cooling (but a long-term warming trend); after discovering an error in the data, however, Joshua Willis found that warming has continued.

Balling supports the science, but he still leans toward minimizing the negative effects, and uses some apparently bad arguments to do so. His position clearly advocates a "wait and see" approach: we needn't be in a hurry to mitigate, since nothing we do will have any effect in our lifetimes--though what we do could have an enormous effect on what mitigation and adaptation will require of future generations.

Thursday, October 29, 2009

State Press defends Ravi Zacharias

ASU's State Press columnist Catherine Smith authored an op-ed piece promoting last night's appearance of Christian apologist Ravi Zacharias. This was at least her second such op-ed; a prior one was published on September 17.

My letter to the editor, below, didn't get published, but another critic's letter did get published.

Here's mine:
Catherine Smith quotes Ravi Zacharias as stating that "irreligion and atheism have killed infinitely more than all religious wars of any kind cumulatively put together." This statement not only demonstrates Zacharias' innumeracy, it also shows that he continues to make the mistake of attributing killing in the name of political ideologies like Stalinism and communism to atheism. I agree that Stalin, Mao, and Pol Pot killed more than religious wars did, but it wasn't their atheism that caused that killing. Those killed by religious wars, the Inquisition, and witch trials, however, were killed in the name of religion. In fairness, there were no doubt political issues involved in many wars over religion as well, but if you take claims of religiously motivated killing at face value, the death tolls for those killed in the name of religion far exceed the death tolls for those killed in the name of irreligion.

Zacharias has a history of attacking atheism with misrepresentations in his books, as documented in Jeff Lowder's "An Emotional Tirade Against Atheism" and Doug Krueger's "That Colossal Wreck," both of which may be found on the Internet as part of the Secular Web (http://www.infidels.org/).
I first heard of Zacharias back around 1991, when I sat behind someone on an airplane flight who was reading his book (reviewed by Krueger, linked above), A Shattered Visage. The parts I read were truly awful, about on par with M. Scott Huse's arguments against evolution (a step below Kent Hovind and Ken Ham). I didn't bother to attend last night's appearance, but would be interested in hearing any reports of how it went.

UPDATE (November 24, 2017): Steve Baughman has published an exposure of Zacharias' claims to have credentials he does not possess, and to have had academic appointments that did not exist.

UPDATE (September 26, 2020): Ravi Zacharias died of cancer earlier this year, but not before being caught in an online relationship scandal.

UPDATE (February 11, 2021): Ravi Zacharias International Ministries has publicly released a report on an investigation into abuse charges against Ravi Zacharias, and it found a significant pattern of predatory sexual abuse and a rape allegation.

The woman googled “Ravi Zacharias sex scandal” and found the blog RaviWatch, run by Steve Baughman, an atheist who had been tracking and reporting on Zacharias’s “fishy claims” since 2015. Baughman blogged on Zacharias’s false statements about academic credentials, the sexting allegations, and the subsequent lawsuit. When the woman read about what happened to Lori Anne Thompson, she recognized what had happened to that woman was what had happened to her.

As far as she could tell, this atheist blogger was the only one who cared that Zacharias had sexually abused people and gotten away with it. She reached out to Baughman and then eventually spoke to Christianity Today about Zacharias’s spas, the women who worked there, and the abuse that happened behind closed doors.

Wednesday, October 28, 2009

Teaching the Bible in public schools

The following is a letter to the editor of Arizona State University's State Press that the paper didn't print. It was written in response to an editorial by Will Munsil, son of Len Munsil, who was editor of the State Press when I was an undergraduate in the 1980s. Len Munsil is an extremely conservative Republican, a failed Republican candidate for Governor in 2006, and founder of the Center for Arizona Policy, Arizona's version of the American Family Association. His daughter, Leigh Munsil, is the State Press's current editor-in-chief. When Munsil Sr. edited the school paper, he sometimes refused to print my letters to the editor for shaky reasons.

The letter below was written in response to Will Munsil's "Putting the Bible back in public schools," which was published on October 14.
I disagree with Will Munsil's assertion that the Bible is the foundation of American political thought. On the contrary, the American form of government was rooted in the work of enlightenment philosophers such as Locke, Montesquieu, and Rousseau. The U.S. Constitution's form of government has more resemblance to Caribbean pirate codes than to the Ten Commandments.

That said, however, I agree with Munsil that knowledge of the Bible is worthwhile and should be taught in public schools for the purpose of cultural literacy, so long as it is done without endorsing Christianity or Judaism. The Bible Literacy Project's curriculum might be one way to do it. One way not to do it is to use the National Council on Bible Curriculum in Public Schools' curriculum--it takes a sectarian perspective, is full of errors, and has failed legal challenges in Texas and Florida for being unconstitutional.
I suspect this letter wasn't excluded by reason of content, but because they had already printed a couple of letters critical of Will Munsil's op-ed by the time I submitted this on October 16. Perhaps I should have mentioned that I'm an atheist, which makes the extent of my agreement with Munsil more interesting. Of course, my view is contrary to Munsil's in that I think Bible literacy is likely to decrease, rather than increase, religious belief. But it wouldn't surprise me if the NCBCPS curriculum is the one that Will Munsil had in mind.

I should point out that I think it should probably be taught as part of a world religions class that covers more than just Christianity--kids should not only get information about the Bible that they won't get in Sunday School, they should be informed about other religions, as well as the fact that history has been full of doubters of religion, as documented in Jennifer Michael Hecht's excellent Doubt: A History.

You can find out more about the NCBCPS curriculum that failed legal challenge in Texas here.

Munsil cited Stephen Prothero, whose op-ed piece, "We live in a land of biblical idiots," I wrote about at the Secular Web in early 2007.

Monday, October 26, 2009

Hitler orders DMCA notices for "Downfall" parody videos

Brad Templeton, chairman of the board of the Electronic Frontier Foundation, has produced his own "Downfall" parody video, making fun of the fact that Constantin Films has issued DMCA notices to remove all of the "Downfall" parody videos from YouTube:

UPDATE (April 20, 2010): This video has been taken down from YouTube after a complaint from Constantin Films, which Brad Templeton has protested. The video is now available at Vimeo.

Paul Haggis leaves Scientology

Paul Haggis, director of the film "Crash" (not to be confused with the David Cronenberg film of the same name), has left Scientology with an open letter published on ex-Scientologist Mark "Marty" Rathbun's blog (which has also supplied links to Scientology's reply).

One of Haggis' main complaints is the Church's homophobia. Was Haggis really in Scientology for three and a half decades without realizing that homosexuality is 1.1 on the "tone scale"? Good for him for leaving, but he must have had blinders on regarding everything he complains about.

Richard Carrier to speak in Phoenix

Richard Carrier will be speaking to the Humanist Society of Greater Phoenix on Sunday, November 8 at around 10 a.m.--it will likely be packed, so showing up for breakfast or just to get a seat at 9 a.m. is advised. Richard will be speaking about Christianity and science, ancient and modern, and you can get a bit more information about his talk at his blog.

Saturday, October 24, 2009

Personalized medicine research forum

Yesterday afternoon I attended a Personalized Medicine research forum at ASU's Biodesign Institute, sponsored by ASU's Office of the Vice President for Research and Economic Affairs (OVPREA) and hosted by Dr. Joshua LaBaer of ASU's Virginia G. Piper Center for Personalized Diagnostics.

The forum's speakers covered both the promise and problems and issues raised by the developing field of personalized medicine, which involves the use of molecular and genetic information in medical diagnosis and treatment. A few highlights:

Introduction (Dr. LaBaer)
Dr. LaBaer pointed out that these new diagnostics cost a great deal of money to develop, but they have the potential for cost savings, for instance, if they can be used to identify forms of disease that will not benefit from very expensive treatments. He gave the example of Genomic Health, which has developed a test for early stage breast cancer to determine whether women will or won't benefit from adjuvant therapy (chemotherapy to prevent recurrence). A test that costs even a few thousand dollars to perform is something insurers will be willing to pay for if it has the potential of saving tens of thousands of dollars of expense on chemotherapy that will not provide any benefits. On the other hand, the mere promise of early detection of susceptibility for disease has the potential for overtreatment and an increase in healthcare expenses. This problem was discussed by a number of speakers, with particularly bad potential consequences in the legal realm.
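
The insurer's arithmetic here is just an expected-cost comparison. A hedged sketch (every dollar figure and rate below is an assumption for illustration, not data from the talk):

```python
# Toy expected-cost comparison: treat everyone vs. test first and treat
# only likely responders. All numbers are illustrative assumptions.

patients = 1000
chemo_cost = 50_000      # assumed cost of adjuvant chemotherapy per patient
test_cost = 3_500        # assumed cost of the diagnostic test
benefit_rate = 0.25      # assumed fraction of patients who actually benefit

treat_all = patients * chemo_cost
test_first = patients * test_cost + patients * benefit_rate * chemo_cost

print(f"treat everyone:   ${treat_all / 1e6:.1f}M")   # $50.0M
print(f"test, then treat: ${test_first / 1e6:.1f}M")  # $16.0M under these assumptions
```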

Personalized Diagnostics (Dr. LaBaer)
Dr. LaBaer talked briefly about his own lab's work in biomarker discovery and cell-based studies. In biomarker discovery, his lab is working in functional proteomics, using cloned copies of genes to produce proteins and building tests that allow examination of thousands of proteins at a time. His lab, formerly at Harvard and now at ASU, has 10,000 copies of human genes and 50,000 copies of genes from other animals, which are made available to other researchers. (There's more information at the DNASU website.)

The goal of biomarker discovery is to greatly improve the ability to find markers of human health using the human immune system, by identifying antigens that are markers for disease. The immune system generates antibodies not just in response to infectious disease, but against other proteins when we have cancer. Tumor antigens get into the bloodstream, though they may only appear in 10-15% of those who have the disease. Rather than testing one protein at a time, as is done with ELISA assays, LaBaer's lab is building protein microarrays with thousands of proteins, tested at once with blood serum. Unlike old array technology that purifies proteins and puts them into spots on arrays, where the proteins may degrade and lose function, their method involves printing the DNA that encodes the gene on the arrays, then capturing proteins in situ on the array at the time the experimental test is performed.

LaBaer's lab's cell-based work involves trying to identify how proteins behave in cells when they are altered, in order to find out which pathways contribute to consequences such as drug resistance in women with breast cancer, as occurs with Tamoxifen. If you can find the genes that make cancer cells resistant, you can then knock them out and cause those cells to die. They tested 500 human kinases (5/7 of the total) and found 30 enzymes that consistently make the cancer cells resistant. Women with a high level of those enzymes who take Tamoxifen have quicker relapses of cancer.

Complex Adaptive Systems Initiative (George Poste)
George Poste, former director of ASU's Biodesign Institute and former Chief Scientist and Technology Officer at SmithKline Beecham, talked about the need to replace thinking about costs in the healthcare debate with thinking about value. The value proposition of personalized medicine is early detection, rational therapeutics where treatment is matched to the right subtype of disease, and integrative care management with better monitoring of the efficacy of treatments. He said that the first benefits will come from targeted therapy, which will then overlap with individualized therapy as we learn how our genomes affect such things as drug interactions. He was critical of companies like 23andMe, which he called "celebrity spit" companies that do little more than give people a needless sense of anxiety about predispositions to disease that they currently can do nothing about except eat right and exercise.

Poste also had criticisms for physicians, pointing out that it takes 15-20 years for new innovations to become routinely adopted, and many physicians don't use treatment algorithms at all. Oncologists, he said, make money from distributing treatments empirically (that is, figuring out whether it's effective by using the treatment on the entire population with the disease) rather than screening first, even where tests exist to determine who the treatment is likely to work on. He said that $604 million/year in health care costs could be saved by the use of a single colon cancer screening test, and not proceeding with treatment where it isn't going to work. Today, where 12-40% of people are aided by treatments that cost tens of thousands of dollars, 60-88% of that spending is being wasted. With the aging population, he said that Humana will in the next several years see all profits disappear, spent on expensive treatments of people who don't respond to them.

Pharmaceutical companies are beginning to do diagnostic test development alongside drug development now, and insurers will push for these tests to be done. Poste suggested that we will see the emergence of "no cure, no pay" systems, and noted that Johnson & Johnson has a drug that has been introduced for use in the UK under the condition that the company will reimburse the national health care system for every case in which it is used but doesn't work. Merck's Januvia drug for type II diabetes similarly offers some kind of discount based on performance.

Poste pointed out another area for potential cost savings, related to drug safety. With some 3.1 billion prescriptions made per year, there are 1.5-3 million people hospitalized from drug interactions, 100,000 deaths, and $30 billion in healthcare costs, though he noted this latter figure includes caregiver error and patient noncompliance.

He bemoaned the "delusion of zero risk propagated by lawyers, legislatures, and the media," and pointed out that the FDA is in a no-win situation. (This is a topic that's been recently covered in two of my classes, my core program seminar and my law, science, and technology class with Prof. Gary Marchant. If the FDA allows unsafe drugs to be sold, then it comes under fire for not requiring sufficient evidence of safety. If, on the other hand, it delays the sale of effective drugs, it comes under fire for causing preventable deaths. The latter occurred during the 1980s with AIDS activists protesting against being denied treatments, described in books such as Randy Shilts' And the Band Played On and Steven Epstein's Impure Science. This led to PDUFA, the Prescription Drug User Fee Act of 1992, under which drug companies started funding FDA drug reviewer positions through application fees to help speed approval. That has been blamed for cases of the former, with the weight-loss drugs Pondimin and Redux being approved despite evidence that they caused heart problems. That story is told in the PBS Frontline episode "Dangerous Prescription" from November 2003.)

Poste pointed out that there have been 450,000 papers published claiming to find disease biomarkers, of which the FDA has approved only five. But he didn't blame the FDA for delay in this case, because the literature consists of a mass of what he characterized as "wasteful small studies" with insufficient statistical power. In the Q&A session, he argued that NIH needs to start dictating clear and strong standards for disease research, and that it has abdicated its role in ensuring good science. He said that "not a single national cancer study with sufficient statistical power" has been done in the last 20 years; instead, research is fragmented across academic silos. He called for "go[ing] beyond R01 grant mentality" and building the large, expensive studies with 2,500 cases and 2,500 controls that need to be done.

He also raised challenges about the "very complex statistical analysis required" in order to do "multiplex tests" of the sort Dr. LaBaer is trying to develop. And he pointed out the challenge that personalized medicine presents for clinicians, in that "only about six medical schools have embraced molecular medicine and engineering-based medicine." Those that don't use these new techniques as they become available, he said, "will open themselves up to malpractice suits."

Science and Policy (David Guston)
David Guston, co-director of ASU's Consortium for Science, Policy, and Outcomes (CSPO) and director of ASU's Center for Nanotechnology in Society (CNS), spoke about "cognate challenges in social science" and how CNS has been trying to develop a notion of "anticipatory governance of emerging technology" and to devise ways to build such a capacity into university research labs as well as broader society, allowing policy decisions to be made in advance of a technology's emergence in society at large. He described three capacities of anticipatory governance--foresight, public engagement, and integration--and explained how these have been used at ASU.

Foresight: Rather than looking at future consequences as a linear extrapolation, CNS has used scenario development and a process of structured discussions based on those scenarios with scientists, potential users, and other potential stakeholders, about social and technical events that may be subsequent consequences of the scenarios. This method has been tested with Stephen Johnston's "Doc-in-a-Box" project at ASU's Center for Innovations in Medicine, which Guston said led to some changes in the conceptualization of the technology.

Public Engagement: The "scope and inclusion of public values is important for success," Guston said, and gave as an example the "national citizens technology forum" that CNS conducted in six locations to look at speculative scenarios about nanotechnology used for human enhancement. These were essentially very large focus groups whose participants engaged in "informed deliberation" over the course of a weekend, after having read a 61-page background document and spending the prior month engaging in Internet-based interaction.

Integration: Guston described the "embedding of social scientists in science and engineering labs," to develop productive relationships that help lab scientists identify broader implications of their work while it's still in the lab rather than after it's introduced to the general public.

Guston suggested that there might be other ways of implementing "anticipatory governance" in the form of legislative requirements or standards and priorities set by program officers at funding organizations, but that the lab setting is "the best point of leverage at a university" and can set an example for others to follow.

Clinical Perspective (Larry Miller)
Larry Miller, Research Director at the Mayo Clinic in Scottsdale, spoke about the healthcare provider's approach to personalized medicine. He said that Mayo is committed to individualized care, and that now that we are beginning to understand the power of human variation, these new developments have "to be transformational for providers or they won't survive." He suggested that the future of medicine will move from reactive and probabilistic to more deterministic selection of treatments based on diagnoses. He emphasized the need for education for doctors, and pointed out that "standards of care will become outmoded," which is "disruptive to law and [insurance] coverage." He said that Mayo sees a big challenge of complexity, where what was one disease (breast cancer) is now at least ten different subdiseases. Doctors need to make their treatment decisions on the detail, to predict how the disease will behave, and choose the best drugs possible based on safety, effectiveness, and cost-effectiveness.

Miller pointed out that this requires interdisciplinary work, and said that Mayo in Arizona has a huge advantage with its relationship with ASU, where so much of this work is going on. While Mayo has scientific expertise in a number of areas, these new technologies draw on expertise from beyond medicine, in particular informatics and computational resources needed to build an effective decision support system that will become essential for doctors to use in a clinical setting.

He talked about Mayo's program for individualized medicine, which involves not just incorporating new developments in diagnostics and therapeutics, but also regenerative medicine for the repair, renewal, and regeneration of deficits.

Mayo has had electronic medical records for the last 15 years, on 6 million people, but these are kept in multiple incompatible systems and were not built with research in mind. Mayo hopes to improve these systems so that they can be used in an iterative process to learn more about the efficacy of therapies, and so that therapies can be combined with "companion diagnostics for monitoring progression, recurrences, and response to therapy."

Like Poste, he raised objections to the companies that market gene sequencing directly to individuals, which just "scare people inappropriately," but identified learning about disease predispositions as an important part of these developing technologies. We need to develop methods of risk analysis that can help people correctly understand what these predispositions mean.

He sees the future as having three waves--the first wave will be the new diagnostics, the second wave improvements in clinical practice and therapy, and the third wave embedding the new technology into the healthcare system, with significant changes to policy and education.

Health Informatics (Diana Petitti)
Diana Petitti, former CDC epidemiologist and former director of research for Kaiser Permanente, where she built a 20-year longitudinal data repository for its 35 million members, spoke about the importance of health informatics. (She is now a professor in ASU's Department of Biomedical Informatics.) Dr. Petitti raised concerns about how in the United States we are "loath to deny anyone anything" in terms of medical treatments, when in fact "we do deny lots of people lots of things." She worried that personalized medicine has the potential to lead to greater maldistribution of healthcare, with the "haves" getting more and better treatment and the "have nots" getting less and worse treatment, unless we plan carefully. She advocated evidence-based medicine and assessing the value of treatments to be deployed to the general population.

Dr. Petitti brought up as an example the fact that oral contraceptives result in a 2x-10x increase in the likelihood of a venous thrombotic event, and that the Factor V Leiden gene is predictive of susceptibility to that outcome, yet no screening for it is done. Why not? Because the test identifies only 5% of those who will have the event, it's very expensive, and we don't have good alternatives to oral contraceptives. These kinds of issues, she suggested, will recur with multiplex diagnostics.
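
Her point about why screening isn't done can be put into rough numbers. In the sketch below, every input is an assumption chosen only to show the shape of the argument (the 5% figure is hers; the rest are invented):

```python
# Rough screening-yield sketch. Inputs are illustrative assumptions,
# not real epidemiology.

women_screened = 100_000
event_rate = 0.001           # ASSUMED baseline rate of venous thrombotic events
fraction_predicted = 0.05    # her figure: test identifies only 5% of eventual events
cost_per_test = 200          # ASSUMED cost of Factor V Leiden testing

events = women_screened * event_rate
events_caught = events * fraction_predicted
total_cost = women_screened * cost_per_test

print(f"events in cohort:         {events:.0f}")         # 100
print(f"events the test predicts: {events_caught:.0f}")  # 5
print(f"cost per predicted event: ${total_cost / events_caught:,.0f}")  # $4,000,000
```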

She explicitly worried that "we have dramatically oversold preventive medicine" and doesn't think it's likely that savings from prevention will allow coverage for more extensive treatment. She advocated that everyone in the field see the film "Gattaca," and stated that ASU provides "unique opportunities to train people to think about these issues" using "quantitative reasoning and probabilistic thought." She concluded by saying that we need to "work towards rational delivery of healthcare that optimizes public health."

Law (Gary Marchant)
Prof. Gary Marchant of the Sandra Day O'Connor College of Law at ASU, who has a Ph.D. in genetics and is the executive director of ASU's Center for the Study of Law, Science, and Innovation (formerly the Center for the Study of Law, Science, and Technology), spoke about legal issues. First he listed the many programs available at ASU in the area, beginning with the genetics and law program that has been here for 10 years and was the reason he first came to ASU. Others include a new personalized medicine and law program at the Center for Law, Science, and Innovation; a planned center on ethical and policy issues in personalized medicine in conjunction with the Biodesign Institute, CSPO, TGen, Mayo, etc.; and research clusters at the law school on breast cancer, warfarin, and personalized medicine. He also gave a plug for an upcoming conference March 8-9, 2010 at the Arizona Biltmore sponsored by AAAS and Mayo, which also has a great deal of corporate support.

Prof. Marchant indicated that liability is the biggest issue regarding personalized medicine, and he sees doctors as "sitting ducks," facing huge risks. If a doctor prescribes a treatment without doing a corresponding new diagnostic test, and there are complications, he can be sued. If he does the test, it shows a very low likelihood of disease recurrence, and he advises against the treatment, and the patient then turns out to be one of the rare people who has a recurrence, he can also be sued. The doctor is really in a damned-if-you-do, damned-if-you-don't situation. The insurers and pharmaceutical companies are at less risk, since they have already developed enormous resources for dealing with the lawsuits that are a regular part of their existence. In a short discussion after the forum, I asked Prof. Marchant whether doctors would be liable if they performed a diagnostic test, found that it showed a low likelihood of recurrence or benefit from a treatment, and then recommended the treatment anyway, knowing the insurance company would refuse to pay for it--would that shift the liability to the insurance company? He thought it might, though it would be unethical for a doctor to recommend a treatment he didn't actually think was necessary, and there's still the potential for liability if the insurance company pays for the treatment and the treatment itself produces complications. It seems that this problem really needs a legislative or regulatory fix of some sort, so that doctors have some limitation of liability in cases where they have made a recommendation that everyone would agree was the right course of action but a low-probability negative consequence occurs anyway.

Prof. Marchant observed that the liability issues are particularly problematic in states like Arizona, where each side in the suit is limited to a single expert witness. He said there is "no clear guidance or defense for doctors," and the use of clinical guidelines in a defense has not been effective in court, in part because doctors don't use them.

Q&A
A few additional points of interest from the Q&A sessions (some of which has already been combined into the above summaries):

Dr. LaBaer pointed out that most markers for diseases don't seem to have any role in causing the disease, such as CA-125 and ovarian cancer. So his lab is looking not just for biomarkers, but for those that will affect clinical decisions. Four out of five positive mammography results for breast cancer are cases where nothing is wrong and the woman will not end up getting breast cancer, yet some procedure ends up being undergone, with no value. So he wants to find a companion test that can tell which are the four who don't need further treatment.
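
That four-out-of-five figure is a standard base-rate effect, and Bayes' theorem reproduces it from plausible inputs. In this sketch the prevalence, sensitivity, and specificity are assumed round numbers chosen only to land near the cited positive predictive value:

```python
# Positive predictive value via Bayes' theorem, with assumed round numbers.
prevalence = 0.005    # assumed fraction of screened women with cancer
sensitivity = 0.90    # assumed P(positive test | cancer)
specificity = 0.98    # assumed P(negative test | no cancer)

p_positive = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
ppv = prevalence * sensitivity / p_positive

print(f"P(cancer | positive result) = {ppv:.2f}")  # ~0.18: ~4 of 5 positives are false alarms
```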

George Poste pointed out that baby boomers are going to bankrupt the system as they reach the end of their lives: about 70% of the $2.3 trillion in annual healthcare spending goes to the last 2-3 years of life, with many treatments costing $60K-$100K per treatment cycle for drugs that add 2-3 weeks of life. The UK's National Institute for Health and Clinical Excellence (NICE) has been making what are, in effect, rationing decisions by turning down all of the new cancer drugs that have come along, because they have such great cost and such minimal benefit. He asked, "how much money could you save with a 90% accurate test of who's going to die no matter what you do?"
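
His rhetorical question has a back-of-the-envelope answer, though only under loudly labeled assumptions. The total and the 70% share are his figures; the fraction of end-of-life spending that goes to non-responders is invented for illustration:

```python
# Back-of-the-envelope sketch of Poste's question. Two inputs are his
# figures; the futile-spending fraction is an ASSUMPTION for illustration.

total_spend = 2.3e12        # his figure: annual U.S. healthcare spending
end_of_life_share = 0.70    # his figure: share spent in the last 2-3 years of life
futile_share = 0.5          # ASSUMED fraction of that spending on non-responders
test_accuracy = 0.90        # his hypothetical 90% accurate test

avoidable = total_spend * end_of_life_share * futile_share * test_accuracy
print(f"potential annual savings: ${avoidable / 1e9:,.0f}B")  # ~$700B under these assumptions
```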

Prof. Marchant said more about legal issues involving specimen repositories, including a case at ASU. The developer of the prostate-specific antigen (PSA) test, William Catalona, had a repository of 30,000 tissue samples at Washington University that he wished to take with him to Northwestern University when he took a new position there. He began asking patients for permission to move the samples, and 6,000 gave permission. But Washington University sued him, claiming that the samples were the property of the university. Patients pointed out that their consent agreement gave them the right to withdraw their samples from future research and that they had only consented to research on prostate cancer, but federal judge Stephen Limbaugh ruled in favor of the university, holding that patients had no property rights in their tissue. This ruling has reduced incentives for patients to consent to give specimens for research.

A current lawsuit against ASU by the Havasupai Indian tribe involves blood samples that were given for a study of diabetes by researchers who are no longer at ASU. They wanted to take the samples with them, and samples had also been given to other researchers for use in studies of schizophrenia and the historical origins of the tribe, even though informed consent was apparently only given for the diabetes research. Although this case was originally dismissed, it was recently reinstated.

Other cases involve patent protection of genetic information. About 25% of the human genome is patented, including Myriad Genetics' patent on the BRCA1 and BRCA2 genes which are predictive of breast cancer and can only legally be tested for by Myriad. This case is likely to go to the U.S. Supreme Court regarding the issue of whether human genes can be patented. The courts so far have ruled that a gene in isolation outside of the human body is patentable, even though (in my opinion) this seems at odds with the requirement that patents be limited to inventions, not discoveries. There has already been a legislative limitation of patent protection for surgical procedures for the clinical context, so that doctors can't be sued for patent infringement for performing a surgery that saves someone's life; it's possible that a similar limitation will be applied on gene patents in a clinical context, if they don't get overturned completely by the courts.

These gene patents create a further problem for multiplex tests, since such tests inevitably include many patented genes. Prof. Marchant recalled that someone from Affymetrix stood up at an ASU seminar and said that they were building their GeneChip DNA microarrays to test for the presence of thousands of genes and were ignoring gene patents. They were subsequently sued. Dr. LaBaer stated that his lab is doing the same thing with cloned genes--they're cloning everything and giving the clones away, without regard to patents.

The session was videotaped and will be made available to the public online. I will add a link to this posting when it becomes available.

If you've read this far, you may also be interested in my summary of Dr. Fintan Steele's talk at this year's The Amazing Meeting 7, titled "Personalized Medicine or Personalized Mysticism?", in my summary of the Science-Based Medicine conference that took place just prior to TAM7, and in my short summary of Dr. Martin Pera's talk on regenerative medicine and embryonic stem cells at the Atheist Alliance International convention that took place earlier this month.

Friday, October 23, 2009

Atheist Alliance International Convention summary in Arabic

The Arab Atheists Network has begun posting an Arabic translation of my summary of the AAI convention here. Thanks to Alpharabius and the Arab Atheists Network for doing that, and for their promotion of atheism in the Arab world!

Wednesday, October 21, 2009

Skepticism, belief revision, and science

In the comments of Massimo Pigliucci's blog post about the scope of skepticism (which I've already discussed here), Skepdude pointed to a couple of blog posts he had written on similar topics some time ago, about what atheists have in common and skepticism and atheism. He argues that skeptics must be atheists and cannot be agnostics or theists, a position I disagree with. In an attempt to get to the bottom of our disagreement after a few exchanges in comments on his blog, I wrote the following set of questions which I first answered myself, so we can see how his answers differ.

Do we have voluntary control over what we believe?

In general, no. The credence we place in various propositions--our belief or rejection of them--is largely out of our voluntary control and dependent upon our perceptual experiences, memories, other beliefs, and established habits and methods of belief formation and revision. We can indirectly cause our beliefs to change by engaging in actions which change our habits--seeking out contrary information, learning new methods like forms of mathematics and logic, scientific methods, reading books, listening to others, etc.

How does someone become a skeptic?

People aren't born as skeptics--they learn about skepticism and how it has been applied in various cases (only after learning a whole lot of other things that are necessary preconditions--like language and reasoning). If skepticism coheres with their other beliefs, established habits and methods of belief formation and revision, and/or they are persuaded by arguments in favor of it, either self-generated or from external sources, they accept it and, to some degree or another, apply it subsequently.

When someone becomes a skeptic, what happens to all of the other beliefs they already have?

They are initially retained, but may be revised and rejected as they are examined through the application of skeptical methods and other retained habits and methods of belief formation and revision. Levels of trust in some sources will likely be reduced, either within particular domains or in general, if they are discovered to be unreliable. It's probably not possible to start from a clean slate, as Descartes tried to do in his Meditations.

Is everything a skeptic believes something which is a conclusion reached by scientific methods?

No. Much of what we believe, we believe on the basis of testimony from other people who we trust, including our knowledge of our own names and date and place of birth, parts of our childhood history, the history of our communities and culture, and knowledge of places we haven't visited. We also have various beliefs that are not scientifically testable, such as that there is an external world that persists independently of our experience of it, that there are other minds having experiences, that certain experiences and outcomes are intrinsically or instrumentally valuable, that the future will continue to resemble the past in various predictable ways, etc. If you did believe that skeptics should only believe conclusions which are reached by scientific methods, that would be a belief that is not reached by scientific methods.

Massimo Pigliucci on the scope of skeptical inquiry

Massimo Pigliucci, a biologist and philosopher at the City University of New York and regular writer for the Skeptical Inquirer, has offered up his thoughts about the relationship between skepticism, atheism, and politics. He wants to argue that skepticism and skeptical inquiry are identical with scientific skepticism, and mostly distinct from philosophy, religion, and politics. He restricts the domain of skeptical inquiry to "the critical examination of evidential claims of the para- or super-normal," and further restricts his notion of "evidential" to the empirical. (He subsequently refers to philosophical arguments and reasons as "non-evidence based approaches." I disagree, though this may be strictly a terminological dispute--I often use the word "evidence" to apply to reasons and arguments, not just empirical observations or reports of empirical observations, and I think this is common usage.)

He ends up drawing a Venn-style diagram which has an outer circle labeled with "critical thinking" and "rational analysis," within which is a series of three overlapping circles labeled "atheism," "skeptical inquiry," and "political philosophy." He argues that skeptical inquiry only overlaps with atheism where religions make empirical claims that are subject to scientific investigation, and likewise for political philosophy.

I offered a few critical comments at his blog, noting that it is odd that "atheism" is the only label on his diagram which is the name of a specific position rather than a method or discipline, and suggesting that it be labeled something like "views on religion." I also suggested that that circle extend beyond the scope of the "critical thinking" and "rational analysis" circle, though that's presupposing his diagram is descriptive rather than normative. [Note added 1:31 p.m.: If his diagram is understood as a diagram of what is appropriate subject matter for critical thinking, rational analysis, and skeptical inquiry with respect to atheism and political philosophy, then those two circles should arguably not extend outside the border of critical thinking/rational analysis.] Similar considerations should apply to the "political philosophy" circle. People hold religious and political views for reasons other than those produced as a result of critical thinking and rational analysis.

I also took issue with his identification of "skeptical inquiry" with scientific skepticism. Skeptics have always used philosophical tools as well as scientific ones, so I would find his diagram more accurate if the middle circle were labeled "scientific skepticism" or even "scientific inquiry."

I also have some skepticism about this taxonomic enterprise in general, which is arguably both philosophical and political itself--Pigliucci is not using scientific methods to set up this framework; it's philosophy, and there are political and pragmatic reasons for wanting us to accept it, since it issues in a ruling that certain domains are off-limits for skepticism, namely the examination of religious and political claims that are not subject to empirical investigation.

I think there are good pragmatic reasons for skeptical organizations to restrict themselves in such a way--the methods of skepticism can be used by anyone, regardless of their political or religious views, and organized skepticism has tried to appeal to a broad audience to focus critical attention on paranormal claims where scientific methodology can be brought to bear. But I'm skeptical of this as a general picture of the applicable domain of the methods of skepticism or skeptical inquiry. (I should note that I don't think that atheism implies skepticism--thus the reason for extending a circle with that name outside the boundaries of critical thinking and rational analysis--nor that skepticism implies atheism. Skepticism is about the methods used, not the conclusions reached. An atheist might think that any consistent application of skepticism will lead to atheism, but that presumes both that atheism is true and that consistent application of skepticism is a guarantee of truth, which it is not.)

I agree with commenter Maarten that the boundaries of these circles are fuzzy--just as the boundary between science and non-science doesn't admit of a bright-line demarcation. People can conceptualize the boundaries differently, even granting Pigliucci's conception of the empirically investigable as the domain of skeptical inquiry or scientific skepticism. The boundaries between scientific disciplines are themselves fuzzy, and disciplines use different methodologies, with huge differences between experimental and historical sciences, for example.

Finally, I agree with commenter Scott (Scott Hurst), who observes that religious believers do make very specific claims "about the nature of the universe, how it works, and its history (including our own)," and specifically noting belief in the power of prayer. These things are empirically testable and do make at least some common (one could say "vulgar") conceptions of God and religion refutable by science. The fact that a more sophisticated believer or theologian can construct a view that uses the same words yet withdraws from the realm of the empirical doesn't mean that the vulgar conception hasn't been refuted. This is perhaps more obvious with modern religions such as Mormonism and Scientology, where in the former case historical evidence and DNA evidence falsifies some key claims, and in the latter case where scientific evidence falsifies a great number of its claims. Hubbard's cosmology, for example, includes the idea that Xenu dropped thetans into a volcano on Hawaii 75 million years ago, but Hawaii didn't exist 75 million years ago. His book History of Man includes Piltdown Man in the human lineage, even though that fossil was discovered to be a hoax shortly after the book was published. And so forth.

It's fine for Pigliucci to define and use the terms the way he wants, but I don't think he's given strong reasons for the rest of us to accept the specifics of his formulation.

UPDATE (October 24, 2009): Russell Blackford has written "Pigliucci on science and the scope of skeptical inquiry" at the Sentient Developments blog, which comes to similar conclusions with a somewhat more comprehensive argument.