
Thursday, November 19, 2009

Joel Garreau on radical evolution

Yesterday I heard Joel Garreau speak again at ASU, as part of a workshop on Plausibility put on by the Consortium for Science, Policy, and Outcomes (CSPO). I previously posted a summary of his talk back in August on the future of cities. This talk was based on his book, Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies--and What It Means to Be Human.

Garreau was introduced by Paul Berman, Dean of the Sandra Day O'Connor College of Law at ASU, who also announced that Garreau will be joining the law school faculty this spring as the Lincoln Professor of Law, Culture, and Values.

He began by saying that we're at a turning point in history [has there ever been a time when we haven't thought that, though?], that he would present some possible scenarios for the next 2, 3, 5, 10, or 20 years, and that his book is a roadmap. The main feature of this turning point is that rather than transforming our environment, we'll increasingly be transforming ourselves; we're the first species to take control of its own evolution, and it's happening now.

At some point in the not-too-distant future, he said, your kid may come home from school in tears about how he can't compete with the other kids who are more intelligent, more athletic, more attractive, more attentive, and so forth--because you haven't invested in the human enhancement technologies coming on the market. Your possible reactions will be to suck it up [somebody's still gotta do the dirty jobs in society?], remortgage the house again to make your kid competitive, or try to get the enhanced kids thrown out of school. What you can't do is ignore it.

He then asked people to raise their hands if they could remember when the following were still prevalent:
  • The Sony Walkman
  • When computer screens were black and white. (An audience member said "green and black!")
  • Rotary dial phones
  • Mimeograph machines and the smell of their fluid
  • Polio
This shows, he said, that we're going through a period of exponential change.

His talk then had a small amount of overlap with his previous talk, in his explanation of Moore's Law--that we've had 32 doublings of computer firepower since 1959, so that $1 of computing power is about 2 billion times more than it was then, and an iPhone has more computing power than all of NORAD had in 1965. Such doublings change our expectations of the future, so that the last 20 years isn't a guide to the next 20, but to the next 8; the last 50 years is a guide to the next 14. He pulled out a handkerchief and said this is essentially the sort of display we'll have in the future for reading a book or newspaper.

He then followed Ray Kurzweil in presenting some data points to argue that exponential change has been going on since the beginning of life on earth (see P.Z. Myers' "Singularly Silly Singularity" for a critique):

It took 400 million years (My) to go from organisms to mammals, and
  • 150My to monkeys
  • 30My to chimps
  • 16My to bipedality
  • 4My to cave paintings
  • 10,000 years to first settlements
  • 4,000 years to first writing
At this point, culture comes into the picture, which causes even more rapid change (a point also made by Daniel Dennett in his talk at ASU last February).
  • 4,000 years to Roman empire
  • 1800 years to industrial revolution
  • 100 years to first flight
  • 66 years to landing on the moon
And now we're in the information age, which Garreau identified as a third kind of evolution, engineered or radical evolution, where we're in control. [It seems to me that such planned changes are subject to the limits of human minds, unless we can build either AI or enhancement technologies that improve our minds, and I think the evidence for that possibility really has yet to be demonstrated--I see it as possible, but I place no bets on its probability and think there are reasons for skepticism.]

Garreau spent a year at DARPA (the Defense Advanced Research Projects Agency), the organization that invented the Internet (then the ARPANET), which is now in the business of creating better humans, better war fighters. [DARPA was also a subject of yesterday's Law, Science, and Technology class. It's a highly funded organization that doesn't accept grant proposals; rather, it seeks out people it thinks are qualified and gives them funding for its projects. It has become rather more secretive as a result of embarrassment over its Total Information Awareness and terrorism-futures ideas, which got negative press in 2003.]

Via DARPA, Garreau learned about a project at Duke University with an owl monkey named Belle, whom he described as a monkey that can control physical objects at long distances with her mind. Belle was trained to play a video game with a joystick, initially for a juice reward and then because she enjoyed it. Researchers then drilled a hole in her head and attached fine electrodes (single-unit recording electrodes like the sort used to discover mirror neurons), identified the regions of her brain that were active when she operated the joystick, and then disconnected the joystick. She became proficient at playing the game under direct control of her brain. They then connected the system to a robotic arm at MIT, which duplicated the movements of her arm with the joystick.
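The decoding step in experiments like Belle's can be sketched as a linear model fit by least squares: neural firing rates in, joystick position out. This is only a toy illustration with synthetic data (every variable name and number here is invented, not the Duke lab's actual method):

```python
import numpy as np

# Toy sketch of linear neural decoding: predict 2-D joystick position
# from the firing rates of a handful of recorded neurons.
rng = np.random.default_rng(0)

n_samples, n_neurons = 500, 20
true_map = rng.normal(size=(n_neurons, 2))          # unknown rate-to-position mapping
rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
positions = rates @ true_map + rng.normal(scale=0.1, size=(n_samples, 2))

# Fit the decoder by ordinary least squares (with an intercept column).
X = np.hstack([rates, np.ones((n_samples, 1))])
weights, *_ = np.linalg.lstsq(X, positions, rcond=None)

# Predictions now depend only on recorded firing rates, which is what
# lets the physical joystick be disconnected, as it was for Belle.
predicted = X @ weights
rmse = float(np.sqrt(np.mean((predicted - positions) ** 2)))
```

Once the weights are fitted, new firing rates alone drive the output, so the same decoder can just as well steer a robotic arm as a cursor.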

Why did they do this? Garreau said there's an official reason and a real reason. The official reason is that an F-35 jet fighter is difficult to control with a joystick--wouldn't it be better to control it with your mind, and send information sensed by the equipment directly into the mind? The real reason is that the DARPA defense sciences office is run by Michael Goldblatt, whose daughter Gina Marie (who recently graduated from the University of Arizona) has cerebral palsy and was expected to spend the rest of her life in a wheelchair. If machines can be controlled with the mind, machines in her legs could be controlled with her mind, and there's the possibility that she could walk.

Belle first moved the robotic arm 9 years ago, Garreau said, and this Christmas you'll be able to buy the first toy mind-machine interface from Mattel at Walmart for about $100. It's just a cheap EEG device and not much of a game--it lets you levitate a ping pong ball with your mind--but there's obviously more to come.

Garreau said that Matthew Nagle was the first person to send emails using his thoughts (back in 2006), and DARPA is interested in moving this technology out to people who want to control robots. [This, by the way, is the subject of the recent film "Sleep Dealer," which postulates a future in which labor is outsourced to robots operated by Mexicans, so that they can do work in the U.S. without immigrating.]

This exposure to DARPA was how Garreau got interested in these topics, which he called the GRIN technologies--Genetics, Robotics, Information science, and Nanotechnology, which he identified as technologies enabled by Moore's Law.

He showed a slide of Barry Bonds, and said that steroids are sort of a primitive first-generation human enhancement, and noted that the first uses of human enhancement tend to occur in sports and the military, areas where you have the most competition.

Garreau went over a few examples of each of the GRIN technologies that already exist or are likely on the way.

Genetics
Dolly the cloned sheep. "Manipulating and understanding life at the most primitive and basic level."

"Within three years, memory pills, originally aimed at Alzheimer's patients, will then move out to the needy well, like 78 million baby boomers who can't remember where they left their car, then out to the merely ambitious." He said there's already a $36.5 billion grey market for drugs like Ritalin and Provigil (midafonil), and asked, "Are our elite schools already filling up with the enhanced?" [There's some evidence, however, that the enhancement of cognitive function (as opposed to staying awake) is minimal for people who already operate at high ability, with the greatest enhancement effect for those who don't--i.e., it may have something of an egalitarian equalizing effect.]

He said DARPA is looking at ways to end the need for sleep--whales and dolphins don't sleep, or they'd drown, but they do something like sleeping with one half of the brain at a time.

DARPA is also looking at ways to turn off hunger signals. Special forces troops burn 12,000 calories per day, but can't carry huge amounts of food. The body carries extra calories in fat that are ordinarily inaccessible unless you're starving, at which point they get burned. If that switch to start burning fat could be turned on and off at will, that could be handy for military use. He observed that DARPA says "the civilian implications of this have not eluded us."

Sirtris Pharmaceuticals, started by David Sinclair of Harvard Medical School, aims to have a drug to reverse aging based on resveratrol, a compound found in grape skins and red wine. [Though Quackwatch offers some skepticism.]

Garreau looks forward to cures for obesity and addiction. He mentioned Craig Venter's plan to create an organism that "eats CO2 and poops gasoline" by the end of this year, that will simultaneously "end [the problems in] the Middle East and climate change." [That seems overly optimistic to me, but ExxonMobil has given Venter $600 million for this project.]

He said there are people at ASU in the hunt, trying to create life forms like this as well. [Though for some reason ASU doesn't participate in the iGEM synthetic biology competition.]

Robotics
Garreau showed a photo of a Predator drone, and said, "Ten years ago, flying robots were science fiction, now it's the only game in town for the Air Force." He said this is the first year that more Air Force personnel were being trained to operate drones than to be pilots. 2002 was the first year that a robot killed a human being, when a Predator drone launched a Hellfire missile to kill al Qaeda members in an SUV in Yemen. He said, "while there's still a human in the loop, philosophical discussions about homicidal robots could be seen as overly fine if you were one of the guys in the SUV."

"We're acquiring the superpowers of the 1930s comic book superheroes," he said, and went on to talk about a Berkeley exoskeleton that allows you to carry a 180-pound pack like it weighs four pounds, like Iron Man's suit. He asked the engineers who built it, "Could you leap over a tall building in a single bound?" They answered, "yes, but landing is still a problem."

Functional MRI (fMRI) is being used at the University of Pennsylvania to try to determine when people are lying. Garreau: "Then you're like the Shadow who knows what evil lurks in the hearts of men."

Cochlear implants to give hearing to people for whom hearing aids do nothing, connecting directly to the auditory nerve. Ocular implants to allow the blind to have some vision. Brain implants to improve memory and cognition. Garreau asked, "If you could buy an implant that would allow you to be fluent in Mandarin Chinese, would you do it?" About half the room raised their hands. [I didn't hear a price or safety information, so didn't raise my hand.]

Information
He showed a photo of a camera phone and said, "Fifteen years ago, a machine like this that can fit in your pocket, with a camera, GPS, and MP3 player, and can send email, was science fiction. Now it's a bottom-of-the-line $30 Nokia."

He asked, "Does anyone remember when music players were three big boxes that you put on your bookshelves? Now they're jewelry. Soon they'll be earrings, then implants."

Close behind, he said, are universal translators. "Google has pretty good universal translation on the web, and sees it as moving out to their Droid phones." He observed that Sergey Brin was talking in 2004 about having all of the world's information directly attached to your brain, or having a version of Google on a chip implanted in your brain. [I won't get one unless they address network security issues...]

Nanotechnology
Garreau said, "Imagine anything you want, one atom or molecule at a time. Diamonds, molecularly accurate T-bone steaks." He said this is the least developed of the four GRIN technologies, "so you can say anything you want about it, it might be true." It's estimated to become a $1 trillion/year market in the next 10 years. There may be nanobots you can inject into your bloodstream by the thousands to monitor for things about to go wrong [see this video for the scenario I think he's describing], hunter-killers that kill cancer cells. "When you control matter at a fundamental level, you get a feedback loop between the [four] technologies."

At this point, Garreau said he's really not all that interested in the "boys and their toys" so much as he is the implications--"where does this take culture and society and values?" He presented three possible scenarios, emphasizing that he's not making predictions. He called his three scenarios Heaven, Hell, and Prevail.

Heaven
He showed a chart of an exponential curve going up (presumably something like technological capacity on the y axis and time on the x axis).

He said that at the NIH Institute on Aging, there's a bet that the first person to live to 150 is already alive today. He mentioned Ray Kurzweil, said that he pops 250 pills a day and is convinced that he's immortal, and is "not entirely nuts." [I am very skeptical that 250 pills a day is remotely sensible or useful.]

For the last 160 years, human life expectancy has increased at about 1/4 of a year every year. He asked us to imagine that that rate improves to one year per year, or more--at that point, "if you have a good medical plan you're effectively immortal." [I questioned this in the Q&A, below.]
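The arithmetic behind that remark can be made explicit: each year lived consumes one year of remaining life expectancy, but medical progress adds some back, so any gain of one year per year or more means expectancy never runs out. A toy sketch (the starting expectancy of 40 years is an invented number):

```python
def years_until_expectancy_exhausted(remaining, gain_per_year, max_years=500):
    """Count calendar years until remaining life expectancy hits zero.

    Each year, one year is used up but medical progress adds back
    `gain_per_year`. Returns max_years if expectancy never runs out
    within the horizon (the 'effectively immortal' case).
    """
    for year in range(max_years):
        if remaining <= 0:
            return year
        remaining += gain_per_year - 1.0
    return max_years

# Historical rate (~0.25 years gained per year): expectancy still runs out.
historical = years_until_expectancy_exhausted(40.0, 0.25)
# Garreau's hypothetical (one year per year): it never does.
escape = years_until_expectancy_exhausted(40.0, 1.0)
```

At the historical rate, a 40-year expectancy stretches to about 54 calendar years; at one year per year, the horizon never arrives, which is the sense in which "a good medical plan" would make you effectively immortal.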

Hell
He showed a chart that was an x-axis mirror of the Heaven one, and described this as a case where technology "gets into the hands of madmen or fools." He described the Australian mousepox incident, where researchers in Australia found a way to genetically alter mousepox so that it becomes 100% fatal, destroying the immune system, so that there's no possible vaccine or prevention. This was published in a paper available to anyone, and the same thing could be done to smallpox to wipe out human beings with no defense. He said the optimistic version is something that wipes out all human life; the pessimistic version is something that wipes out all life on earth. [In my law school class, we discussed this same topic yesterday in more detail, along with a similar U.S. paper that showed how to reconstruct the polio virus.]

The problem with both of these scenarios for Garreau is that they are both "techno-deterministic," assuming that technology is in control and we're "just along for the ride."

Prevail
He showed a chart that showed a line going in a wacky, twisty pattern. The y-axis may have been technological capacity of some sort, but the x-axis in this case couldn't have been time, unless there's time travel involved.

Garreau said, if you were in the Dark Ages, surrounded by marauding hordes and plagues, you'd think there wasn't a good future. But in 1450 came the printing press--"a new way of storing, sharing, collecting, and distributing information"--which led to the Renaissance, the Enlightenment, science, democracy, etc. [Some of those things were rediscoveries of advancements previously made, as Richard Carrier has pointed out. And the up-and-down of this chart and the example of the Dark Ages seem to be in tension, if not in conflict, with his earlier exponential curve, though perhaps it's just a matter of scale. At the very least, however, they are reason to doubt continued growth in the short term, as is our current economic climate.]

Garreau called the Prevail scenario more of a co-evolution scenario, where we face challenges hitting us in rapid succession, to which we quickly respond, which creates new challenges. He expressed skepticism of top-down organizations having any capacity to deal with such challenges, and instead suggested that bottom-up group behavior by humans not relying on leaders is where everything interesting will happen. He gave examples of eBay ("100 million people doing complex things without leaders"), YouTube ("no leaders there"), and Twitter ("I have no idea what it's good for, but if it flips out the Iranian government, I'm for it.") [These are all cases of bottom-up behavior facilitated by technologies that are operated by top-down corporations and subject to co-option by other top-down institutions in various ways. I'm not sure how good the YouTube example is considering that it is less profitable per dollar spent than Hulu--while some amateur content bubbles to the top and goes viral, there still seems to be more willingness to pay for professional content. Though it does get cheaper to produce professional content and there are amateurs that produce professional-quality content. And I'll probably offer to help him "get" Twitter.]

The Prevail scenario, he said, is "a bet on humans being surprising, coming together in unpredicted ways and being unpredictably clever."

He ended by asking, "Why have we been looking for intelligent life in the universe for decades with no success? I wonder if every intelligent species gets to the point where they start controlling their own destiny and what it means to be as good as they can get. What if everybody else has flunked? Let's not flunk. Thanks."

Q&A
I asked the first question: is there really so much ground for optimism about extending human lifespan, when our gains have increased the median lifespan but made no recent progress at the top end? The oldest woman in the world, Jeanne Calment, died at 122 in 1997, and no one else has reached that age. He answered that this was correct--past improvements have come from nutrition, sanitation, reducing infant mortality, and so forth--but said that now that we've spent $15 billion to sequence the first human genome, with the cost of sequencing a complete genome approaching $1,000 and personalized medicine coming along, he suspects we'll find the causes of aging and gain the ability to reverse it through genetic engineering.

Prof. David Guston of CSPO asked "What's the relation between your Prevail scenario and the distribution of the success of the good stuff from GRIN technologies?" Looking at subgroups like males in post-Soviet Russia and adults in Africa, he said, things seem to be going in the wrong direction. Garreau answered that this is one of the nightmare scenarios--that humans split into multiple species, such as enhanced, naturals, and the rest. The enhanced are those who keep upgrading every six months. The naturals are those with access to enhancements who "choose not to indulge, like today's vegetarians who are so because of ethical or aesthetic reasons." The rest are those who don't have access to enhancements, and envy and despise those who do. "When you have more than one species competing for the same ecological niche," he said, "that ends up badly for somebody." But, he said, that's assuming a rich-get-richer, poor-get-poorer belief, "a hallmark of the industrial age." Suppose that instead of distributing scarcity, we are distributing abundance. He said that transplanted hearts haven't become cheap because they aren't abundant, but if we can create new organs in the body or in the lab in a manner that would benefit from mass production, they could become cheap. He pointed out that cell phones represent "the fastest uptake of technology in human history," going from zero to one phone for every two people in 26 years, and being adapted to new uses in the developing world faster than in the developed world. He brought up the possibility of the developing world "leapfrogging" the developed world, "the way Europeans leapfrogged the Arab world a thousand years ago, when they were the leaders in science, math, and everything else."
[I think this is a very interesting possibility--the lack of sunk costs in existing out-of-date infrastructure, the lack of stable, firmly established institutions are, I think, likely to make the developing world a chaotic experimental laboratory for emerging technologies.]

Prof. Gary Marchant of the Center for the Study of Law, Science, and Technology then said, "I'm worried about the bottom-up--it also gave us witch trials, Girls Gone Wild, and the Teabaggers." Garreau said his Prevail scenario shows "a shocking faith in human nature--a belief in millions of small miracles," but again said "I'm not predicting it, but I'm rooting for it."

Prof. Farzad Mahootian and Prof. Cynthia Selin of CSPO asked a pair of related questions about work on public deliberations and trying to extend decision-making to broader audiences, asking what Garreau thought about "DARPA driving this or being porous to any kind of public deliberation or extended decision-making?" Garreau responded that "The last thing in the world that I want to do is leave this up to DARPA. The Hell scenario could happen. Top-down hierarchical decision-making is too slow. Anyone waiting for the chairman of the House finance committee to save us is pathetic. Humans in general have been pulling ashes out of the fire by the skin of their teeth for quite a while; and Americans in particular have been at the forefront of change for 400 years and have cultural optimism about change." [I think these questions seemed to presuppose top-down thinking in a way that Garreau is challenging.]

He said he had reported a few years ago about the maquiladoras in Mexico and called it a "revolution," to which he got responses from Mexicans saying, "we're not very fond of revolutions, it was very messy and we didn't like it," and asking him to use a different word. By contrast, he said, "Americans view revolutions fondly, and think they're cool, and look forward to it." [Though there's also a strange conservatism that looks fondly upon a nonexistent ideal past here, as well.] With respect to governance, he said he's interested in looking for alternate forms of governance because "Washington D.C. can't conceivably respond fast enough. We've got a long way to go and a short time to get there." [Quoting the 'Smokey and the Bandit' theme song.]

He went on to say, "I don't necessarily think that all wisdom is based here in America. Other places will come up with dramatically different governance." He talked about the possibility of India, which wants to get cheaper drugs out to the masses, taking an approach different from FDA-style regulation (he called the FDA "a hopelessly dysfunctional organization that takes forever to produce abysmal results"). "Let's say the people of India were willing to accept a few casualties to produce a faster, better, cheaper cure for malaria, on the Microsoft model--get a 'good enough' version, send it out and see how many computers die. Suppose you did that with drugs, and were willing to accept 10,000 or 100,000 casualties if the payoff was curing malaria once and for all among a billion people. That would be an interesting development." By contrast, he said, "The French are convinced they can do it the opposite way, with top-down governance. Glad to see somebody's trying that. I'll be amazed if it works." His view, he said, was "try everything, see what sticks, and fast." [This has historically been the intent of the U.S. federal system, to allow the individual states to experiment with different rules to see what works before or in lieu of federal rules. Large corporations that operate across states, however, which have extensive lobbying power, push for federal regulations to pre-empt state rules, so that they don't have to deal with the complexity.]

There were a few more questions, one of which was whether anyone besides DARPA was doing things like this. Garreau said certainly, and pointed to both conventional pharmaceutical companies and startups working to try to cure addiction and obesity, as well as do memory enhancement, like Eric Kandel's Memory Pharmaceuticals. He talked about an Israeli company that has built a robotic arm which provides touch feedback, with the goal of being able to replace whatever functionality someone has lost, including abilities like throwing a Major League fastball or playing the piano professionally.

Prof. Selin reported a conversation she had with people at the law school about enhancement and whether it would affect application procedures. They indicated that it wouldn't, that enhancement was no different to them than giving piano lessons to children or their having the benefit of a good upbringing. Garreau commented that his latest client is the NFL, and observed that body building has already divided into two leagues, the tested and the untested. The tested have to be free of drugs, untested is anything goes. He asked, "can you imagine this bifurcation in other sports? How far back do you want to back out technology to get to 'natural'? Imagine a shoeless football league." He noted that one person suggested that football minus technology is rugby. [This reminded me of the old Saturday Night Live skit about the "All Drug Olympics."]

All in all, it was an interesting talk that had some overlap with things I'm very interested in pursuing in my program, especially regarding top-down vs. bottom-up organizational structures. Afterward, I spoke briefly with Garreau about how bottom-up skeptical organizations are proliferating and top-down skeptical organizations are trying to capitalize on it, and I wondered to what extent the new creations of bottom-up organizations tend to get co-opted and controlled by top-down organizations in the end. In that regard, I also talked to him a bit about Robert Neuwirth's work on "shadow cities" and the Kowloon Walled City, where new forms of regulatory order arise in jurisdictional no-man's-lands (I could also have mentioned pirate codes). Those cases fall between the cracks for geographical reasons, while the cases occurring with regard to GRIN technologies fall between the cracks for temporal reasons, but it seems to me there's still the possibility that the old-style institutions will catch up and take control.

UPDATE: As a postscript, I recently listened to the episode of the Philosophy Bites podcast on human enhancement with philosopher Allen Buchanan, who was at the University of Arizona when I went to grad school there. Good stuff.

Friday, November 06, 2009

Charles Phoenix's retro slide show--in Phoenix


Tonight and tomorrow night at 8 p.m., Charles Phoenix will bring his Retro Slide Show Tour to the Phoenix Center for the Arts at 1202 N. 3rd St.

I've not seen his show before, but I've enjoyed his blog's slide-of-the-week feature and plan to go see this.

Here's the official description:

A laugh-out-loud funny celebration of '50s and '60s road trips, tourist traps, theme parks, world's fairs, car fashion fads, car culture and space age suburbia, will also include a selection of vintage images of the Valley of the Sun.

Click the above link for more details or to buy tickets.

Friday, October 30, 2009

Maricopa County Notices of Trustee's Sales for October 2009

I haven't posted one of these things in a while, so I figured it was about time.

The big peak was in March, with a total of 10,725 that month. October's total was 6,618.

Robert Balling on climate change

This afternoon I went to hear ASU Prof. Robert Balling, former head of ASU's Office of Climatology and current head of the Geographic Information Systems program, talk about climate change in a talk that was advertised as "Global Warming Became Climate Change: And the Story Continues," though I didn't notice if he had a title slide for his presentation.

He began his talk by saying that in 1957, measurements of CO2 began to be made at Mauna Loa (by Charles David Keeling), which established that CO2 is increasing in our atmosphere, largely because of human activity--from fossil fuel emissions. It's approaching 390 parts per million (ppm). Last weekend, the "A" on A Mountain near the university was painted green by a bunch of people wearing shirts that say "350" on them, because they want atmospheric CO2 to be stabilized at 350 ppm, which was the level in 1990, which is the benchmark year for the Kyoto Protocol.

But this isn't remotely feasible, he said, citing the Intergovernmental Panel on Climate Change (IPCC). Even the most optimistic scenario in the IPCC Report has atmospheric carbon continuing to rise until 2100, hitting about 600 ppm. If we reduced emissions to 0, the best case would end up with stabilization at around 450 ppm. Our lifetime will see increasing CO2 levels, no matter what we do. (In other words, the Kyoto benchmark sets a standard for emissions levels to return to, not for a level of atmospheric carbon to return to.)

If you look at the earth's history on a longer scale, atmospheric carbon has been much higher in the past--about 2,500 ppm in the age of the dinosaurs. During the last 600,000 years, however, it has been much lower, falling below 200 ppm in the last glacial period. This, Balling said, points to what he would identify as a truly dangerous level of CO2: below 160 ppm, plants begin to die.

There are other greenhouse gases besides CO2 that have an effect, such as methane and nitrous oxide (N2O), which humans are also producing, he said.

At this point, he said the greenhouse effect is real--CO2 doubling causes warming--and this has been known for 120 years and "nobody is denying that."

There are climate models, which he said he has great respect for--it's basic physics plus fantastic computing and applied math. Climate modelers, he said, are their own worst critics. Problems for climate models include clouds, water vapor, rain, and the ocean, but lots of things are modeled correctly and the results are generally pretty good. Clouds, he said, are the biggest area of debate. The IPCC models say that clouds amplify warming, but satellite-based measurements suggest that clouds dampen (but don't eliminate) warming. Thus, he concluded, IPCC may be predicting more warming than will actually occur.

He next discussed empirical support for warming, and pointed out that the official plot of global temperatures has no error bars, and that the reported numbers come from sensors that don't cover the entire world. A global average can be computed in different ways, and the different methods produce different results. You can take grid cells, average by latitudinal bands, get two hemispheric averages, and average them together. Or you can just average all of the data we have. He said that Roger Pielke Sr. questions the use of average temperatures and suggests looking at afternoon high temperatures instead. Looking at the older end of the chart, he asked, "Where were the sensors in 1900? Why no error bars?"

He asked, "Is the earth warming," and said "right now the earth is not warming. I expect it to keep going up, but over the last decade there's been essentially none." He pointed to a recent article in Science magazine, "What happened to global warming?" Many are writing about this, he said, and there could be "1001 different things including sun and oceanic processes." (I don't believe this is correct unless you measure from 1998, which was an El Nino year. Most of the top 10 warmest years in history are post-1998.)

"Scientists are questioning the data," he said, showing photos from Anthony Watts' blog of poorly situated weather stations. "The albedo of the shelter in my backyard has changed as it has decayed," and caused it to report warmer temperatures. He said that people are having a field day taking photos of poor official sites. (What Balling didn't say is that what's important in the data is not absolute temperature but the temperature trends, and the good sites and bad sites both show the same trends.)

He pointed out that there are corrections to the temperature based on time of measurement, urban heat islands, instruments used, etc. If you look at the raw data for the U.S. from 1979-2000, you see 0.25 degrees Celsius of warming. Sonde data shows 0.24 degrees, MSU's measurements show 0.28, IPCC shows 0.28, and FILNET shows 0.33. He suggested that these corrections on the official data may be inflating the temperature (again, see my previous comment on trends vs. absolute temperature). Sky Harbor Airport produces the official temperature results for Phoenix, maximizing urban heat island effect. Many of the city records are from the worst sites, and he suggests looking at rural temperatures might give a different result.

Another factor is stratospheric turbidity from volcanic eruptions, and he showed a plot of orbital temperatures from satellites vs. stratospheric turbidity. He said that volcanism accounts for about 30% of the trend variability.

The big player in the game, he said, is the sun. Solar irradiance measures showed a significant decline in solar output in 1980, but earth temperature continued upward--he said he mentioned this because he thought it would be used as an objection. In response, he said that "the sun doesn't increase or decrease output over the entire spectrum and there are interactions with stratospheric clouds." He said that there are astrophysicists who argue that this is the major cause of global warming. In the Q&A, he said that there's one group that thinks cosmic ray flux is the major factor in global temperature because it stimulates cloud formation, while another group says that cosmic ray flux is little more than a trivial effect. He also said that this debate takes place in journals that "I find very difficult to read."

There are other confounding variables like El Nino and La Nina, but he said there has "definitely been warming over the last three decades with a discernible human contribution."

He put up a graph of the Vostok Reconstruction of temperature based on ice core data, on a chart labeled from -10 to +4 degree temperature changes in Celsius, which were mostly in the negative direction, and said we've seen periodic rapid changes up and down without any human contribution.

He talked about the IPCC "hockey stick" graph from 2001, which led to a huge debate about the possibility that bad statistical methods guaranteed the hockey stick shape. He observed that 1,000 years ago it was as warm as or warmer than today--the Medieval Warm Period--which was missing from the hockey stick graph, as was the "Little Ice Age." He said the IPCC has backed away from the hockey stick, and its most recent report includes clear Medieval Warm Period and Little Ice Age periods in its graphs.

He showed a photo of a microwave sounding unit for temperature measurement, and the polar satellite record from 1978 to present, which showed a big peak in 1998 from El Nino. He said that when he wrote his book in 1992, he said there had been no warming, which was true at the time. After 1998, the temperature came back down quickly, and, he said, the satellite record, like the Science article, hasn't seen warming since. He then corrected himself: "well, some warming, but not consistent with the IPCC models."

He said there has been high-latitude warming, and the difference between winter and summer warming has supported the numeric models. "But a problem has evolved, which is the most powerful argument of the skeptics." The models predict warming at the surface that increases with altitude: "There should be very strong warming in the middle of the atmosphere, but it's not in the data." This is the main anti-global-warming argument of Joanne Nova's "The Skeptics Handbook," which has been distributed to churches throughout the U.S. by the Heartland Institute (an organization supported by the oil industry that has sometimes gotten into trouble due to its carelessness).

At this point, Balling started asking various questions and answering them by quoting from the IPCC reports:

More hurricanes? The IPCC doesn't say this. He cited the 1990, 1996, 2001 (executive summary, p. 5), and 2007 (p. 6) reports, all of which say that there's no indication or no clear trend of increase or decrease of frequency or intensity of hurricanes or tropical cyclones as a result of warming.

The southwestern United States may become drier? Here, he answered affirmatively, pointing out that an ASU professor has an article that just came out in Science on this topic. Atmospheric circulation is decreasing, and soil moisture measures show the southwest is becoming drier. On this, he said, there's "evidence everywhere," and the Colorado River basin in particular is being hit hard. And this is consistent with IPCC predictions. He cited Roy Spencer to say that "extraordinary predictions require extraordinary evidence." (This actually comes from Carl Sagan, who said "extraordinary claims require extraordinary evidence" in Cosmos.)

Frequency of tornadoes? It's down, not up, and IPCC 2007 p. 308 says there is no evidence to draw general conclusions.

Ice caps are melting? Balling said Arctic yes, Antarctic no. He cited IPCC 2007 p. 6 regarding Antarctic sea ice extent indicating a lack of warming, and p. 13 saying that it's too cold for widespread surface melting. He contrasted this with a slide of a homeless penguin used to argue for action on global warming. The Arctic ice cap "has its problems," he said, and its extent has declined, though it has "rebounded a bit" recently. (In the Q&A, he said that half of the loss in the last six years has been recovered.) He said that experts in sea ice extent identify relative temperature, ocean currents, and wind as more important than temperature--"it's not a thermometer of the planet." In the past, northern sea ice has dropped as southern sea ice has increased, with the overall global extent of sea ice relatively unchanged. In the Q&A, he made it clear that he wasn't saying temperature plays no role, but that global average temperature is definitely not a factor, and local temperature is less important than the other factors he identified.

Sea levels changing? He said there's no doubt about this, but the important question is whether the rise is accelerating. He cited Church et al., J. Climate 2004, p. 2624 for a claim of "no detectable secular increase" in the rate of sea level rise, but noted that another article this week says that there is one. IPCC 2007, p. 5 says it is unclear whether the increase reflects a longer-term trend. The average has been 1.8 mm/year, but with variable rates of change. IPCC 2007, p. 9 says that 125,000 years ago sea levels were likely 4-6m higher than in the 20th century, due to retreat of polar ice.

He said that ice melting on Kilimanjaro has been a "poster child" for global warming, but that this sharp decline "started its retreat over 100 years ago," at the end of the Little Ice Age (1600-1850), and is related to deforestation and ocean patterns in the Indian Ocean rather than global warming. It's not in an area where significant warming is expected by climate models, and local temperatures don't show it.

He then talked about a few factors that cause temperature forcing in a negative direction (i.e., cooling)--SO2, which makes clouds last longer, increased dust, and ozone thinning. He said that his entry into the IPCC was his work at the U.S./Mexico border where he found that overgrazed land on the Mexican side caused warming, and it was much cooler on the U.S. side of the border. The dust, however, had a global cooling effect.

The 2001 IPCC report lists global radiative forcings in the negative direction: stratospheric ozone, sulfate, biomass burning, and mineral aerosols. Factors in the positive direction include CO2 and solar irradiance. The 2007 report adds many more, including contrails from aircraft. A chart from the report lists the level of scientific understanding for each factor, and he observed that it's "low" for solar irradiance.

He cited a quote from James Hansen (Proc. Nat. Acad. Sci. p. 12,753, 1998) saying that we can't predict the long term, and said he agrees.

He observed that the Pew Foundation poll for Sep. 30-Oct. 4 asked Americans if they think there is evidence of global warming being caused by humans and only 36% said yes--he said he's one of those 36%.

He concluded by observing that if you look at the difference between doing nothing at all, or stabilizing at 1990 levels in 1990, that only produces changes of a few hundredths of a degree of temperature in 2050--so no matter what we do, "we won't live long enough to see any difference."

In the Q&A session, Prof. Billie Turner said that "our academy is about to issue a statement that we are 97% sure that we will not be at a 1-degree Celsius increase but a 2-degree Celsius increase by 2050" (or about double what Balling's final slide showed). He objected that Balling's talk began with the "lunatic green fringe" and contrasted it with the IPCC, which he said would be like him beginning a talk with Dick Cheney's views before giving his own. He said this may be an effective format, but it "gives a slant on the problem that isn't real in the expert community." Turner also pointed out that on the subject of mitigation, any calculation in economic terms has to use a discount rate. The Stern Review used a low discount rate, under which future damages remain large in present-value terms, and concluded that it is worth spending a lot of money on mitigation now; William Nordhaus and Partha Dasgupta, on the other hand, used higher discount rates and concluded that it is not.
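The discount-rate point is worth a concrete illustration. In this sketch (all numbers invented), a single hypothetical damage incurred a century from now is discounted at a low and a higher rate; the low rate keeps the damage large in today's dollars, favoring mitigation spending now, while the higher rate shrinks it dramatically:

```python
# Illustrative only: how the choice of discount rate drives the conclusion.
# Hypothetical climate damage of $100 trillion incurred 100 years from now.
damage = 100e12
years = 100

def present_value(future_cost, rate, years):
    """Value today of a cost incurred `years` from now at discount `rate`."""
    return future_cost / (1 + rate) ** years

pv_low = present_value(damage, 0.014, years)   # low rate (~1.4%)
pv_high = present_value(damage, 0.055, years)  # higher rate (~5.5%)

print(f"low rate:  ${pv_low / 1e12:.1f} trillion today")   # large
print(f"high rate: ${pv_high / 1e12:.2f} trillion today")  # far smaller
```

Same future damage, same physics; the roughly fifty-fold difference in present value comes entirely from the discount rate, which is why the Stern/Nordhaus disagreement is as much about economics as about climate.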

Balling said that he gets email from "lefties" that ask him to "please keep criticizing" because "this [global warming] is just an excuse to keep the developing world from catching up." In conversation with a small group afterward, Balling made it clear that he thinks people shouldn't be listening to Limbaugh and Hannity on climate change, and in answer to my question about what sources the educated layman should read and rely upon, he answered unequivocally "the IPCC," at least the scientific portions authored by scientists. He had some criticisms for the way that the technical summaries are negotiated by politicians, however, and said that S. Fred Singer has made hay out of comparing the summaries to the actual science sections and pointing out contradictions. He also said that Richard Lindzen at MIT, who he said may be the best climate scientist around, thinks the whole IPCC process is flawed, and that John Christy, lead author of the 2001 IPCC report, thinks the IPCC process should allow an "alternative views" statement by qualified scientists who disagree.

In a very brief discussion afterward with the climate modeling grad student in my climate change class, he said that the biggest weakness of the talk was that Balling didn't talk about ocean temperatures, which are measured by the Argo float network and analyzed at NASA's Jet Propulsion Laboratory. Those measurements had shown some recent cooling (within a long-term warming trend), but after discovering an error, Joshua Willis found that warming has continued.

Balling supports the science, but he still leans toward minimizing the negative effects, and uses some apparently bad arguments to do so. He clearly advocates a "wait and see" approach, arguing that we needn't be in a hurry to mitigate since nothing we do will have any effect in our lifetimes--even though what we do now could have an enormous effect on what is required for mitigation and adaptation by future generations.

Thursday, October 29, 2009

State Press defends Ravi Zacharias

ASU's State Press columnist Catherine Smith authored an op-ed piece promoting last night's appearance of Christian apologist Ravi Zacharias. This was at least her second such op-ed; a prior one was published on September 17.

My letter to the editor, below, didn't get published, but another critic's letter did get published.

Here's mine:
Catherine Smith quotes Ravi Zacharias as stating that "irreligion and atheism have killed infinitely more than all religious wars of any kind cumulatively put together." This statement not only demonstrates Zacharias' innumeracy, it shows that he continues to make the mistake of attributing killing in the name of political ideologies like Stalinism and communism to atheism. I agree that Stalin, Mao, and Pol Pot killed more than religious wars, but it wasn't their atheism that caused that killing. Those killed by religious wars, the Inquisition, and witch trials, however, were killed in the name of religion. Out of fairness, there were no doubt political issues involved in many wars over religion as well, but if you take claims of religiously motivated killing at face value, the death tolls for those killed in the name of religion far exceed the death tolls for those killed in the name of irreligion.

Zacharias has a history of attacking atheism with misrepresentations in his books, as documented in Jeff Lowder's "An Emotional Tirade Against Atheism" and Doug Krueger's "That Colossal Wreck," both of which may be found on the Internet as part of the Secular Web (http://www.infidels.org/).
I first heard of Zacharias back around 1991, when I sat behind someone on an airplane flight who was reading his book A Shattered Visage (reviewed by Krueger, linked above). The parts I read were truly awful--about the quality of M. Scott Huse's arguments against evolution (a step below Kent Hovind and Ken Ham). I didn't bother to attend, but would be interested in hearing any reports of how it went.

UPDATE (November 24, 2017): Steve Baughman has published an exposure of Zacharias' claims to have credentials he does not possess, and to have had academic appointments that did not exist.

UPDATE (September 26, 2020): Ravi Zacharias died of cancer earlier this year, but not before being caught in an online relationship scandal.

UPDATE (February 11, 2021): Ravi Zacharias International Ministries has publicly released a report on an investigation into abuse charges against Ravi Zacharias, and it found a significant pattern of predatory sexual abuse and a rape allegation.

The woman googled “Ravi Zacharias sex scandal” and found the blog RaviWatch, run by Steve Baughman, an atheist who had been tracking and reporting on Zacharias’s “fishy claims” since 2015. Baughman blogged on Zacharias’s false statements about academic credentials, the sexting allegations, and the subsequent lawsuit. When the woman read about what happened to Lori Anne Thompson, she recognized what had happened to that woman was what had happened to her.

As far as she could tell, this atheist blogger was the only one who cared that Zacharias had sexually abused people and gotten away with it. She reached out to Baughman and then eventually spoke to Christianity Today about Zacharias’s spas, the women who worked there, and the abuse that happened behind closed doors.

Wednesday, October 28, 2009

Teaching the Bible in public schools

The following is a letter to the editor of Arizona State University's State Press that the paper didn't print. It was written in response to an editorial by Will Munsil, son of Len Munsil, who was editor of the State Press when I was an undergraduate in the 1980s. Len Munsil is an extremely conservative Republican, a failed Republican candidate for Governor in 2006, and founder of the Center for Arizona Policy, Arizona's version of the American Family Association. His daughter, Leigh Munsil, is the State Press's current editor-in-chief. When Munsil Sr. edited the school paper, he sometimes refused to print my letters to the editor for shaky reasons.

The letter below was written in response to Will Munsil's "Putting the Bible back in public schools," which was published on October 14.
I disagree with Will Munsil's assertion that the Bible is the foundation of American political thought. On the contrary, the American form of government was rooted in the work of enlightenment philosophers such as Locke, Montesquieu, and Rousseau. The U.S. Constitution's form of government has more resemblance to Caribbean pirate codes than to the Ten Commandments.

That said, however, I agree with Munsil that knowledge of the Bible is worthwhile and should be taught in public schools for the purpose of cultural literacy, so long as it is done without endorsing Christianity or Judaism. The Bible Literacy Project's curriculum might be one way to do it. One way not to do it is to use the National Council on Bible Curriculum in Public Schools' curriculum--it takes a sectarian perspective, is full of errors, and has failed legal challenges in Texas and Florida for being unconstitutional.
I suspect this letter wasn't excluded by reason of content, but because they had already printed a couple of letters critical of Will Munsil's op-ed by the time I submitted this on October 16. Perhaps I should have mentioned that I'm an atheist, which makes the extent of my agreement with Munsil more interesting. Of course, my view is contrary to Munsil's in that I think Bible literacy is likely to decrease, rather than increase, religious belief. But it wouldn't surprise me if the NCBCPS curriculum is the one that Will Munsil had in mind.

I should point out that I think it should probably be taught as part of a world religions class that covers more than just Christianity--kids should not only get information about the Bible that they won't get in Sunday School, they should be informed about other religions, as well as the fact that history has been full of doubters of religion, as documented in Jennifer Michael Hecht's excellent Doubt: A History.

You can find out more about the NCBCPS curriculum that failed legal challenge in Texas here.

Munsil cited Stephen Prothero, whose op-ed piece, "We live in a land of biblical idiots," I wrote about at the Secular Web in early 2007.

Monday, October 26, 2009

Richard Carrier to speak in Phoenix

Richard Carrier will be speaking to the Humanist Society of Greater Phoenix on Sunday, November 8 at around 10 a.m.--it will likely be packed, so showing up for breakfast or just to get a seat at 9 a.m. is advised. Richard will be speaking about Christianity and science, ancient and modern, and you can get a bit more information about his talk at his blog.

Saturday, October 24, 2009

Personalized medicine research forum

Yesterday afternoon I attended a Personalized Medicine research forum at ASU's Biodesign Institute, sponsored by ASU's Office of the Vice President for Research and Economic Affairs (OVPREA) and hosted by Dr. Joshua LaBaer of ASU's Virginia G. Piper Center for Personalized Diagnostics.

The forum's speakers covered both the promise and problems and issues raised by the developing field of personalized medicine, which involves the use of molecular and genetic information in medical diagnosis and treatment. A few highlights:

Introduction (Dr. LaBaer)
Dr. LaBaer pointed out that these new diagnostics cost a great deal of money to develop, but they have the potential for cost savings--for instance, if they can be used to identify forms of disease that will not benefit from very expensive treatments. He gave the example of Genomic Health, which has developed a test for early-stage breast cancer to determine whether women will benefit from adjuvant therapy (chemotherapy to prevent recurrence). A test that costs even a few thousand dollars to perform is something insurers will be willing to pay for if it can save tens of thousands of dollars on chemotherapy that would provide no benefit. On the other hand, the mere promise of early detection of susceptibility to disease has the potential for overtreatment and an increase in healthcare expenses. This problem was discussed by a number of speakers, with particularly bad potential consequences in the legal realm.

Personalized Diagnostics (Dr. LaBaer)
Dr. LaBaer talked briefly about his own lab's work in biomarker discovery and cell-based studies. In biomarker discovery, his lab is working in functional proteomics, using cloned copies of genes to produce proteins and building tests that allow examination of thousands of proteins at a time. His lab, formerly at Harvard and now at ASU, has 10,000 copies of human genes and 50,000 copies of genes from other animals, which are made available to other researchers. (There's more information at the DNASU website.)

The goal of biomarker discovery is to greatly improve the ability to find markers of human health using the human immune system, by identifying antigens that are markers for disease. The immune system generates antibodies not just in response to infectious disease, but against other proteins when we have cancer. Tumor antigens get into the bloodstream, though they may only appear in 10-15% of those who have the disease. Rather than testing one protein at a time, as is done with ELISA assays, LaBaer's lab is building protein microarrays with thousands of proteins, tested at once with blood serum. Unlike old array technology that purifies proteins and puts them into spots on arrays, where the proteins may degrade and lose function, their method involves printing the DNA that encodes the gene on the arrays, then capturing proteins in situ on the array at the time the experimental test is performed.

LaBaer's lab's cell-based work involves trying to identify how proteins behave in cells when they are altered, in order to find out which pathways contribute to consequences such as drug resistance in women with breast cancer, as occurs with Tamoxifen. If you can find the genes that make cancer cells resistant, you can then knock them out and cause those cells to die. They tested 500 human kinases (5/7 of the total) and found 30 enzymes that consistently make the cancer cells resistant. Women with high levels of those enzymes who take Tamoxifen have quicker relapses of cancer.

Complex Adaptive Systems Initiative (George Poste)
George Poste, former director of ASU's Biodesign Institute and former Chief Scientist and Technology Officer at SmithKline Beecham, talked about the need to replace thinking about costs in the healthcare debate with thinking about value. The value proposition of personalized medicine is early detection, rational therapeutics where treatment is made based on the right subtype of disease being treated, and integrative care management where there's better monitoring of the efficacy of treatments. He said that the first benefits will come from targeted therapy and this will then overlap with individualized therapy, as we learn how our genome affects such things as drug interactions. He was critical of companies like 23andme, which he called "celebrity spit" companies, which do little more than give people a needless sense of anxiety about predispositions to disease that they currently can do nothing about except eat right and exercise.

Poste also had criticisms for physicians, pointing out that it takes 15-20 years for new innovations to become routinely adopted, and many physicians don't use treatment algorithms at all. Oncologists, he said, make money from distributing treatments empirically (that is, figuring out whether it's effective by using the treatment on the entire population with the disease) rather than screening first, even where tests exist to determine who the treatment is likely to work on. He said that $604 million/year in health care costs could be saved by the use of a single colon cancer screening test, and not proceeding with treatment where it isn't going to work. Today, where 12-40% of people are aided by treatments that cost tens of thousands of dollars, 60-88% of that spending is being wasted. With the aging population, he said that Humana will in the next several years see all profits disappear, spent on expensive treatments of people who don't respond to them.
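Poste's waste arithmetic is straightforward to sketch. Every number below is an invented illustration except the 12-40% response range he cited:

```python
# Back-of-the-envelope version of Poste's point about empirical treatment:
# if everyone is treated without screening, the spend on non-responders is waste.
def wasted_spend(patients, cost_per_course, response_rate):
    """Dollars spent treating patients who will not respond."""
    return patients * cost_per_course * (1 - response_rate)

patients = 10_000   # hypothetical treated population
cost = 50_000       # "tens of thousands of dollars" per course (assumed figure)

for rate in (0.12, 0.40):  # the 12-40% response range Poste cited
    waste = wasted_spend(patients, cost, rate)
    print(f"response rate {rate:.0%}: ${waste / 1e6:.0f}M wasted without screening")
```

Even at the optimistic end of the range, most of the spending goes to patients the drug cannot help, which is the economic case for screening first.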

Pharmaceutical companies are beginning to do diagnostic test development alongside drug development now, and insurers will push for these tests to be done. Poste suggested that we will see the emergence of "no cure, no pay" systems, and noted that Johnson & Johnson has a drug that has been introduced for use in the UK under the condition that the company will reimburse the national health care system for every case in which it is used but doesn't work. Merck's Januvia drug for type II diabetes similarly offers some kind of discount based on performance.

Poste pointed out another area for potential cost savings, related to drug safety. With some 3.1 billion prescriptions made per year, there are 1.5-3 million people hospitalized from drug interactions, 100,000 deaths, and $30 billion in healthcare costs, though he noted this latter figure includes caregiver error and patient noncompliance.

He bemoaned the "delusion of zero risk propagated by lawyers, legislatures, and the media," and pointed out that the FDA is in a no-win situation. (This is a topic that's been recently covered in two of my classes, my core program seminar and my law, science, and technology class with Prof. Gary Marchant. If the FDA allows unsafe drugs to be sold, then it comes under fire for not requiring sufficient evidence of safety. If, on the other hand, it delays the sale of effective drugs, it comes under fire for causing preventable deaths. The latter occurred during the 1980s with AIDS activists protesting against being denied treatments, described in books such as Randy Shilts' And the Band Played On and Steven Epstein's Impure Science. This led to PDUFA, the Prescription Drug User Fee Act of 1992, under which drug companies started funding FDA drug reviewer positions through application fees to help speed approval. That has been blamed for cases of the former, with the weight-loss drugs Pondimin and Redux being approved despite evidence that they caused heart problems. That story is told in the PBS Frontline episode "Dangerous Prescription" from November 2003.)

Poste pointed out that there have been 450,000 papers published which have claimed to find disease biomarkers, of which the FDA has approved only five. But he didn't blame the FDA for delay in this case, because this consists of a mass of bad studies which he characterized as "wasteful small studies" with insufficient statistical power. In the Q&A session, he argued that NIH needs to start dictating clear and strong standards for disease research, and that it has abrogated its role in doing good science. He said that "not a single national cancer study with sufficient statistical power" has been done in the last 20 years; instead research is fragmented across academic silos. He called for "go[ing] beyond R01 grant mentality" and building the large, expensive studies with 2,500 cases and 2,500 controls that need to be done.

He also raised challenges about the "very complex statistical analysis required" in order to do "multiplex tests" of the sort Dr. LaBaer is trying to develop. And he pointed out the challenge that personalized medicine presents for clinicians, in that "only about six medical schools have embraced molecular medicine and engineering-based medicine." Those that don't use these new techniques as they become available, he said, "will open themselves up to malpractice suits."

Science and Policy (David Guston)
David Guston, co-director of ASU's Consortium for Science, Policy, and Outcomes (CSPO) and director of ASU's Center for Nanotechnology in Society (CNS) spoke about "cognate challenges in social science" and how CNS has been trying to develop a notion of "anticipatory governance of emerging technology" and devising ways to build such a capacity into university research labs as well as broader society, to allow making policy decisions in advance of the emergence of the technology in society at large. He described three capacities of anticipatory governance--foresight, public engagement, and integration, and described how these have been used at ASU.

Foresight: Rather than looking at future consequences as a linear extrapolation, CNS has used scenario development and a process of structured discussions based on those scenarios with scientists, potential users, and other potential stakeholders, about social and technical events that may be subsequent consequences of the scenarios. This method has been tested with Stephen Johnston's "Doc-in-a-Box" project at ASU's Center for Innovations in Medicine, which Guston said led to some changes in the conceptualization of the technology.

Public Engagement: The "scope and inclusion of public values is important for success," Guston said, and gave as an example the "national citizens technology forum" that CNS conducted in six locations to look at speculative scenarios about nanotechnology used for human enhancement. These were essentially very large focus groups whose participants engaged in "informed deliberation" over the course of a weekend, after having read a 61-page background document and spending the prior month engaging in Internet-based interaction.

Integration: Guston described the "embedding of social scientists in science and engineering labs," to develop productive relationships that help lab scientists identify broader implications of their work while it's still in the lab rather than after it's introduced to the general public.

Guston suggested that there might be other ways of implementing "anticipatory governance" in the form of legislative requirements or standards and priorities set by program officers at funding organizations, but that the lab setting is "the best point of leverage at a university" and can set an example for others to follow.

Clinical Perspective (Larry Miller)
Larry Miller, Research Director at the Mayo Clinic in Scottsdale, spoke about the healthcare provider's approach to personalized medicine. He said that Mayo is committed to individualized care, and that now that we are beginning to understand the power of human variation, these new developments have "to be transformational for providers or they won't survive." He suggested that the future of medicine will move from reactive and probabilistic to more deterministic selection of treatments based on diagnoses. He emphasized the need for education for doctors, and pointed out that "standards of care will become outmoded," which is "disruptive to law and [insurance] coverage." He said that Mayo sees a big challenge of complexity, where what was one disease (breast cancer) is now at least ten different subdiseases. Doctors need to make their treatment decisions on the detail, to predict how the disease will behave, and choose the best drugs possible based on safety, effectiveness, and cost-effectiveness.

Miller pointed out that this requires interdisciplinary work, and said that Mayo in Arizona has a huge advantage with its relationship with ASU, where so much of this work is going on. While Mayo has scientific expertise in a number of areas, these new technologies draw on expertise from beyond medicine, in particular informatics and computational resources needed to build an effective decision support system that will become essential for doctors to use in a clinical setting.

He talked about Mayo's program for individualized medicine, which involves not just incorporating new developments in diagnostics and therapeutics, but also regenerative medicine for the repair, renewal, and regeneration of deficits.

Mayo has had electronic medical records for the last 15 years, covering 6 million people, but these are kept in multiple incompatible systems and were not built with research in mind. Mayo hopes to improve these systems so that they can be used in an iterative process to learn more about the efficacy of therapies, and so that therapies can be combined with "companion diagnostics for monitoring progression, recurrences, and response to therapy."

Like Poste, he raised objections to the companies that market gene sequencing directly to individuals, which just "scare people inappropriately," but identified learning about disease predispositions as an important part of these developing technologies. We need to develop methods of risk analysis that can help people correctly understand what these predispositions mean.

He sees the future as having three waves--the first wave will be the new diagnostics, the second wave improvements in clinical practice and therapy, and the third wave embedding the new technology into the healthcare system, with significant changes to policy and education.

Health Informatics (Diana Petitti)
Diana Petitti, former CDC epidemiologist and former director of research for Kaiser Permanente, where she built a 20-year longitudinal data repository for its 35 million members, spoke about the importance of health informatics. (She is now a professor in ASU's Department of Biomedical Informatics.) Dr. Petitti raised concerns about how in the United States we are "loath to deny anyone anything" in terms of medical treatments, but in fact "we do deny lots of people lots of things." She worried that personalized medicine has the potential to lead to greater maldistributions of healthcare, with the "haves" getting more and better treatment and the "have nots" getting less and worse treatment, unless we plan carefully. She advocated evidence-based medicine and assessing the value of treatments to be deployed to the general population.

Dr. Petitti brought up as an example the fact that oral contraceptives result in a 2x-10x increase in the likelihood of a venous thrombotic event, and that the Factor V Leiden gene is predictive of susceptibility to that consequence, but no screening is done for it. Why not? Because the test only predicts 5% of those who will have the event, it's a very expensive test, and we don't have good alternatives to oral contraceptives. These kinds of issues, she suggested, will recur with multiplex diagnostics.
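Petitti's screening argument can be made concrete with a back-of-envelope yield calculation. Only the 5%-of-events figure comes from her talk; the population size, baseline event rate, and test cost below are hypothetical numbers chosen purely for illustration:

```python
# Rough yield of screening oral-contraceptive candidates for Factor V Leiden.
# Only the "test identifies ~5% of those who will have the event" figure is
# from the talk; every other number is an assumption for illustration.

def screening_yield(population, baseline_event_rate,
                    fraction_of_events_predicted, cost_per_test):
    """Return (events averted in the best case, total cost, cost per event averted)."""
    expected_events = population * baseline_event_rate
    # Best case: every predicted event is actually prevented.
    averted = expected_events * fraction_of_events_predicted
    total_cost = population * cost_per_test
    return averted, total_cost, total_cost / averted

# Assumed: 1 VTE per 1,000 users per year, $200 per test
averted, cost, cost_per_event = screening_yield(
    population=1_000_000,
    baseline_event_rate=0.001,
    fraction_of_events_predicted=0.05,  # figure quoted in the talk
    cost_per_test=200.0,
)
print(averted)         # 50.0 -- best case, even screening a million people
print(cost_per_event)  # 4000000.0 -- $4M per event averted
```

Under these (made-up) inputs, even the best case averts only 50 events per million people screened, at $4 million per event averted, which illustrates why the screen fails cost-effectiveness.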

She explicitly worried that "we have dramatically oversold preventive medicine" and doesn't think it's likely that savings from prevention will allow coverage for more extensive treatment. She advocated that everyone in the field see the film "Gattaca," and stated that ASU provides "unique opportunities to train people to think about these issues" using "quantitative reasoning and probabilistic thought." She concluded by saying that we need to "work towards rational delivery of healthcare that optimizes public health."

Law (Gary Marchant)
Prof. Gary Marchant of the Sandra Day O'Connor School of Law at ASU, who has a Ph.D. in genetics and is the executive director of ASU's Center for the Study of Law, Science, and Innovation (formerly Center for the Study of Law, Science, and Technology), spoke about legal issues. First he listed the many programs available at ASU in the area, beginning with the genetics and law program that has been here for 10 years and was the reason he first came to ASU. Others include a new personalized medicine and law program at the Center for Law, Science, and Innovation, a planned center on ethical and policy issues regarding personalized medicine in conjunction with the Biodesign Institute, CSPO, TGEN, Mayo, etc., and research clusters at the law school on breast cancer, warfarin, and personalized medicine. He also gave a plug for an upcoming conference March 8-9, 2010 at the Arizona Biltmore sponsored by AAAS and Mayo, which also has a great deal of corporate support.

Prof. Marchant indicated that liability is the biggest issue regarding personalized medicine, and he sees doctors as "sitting ducks," facing huge risks. If a doctor prescribes a treatment without doing a corresponding new diagnostic test, and that has complications, he can be sued. If he does the diagnostic test, which shows a very low likelihood of disease recurrence, and advises against the treatment, and the patient then ends up being one of the rare people who has a recurrence, the doctor can be sued. The doctor is really in a damned-if-you-do, damned-if-you-don't situation. The insurers and pharmaceutical companies are at less risk, since they have already developed enormous resources for dealing with the lawsuits that are a regular part of their existence. In a short discussion after the forum, I asked Prof. Marchant if doctors would be liable if they performed a diagnostic test, found that it showed a low likelihood of recurrence or benefit for a treatment, and then recommended the treatment anyway, knowing the insurance company would refuse to pay for it--would that shift the liability to the insurance company? He thought it might, though it would be unethical for a doctor to recommend treatment that he didn't actually think was necessary, and there's still the potential for liability if the insurance company pays for the treatment and the treatment itself produces complications. It seems that this problem really needs a legislative or regulatory fix of some sort, so that doctors have some limitation of liability in cases where they have made a recommendation that everyone would agree was the right course of action but a low-probability negative consequence occurs anyway.

Prof. Marchant observed that the liability issues are particularly problematic in states like Arizona, where each side in the suit is limited to a single expert witness. He said there is "no clear guidance or defense for doctors," and the use of clinical guidelines in a defense has not been effective in court, in part because doctors don't use them.

Q&A
A few additional points of interest from the Q&A sessions (some of which has already been combined into the above summaries):

Dr. LaBaer pointed out that most markers for diseases don't seem to have any role in causing the disease, such as CA-125 and ovarian cancer. So his lab is looking not just for biomarkers, but for those that will affect clinical decisions. Four out of five positive mammography results for breast cancer are cases where nothing is wrong and the woman will not end up getting breast cancer, yet some procedure ends up being performed, with no value. So he wants to find a companion test that can tell which are the four that don't need further treatment.
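The four-out-of-five figure is roughly what Bayes' theorem predicts when even a fairly accurate test is applied to a low-prevalence condition. A minimal sketch, with assumed sensitivity, specificity, and prevalence values (the talk supplied only the 4-of-5 result):

```python
# Positive predictive value (PPV) of a screening test via Bayes' theorem.
# The sensitivity, specificity, and prevalence below are illustrative
# assumptions, chosen to show how low prevalence drives PPV down.

def ppv(sensitivity, specificity, prevalence):
    """P(disease | positive test)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# With 90% sensitivity, 93% specificity, and 2% prevalence,
# only about 1 in 5 positive results reflects actual disease:
print(round(ppv(0.90, 0.93, 0.02), 2))  # 0.21
```

The point is structural, not tied to these particular numbers: whenever the disease is rare, the `(1 - specificity) * (1 - prevalence)` false-positive term dominates, so most positives are false even for a good test.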

George Poste pointed out that baby boomers are going to bankrupt the system as they reach the end of their lives: about 70% of the $2.3 trillion in U.S. healthcare spending is spent in the last 2-3 years of life, with many treatments costing $60K-$100K per treatment cycle for drugs that add only 2-3 weeks of life. The UK's National Institute for Health and Clinical Excellence (NICE) has been making what are, in effect, rationing decisions by turning down all of the new cancer drugs that have come along, because they have such great cost and such minimal benefit. He asked, "how much money could you save with a 90% accurate test of who's going to die no matter what you do?"
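Poste's question invites a back-of-envelope answer. The spending figures below are as quoted in the talk; the fraction of end-of-life spending that a correct prognosis would actually avert is purely an assumption:

```python
# Rough savings from a hypothetical 90%-accurate terminal-prognosis test.
# The $2.3T total and the 70% end-of-life share are the figures as quoted
# in the talk; the avoidable fraction is an assumption for illustration.

total_spending = 2.3e12      # annual US healthcare spending, as quoted
end_of_life_share = 0.70     # share spent in the last 2-3 years of life, as quoted
test_accuracy = 0.90         # Poste's hypothetical test
avoidable_fraction = 0.25    # assumed: share of that spending forgone on a correct prognosis

savings = total_spending * end_of_life_share * test_accuracy * avoidable_fraction
print(f"${savings / 1e9:.0f}B per year")  # $362B per year
```

Even with a conservative assumption that only a quarter of correctly-predicted end-of-life spending would be forgone, the figures as quoted imply savings on the order of hundreds of billions of dollars per year, which is presumably why Poste posed the question.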

Prof. Marchant said more about legal issues involving specimen repositories, including a case at ASU. The developer of the prostate-specific antigen (PSA) test, William Catalona, had a specimen repository of 30,000 tissue samples at Washington University, which he wished to take with him to Northwestern University when he took a new position there. He began asking patients for permission to move the samples, and 6,000 gave permission. But Washington University sued him, claiming that the samples were the property of the university. Patients pointed out that their consent agreement gave them the right to withdraw their samples from future research and that they had only consented to research on prostate cancer, but federal judge Stephen Limbaugh ruled in favor of the university, holding that patients had no property rights in their tissue. This ruling has reduced incentives for patients to consent to give specimens for research.

A current lawsuit against ASU by the Havasupai Indian tribe involves blood samples that were given for a study of diabetes by researchers who are no longer at ASU. They wanted to take the samples with them, and samples had also been given to other researchers for use in studies of schizophrenia and the historical origins of the tribe, even though informed consent was apparently only given for the diabetes research. Although this case was originally dismissed, it was recently reinstated.

Other cases involve patent protection of genetic information. About 25% of the human genome is patented, including Myriad Genetics' patent on the BRCA1 and BRCA2 genes which are predictive of breast cancer and can only legally be tested for by Myriad. This case is likely to go to the U.S. Supreme Court regarding the issue of whether human genes can be patented. The courts so far have ruled that a gene in isolation outside of the human body is patentable, even though (in my opinion) this seems at odds with the requirement that patents be limited to inventions, not discoveries. There has already been a legislative limitation of patent protection for surgical procedures for the clinical context, so that doctors can't be sued for patent infringement for performing a surgery that saves someone's life; it's possible that a similar limitation will be applied on gene patents in a clinical context, if they don't get overturned completely by the courts.

These gene patents create a further problem for the multiplex tests, since they inevitably include many patented genes. Prof. Marchant observed that a speaker from Affymetrix at an ASU seminar said the company was building its GeneChip DNA microarrays to test for the presence of thousands of genes and was ignoring gene patents. Affymetrix was subsequently sued. Dr. LaBaer stated that his lab is doing the same thing with cloned genes--they're cloning everything and giving it away, without regard to patents.

The session was videotaped and will be made available to the public online. I will add a link to this posting when it becomes available.

If you've read this far, you may also be interested in my summary of Dr. Fintan Steele's talk at this year's The Amazing Meeting 7, titled "Personalized Medicine or Personalized Mysticism?", in my summary of the Science-Based Medicine conference that took place just prior to TAM7, and in my short summary of Dr. Martin Pera's talk on regenerative medicine and embryonic stem cells at the Atheist Alliance International convention that took place earlier this month.

Saturday, September 05, 2009

ApostAZ podcast #18

The 18th episode of the ApostAZ podcast is available:
Episode 018 Atheism and Free Twizzering in Phoenix! Go to meetup.com/phoenix-atheists for group events! Mark 19? Criticism and analysis. http://arizonacor.org Intro- Immortal Technique- Freedom of Speech from Revolutionary Vol 2. Outro- Greydon Square 'Dream' from the Compton Effect.

Thursday, August 27, 2009

Marco Iacoboni on imitation and sociality

Thanks to a tip from Tony Barnhart, I learned this morning of a talk at ASU today relevant to my last post ("Imitation, isolation, and independence") by UCLA neuroscientist Marco Iacoboni. Although I wasn't able to stay for the Q&A session, I did get to hear his entire presentation, titled "Imitation and Sociality: The Role of Neural Mirroring." His talk covered the following points (from his initial agenda slide):
  • Imitation in human behavior
  • Potential neural precursors in primates
  • Neural mechanisms of human imitation
  • Neural circuitry for imitation and language
  • Imitation and empathy
Dr. Iacoboni was introduced by new ASU prof. Art Glenberg, who started right off by pointing out that the existence of mirror neurons is itself controversial, and some "don't think there's much of interest proved about mirror neuron systems." Dr. Iacoboni thanked Prof. Glenberg for beginning with the "elephant in the room," said that the existence of mirror neurons in monkeys has never been in question, and suggested that some people don't want there to be homologous systems in humans, e.g., for the sake of human exceptionalism or denial of evolution. (Has your blood pressure gone up yet, Tony?)

Imitation in human behavior
He started by briefly discussing the role of imitation in human behavior, citing Andrew Meltzoff's 1977 article in Science ("Imitation of Facial and Manual Gestures by Human Neonates," (PDF) 198:75-78), noting that Meltzoff is probably the only guy to publish a photograph of himself sticking out his tongue in Science. Imitation, the copying of the behavior of another, is pervasive among humans. People copy body positions and movements, and such imitation promotes liking. (As an aside, he said that he has been interviewed by Glamour (July 2003) about his work, and can have a second career as a consultant to Internet dating services if mirror neurons turn out not to exist.) Imitation facilitates communication and conversation, and people even tend to synchronize the way they talk. (I know I've heard multiple stories of people whose accents have been changed by being around people with different accents.)

Potential neural precursors in primates
Mirror neurons were first discovered in macaque monkeys, in the ventral premotor cortex. It was found that neurons in this area fired when monkeys engaged in grasping behavior, and also fired to a lesser extent when those monkeys observed other monkeys engaged in grasping behavior. (Here, Iacoboni cited Gallese et al., Brain, 1996.)

Neural mechanisms of human imitation
Iacoboni said that the term "mirror" may be good for marketing, but may also be misleading. Mirror neurons are defined physiologically rather than anatomically, by behavior rather than location in the brain. They have motor properties, and are specialized for actions, including sensory attributes of actions, but not mere perceptions. They are not simply "monkey-see, monkey-do" cells--while 1/3 tend to fire for very specific actions, 2/3 fire for other sorts of complementary actions. Mirror neurons have abstract codings for hidden actions, action sounds, and intentions, not just specific actions. Mirror neurons that fire in response to a grasping action of picking up a laser pointer would also fire if the details of the action were obscured by a screen. The sound of tearing paper can fire mirror neurons that fire when observing paper being ripped. And if there are variant actions that achieve the same purpose, such as bringing food to the mouth, the same mirror neurons can fire. Mirror neurons learn and have some degree of plasticity.

Iacoboni's model predicts that observing an action should have the lowest level of activation for mirror neurons, performing a motor task should have a medium level, and imitation--both seeing and doing an action--should have the strongest level of activation. And that is what his research has found.

At UCLA, they've done parallel work with monkeys and humans, and identified apparently homologous brain regions between the two. The specific region where mirror neurons were first discovered, the F5 region, appears to be homologous to the BA44 region of the human brain. The "BA" stands for Brodmann Area, a part of Broca's area associated with language--those with lesions to that area have Broca's aphasia, which reduces language fluency and makes speech slow and difficult. This raises the question of whether the mirroring is effectively covert verbalization in humans.

Experiments with transcranial magnetic stimulation (TMS), where a magnetic copper coil placed against the head creates an electrical flow in the brain, interfering with the underlying electrical activity--essentially adding noise and causing disruption--have enabled a way to demonstrate causation where functional magnetic resonance imaging (fMRI) could only show correlation. Iacoboni called this a shift "from brain mapping to brain zapping." If you zap an area and cause a deficit in a particular behavior or function, you show the causal involvement of that area in the production of that behavior. Experiments applying TMS to Broca's area vs. a control site, using an imitation task and a control task, show the essential role of Broca's area in imitation. (Here, Iacoboni cited Heiser, et al., Eur. J. of Neuroscience, 2003.)

Iacoboni showed a diagram that he labeled the "core imitation circuit" which involved three locations of the brain--the superior temporal surface (STS), which manages visual input to the system via a visual or pictorial description of an action, which then feeds to the parietal mirror neuron system (MNS), which has the motor details of an action, which then feeds to the frontal MNS, which deals with the goal or intention of an action. (There were two-way arrows between STS and parietal MNS, and between parietal MNS and frontal MNS.)

Neural circuitry for imitation and language
Iacoboni said that an old theory of speech perception which had been abandoned has now been brought back by mirror neurons. That theory is the motor theory of speech perception, which says that to perceive speech sounds, you simulate the generation of the same speech. Speech perception involves speech simulation. In experiments that compared brain activation during speaking and listening, he suggested that he found evidence to support this. (This must be complicated by the fact that when you speak, you hear yourself. He cited Meister, et al., Current Biology, 2007.)

He discussed hemispheres of the brain and action sounds, where the right and left motor cortices were subjected to TMS stimulation. I didn't quite get the details of this, but apparently the response was stronger for the left hemisphere, which is dominant for language. (He cited Aziz-Zadeh et al., Eur. J. of Neuroscience, 2004.) He also referred to research on somatotopic maps, indicating that even when you read sentences about hand and foot actions (as opposed to seeing them), you get activation of the motor neurons for those areas.

He then spoke about how meaning is encoded in the brain, distinguishing a symbolic approach to "embodied semantics," favoring the latter view. In the embodied view, the meanings of words are grounded in sensorimotor experience and meaning is given by associations with sensorimotor activation.

He described an experiment in how mirror neurons code intentions, where subjects were shown short videos. First there were contexts, such as a set of cookies, a teapot, Nutella, etc., set up as though someone was going to have a snack; contrasted with this were the same items with just cookie crumbs, an empty cup, and so forth, as though someone had already had a snack. There were contrasting actions--a hand grasping the edge of a cup (as though putting it down or picking it up to serve someone else), vs. a hand grasping the handle of a cup, for the action of drinking. And then there were intention conditions, with each combination of actions embedded in a context. The result was a difference in activation between the intention settings, as well as between action and intention, with the act of drinking generating more activation in the inferior frontal gyrus. (Here he cited Iacoboni, PLoS Biology, 2005, "Grasping the intentions of others with one's own mirror neuron system.")

He next showed a diagram of MNS interactions, showing imitative learning and social mirroring (or empathy, or "emotional contagion"). Imitative learning involves the MNS interacting with the pre-motor cortex, while social mirroring involves the MNS interacting with the insula and the limbic system.

Imitation and empathy
He spoke about "the chameleon effect"--some people are more imitative than others, and a tendency to imitate is correlated with a tendency to be more empathetic. He showed two photographs of President Jimmy Carter and his chief of staff, Hamilton Jordan, at two different times at the same event; in both cases the chief of staff was in the same physical position as Carter, standing next to or slightly behind him.

When feeling what others feel, the mirror neurons simulate facial expressions, which then feed through the insula to the limbic system, where you feel the emotion. He referred to research on imitating and observing facial expressions proposing a neural model of empathy in humans (Carr et al., PNAS, 2003).

We are "wired for empathy," he said, and notes that he used to quote a French phenomenologist on this point, but since that's not popular among U.S. philosophers he needed to find a champion of the analytic school of philosophy. He offered two quotes from Ludwig Wittgenstein, one which began "We see emotions. We do not see facial contortions and make the inference that he is feeling joy, grief, boredom. We describe a face immediately as sad, radiant, bored, even when we are unable to give any other description of the features." (From Remarks on the Philosophy of Psychology, vol. 2, p. 100.) The other began "'I see that the child wants to touch the dog but doesn't dare.' How can I see that? - Is this description of what is seen on the same level as a description of moving shapes and colors? Is an interpretation in question?Well, remember that you may also mimic a human being who would like to touch something, but doesn't dare. And what you mimic is after all a piece of behaviour." (From Remarks on the Philosophy of Psychology, vol. 1, p. 177.)

He then spoke of experiments in which kids were shown photos of facial expressions and asked to imitate them while undergoing fMRI; comparing the results with measures of social competence--number of play dates, number of friends, etc.--revealed a high correlation between mirror neuron activation and social competence. (He cited Pfeifer et al., NeuroImage, 2008.)

This then led to the issue of autism, which he described with a slide heading titled, "Broken mirrors in autism?" He spoke of observation/imitation tasks with two groups of kids, those with autism spectrum disorder and a control group, which yielded differential activity in mirror neuron areas. (He cited Dapretto et al., Nature Neuroscience, 2006.)

After a quote from Eric Hoffer ("When people are free to do as they please, they usually imitate each other"), he spoke about human single-neuron recordings done with depth electrodes on epilepsy patients undergoing very invasive procedures to identify the focal points of seizures, for surgery to remove or destroy minimal amounts of brain tissue to stop the seizures. They have studied about 10 patients per year over the last three years, using modified depth electrodes, each with 9 microwires extending into the brain: one ground and eight that each record from a single cell. On these patients they've done experiments with observation and execution of a grasping task, and with observation and imitation of facial expressions. They've recorded from the temporal lobe, amygdala, hippocampus, and other parts of the brain, and found that about 8% of the cells measured have mirroring properties.

He then described some differences between human and monkey mirror neurons, the key one of which is that in some cases where mirror neurons show an increase in firings from an execution or imitation, a decrease is seen when observing. For monkeys, by contrast, the activations always go up for both observation and execution. He suggested that this may be due to a human differentiation between self and other. Humans have cases where there are excitatory effects, inhibitory effects, and opposite effects between observation and execution. There are mirror responses in humans in areas where they are not found in monkeys, the results appear to be more flexible, and there can be more prolonged responses, perhaps due to greater complexity (e.g., the language and meaning aspect?).

He ended by saying he was proud to say that his work falls within the tradition and support of Darwinian evolution--that his book, Mirroring People: The New Science of How We Connect with Others (I think you should always be skeptical of any book with a subtitle that starts with the words "The New Science of ..."), argues that mirror neurons have been selected (naturally) to facilitate social interactions. He asserted that this solves the problem of other minds, and provokes a major revision of long-standing beliefs--that we need to change the idea that we've evolved for self-preservation, and instead we're "wired for involvement and care." He concluded that he is a believer in the importance of neuroscience to society, and that rather than being isolated in an ivory tower, scientists have a responsibility to go to society and communicate their work. (And his book is written for a popular audience.)