Thursday, November 19, 2009

Joel Garreau on radical evolution

Yesterday I heard Joel Garreau speak again at ASU, as part of a workshop on Plausibility put on by the Consortium for Science, Policy, and Outcomes (CSPO). I previously posted a summary of his talk back in August on the future of cities. This talk was based on his book, Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies--and What It Means to Be Human.

Garreau was introduced by Paul Berman, Dean of the Sandra Day O'Connor School of Law at ASU, who also announced that Garreau will be joining the law school faculty beginning this spring, as the Lincoln Professor for Law, Culture, and Values.

He began by saying that we're at a turning point in history [has there ever been a time when we haven't thought that, though?], and he's going to present some possible scenarios for the next 2, 3, 5, 10, or 20 years, and that his book is a roadmap. The main feature of this turning point is that rather than transforming our environment, we'll be increasingly transforming ourselves, and we're the first species to take control of its own evolution, and it's happening now.

At some point in the not-too-distant future, he said, your kid may come home from school in tears about how he can't compete with the other kids who are more intelligent, more athletic, more attractive, more attentive, and so forth--because you haven't invested in the human enhancement technologies coming on the market. Your possible reactions will be to suck it up [somebody's still gotta do the dirty jobs in society?], remortgage the house again to make your kid competitive, or try to get the enhanced kids thrown out of school. What you can't do is ignore it.

He then asked people to raise their hands if they could remember when the following were still prevalent:
  • The Sony Walkman
  • When computer screens were black and white. (An audience member said "green and black!")
  • Rotary dial phones
  • Mimeograph machines and the smell of their fluid
  • Polio
This shows, he said, that we're going through a period of exponential change.

His talk had a small amount of overlap with his previous one in its explanation of Moore's Law--that we've had 32 doublings of computer firepower since 1959, so that $1 now buys about 2 billion times as much computing power as it did then, and an iPhone has more computing power than all of NORAD had in 1965. Such doublings change our expectations of the future: the last 20 years isn't a guide to the next 20, but to the next 8; the last 50 years is a guide to the next 14. He pulled out a handkerchief and said this is essentially the sort of display we'll have in the future for reading a book or newspaper.
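
That arithmetic is easy to sanity-check. Here's a minimal back-of-the-envelope sketch in Python (my own check, not Garreau's slides, and it assumes the 32 doublings are spread evenly over those 50 years); note that 2^32 is closer to 4 billion than 2 billion, so his figure is right to within about one doubling:

    # Rough check of the Moore's Law figures above; assumes 32 evenly spaced
    # doublings of computing power per dollar between 1959 and 2009.
    doublings = 32
    years = 2009 - 1959

    growth_factor = 2 ** doublings       # total improvement per dollar
    doubling_time = years / doublings    # average years per doubling

    print(f"2^{doublings} = {growth_factor:,}")                  # 4,294,967,296
    print(f"average doubling time: {doubling_time:.2f} years")   # ~1.56, close to the canonical 18-24 months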

He then followed Ray Kurzweil in presenting some data points to argue that exponential change has been going on since the beginning of life on earth (see P.Z. Myers' "Singularly Silly Singularity" for a critique):

It took 400 million years (My) to go from organisms to mammals, and
  • 150My to monkeys
  • 30My to chimps
  • 16My to bipedality
  • 4My to cave paintings
  • 10,000 years to first settlements
  • 4,000 years to first writing
At this point, culture comes into the picture, which causes even more rapid change (a point also made by Daniel Dennett in his talk at ASU last February).
  • 4,000 years to Roman empire
  • 1800 years to industrial revolution
  • 100 years to first flight
  • 66 years to landing on the moon
And now we're in the information age, which Garreau identified as a third kind of evolution, engineered or radical evolution, where we're in control. [It seems to me that such planned changes are subject to the limits of human minds, unless we can build either AI or enhancement technologies that improve our minds, and I think the evidence for that possibility really has yet to be demonstrated--I see it as possible, but I place no bets on its probability and think there are reasons for skepticism.]

Garreau spent a year at DARPA (the Defense Advanced Research Projects Agency), the organization that invented the Internet (then the ARPANet), which is now in the business of creating better humans, better war fighters. [DARPA was also a subject of yesterday's Law, Science, and Technology class. It's a highly funded organization that doesn't accept grant proposals; rather, it seeks out people it thinks are qualified and funds them to work on its projects. It has become rather more secretive as a result of embarrassment over its Total Information Awareness and terrorism-futures ideas, which got negative press in 2003.]

Via DARPA, Garreau learned about their project at Duke University with an owl monkey named Belle, whom he described as a monkey that can control physical objects at long distances with her mind. Belle was trained to play a video game with a joystick, initially for a juice reward and then because she enjoyed it. The researchers then drilled a hole in her head and attached fine electrodes (single-unit recording electrodes like the sort used to discover mirror neurons), identified the regions of her brain that were active when she operated the joystick, and then disconnected the joystick. She became proficient at playing the game through direct control by her brain. They then connected the system to a robotic arm at MIT, which duplicated the movements she had made with the joystick.
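
At its core, the decoding step he described is a regression problem: while the joystick is still connected, learn a mapping from recorded firing rates to joystick position, then use brain activity alone to drive the effector. The Duke group used far more sophisticated real-time models; the sketch below is only a toy illustration of the idea with made-up data (my own, not their code):

    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_neurons = 500, 20

    # Simulated firing rates for 20 recorded neurons over 500 time bins.
    firing_rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)

    # Pretend the joystick's (x, y) position is an unknown linear function of the
    # firing rates plus noise -- the "training" phase with the joystick attached.
    true_weights = rng.normal(size=(n_neurons, 2))
    joystick_xy = firing_rates @ true_weights + rng.normal(scale=0.5, size=(n_samples, 2))

    # Fit a linear decoder by least squares.
    weights, *_ = np.linalg.lstsq(firing_rates, joystick_xy, rcond=None)

    # "Disconnect the joystick": new neural activity alone now yields a movement
    # command, which could be sent to a robotic arm instead of the joystick.
    new_rates = rng.poisson(lam=5.0, size=(1, n_neurons)).astype(float)
    print("decoded command (x, y):", (new_rates @ weights).round(2))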

Why did they do this? Garreau said there's an official reason and a real reason. The official reason is that an F-35 jet fighter is difficult to control with a joystick--wouldn't it be better to control it with your mind, and to send information sensed by the equipment directly into the mind? The real reason is that the DARPA defense sciences office is run by Michael Goldblatt, whose daughter Gina Marie (who recently graduated from the University of Arizona) has cerebral palsy and was expected to spend the rest of her life in a wheelchair. If machines can be controlled with the mind, machines in her legs could be controlled with her mind, and there's the possibility that she could walk.

Belle first moved the robotic arm 9 years ago, Garreau said, and this Christmas you'll be able to buy the first toy mind-machine interface from Mattel at Walmart for about $100. It's just a cheap EEG device and not much of a game--it lets you levitate a ping pong ball with your mind--but there's obviously more to come.

Garreau said that Matthew Nagle was the first person to send emails using his thoughts (back in 2006), and DARPA is interested in moving this technology out to people who want to control robots. [This, by the way, is the subject of the recent film "Sleep Dealer," which postulates a future in which labor is outsourced to robots operated by Mexicans, so that they can do work in the U.S. without immigrating.]

This exposure to DARPA was how Garreau got interested in these topics, which he called the GRIN technologies--Genetics, Robotics, Information science, and Nanotechnology--all of which he identified as enabled by Moore's Law.

He showed a slide of Barry Bonds, and said that steroids are sort of a primitive first-generation human enhancement, and noted that the first uses of human enhancement tend to occur in sports and the military, areas where you have the most competition.

Garreau went over a few examples of each of the GRIN technologies that already exist or are likely on the way.

Genetics
Dolly the cloned sheep. "Manipulating and understanding life at the most primitive and basic level."

"Within three years, memory pills, originally aimed at Alzheimer's patients, will then move out to the needy well, like 78 million baby boomers who can't remember where they left their car, then out to the merely ambitious." He said there's already a $36.5 billion grey market for drugs like Ritalin and Provigil (midafonil), and asked, "Are our elite schools already filling up with the enhanced?" [There's some evidence, however, that the enhancement of cognitive function (as opposed to staying awake) is minimal for people who already operate at high ability, with the greatest enhancement effect for those who don't--i.e., it may have something of an egalitarian equalizing effect.]

He said DARPA is looking at ways to end the need for sleep--whales and dolphins don't sleep, or they'd drown, but they do something like sleeping with one half of the brain at a time.

DARPA is also looking at ways to turn off hunger signals. Special forces troops burn 12,000 calories per day, but can't carry huge amounts of food. The body carries extra calories in fat that are ordinarily inaccessible unless you're starving, at which point they get burned. If that switch to start burning fat could be turned on and off at will, that could be handy for military use. He observed that DARPA says "the civilian implications of this have not eluded us."

Sirtris Pharmaceuticals, started by David Sinclair of the Harvard Medical School, aims to have a drug to reverse aging based on resveratrol, an ingredient from grapes found in red wine. [Though Quackwatch offers some skepticism.]

Garreau looks forward to cures for obesity and addiction. He mentioned Craig Venter's plan to create an organism that "eats CO2 and poops gasoline" by the end of this year, that will simultaneously "end [the problems in] the Middle East and climate change." [That seems overly optimistic to me, but ExxonMobil has given Venter $600 million for this project.]

He said there are people at ASU in the hunt, trying to create life forms like this as well. [Though for some reason ASU doesn't participate in the iGEM synthetic biology competition.]

Robotics
Garreau showed a photo of a Predator drone, and said, "Ten years ago, flying robots were science fiction, now it's the only game in town for the Air Force." He said this is the first year that more Air Force personnel were being trained to operate drones than to be pilots. 2002 was the first year that a robot killed a human being, when a Predator drone launched a Hellfire missile to kill al Qaeda members in an SUV in Yemen. He said, "while there's still a human in the loop, philosophical discussions about homicidal robots could be seen as overly fine if you were one of the guys in the SUV."

"We're acquiring the superpowers of the 1930s comic book superheroes," he said, and went on to talk about a Berkeley exoskeleton that allows you to carry a 180-pound pack like it weighs four pounds, like Iron Man's suit. He asked the engineers who built it, "Could you leap over a tall building in a single bound?" They answered, "yes, but landing is still a problem."

Functional MRI (fMRI) is being used at the University of Pennsylvania to try to determine when people are lying. Garreau: "Then you're like the Shadow who knows what evil lurks in the hearts of men."

Cochlear implants to give hearing to people for whom hearing aids do nothing, connecting directly to the auditory nerve. Ocular implants to allow the blind to have some vision. Brain implants to improve memory and cognition. Garreau asked, "If you could buy an implant that would allow you to be fluent in Mandarin Chinese, would you do it?" About half the room raised their hands. [I didn't hear a price or safety information, so didn't raise my hand.]

Information
He showed a photo of a camera phone and said, "Fifteen years ago, a machine like this that can fit in your pocket, with a camera, GPS, and MP3 player, and can send email, was science fiction. Now it's a bottom-of-the-line $30 Nokia."

He asked, "Does anyone remember when music players were three big boxes that you put on your bookshelves? Now they're jewelry. Soon they'll be earrings, then implants."

Close behind, he said, are universal translators. "Google has pretty good universal translation on the web, and sees it as moving out to their Droid phones." He observed that Sergey Brin was talking in 2004 about having all of the world's information directly attached to your brain, or having a version of Google on a chip implanted in your brain. [I won't get one unless they address network security issues...]

Nanotechnology
Garreau said, "Imagine anything you want, one atom or molecule at a time. Diamonds, molecularly accurate T-bone steaks." He said this is the least developed of the four GRIN technologies, "so you can say anything you want about it, it might be true." It's estimated to become a $1 trillion/year market in the next 10 years. There may be nanobots you can inject into your bloodstream by the thousands to monitor for things about to go wrong [see this video for the scenario I think he's describing], hunter-killers that kill cancer cells. "When you control matter at a fundamental level, you get a feedback loop between the [four] technologies."

At this point, Garreau said he's really not all that interested in the "boys and their toys" so much as he is the implications--"where does this take culture and society and values?" He presented three possible scenarios, emphasizing that he's not making predictions. He called his three scenarios Heaven, Hell, and Prevail.

Heaven
He showed a chart of an exponential curve going up (presumably something like technological capacity on the y axis and time on the x axis).

He said that at the NIH Institute on Aging, there's a bet that the first person to live to 150 is already alive today. He mentioned Ray Kurzweil, said that he pops 250 pills a day and is convinced that he's immortal, and is "not entirely nuts." [I am very skeptical that 250 pills a day is remotely sensible or useful.]

For the last 160 years, human life expectancy has increased at about 1/4 of a year every year. He asked us to imagine that that rate improves to one year per year, or more--at that point, "if you have a good medical plan you're effectively immortal." [I questioned this in the Q&A, below.]
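
The arithmetic behind that "effectively immortal" threshold is simple: if remaining life expectancy grows by at least one year for every calendar year that passes, it never runs out. A minimal sketch of the claim (my own illustration, using an arbitrary starting point of 40 years of remaining life expectancy):

    # Toy model of Garreau's threshold: compare the historical gain of ~0.25
    # years of life expectancy per year against a hypothetical one-year-per-year gain.
    def years_until_zero(start_remaining=40.0, annual_gain=0.25, horizon=500):
        """Years until remaining life expectancy hits zero, or None if it never does."""
        remaining = start_remaining
        for year in range(1, horizon + 1):
            remaining -= 1.0           # one year of life used up
            remaining += annual_gain   # medical progress adds some back
            if remaining <= 0:
                return year
        return None

    print(years_until_zero(annual_gain=0.25))  # 54: death still arrives, just later
    print(years_until_zero(annual_gain=1.0))   # None: remaining expectancy never declines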

Hell
He showed a chart that was an x-axis mirror of the Heaven one, and described this as a case where technology "gets into the hands of madmen or fools." He described the Australian mousepox incident, where researchers in Australia found a way to genetically alter mousepox so that it becomes 100% fatal, destroying the immune system, so that there's no possible vaccine or prevention. This was published in a paper available to anyone, and the same thing could be done to smallpox to wipe out human beings with no defense. He said the optimistic version is something that wipes out all human life; the pessimistic version is something that wipes out all life on earth. [In my law school class, we discussed this same topic yesterday in more detail, along with a similar U.S. paper that showed how to reconstruct the polio virus.]

The problem with both of these scenarios for Garreau is that they are both "techno-deterministic," assuming that technology is in control and we're "just along for the ride."

Prevail
He showed a chart that showed a line going in a wacky, twisty pattern. The y-axis may have been technological capacity of some sort, but the x-axis in this case couldn't have been time, unless there's time travel involved.

Garreau said, if you were in the Dark Ages, surrounded by marauding hordes and plagues, you'd think there wasn't a good future. But in 1450 came the printing press--"a new way of storing, sharing, collecting, and distributing information"--which led to the Renaissance, enlightenment, science, democracy, etc. [Some of those things were rediscoveries of advancements previously made, as Richard Carrier has pointed out. And the up-and-down of this chart and the example of the Dark Ages seem to be in tension, if not in conflict, with his earlier exponential curve, though perhaps it's just a matter of scale. At the very least, however, they are reason to doubt continued growth in the short term, as is our current economic climate.]

Garreau called the Prevail scenario more of a co-evolution scenario, where we face challenges hitting us in rapid succession, to which we quickly respond, which creates new challenges. He expressed skepticism of top-down organizations having any capacity to deal with such challenges, and instead suggested that bottom-up group behavior by humans not relying on leaders is where everything interesting will happen. He gave examples of eBay ("100 million people doing complex things without leaders"), YouTube ("no leaders there"), and Twitter ("I have no idea what it's good for, but if it flips out the Iranian government, I'm for it.") [These are all cases of bottom-up behavior facilitated by technologies that are operated by top-down corporations and subject to co-option by other top-down institutions in various ways. I'm not sure how good the YouTube example is considering that it is less profitable per dollar spent than Hulu--while some amateur content bubbles to the top and goes viral, there still seems to be more willingness to pay for professional content. Though it does get cheaper to produce professional content and there are amateurs that produce professional-quality content. And I'll probably offer to help him "get" Twitter.]

The Prevail scenario, he said, is "a bet on humans being surprising, coming together in unpredicted ways and being unpredictably clever."

He ended by asking, "Why have we been looking for intelligent life in the universe for decades with no success? I wonder if every intelligent species gets to the point where they start controlling their own destiny and what it means to be as good as they can get. What if everybody else has flunked? Let's not flunk. Thanks."

Q&A
I asked the first question: whether there are really such grounds for optimism about extending human lifespan when our gains have increased the median lifespan but made no recent progress on the top end--the oldest woman in the world, Jeanne Calment, died at 122 in 1997, and no one else has reached that age. He answered that this was correct, and that past improvements have come from nutrition, sanitation, reducing infant mortality, and so forth; but now that we've spent $15 billion to sequence the first human genome, the cost of sequencing a complete human genome is approaching $1,000, and personalized medicine is coming along, he suspects we'll find the causes of aging and have the ability to reverse it through genetic engineering.

Prof. David Guston of CSPO asked "What's the relation between your Prevail scenario and the distribution of the success of the good stuff from GRIN technologies?" Looking at subgroups like males in post-Soviet Russia and adults in Africa, he said, things seem to be going in the wrong direction. Garreau answered that this is one of the nightmare scenarios--that humans split into multiple species, such as enhanced, naturals, and the rest. The enhanced are those that keep upgrading every six months. The naturals are those with access to enhancements that "choose not to indulge, like today's vegetarians who are so because of ethical or aesthetic reasons." The rest are those who don't have access to enhancements, and have envy for and despise those who do. "When you have more than one species competing for the same ecological niche," he said, "that ends up badly for somebody." But, he said, that's assuming a rich-get-richer, poor-get-poorer belief, "a hallmark of the industrial age." Suppose that instead of distributing scarcity, we are distributing abundance. He said that transplanted hearts haven't become cheap because they aren't abundant, but if we can create new organs in the body or in the lab in a manner that would benefit from mass production, it could become cheap. He pointed out that cell phones represent "the fastest update of technology in human history," going from zero to one phone for every two people in 26 years, and adapted to new uses in the developing world faster than in the developed world. He brought up the possibility of the developing world "leapfrogging" the developed world, "the way Europeans leapfrogged the Arab world a thousand years ago, when they were the leaders in science, math, and everything else." [I think this is a very interesting possibility--the lack of sunk costs in existing out-of-date infrastructure, the lack of stable, firmly established institutions are, I think, likely to make the developing world a chaotic experimental laboratory for emerging technologies.]

Prof. Gary Marchant of the Center for the Study of Law, Science, and Technology then said, "I'm worried about the bottom-up--it also gave us witch trials, Girls Gone Wild, and the Teabaggers." Garreau said his Prevail scenario shows "a shocking faith in human nature--a belief in millions of small miracles," but again said "I'm not predicting it, but I'm rooting for it."

Prof. Farzad Mahootian and Prof. Cynthia Selin of CSPO asked a pair of related questions about work on public deliberations and trying to extend decision-making to broader audiences, asking what Garreau thought about "DARPA driving this or being porous to any kind of public deliberation or extended decision-making?" Garreau responded that "The last thing in the world that I want to do is leave this up to DARPA. The Hell scenario could happen. Top-down hierarchical decision-making is too slow. Anyone waiting for the chairman of the House finance committee to save us is pathetic. Humans in general have been pulling ashes out of the fire by the skin of their teeth for quite a while; and Americans in particular have been at the forefront of change for 400 years and have cultural optimism about change." [I think these questions seemed to presuppose top-down thinking in a way that Garreau is challenging.]

He said he had reported a few years ago about the maquiladoras in Mexico and called it a "revolution," to which he got responses from Mexicans saying, "we're not very fond of revolutions, it was very messy and we didn't like it," and asking him to use a different word. By contrast, he said, "Americans view revolutions fondly, and think they're cool, and look forward to it." [Though there's also a strange conservatism that looks fondly upon a nonexistent ideal past here, as well.] With respect to governance, he said he's interested in looking for alternate forms of governance because "Washington D.C. can't conceivably respond fast enough. We've got a long way to go and a short time to get there." [Quoting the 'Smokey and the Bandit' theme song.]

He went on to say, "I don't necessarily think that all wisdom is based here in America. Other places will come up with dramatically different governance." He talked about the possibility of India, which wants to get cheaper drugs out to the masses, taking an approach different from FDA-style regulation (he called the FDA "a hopelessly dysfunctional organization that takes forever to produce abysmal results"). "Let's say the people of India were willing to accept a few casualties to produce a faster, better, cheaper cure for malaria, on the Microsoft model--get a 'good enough' version, send it out and see how many computers die. Suppose you did that with drugs, and were willing to accept 10,000 or 100,000 casualties if the payoff was curing malaria once and for all among a billion people. That would be an interesting development." By contrast, he said, "The French are convinced they can do it the opposite way, with top-down governance. Glad to see somebody's trying that. I'll be amazed if it works." His view, he said, was "try everything, see what sticks, and fast." [This has historically been the intent of the U.S. federal system, to allow the individual states to experiment with different rules to see what works before or in lieu of federal rules. Large corporations that operate across states, however, which have extensive lobbying power, push for federal regulations to pre-empt state rules, so that they don't have to deal with the complexity.]

There were a few more questions, one of which was whether anyone besides DARPA was doing things like this. Garreau said certainly, and pointed to both conventional pharmaceutical companies and startups working to try to cure addiction and obesity, as well as do memory enhancement, like Eric Kandel's Memory Pharmaceuticals. He talked about an Israeli company that has built a robotic arm which provides touch feedback, with the goal of being able to replace whatever functionality someone has lost, including abilities like throwing a Major League fastball or playing the piano professionally.

Prof. Selin reported a conversation she had with people at the law school about enhancement and whether it would affect application procedures. They indicated that it wouldn't, that enhancement was no different to them than giving piano lessons to children or their having the benefit of a good upbringing. Garreau commented that his latest client is the NFL, and observed that bodybuilding has already divided into two leagues, the tested and the untested. The tested have to be free of drugs; in the untested, anything goes. He asked, "can you imagine this bifurcation in other sports? How far back do you want to back out technology to get to 'natural'? Imagine a shoeless football league." He noted that one person suggested that football minus technology is rugby. [This reminded me of the old Saturday Night Live skit about the "All Drug Olympics."]

All in all, it was an interesting talk that had some overlap with things I'm very interested in pursuing in my program, especially regarding top-down vs. bottom-up organizational structures. Afterward, I spoke briefly with Garreau about how bottom-up skeptical organizations are proliferating and top-down skeptical organizations are trying to capitalize on it, and I wondered to what extent the new creations of bottom-up organizations tend to get co-opted and controlled by top-down organizations in the end. In that regard, I also talked to him a bit about Robert Neuwirth's work on "shadow cities" and the Kowloon Walled City, where new forms of regulatory order arise in jurisdictional no-man's-lands (I could also have mentioned pirate codes). Those cases fall between the cracks for geographical reasons, while the cases occurring with regard to GRIN technologies fall between the cracks for temporal reasons, but it seems to me there's still the possibility that old-style institutions will catch up and take control.

UPDATE: As a postscript, I recently listened to the episode of the Philosophy Bites podcast on human enhancement with philosopher Allen Buchanan, who was at the University of Arizona when I went to grad school there. Good stuff.

State of the world on drug decriminalization

Personal possession of any drug decriminalized: Spain, Portugal, Italy, Czech Republic, Baltic states, some German states and Swiss cantons, Mexico.

Partial decriminalization/minimal criminal prosecution: England, Denmark, Slovakia, Latvia, Croatia, Poland, Austria, Germany, France, Netherlands (see chart in the Economist story linked below--it's interesting that the Netherlands has the highest percentage of prison outcomes on this list)

Unconstitutional to prosecute people for drug possession (any drug) per Supreme Court ruling: Argentina, Colombia

Marijuana decriminalized: 13 U.S. states (Alaska, California, Colorado, Maine, Massachusetts, Minnesota, Mississippi, Nebraska, Nevada, New York, North Carolina, Ohio, Oregon)

States with some localities that have decriminalized marijuana: Arkansas, Illinois, Kansas, Michigan, Missouri, Montana, Washington, Wisconsin

Considering marijuana legalization: California, Massachusetts, possibly Oregon

Considering decriminalization (any drug): Brazil, Ecuador

Source: The Economist, "Virtually legal," November 14, 2009; state decriminalization details from Wikipedia.

Tuesday, November 17, 2009

William Dembski would like to use copyright to quash criticism

When it comes to other people's works, William Dembski hasn't seen a problem with taking copyrighted material and using it wholesale--dubbing over a computer-animated video from Harvard and XVIVO of the inner workings of a cell with his own intelligent design-based commentary--but when it comes to his own work, he has a different standard.

Mark Chu-Carroll points out at his Good Math, Bad Math blog that Dembski is talking about using threats of claimed copyright infringement to shut down criticism of a recent paper he published with Robert Marks. That criticism includes pointing out that sources cited by Dembski don't say what he says they do, and providing counterexamples to Dembski's mathematical claims. Rather than respond to the criticism, Dembski would rather shut it down.

There are just a few problems with that--first, the criticism may well be fair use. Although it does quote a great deal of the paper by Dembski and Marks, it does so for the purpose of putting commentary and criticism side-by-side with quotations from the paper. Second, papers published by the IEEE require that copyright be transferred to the IEEE, so Dembski lacks standing even if there were infringement.

Check out the RationalWiki critique of the Dembski and Marks paper.

Monday, November 16, 2009

Daniel Dennett, The Evolution of Confusion

Daniel Dennett's talk from the 2009 Atheist Alliance International convention (link is to my summary) is now online:

Sunday, November 08, 2009

Richard Carrier on the ancient creation/evolution debate

Richard Carrier, an independent scholar with a Ph.D. in Ancient History from Columbia University, gave a talk this morning to the Humanist Society of Greater Phoenix titled "Christianity and Science (Ancient and Modern)." He argued that there was a creation/evolution debate in ancient Rome that had interesting similarities and differences to the current creation/evolution debate.

He began with Michael Behe and a short description of his irreducible complexity argument regarding the bacterial flagellum--that since it fails to function if any piece is removed, and it's too complex to have originated by evolution in a single step, it must have been intelligently designed and created. He observed that 2,000 years ago, Galen made the same argument about the human hand and other aspects of human and animal anatomy. Galen wrote that "the mark of intelligent design is clear in those works in which the removal of any small component brings about the ruin of the whole."

Behe, Carrier said, hasn't done what you'd expect a scientist to do with respect to his theory. He hasn't looked at the genes that code for the flagellum and tried to identify corresponding genes in other microbes, for example.

In the ancient context, the debate was between those who argued for natural selection on random arrangements of features that were spontaneously generated, such as Anaxagoras and atomists like Democritus and Epicurus, vs. those who argued for some kind of intelligent design, like Plato, Aristotle, Cicero, and Galen. Carrier set the stage by describing a particular debate about the function of the kidneys between Asclepiades and Galen. Asclepiades thought that the kidneys were either superfluous, with urine forming directly in the bladder, or acted as an accidental sieve. Galen set out to test this with a public experiment on an anesthetized pig, which had been given water prior to the operation. He opened up the pig, ligated (tied knots in) its ureters, and they started to balloon while the bladder stayed empty. Squeezing the ureter failed to reverse the flow back into the kidney. When one ureter was cut, urine came out. Thus, Galen demonstrated that the kidneys extract urine from the blood and that it is transported to the bladder by the ureters. The failure of the flow to operate in reverse showed that the kidneys were not simple sieves, but operated by some power that only allowed flow in one direction. This, argued Galen, was a demonstration of something too complex to have arisen by chance, and it refuted the specific claims of Asclepiades.

Galen's 14-volume De Usu Partium (On the Usefulness of Parts) made similar arguments for intelligent design about all aspects of human anatomy--the nerve transport system, the biomechanics of arm, hand, and leg movement, the precision of the vocal system, etc. He also asked questions like "How does a fetus know how to build itself?" He allowed for the possibility of some kind of tiny instructions present in the "seed," on analogy with a mechanical puppet theater, programmed with an arrangement of cogs, wheels, and ropes.

Galen also investigated the question of why eyebrows and eyelashes grow to a fixed length and no longer, and found that they grow from a piece of cartilage, the tarsal plate. He concluded that while his evidence required an intelligent designer, it also entailed that God is limited and uses only available materials. Galen, a pagan, contrasted his view with that of Christians. For Christians, a pile of ashes could become a horse, because God could will anything to be the case. But for Galen, the evidence supported a God subject to the laws of physics, who was invisibly present but physically interacting to make things happen, and who realizes the best possible world within constraints.

Which intelligent design theory better explains facts like the growth of horses from fetuses, the fact that fetuses sometimes come out wrong, and why we have complex bodies at all, rather than just willing things into existence via magic? If God can do anything, why wouldn't he just make us as "simple homogenous soul bodies that realize functions by direct will" (or "expedient polymorphism," to use Carrier's term)?

The difference between Galen's views and those of the Christians was that Galen thought of theology as a scientific theory that had to be adjusted according to facts, that facts about God are inferred from observations, and those facts entail either divine malice or a limited divinity. What we know about evolution today places even more limits on viable theories of divinity than in Galen's time. (Carrier gave a brief overview of evolution and in particular a very brief account of the evolution of the bacterial flagellum.)

Galen's views allowed him to investigate, conduct experiments to test the theories of his opponents as well as his own, and make contributions to human knowledge. He supported the scientific values of curiosity as a moral good, empiricism as the primary mode of discovery, and progress as both possible and valuable, while Christianity denigrated or opposed these. The views of the early church fathers were such that once Christianity gained power, it not only put a halt to scientific progress but also caused significant losses of knowledge that had already been accumulated. (Carrier later gave many examples.)

Tertullian, a contemporary of Galen, asked, "What concern have I with the conceits of natural science?" and "Better not to know what God has not revealed than to know it from man."

Thales, from the 6th century B.C., was revered by pagans as the first natural scientist--he discovered the natural causes of eclipses, explained the universe as a system of natural causes, performed observations and developed geometry, made inquiries into useful methods, and subordinated theology to science. There was a story that he was so focused on studying the stars that he fell into a well. Tertullian wrote of this event that Thales had a "vain purpose" and that his fall into the well prefigured his fall into hell.

Lactantius, an early Christian writer and tutor of Constantine the Great, denied that the earth was round (as part of a minority faction of Christians at the time), said that only knowledge of good and evil is worthwhile, and argued that "natural science is superfluous, useless, and inane." This despite overwhelming evidence already accumulated of a round earth (lighthouses sinking below the horizon as seen from ships sailing away, astronomical observations of lunar eclipses starting at different times in different locations, the fact that different stars are visible at different latitudes, and the shadow of the earth on the moon), which Lactantius simply was uninterested in.

Eusebius, the first historian of the Christian church, said that all are agreed that only scriptural knowledge is worthwhile, anything contrary to scripture is false, and pursuing scientific explanations is to risk damnation. Armchair speculation in support of scripture, however, is good.

Amid factors such as the failure of the pagan system, civil wars in the Roman empire, and a great economic depression, Christianity came to a position of dominance and scientific research came to a halt from about the 4th century to the 12th-14th centuries.

Carrier compared these Christian views to specific displays at the Answers in Genesis Creation Museum in Kentucky, which compared "human reason" to "God's word." One contrasted Rene Descartes saying "I think therefore I am" to God saying "I am that I am." Galen wouldn't have put those into opposition with each other.

Another display labeled "The First Attack--Question God's Word" told the story of Satan tempting Adam to eat from the fruit of the tree of knowledge of good and evil, which highlights the "questioning" of Satan for criticism, and argues that putting reason first is Satanic.

Another diagram comparing "human reason" to "God's Word" showed evolution as a 14-billion-year winding snake-like shape, compared to the short and straight arrow of a 6,000-year creation.

Carrier noted, "It doesn't have to be that way. Galen's faith didn't condemn fundamental scientific values; Galen's creationism was science-based."

He then gave numerous examples of knowledge lost or ignored by Christianity--that Eratosthenes had calculated the size of the earth (a case described in Carl Sagan's "Cosmos" series), Ptolemy's projection cartography and system of latitude and longitude, developments in optics, hydrostatics, medicine, harmonics and acoustics, pneumatics, tidal theory, cometary theory, the precession of the stars, mathematics, robotics (cuckoo clocks, coin-operated vending machines for holy water and soap dispensing), machinery (water mills, water-powered saws and hammers, a bread-kneading machine), and so on. He described the Antikythera mechanism, an analog computer similar to WWI artillery computers, which was referred to in various ancient texts but had been dismissed by historians as impossible until this instance was actually found in 1900.

Another example was the Archimedes Codex, where Christians scraped the ink from the text and wrote hymns on it, and threw the rest away. The underlying writing has now been partially recovered thanks to modern technology, revealing that Archimedes performed remarkably advanced calculations about areas, volumes, and centers of gravity.

Carrier has a forthcoming book on the subject of this ancient science, called The Scientist in the Early Roman Empire.

A few interesting questions came up in the Q&A. The first question was about why early Christians didn't say anything about abortion. Carrier said it probably just wasn't on the radar, though abortion technology already existed in the form of mechanical devices for performing abortions and abortifacients. He also observed that the ancients knew the importance of cleanliness and antiseptics in medicine, while Jesus said that washing before you eat is a pointless ritual (Mark 7:1-20). Carrier asked, if Jesus was God, shouldn't he have known about the germ theory of disease?

Another question was whether Christianity was really solely responsible for 1,000 years of stagnation. Carrier pointed out that there was a difference between Byzantine and Western Christianity, with the former preserving works like those of Ptolemy without condemning them, but without building upon them. He said there are underlying cultural, social, and historical factors that explain the differences, so it's not just the religion. He also pointed out that there was a lost sect of Christianity that was pro-science, but we have nothing of what they wrote, only references to them by Tertullian, criticizing them for supporting Thales, Galen, and so forth.

Another questioner asked how he accounts for cases of Christians who have contributed to science, such as Kepler, Boyle, Newton, and Bacon. Carrier said "Not all Christians have to be that way--there's no intrinsic reason Christianity has to be that way." But, he said, if you put fact before authority, scripture will likely end up not impressing you, being contradicted by evidence you find, and unless you completely retool Christianity, you'll likely abandon it. Opposition to scientific values is necessary to preserve Christianity as it is; putting weight on authority and scripture leads to the anti-science position as a method of preservation of the dogma.

It was a wonderfully interesting and wide-ranging talk. He covered a lot more specifics than I've described here. If you find that Carrier is giving a talk in your area, I highly recommend that you go hear him speak.

You can find more information about Richard Carrier at his web site.

Philosophy Bites podcast

I've been listening to past episodes of the Philosophy Bites podcast, and I highly recommend it--they are short (about 15-minute) discussions with prominent philosophers about specific philosophical topics and questions. I've found them to be consistently of high quality and interesting, even in the one case where I think the philosophical argument was complete nonsense (Robert Rowland Smith on Derrida on forgiveness). Even there, the interviewers asked the right questions.

I particularly have enjoyed listening to topics that are outside the areas of philosophy I've studied, like Alain de Botton on the aesthetics of architecture. Other particularly good ones have been Hugh Mellor on time, David Papineau on physicalism, A.C. Grayling on Descartes' Meditations, and Peter Millican on the significance of Hume. I've still got a bunch more past episodes to listen to; I'm going to be somewhat disappointed when I catch up.

Saturday, November 07, 2009

Robert B. Laughlin on "The Crime of Reason"

The 2009 Hogan and Hartson Jurimetrics Lecture in honor of Lee Loevinger was given on the afternoon of November 5 at Arizona State University's Sandra Day O'Connor School of Law by Robert B. Laughlin. Laughlin, the Ann T. and Robert M. Bass Professor of Physics at Stanford University and winner of the 1998 Nobel Prize in Physics (along with Horst L. Stormer and Daniel C. Tsui), spoke about his recent book, The Crime of Reason.

He began with a one-sentence summary of his talk: "A consequence of entering the information age is probably that we're going to lose a human right that we all thought we had but never did ..." The sentence went on but I couldn't keep up with him in my notes to get it verbatim, and I am not sure I could identify precisely what his thesis was after hearing the entire talk and Q&A session. The main gist, though, was that he thinks that a consequence of allowing manufacturing to go away and being a society based on information is that "Knowledge is dear, therefore there has to be less of it--we must prevent others from knowing what we know, or you can't make a living from it." And, he said, "People who learn on their own are terrorists and thieves," which I think was intentional hyperbole. I think his talk was loaded with overgeneralizations, some of which he retracted or qualified during the Q&A.

It certainly doesn't follow from knowledge being valuable that there must be less of it. Unlike currency, knowledge isn't a fungible commodity, so different bits of knowledge have different value to different people. There are also different kinds of knowledge--know-how vs. knowledge that, and making the latter freely available doesn't necessarily degrade the value of the former, which is why it's possible to have a business model that gives away software for free but makes money from consulting services. Further, the more knowledge there is, the more valuable it is to know where to find the particular bits of knowledge that are useful for a given purpose, and the less it is possible for a single person to be an expert across many domains. An increasing amount of knowledge means there's increasing value in various kinds of specializations, and more opportunities for individuals to develop forms of expertise in niches that aren't already full of experts.

Laughlin said that he is talking about "the human rights issue of the 21st century," that "learning some things on your own is stealing from people. What we think of as our rights are in conflict with the law, just as slavery is in conflict with human rights." He said that Jefferson was conflicted on this very issue, saying on the one hand that "knowledge is like fire--divinely designed to be copyable like a lit taper--I can light yours with mine, which in no way diminishes my own." This is the non-rival quality of information: one person copying information from another doesn't deprive the other of their use of it, though it certainly may have an impact on the commercial market for the first person to sell their information.

"On the other hand," said Laughlin, "economics involves gambling. [Jefferson] favored legalized gambling. Making a living involves bluff and not sharing knowledge." He said that our intellectual property laws derive from English laws that people on the continent "thought ... were outrageous--charging people to know things."

He put up a photo of a fortune from a fortune cookie, that said "The only good is knowledge, and the only evil ignorance." He said this is what you might tell kids in school to get them to study, but there's something not right about it. He then put up a drawing of Dr. Frankenstein and his monster (Laughlin drew most of the slides himself). He said, we're all familiar with the Frankenstein myth. "The problem with open knowledge is that some of it is dangerous. In the U.S. some of it is off-limits, you can't use it in business or even talk about it. It's not what you do with it that's exclusive, but that you have it at all."

His example was atomic bomb secrets and the Atomic Energy Act of 1954, which makes it a federal felony to reveal "nuclear data" to the public, which has been defined very broadly in the courts. It includes numbers and principles of physics.

Laughlin returned to his fortune cookie example, and said there's another problem. He put up a drawing of a poker game. "If I peeked at one guy's cards and told everyone else, the poker game would stop. It involves bluffing, and open access to knowledge stops the game." He suggested that this is what happened last year with the world financial sector--that the "poker game in Wall Street stopped, everyone got afraid to bet, and the government handled it by giving out more chips and saying keep playing, which succeeded." While I agree that this was a case where knowledge--specifically knowledge of the growing amounts of "toxic waste" in major world banks--caused things to freeze up, the knowledge wasn't the ultimate cause; the ultimate cause was the fact that banks engaged in incredibly risky behavior that they shouldn't have. More knowledge earlier--and better oversight and regulation--could have prevented the problem.

Laughlin said "Economics is about bluff and secrecy, and open knowledge breaks it." I don't think I agree--what makes markets function is that price serves as a public signal about knowledge. There's always going to be local knowledge that isn't shared, not necessarily because of bluff and secrecy, but simply due to the limits of human capacities and the dynamics of social transactions. While trading on private knowledge can result in huge profits, trading the private knowledge itself can be classified as insider trading and is illegal. (Though perhaps it shouldn't be, since insider trading has the potential for making price signals more accurate more quickly to the public.)

Laughlin showed a painting of the death of Socrates (by Jacques-Louis David, not Laughlin this time), and said that in high school, you study Plato, Aristotle, and Descartes, and learn that knowledge is good. But, "as you get older, you learn there's a class system in knowledge." Plato etc. is classified as good, but working class technical knowledge, like how to build a motor, is not, he claimed. He went on to say, "If you think about it, that's exactly backwards." I'm not sure anyone is ever taught that technical knowledge is not valuable, especially these days, where computer skills seem to be nearly ubiquitous--and I disagree with both extremes. From my personal experience, I think some of my abstract thinking skills that I learned from studying philosophy have been among the most valuable skills I've used in both industry and academia, relevant to both theoretical and practical applications.

Laughlin said that "engines are complicated, and those who would teach you about it don't want to be clear about it. It's sequestered by those who own it, because it's valuable. The stuff we give away in schools isn't valuable, that's why we give it away." In the Q&A, a questioner observed that he can easily obtain all sorts of detailed information about how engines work, and that what makes it difficult to understand is the quantity and detail. Laughlin responded that sometimes the best way to hide things is to put them in plain sight (the Poe "purloined letter" point), as needles in a haystack. But I think that's a rather pat answer to something that is contradictory to his claim--the information really is freely available and easy to find, but the limiting factor is that it takes time to learn the relevant parts to have a full understanding. The limit isn't the availability of the knowledge or that some of it is somehow hidden. I'd also challenge his claim that the knowledge provided in schools is "given away." It's still being paid for, even if it's free to the student, and much of what's being paid for is the know-how of the educator, not just the knowledge-that of the specific facts, as well as special kinds of knowledge-that--the broader frameworks into which individual facts fit.

Laughlin went on to say, "You're going to have to pay to know the valuable information. Technical knowledge will disappear and become unavailable. The stuff you need to make a living is going away." He gave as examples defense-related technologies, computers, and genetics. He said that "people in the university sector are facing more and more intense moral criticism" for sharing information. "How life works--would we want that information to get out? We might want to burn those books. The 20th century was the age of physics, [some of which was] so dangerous we burned the books. It's not in the public domain. The 21st century is the age of biology. We're in the end game of the same thing. In genetics--e.g., how disease organisms work. The genetic structure of Ebola or polio." Here, Laughlin seems to be just wrong. The gene sequences of Ebola and polio have apparently been published (Sanchez, A., et al. (1993) "Sequence analysis of the Ebola virus genome: organization, genetic elements and comparison with the genome of Marburg virus," Virus Research 29, 215-240, and Stanway, G., et al. (1983) "The nucleotide sequence of poliovirus type 3 leon 12 a1b: comparison with poliovirus type 1," Nucleic Acids Res. 11(16), 5629-5643). (I don't claim to be knowledgeable about viruses; in the former case I am relying on the statement that "Sanchez et al (1993) has published the sequence of the complete genome of Ebola virus" from John Crowley and Ted Crusberg, "Ebola and Marburg Virus: Genomic Structure, Comparative and Molecular Biology"; in the latter case it may not be publication of the complete genome, but it is at least part.)

Laughlin talked about the famous issue of The Progressive magazine which featured an article by Howard Morland titled "How H-Bombs Work." He showed the cover of the magazine, which read, "The H-Bomb Secret--How we got it--why we're telling it." Laughlin said that the DoJ enjoined the magazine from publishing the article and took the issue into secret hearings. The argument was that it was a threat to national security and a violation of the Atomic Energy Act. The judge said that the rule against prior restraint doesn't apply because this is so dangerous that "no jurist in their right mind would put free speech above safety." Laughlin said, "Most people think the Bill of Rights protects you, but this case shows that it doesn't." After the judge forbade publication, it was leaked to a couple of "newspapers on the west coast," after which the DoJ dropped the case and the article was published. According to Laughlin, this was strategy: he suspects they didn't prosecute the case because the outcome would have been to find the AEA unconstitutional. By dropping the case it kept the AEA as a potential weapon in future cases. He said there have only been two cases of the criminal provisions of the AEA prosecuted in the last 50 years, but it is "inconceivable that it was only violated twice. The country handles its unconstitutionality by not prosecuting." The U.S., he said, is like a weird hybrid of Athens and Sparta, favoring both being open and being war-like and secretive. These two positions have never been reconciled, so we live in an unstable situation that favors both.

He also discussed the case of Wen Ho Lee, a scientist from Taiwan who worked at Los Alamos National Laboratory, who took home items that were classified as "PARD" (protect as restricted data), even though everyone is trained repeatedly that you "Don't take PARD home." When he was caught, Laughlin said, he said "I didn't know it was wrong" and "I thought they were going to fire me, so I took something home to sell." The latter sounds like an admission of guilt. He was put into solitary confinement for a year (actually 9 months) and then the case of 50 counts of AEA violations was dropped. Laughlin characterized this as "extralegal punishment," and said "we abolish due process with respect to nuclear data." (Wen Ho Lee won a $1.5 million settlement from the U.S. government in 2006 before the Supreme Court could hear his case. Somehow, this doesn't seem to me to be a very effective deterrent.)

Laughlin said that we see a tradeoff between risk and benefit, not an absolute danger. The risk of buildings being blown up is low enough to allow diesel fuel and fertilizer to be legal. Bombs from ammonium nitrate and diesel fuel are very easy to make, and our protection isn't hiding technical knowledge, but that people just don't do it. But nuclear weapons are so much more dangerous that the technical details are counted as absolutely dangerous, no amount of benefit could possibly be enough. He said that he's writing a book about energy and "the possible nuclear renaissance unfolding" (as a result of need for non-carbon-emitting energy sources). He says the U.S. and Germany are both struggling with this legal morass around nuclear information. (Is the unavailability of nuclear knowledge really the main or even a significant issue about nuclear plant construction in the United States? General Electric (GE Energy) builds nuclear plants in other countries.)

Laughlin said that long pointy knives could be dangerous, and there's a movement in England to ban them. Everybody deals with technical issue of knowledge and where to draw lines. (Is it really feasible to ban knives, and does such a ban constitute a ban on knowledge? How hard is it to make a knife?)

At this point he moved on to biology, and showed a photograph of a fruit fly with legs for antennae. He said, "so maybe antennae are related to legs, and a switch in development determines which you get. The control machinery is way too complicated to understand right now." (Really?) "What if this was done with a dog, with legs instead of ears. Would the person who did that go to Stockholm? No, they'd probably lose their lab and be vilified. In the life sciences there are boundaries like we see in nuclear--things we shouldn't know." (I doubt that there is a switch that turns dog ears into legs, and this doesn't strike me as plausibly being described as a boundary on knowledge, but rather an ethical boundary on action.) He said, "There are so many things researchers would like to try, but can't, because funders are afraid." Again, I suspect that most of these cases are ethical boundaries about actions rather than knowledge, though of course there are cases where unethical actions might be required to gain certain sorts of knowledge.

He turned to stem cells. He said that the federal government effectively put a 10-year moratorium on stem cell research for ethical reasons. Again, these were putatively ethical reasons regarding treatment of embryos, but the ban was on federally funded research rather than any research at all. It certainly stifled research, but didn't eliminate it.

Next he discussed the "Millennium Digital Copyright Act" (sic--he meant the Digital Millennium Copyright Act). He said that "people who know computers laugh at the absurdity" of claiming that computer programs aren't formulas and are patentable. He said that if he writes a program that "has functionality or purpose similar to someone else's my writing it is a violation of the law." Perhaps in a very narrow case where there's patent protection, yes, but certainly not in general. If he were arguing that computer software patents are a bad idea, I'd agree. He said "Imagine if I reverse-engineered the latest Windows and then published the source code. It would be a violation of law." Yes, in that particular example, but there are lots of cases of legitimate reverse engineering, especially in the information security field. The people who come up with the signatures for anti-virus and intrusion detection and prevention products do this routinely, and in some cases have actually released their own patches for Microsoft vulnerabilities because Microsoft was taking too long to do it itself.

He said of the Microsoft Word and PDF formats that they "are constantly morphing" because "if you can understand it you can steal it." But there are legal open source and competing proprietary software products that understand both of the formats in question--OpenOffice, Apple's Pages and Preview, Foxit Reader, etc. Laughlin said, "Intentional bypassing of encryption is a violation of the DMCA." Only if that encryption is "a technological measure that effectively controls access to" a copyrighted work, and the circumvention doesn't fall under one of the exceptions carved out in the law, such as the one for security research. Arguably, breakable encryption doesn't "effectively control access," though the law has certainly been used to prosecute people who broke really poor excuses for encryption.

Laughlin put up a slide of the iconic smiley face, and said it has been patented by Unisys: "If you use it a lot, you'll be sued by Unisys." I'm not sure how you could patent an image, and while there are smiley face trademarks that have been used as a revenue source, the company doing that is called SmileyWorld, not Unisys.

He returned to biology again, to talk briefly about gene patenting, which he says "galls biologists" but has been upheld by the courts. (Though perhaps not for many years longer, depending on how the Myriad Genetics case turns out.) Natural laws and discoveries aren't supposed to be patentable, so it's an implication of these court decisions that genes "aren't natural laws, but something else." The argument is that isolating them makes them into something different than what they are when they're part of an organism, which somehow constitutes an invention. I think that's a bad argument that could only justify patenting the isolation process, not the sequence.

Laughlin showed a slide of two photos, the cloned dog Snuppy and its mother on the left, and a Microsoft Word Professional box on the right. He said that Snuppy was cloned when he was in Korea, and that most Americans are "unhappy about puppy clones" because they fear the possibility of human clones. I thought he was going to say that he had purchased the Microsoft Word Professional box pictured in Korea at the same time, and that it was counterfeit, copied software (which was prevalent in Korea in past decades, if not still), but he had an entirely different point to make. He said, about the software, "The thing that's illegal is not cloning it. If I give you an altered version, I've tampered with something I'm not supposed to. There's a dichotomy between digital knowledge in living things and what you make, and they're different [in how we treat them?]. But they're manifestly not different. Our legal system['s rules] about protecting these things are therefore confused and mixed up." I think his argument and distinction were rather confused, and he didn't go on to use them in anything he said subsequently. It seems to me that the rules are pretty much on a par between the two cases--copying Microsoft Word Professional and giving it to other people would itself be copyright infringement; transforming it might or might not be a crime depending on what you did. If you turned it into a piece of malware and distributed that, it could be a crime. But if you sufficiently transformed it into something useful that was no longer recognizable as Microsoft Word Professional, that might well be fair use of the copyrighted software. In any case in between, I suspect the only legally actionable offense would be copyright infringement, in which case the wrongdoing is the copying, not the tampering.

He put up a slide of Lady Justice dressed in a clown suit, and said that "When you talk to young people about legal constraints on what they can do, they get angry, like you're getting angry at this image of Lady Law in a clown suit. She's not a law but an image, a logos. ... [It's the] root of our way of relating to each other. When you say logos is a clown, you've besmirched something very fundamental about who you want to be. ... Legal constraints on knowledge is part of the price we've paid for not making things anymore." (Not sure what to say about this.)

He returned to his earlier allusion to slavery. He said that was "a conflict between Judeo-Christian ethics and what you had to do to make a living. It got shakier and shakier until violence erupted. War was the only solution. I don't think that will happen in this case. [The] bigger picture is the same kind of tension. ... Once you make Descartes a joke, then you ask, why stay?" He put up a slide of a drawing of an astronaut on the moon, with the earth in the distance. "Why not go to the moon? What would drive a person off this planet? You'd have to be a lunatic to leave." (I thought he was going to make a moon-luna joke, but he didn't, unless that was it.) "Maybe intellectual freedom might be that thing. It's happened before, when people came to America." He went on to say that some brought their own religious baggage with them to America. Finally, he said that when he presents that moon example to graduate students, he always has many who say "Send me, I want to go."

And that's how his talk ended. I was rather disappointed--it seemed rather disjointed and rambling, and made lots of tendentious claims--it wasn't at all what I expected from a Nobel prizewinner.

The first question in the Q&A was very much like one I would have asked, about how he explains the free and open source software movement. Laughlin's answer was that he is personally a Linux user and has been since 1997, but that students starting software companies are "paranoid about having stuff stolen," and "free things, even in software, are potentially pernicious," and that he pays a price for using open source in that it takes more work to maintain and he's constantly having to upgrade to deal with things like format changes in PDF and Word. There is certainly such a tradeoff for some open source software, but some of it is just as easy to maintain as commercial software, and there are distributions of Linux that are coming closer to the ease of use of Windows. And of course Mac OS X, based on an open source, FreeBSD-derived operating system, is probably easier for most people to use than Windows.

I think there was a lot of potentially interesting and provocative material in his talk, but it just wasn't formulated into a coherent and persuasive argument. If anyone has read his book, is it more tightly argued?

Roger Pielke Jr. on climate change adaptation

A few hours after hearing Roger Pielke Jr. speak on climate change mitigation to CSPO, I heard him speak about climate change adaptation to a joint meeting of my seminar on human dimensions of climate change and another seminar with Dan Sarewitz, CSPO's director.

As in his previous talk, Pielke began this one with a slide on his positions, which went something like this:
  • Strong advocate of mitigation and adaptation.
  • He accepts the science of the IPCC.
  • There are other reasons behind impacts of climate--effects of inexorable development and growth.
  • The importance and challenge of climate change does not justify misrepresenting the science of adaptation--yet this happens on a regular basis (I’ll give a few examples).
  • We might choose to mitigate, but we will adapt.
He said (as he did in the earlier talk) that he has no disagreements with the science of IPCC working group I, lots of disagreements with the economics and mitigation arguments of working group III (covered in the earlier talk), and some disagreements with the impacts, adaptation, and vulnerability arguments of working group II, which will be covered in this talk.

He then gave a slide of the outline of this talk:
  • The concept of adaptation is contested.
  • How we think about adaptation shapes how we think about research and policy.
  • Under the FCCC (Kyoto Protocol), adaptation is defined narrowly--as adaptation to climate change caused by the emissions of greenhouse gases.
  • The narrow definition creates a bias against adaptation.
  • Regardless, the primary factors underlying climate impacts on society are the result of development and growing wealth and vulnerability.
There are different definitions of "climate change" used between the IPCC and the UN FCCC. The IPCC defines it as "...change arising from any source," while the FCCC defines it as "...a change of climate which is attributed directly or indirectly to human activity that alters the composition of the global atmosphere and which is in addition to natural climate variability observed over comparable time periods."

On the former definition, if the sun causes the earth to warm, which causes climate change effects, that's a climate change. On the latter, it's not. The latter definition restricts climate change to impacts caused by human-caused changes to greenhouse gases in the atmosphere.

The logic of adaptation under the FCCC is that any increase of atmospheric carbon above 450 ppm (a 2 degree Centigrade temperature increase) constitutes "dangerous" climate change that requires human adaptation. If we happen to stabilize at 449 ppm, then no adaptation at all is required. Under this definition, the more adaptation we need, the more we have failed at climate policy.

Under the IPCC's cost-benefit analysis, adaptation is considered a cost with no benefits.

Al Gore's Earth in the Balance calls adaptation a "laziness."

Tim Flannery, author of The Weather Makers, says that adaptation is "genocide."

IPCC's working group I uses the IPCC definition of climate change; working group III uses the FCCC definition; working group II shifts back and forth between the two.

But climate impacts are caused by a combination of factors: vulnerability (with sociological and ecological components) and climate change and variability (which includes natural internal and natural external components, human greenhouse gas changes, and human non-greenhouse-gas changes). In order to deal with those impacts, you can back up the causal chain to each of those causes, from the IPCC perspective. But from the FCCC perspective, it's as though none of those other factors are available except for the human contribution to greenhouse gases.

Why did the FCCC use this definition? Because the UN already has other frameworks for disaster preparedness, water management, desertification prevention, and biodiversity protection, and they didn't want any overlap of responsibility.

Choice of definition of climate change can thus create a bias against adaptation, and puts science in impossible situations (requiring conclusive attribution of impacts to human greenhouse gases). In reality, adaptation has broad benefits, such as contributing to sustainable development.

The Global Environment Facility of the UN, which releases funds for adaptation, will only pay out in proportion to effects caused by human greenhouse gases. Because of this requirement for attribution of cause, very little has been paid out. Oxfam said that the UNFCCC's global spending from the GEF is equal to what the UK spends annually on flood defense. If a developing nation has a disaster attributable to climate change and asks for funds, it is required to provide evidence for the percentage of damage attributable to climate change caused by human-produced greenhouse gases. One effect of this is that governmental spokespersons are likely to make such attributions in the media.

Swiss Re did a report on adaptation in the broad sense, without regard to attribution of cause, adding up losses and deaths from natural disasters over 50 years to get totals of $50T in damage and 850,000 lives lost; CNN reported this as meaning that human greenhouse gases caused all of that damage and death.

Another problem with the narrow definition is illustrated by malaria scenarios. Jeffrey Sachs (2003) projects that without malaria, African GDP growth might be 3%/year higher. If you plug that into the Kaya Identity, African emissions would be about 17 GtC by 2050, versus less than 1 GtC today; without malaria mitigation, emissions won't even hit 6 GtC by 2050. The IPCC's emissions projections thus presuppose that malaria will go unmitigated, which is not how we should be thinking about climate policy.
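(To see how much work that 3%/year is doing, here's a rough sketch I put together; the ~3.5%/year baseline growth rate, the starting emissions level, and the 45-year horizon are my own illustrative assumptions, not figures from the talk, though they land close to the numbers Pielke cited.)

# A minimal sketch (mine, not Pielke's) of how an extra 3%/year of GDP growth
# compounds into a large emissions difference by 2050 if emissions simply track
# GDP (constant energy and carbon intensity). Baseline growth and starting
# emissions are illustrative assumptions.

def emissions_2050(start_emissions_gtc, gdp_growth, years=45):
    """Emissions after `years` of growth at a constant annual rate."""
    return start_emissions_gtc * (1 + gdp_growth) ** years

baseline = emissions_2050(1.0, 0.035)           # ~1 GtC today, ~3.5%/yr growth (assumed)
no_malaria = emissions_2050(1.0, 0.035 + 0.03)  # add the 3%/yr "malaria dividend"

print(f"baseline 2050 emissions:     {baseline:.1f} GtC")    # ~4.7 GtC, under the 6 GtC figure
print(f"malaria-free 2050 emissions: {no_malaria:.1f} GtC")  # ~17 GtC, matching the figure cited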

Pielke argued that the broader notion of climate change and broader notion of adaptation are more useful. Adaptation is not in opposition to mitigation, and it has benefits as well as costs. In reality, we don't care just about greenhouse gases, we care about the impacts regardless of cause. By drawing a circle around human contributions to greenhouse gases and setting goals that focus only on that, we've engaged in "goal substitution," where addressing a single cause has become our goal instead of addressing the effects.

He then put up a slide of various book and magazine covers, as well as the poster for "An Inconvenient Truth," and said that "hazards are a centerpiece of the climate debate." One of the magazine covers, an issue of Newsweek from January 1996 with a cover story labeled "The Hot Zone: Blizzards, Floods, and Hurricanes, Blame Global Warming," was what got Pielke interested in doing research. The period 1991-1994, before that story, was a very quiet period for hurricanes hitting the U.S., but also the most expensive in terms of damage. Although he didn't study blizzards, he did study floods and hurricanes, and said he found that "the biggest signal in disasters wasn't climate."

Pielke then wanted to explain how his research has been used by the IPCC and the Bush and Obama administrations, looking at two reports: Climate Change 2007: Impacts, Adaptation, and Vulnerability from the IPCC (the report of working group II), and the U.S. Climate Change Science Program (CCSP) Report, Weather and Climate Extremes in a Changing Climate, Regions of Focus: North America, Hawaii, Caribbean, and U.S. Pacific Islands.

He gave this quote from the IPCC report:
1.3.8.5 Summary of disasters and hazards
Global losses reveal rapidly rising costs due to extreme weather-related events since the 1970s. One study has found that while the dominant signal remains that of the significant increases in the value of exposure at risk, once losses are normalised for exposure, there still remains an underlying rising trend.
He pointed out that the reference to "one study" is interesting, because he has published dozens of studies in this area, none of which show such a trend. The study in question is "Muir Wood, et al., 2006," by R. Muir Wood, S. Miller, and A. Boissonade, titled "The search for trends ...," which is one of 24 papers commissioned as background for a workshop that Peter Hoppe and Pielke conducted with experts from multiple countries, Munich Re, the Tyndall Centre, NSF, etc. The plan for that workshop was to aim for "dissensus" rather than consensus--to identify areas of disagreement for further study--but the participants ended up reaching consensus on 20 statements.

The motivation for the workshop was a graph from Munich Re that showed that the cost of disasters, adjusted for inflation, has been increasing. The workshop wanted to find out what was causing this to happen and whether any percentage of it could be attributed to climate change.

The types of disasters in question were:
  • Earthquake, tsunami, and volcano, which couldn't be attributed to climate change.
  • Windstorms and floods, which could possibly be attributed to climate change and have been responsible for most of the increasing damage.
  • Disasters of temperature extremes such as heatwaves, drought, and wildfires, which could also be attributed to climate change, but which aren't responsible for most of the increasing damage.
Three of the consensus statements agreed to by all participants, including Muir Wood, were:
  1. Analyses of long-term records of disaster losses indicate that societal change and economic development are the principal factors responsible for the documented increasing losses to date.
  2. Because of issues related to data quality, the stochastic nature of extreme event impacts, length of time series, and various societal factors present in the disaster loss record, it is still not possible to determine the portion of the increase in damages that might be attributed to climate change due to GHG emissions.
  3. In the near future the quantitative link (attribution) of trends in storm and flood losses to climate changes related to GHG emissions is unlikely to be answered unequivocally.
The first statement is accurately reflected in the IPCC's text, but the IPCC's claim of an underlying rising trend in normalized losses is exactly the opposite of the second.

The Muir Wood paper itself says that if you look at the period 1970-2005, you have an upward trend that can't be attributed to just societal factors. But 2005 was the year of Hurricane Katrina, and 1970-1974 was a period when the Atlantic was very quiet. If you look at 1950-2005, there is no trend, Pielke said. The IPCC not only took a single background paper from the workshop, they actually took a subset of the paper's data to draw their conclusion.

Pielke argued that the damage trends can't be due to storm intensity alone, based on a graph of major category 3, 4, and 5 hurricanes vs. year. The 177 U.S. coastal counties have seen huge population growth--for example, the population of Harris County, Texas in 2005 was equal to the entire U.S. coastal population from Florida to South Carolina in 1955.

He showed comparison photos of Miami Beach in 1926 vs. 2006, and then a graph of the estimated amount of U.S. damage per year if every hurricane season had occurred with 2005 population levels. That graph shows a huge spike in 1926, when a big hurricane hit Miami; it would have been 1.5 to 2 times the damage of Katrina. 2004 and 2005 were also years of very high damage, though not as high as 1926.
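(For readers unfamiliar with what "normalization" means here, this is roughly the shape of the calculation as I understand it from Pielke et al. 2008--scale each historical loss up by how much prices, real wealth per person, and coastal-county population have grown since the loss year. The specific numbers below are invented for illustration, not taken from the paper.)

# A minimal sketch of this style of hurricane loss normalization. The three
# adjustment factors mirror the inflation / wealth-per-capita / population
# structure described for Pielke et al. (2008); the example values are made up.

def normalized_damage(damage, deflator_ratio, wealth_per_capita_ratio, population_ratio):
    """Scale a historical loss to a common base year.

    deflator_ratio          = price level in base year / price level in loss year
    wealth_per_capita_ratio = real wealth per capita in base year / in loss year
    population_ratio        = affected coastal-county population in base year / in loss year
    """
    return damage * deflator_ratio * wealth_per_capita_ratio * population_ratio

# Hypothetical: a $100M (nominal) loss in 1926, normalized to 2005 conditions.
loss_2005_dollars = normalized_damage(
    damage=100e6,
    deflator_ratio=10.5,          # prices roughly 10x higher (assumed)
    wealth_per_capita_ratio=5.0,  # real wealth per person ~5x higher (assumed)
    population_ratio=30.0,        # Miami-area population ~30x larger (assumed)
)
print(f"normalized loss: ${loss_2005_dollars / 1e9:.1f} billion")  # ~$157 billion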

The trend, Pielke said, is no statistical change in damage since 1900, which is consistent with the physical characteristics of hurricanes at landfall over that same time period. Other signals do show up in the data, such as El Nino: when the Pacific is cold you get more hurricanes, and when it's warm you get fewer. 1927-1969 was a very active period for hurricanes; the 1970s and '80s were not very active. He said there have been two independent replications of the same results with different data sets and methodologies, and that insurance and reinsurance companies use this work for their pricing models.

His summary slide said this:
  • Raw damages are increasing.
  • Normalized damages show no trend, consistent with the lack of trend in landfall.
  • Increases in inflation, wealth, and development along the coastline account for increasing damages.
  • While coastal development in hurricane-prone regions is increasing, in aggregate it appears to be proportional to the rest of the United States, with large local variations.
It occurred to me that one factor that might counteract a genuine increase in storm intensity with respect to damage would be better construction, but I didn't raise the issue since I figured it would have been unlikely for such a factor to exactly offset storm intensity increases so that there was no trend. Afterward, though, I found this paper by Judy Curry (PDF) which argues that improvements in building codes have just such an effect, and that the pre-1930 data Pielke uses was a time of inflated property values before the Great Depression, and if you take it out you get an upward trend again.

In response to a student question about whether probabilities of landfall have changed, Pielke said that the overall odds of hurricane landfall are unchanged within the data set (though there are subsets where it is different) and that studies of the west coast of Mexico, South Korea, China, Japan, Southeast Asia, Africa, and Madagascar show no regions where hurricane landfalls have increased.

He reported three other studies which have shown no upward trend in normalized weather losses--a study of his own done with the head of the Cuba Weather Service, for 1900-1998 (Pielke et al., 2003), one for Australia for 1965-2005 (Crompton & McAneney, 2007), and one for India for a time period I didn't catch (Raghavan & Sen Sarma, 2003). He said there are about 15 other studies of the same sort, and that Laurens Bouwer of the Free University of Amsterdam has a review paper of all of these studies that is under review for publication.

When you look at U.S. flood losses, after adjusting for societal factors, there has been a slight (not statistically significant) downward trend in losses.

Pielke then said that he took a bunch of weather loss data sets, standardized them, took ten-year averages and overall averages, and then put them all on top of each other. These data sets included Munich Re's global losses for 1979-xx (I didn't catch the end year), U.S. flood losses, and Australian weather losses. While Munich Re's global losses correlate strongly with U.S. hurricane losses (0.80, 64% of the variance in global losses explained by U.S. losses), Pielke said, "there's no secular trend over the time period for which we have these data sets."

Regarding hurricanes, however, Pielke said his data is consistent with hurricanes becoming more intense. He referred to Kerry Emanuel's 4 August 2005 paper in Nature, titled "Increasing destructiveness of tropical cyclones over the past 30 years," which was featured in "An Inconvenient Truth." He showed a graph from the paper showing that windspeed cubed, expressed as the power dissipation index (PDI), has increased. Pielke noted that this is not a measure of "destructiveness," and the paper says nothing about destruction caused by hurricanes. He broke the Atlantic basin into five equal compartments with an equal number of observations of hurricane intensity (windspeed measurements) from the 1880s to the present, for all named storms, 39 knots and higher. He found that the strongest upward trends are farthest out to sea, and no trends in the locations where damage actually occurs.
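(For reference, Emanuel's PDI is essentially the cube of a storm's maximum sustained wind speed integrated over its lifetime and summed across storms. Here's a minimal sketch of that calculation; the wind observations are made up, and the 6-hourly interval is my assumption, mirroring typical best-track data.)

# A minimal sketch of a power dissipation index (PDI) calculation: sum the cube
# of each 6-hourly maximum wind observation over a storm's lifetime, then sum
# over all storms in a season. The wind speeds below are invented.

def storm_pdi(winds_m_per_s, dt_seconds=6 * 3600):
    """PDI contribution of one storm from its 6-hourly max winds (m/s)."""
    return sum(v ** 3 for v in winds_m_per_s) * dt_seconds

def seasonal_pdi(storms):
    return sum(storm_pdi(w) for w in storms)

season = [
    [20, 25, 30, 28, 22],              # a short, weak storm (hypothetical)
    [25, 35, 45, 55, 60, 58, 50, 40],  # a longer, stronger storm (hypothetical)
]
print(f"seasonal PDI: {seasonal_pdi(season):.2e} m^3 s^-2")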

He said he did the same with Emanuel's graph and got the same result, that all of the trends are out to sea. So, he argued, Emanuel's results could be due to real changes in storm intensity as a result of ocean temperature changes, or they could be due to increased storm counts due to more and better data collection out at sea. He submitted a letter to Geophysical Research Letters reporting his result, which was rejected with negative reviews that said "everybody already knows this." But, Pielke said, Emanuel didn't know it until he pointed it out to him.

Next, Pielke talked about the U.S. CCSP Report, which spanned the Bush and Obama administrations. This report said the following about U.S. extreme weather events:
  1. Over the long-term U.S. hurricane landfalls have been declining.
  2. Nationwide there have been no long-term increases in drought.
  3. Despite increases in some measures of precipitation, there have not been corresponding increases in peak streamflows (>90th percentile).
  4. There have been no observed changes in the occurrence of tornadoes or thunderstorms.
  5. There have been no long-term increases in strong East Coast winter storms, called Nor’easters.
  6. There are no long-term ...
With these conclusions, he said, you'd expect no claims of increasing losses from damage. But the report says:
Extremes are already having significant impacts on North America. ... both the climate and the socioeconomic vulnerability to weather and climate extremes are changing (Brooks and Doswell, 2001; Pielke et al., 2008; Downton et al., 2005).
Two of the three papers cited have Pielke as author or co-author, and the third applies his sort of methodology to tornadoes. The Harold E. Brooks and Charles A. Doswell III 2001 paper says: "We find nothing to suggest that damage from individual tornadoes has increased through time, except as a result of the increasing cost of goods and accumulation of wealth in the United States." The Pielke et al. 2008 paper finds no trends in absolute data or under a logarithmic transformation. The Downton, Miller, and Pielke 2005 paper talks about the National Weather Service flood loss database, says absolutely nothing about climate change, and shows a drop in losses. So of the three cited papers for the claim, two say the opposite of the claim and one is silent. Pielke says there is no published study that supports the claim. When he made a stink about this, he said he ended up being called a climate change denier. The IPCC and CCSP are supposed to be places we go to get reliable information, he said, and "I'm much more willing to listen to others who say their work was misrepresented since I know mine was."

In 2000, he co-authored an article with Dan Sarewitz on "Breaking the Global Warming Gridlock" in The Atlantic Monthly that argued for getting people engaged with adaptation rather than focusing exclusively on mitigation. After that came out, he says he was told privately by a representative of an environmental group that "we agree with what you say, but it's not helpful now because we're trying to win a [political] battle on mitigation."

He pointed out two recent cases of people in government being silenced for speaking out contrary to policy: David Nutt, the UK drug policy advisor, who was fired for saying that taking ecstasy carries risk comparable to riding horses (and that ecstasy is safer to give to a stranger than peanuts), and Clive Spash, an economist at Australia's CSIRO, who submitted a paper critical of cap and trade to a journal; it was accepted for publication but withdrawn after his supervisor wrote to the journal and asked for it to be retracted.

He asked, "If the public loses faith in the connection between authoritative scientific statements and policy, then what do we rely upon to make decisions?"

He suggested that we need to improve processes where there is potential for intellectual conflicts-of-interest, such as where people with a stake in an assessment highlight their own research over other research they don't favor. He thinks this doesn't seem to be a problem with IPCC working group I, but has been a problem with both working groups II and III and with the CCSP. In both of the cases he referred to regarding his own work, above, he said a single person was responsible (not the same person in both cases, but one in each).

I left about ten minutes before the end of the class and so missed any further wrapup, as I had to get to the opposite side of campus for another talk, by Robert B. Laughlin, one of the winners of the 1998 Nobel prize in physics.

UPDATE (24 September 2013): Michael Mann, on Twitter, called Pielke Jr.'s work on storm damage "deeply flawed work" because its "normalization procedure removes climate change signal," and pointed to this critique by Grinsted.

UPDATE (13 July 2014): An updated version of the information in this talk is found in Ch. 7 of Pielke Jr.'s book, The Climate Fix (2010, Basic Books).

Friday, November 06, 2009

Roger Pielke Jr. on climate change mitigation

Yesterday I heard Roger Pielke Jr. speak twice at Arizona State University, first in a talk to the Consortium for Science, Policy and Outcomes (CSPO) on climate change mitigation, and second in a class lecture on climate change adaptation. This post is about the former.

His talk was entitled "The Simple Math of Emissions Reduction," and began with a quote from Steve Rayner of Oxford University:
Wicked Problems
have Clumsy Solutions
requiring Uncomfortable Knowledge
which he then followed up with a slide on "Where I stand," which included the following bullet points (nearly, but probably not exactly verbatim):
  • Strong advocate for mitigation and adaptation policies
  • Continuing increase in atmospheric CO2 could pose large risks, as described by IPCC
  • Stabilizing concentrations at low levels can’t succeed if we underestimate the challenge (and we have)
  • Mitigation action will not result from elimination of all scientific uncertainty
  • Poisonous politics of the climate debate serves to limit a broader discussion of options
  • Ultimately technological innovation will precede political action, not vice versa
Regarding the IPCC, he says he has no debate with working group I on the science, some disagreements with working group II on impacts, adaptation, and vulnerability, and lots of debate with working group III on economics and mitigation, which this talk covers.

His slide for the outline of his talk looked like this:
  • Understanding the mitigation challenge
  • Where do emissions come from?
  • Decarbonization
  • The UK as a cautionary tale for U.S. policymakers
  • The U.S. situation and Waxman-Markey/Boxer-Kerry
  • How things might be different
Understanding the mitigation challenge

Although climate change involves other greenhouse gases besides CO2, he focused on CO2 and in this part of the talk gave a summary of CO2 accumulation in the atmosphere as a stock and flow problem, using a bathtub analogy. The inflow of CO2 into the atmosphere is like water pouring out of the faucet, there's outflow going out the drain, and the water in the tub is the accumulated CO2 in the atmosphere. The inflow is about 9 GtC (gigatons of carbon) per year and growing, and expected to hit 12 GtC per year by 2030. The current stock is a concentration of about 390 parts per million (ppm), increasing by 2-3 ppm/year. And the outflow is a natural removal of about 4 GtC/year. To stop the stock increase, the amount going in has to equal the amount going out. If we reach an 80% reduction in emissions by 2050, that is expected to limit the stock to 450 ppm.
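(Here's the bathtub arithmetic as a quick sketch, using the round numbers from the talk; the conversion factor of roughly 2.13 GtC per ppm of atmospheric CO2 and the simple linear ramp in emissions are my own added assumptions, not Pielke's.)

# A minimal sketch of the bathtub analogy: ~9 GtC/yr flowing in (rising toward
# ~12 by 2030), ~4 GtC/yr draining out, starting from a stock of ~390 ppm.
# GTC_PER_PPM is an assumption added for the unit conversion.

GTC_PER_PPM = 2.13  # approximate GtC per ppm of atmospheric CO2 (assumption)

def project_concentration(start_ppm=390.0, start_year=2010, end_year=2030,
                          inflow_start=9.0, inflow_end=12.0, outflow=4.0):
    ppm = start_ppm
    years = end_year - start_year
    for i in range(years):
        inflow = inflow_start + (inflow_end - inflow_start) * i / years  # GtC/yr
        ppm += (inflow - outflow) / GTC_PER_PPM                          # net ppm added this year
    return ppm

print(f"~{project_concentration():.0f} ppm by 2030 if nothing changes")
# The net increase works out to roughly 2-3 ppm/yr, the rate quoted in the talk.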

Emissions have been growing faster than expected by the IPCC in 2000, with a 3.3% average increase per year between 2000 and 2007. While the economic slump has reduced emissions in 2009, it's expected that recovery and continued growth in emissions will occur.

Where do emissions come from?

Pielke used the following four lines to identify policy-relevant variables:
People
engage in economic activity that
uses energy
from carbon-emitting generation
The associated variables:
Population (P)
GDP per capita (GDP/P)
Energy intensity of the economy (Total Energy (TE)/GDP)
Carbon intensity of energy (C/TE)
The total carbon emissions = P * GDP/P * TE/GDP * C/TE.

This formula is known as the "Kaya Identity."
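(As a quick illustration of the identity, here's a sketch with rough global values of my own choosing--they aren't numbers from the talk, but they come out near the ~9 GtC/year inflow from the bathtub analogy.)

# A minimal sketch of the Kaya identity as laid out above: emissions as the
# product of population, per-capita GDP, energy intensity of the economy, and
# carbon intensity of energy. All four input values are my own rough
# illustrative assumptions.

def kaya_emissions(population, gdp_per_capita, energy_per_gdp, carbon_per_energy):
    """Total carbon emissions = P * (GDP/P) * (TE/GDP) * (C/TE)."""
    return population * gdp_per_capita * energy_per_gdp * carbon_per_energy

emissions_gtc = kaya_emissions(
    population=6.7e9,           # people (assumed)
    gdp_per_capita=10_000,      # dollars per person per year (assumed)
    energy_per_gdp=8.0,         # MJ of primary energy per dollar of GDP (assumed)
    carbon_per_energy=17e-15,   # GtC per MJ, i.e. ~17 grams of carbon per MJ (assumed)
)
print(f"{emissions_gtc:.1f} GtC/yr")  # ~9 GtC/yr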

The policy tools available to reduce emissions by affecting these variables are: (1) population management to end up with fewer people, (2) limit the generation of wealth to have a smaller economy, (3) do the same or more with less energy by increasing efficiency, and (4) switch energy sources to generate energy with less emissions.

And that's it. Cap-and-trade, carbon taxes, etc. are designed to influence these variables.

Pielke then combined the first two variables (P * GDP/P) to get GDP, and the second two (TE/GDP * C/TE) he identified as Technology.

He argued that reducing GDP or GDP growth is not a policy option, so Technology is the only real policy option. Regarding the former point, he put up a graph very much like the Gapminder.org graph of world income, and observed that the Millennium Development Goals are all about pushing the people below $10/day--80% of the world's population--on that graph to the right. Even if all of the OECD nations were removed from the graph, there would still be a push to increase the GDP for the remainder and there would still be growing emissions.

He quoted Gwyn Prins regarding the G8 Summit to point out how policy makers are conflicted--they had a morning session on how to reduce gas prices for economic benefit, and an afternoon session on how to increase gas prices for climate change mitigation.

With this kind of a conflict, Pielke said, policy makers will choose GDP growth over climate change.

So that leaves Technology as an option, and he turned to the topic of decarbonization.

Decarbonization

Pielke put up a graph of global CO2 emissions per $1,000 of GDP over time, which showed a steady improvement in efficiency. In 2006, emissions were 29.12 Gt of CO2; divided by $47.267 trillion of GDP, that gives 0.62 tons of CO2 per $1,000 of GDP. In 1980, the figure was above 0.90 tons of CO2 per $1,000 GDP.

Overall emissions track GDP, and the global economy has become more and more carbon intensive.

He looked at carbon dioxide per GDP (using purchasing power parity (PPP) for comparison between countries) for four different countries, Japan, Germany, U.S., and China (that's ordered from most to least efficient). Japan hasn't changed much over time, but is very carbon efficient (below 0.50 tons of CO2 per $1,000 GDP). Germany and the U.S. are about the same slightly above 0.50 tons of CO2 per $1,000 GDP, and both have improved similarly over time. China has gotten worse from 2002-2006 and is at about 0.75 tons of CO2 per $1,000 GDP.

He put up a slide of the EU-15 countries' decarbonization rates pre- and post-Kyoto Protocol, and though there was a gap between them, the slopes appeared to be comparable. For the first ten years of Kyoto, then, he said, there's no evidence of any improvement in the background rate of decarbonization. The pre-Kyoto rate went from above 0.55 tons of CO2 per $1,000 GDP to about 0.50 tons of CO2 per $1,000 GDP; the post-Kyoto rate went from about 0.50 tons of CO2 per $1,000 GDP to below 0.45 tons of CO2 per $1,000 GDP.

At this point, Clark Miller (head of my program in Human and Social Dimensions of Science and Technology) pointed out that given Japan, there is no reason to assume that there should have been a continuing downward trend at all, but Pielke reiterated that since the slopes appeared to be the same there's no evidence that Kyoto made a difference.

The UK as a cautionary tale for U.S. policymakers

Pielke identified the emissions targets of the UK Climate Change Act of 2008:

Average annual reductions of 2.8% from 2007 to 2020, to reach 42% below 1990 levels by 2020.

Average annual reductions of 3.5% from 2020, to reach 80% below 1990 levels by 2050.

The former target of 42% below 1990 levels is contingent upon COP15 reaching an agreement this December; otherwise the unilateral target is 34% below 1990 levels.

Pielke showed a graph of the historical rate of decarbonization of the UK economy, and compared it to graphs of manufacturing output and manufacturing employment, observing that the success of decarbonization of the UK economy from 1980-2006 has been due primarily to the offshoring of manufacturing--something that's not sustainable, since once manufacturing reaches zero, there's nowhere further down to go.

He then used France as a point of comparison, since it has the lowest CO2/GDP output of any developed country, due to its use of nuclear power for most of its electricity--it's at 0.30 tons of CO2 per $1,000 GDP, and a lot of that is emissions from gasoline consumption for transportation.

It took France about 22 years, from 1984-2006, to get its emissions to that rate.

For the UK to hit its 2020 target, it needs to improve to France's rate in the next five years, by 2015. That means building 30 new nuclear power plants and reducing the equivalent coal and gas generation; Pielke said he would "go out on a limb" and say that this won't happen.

That will only get them 1/3 of the way to their 2020 goals.

The UK plan calls for putting 1.7 million electric cars on the road by 2020, which means doubling the current rate of auto sales and selling only electric cars.

For the entire world to reach France's level of efficiency by 2015 would require a couple of thousand nuclear power plants.
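(A rough check of why he's willing to go out on that limb: using the round intensity figures above, the implied annual rate of decarbonization for the UK is several times the historical 1-2% per year he cites later. Treating 2009 as the starting year is my own assumption.)

# How fast would the UK's carbon intensity have to fall to reach France's level?
# Intensities (tons CO2 per $1,000 GDP) are the round numbers from the talk.

def annual_decline(start_intensity, end_intensity, years):
    """Constant annual rate of decline needed to get from start to end."""
    return 1 - (end_intensity / start_intensity) ** (1 / years)

uk_now, france_now = 0.50, 0.30
print(f"reach France's intensity by 2015: {annual_decline(uk_now, france_now, 6):.1%}/yr")   # ~8.2%/yr
print(f"reach France's intensity by 2020: {annual_decline(uk_now, france_now, 11):.1%}/yr")  # ~4.5%/yr
# Historical decarbonization rates have been more like 1-2%/yr.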

The U.S. situation and Waxman-Markey/Boxer-Kerry

The U.S., said Pielke, has had one of the highest rates of sustained decarbonization, from 1980-2006, going from over 1.00 tons of CO2 per $1,000 GDP to the current level of about 0.50 tons of CO2 per $1,000 GDP.

The Waxman-Markey target is an 80% reduction by 2050, not quite as radical as the UK.
The Boxer-Kerry target is a 17% reduction by 2020.

Pielke broke down the current U.S. energy supply by source in quadrillions of BTUs (quads), and pointed out that he got all of his data from the EIA and encouraged people to look it up for themselves:
Petroleum: 37.1
Natural gas: 23.8
Coal: 22.5
Renewable: 7.3
Nuclear: 8.5
Total energy was about 99.2 quads in 2007, of which 83.4 came from coal, natural gas, and petroleum.

Emissions by source:
Coal: 95 MMt CO2/quad
Natural gas: 55 MMt CO2/quad
Petroleum: 68 MMt CO2/quad
Multiply those by the amount of energy produced by each source and add them up:
95 * 22.5 + 55 * 23.8 + 68 * 37.1 = 5,969 MMt CO2
The actual total emissions were at about 5,979, so the above back-of-the-envelope calculation was pretty close.
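(The same back-of-the-envelope calculation, spelled out with the figures from his slides:)

# Multiply each fossil source's 2007 consumption (quads) by its emission factor
# (MMt CO2 per quad) and sum, reproducing the ~5,969 MMt figure above.

consumption_quads = {"coal": 22.5, "natural gas": 23.8, "petroleum": 37.1}
emission_factors = {"coal": 95, "natural gas": 55, "petroleum": 68}  # MMt CO2 per quad

total = sum(consumption_quads[f] * emission_factors[f] for f in consumption_quads)
print(f"estimated 2007 U.S. CO2 emissions: {total:,.0f} MMt")  # ~5,969 vs. ~5,979 actual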

In 2009, U.S. energy consumption will be about 108.6 quads, of which 21 quads will come from renewables and nuclear (40% growth from 2007), which leaves 87.2 quads from fossil fuels, a 4.6% increase from 2007.

If we substituted natural gas for all coal, then our 2020 emissions would be 5,300 MMt CO2, higher than the 2020 target and 12% below 2005, and would still lock us into a carbon intensive future.

In order to meet targets, we need to reduce coal consumption by 40%, or 11 quads, and replace that with renewables plus nuclear, plus an additional 3.8 quads of growth by 2020.

One quad equals about 15 nuclear plants, so 14.8 quads means building 222 new nuclear plants (on top of the 104 that are currently in the U.S.).

Or, alternatively, assuming 100 concentrated solar power installations * 30 MW peak per quad, 1,480 such installations for 14.8 quads, or one online every two days until 2020.

Or, assuming 37,500 * 80 kW peak wind turbines per quad, 555,000 such wind turbines for 14.8 quads, or one 150-turbine wind farm brought online daily until 2020.

To reach these targets with wind and solar would require increasing them by a factor of 37 by 2020; Obama has promised only a tripling.

Could we meet the targets by increasing efficiency of our energy consumption? We would have to reduce total energy consumption to 85.5 quads by 2020 (rather than 108.6), about equal to U.S. energy consumption in 1992, when the U.S. economy was 35% smaller than in 2007. That would be improving efficiency by about a third.

How fast can decarbonization occur? We don't know, because no one has really set out to intentionally do that. Historical rates have been 1-2% per year by developed countries; for short periods, some countries have exceeded 2% per year. Japan, from 1981-1986, improved by over 4% per year.

Pielke argued that these targets are not feasible in the U.S. or UK, and so policy makers are adding safety valves, offsets, and other mechanisms that allow enough manipulation to give the appearance of success. Achieving an 80% reduction in global emissions by 2050 requires >5% decarbonization per year.
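(A rough check of that ">5% per year" figure; the ~2.5%/year of assumed global GDP growth is my own number, not his.)

# If emissions must fall 80% by 2050 while GDP keeps growing, the carbon
# intensity of GDP has to fall fast enough to offset both. The GDP growth rate
# below is an assumption for illustration.

emissions_cut = 0.80   # 80% below today's emissions by 2050
years = 41             # 2009 -> 2050
gdp_growth = 0.025     # assumed average global GDP growth per year

remaining = 1 - emissions_cut
required_emissions_decline = 1 - remaining ** (1 / years)
required_decarbonization = 1 - (remaining ** (1 / years)) / (1 + gdp_growth)

print(f"emissions must fall about {required_emissions_decline:.1%} per year")            # ~3.9%/yr
print(f"carbon intensity of GDP must fall about {required_decarbonization:.1%} per year")  # ~6%/yr
# Compare with historical decarbonization rates of 1-2% per year.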

The problem, Pielke argued, is that the policy logic of targets and timetables is backwards, and we should focus on improving efficiency and decarbonization rather than emissions targets.

How things might be different

Pielke's suggested alternative strategy was presented in a slide something like this:
  • Focus policy on decarbonization of the economy (not simply emissions)
  • Efficiency gains (follow the Japanese model, “frontrunner program” by industry, look at best performer and set it as regulatory standard)
  • Expand carbon free energy (low carbon tax, other policies--subsidies, regulation, etc.)
  • Innovation-focused investments
  • To create ever advancing frontier of potential efficiency gains
  • Air capture backstop
  • Adaptation
The Japanese "frontrunner" program was where the government went industry by industry, identified the most efficient company in each industry, and set regulations to make that company the baseline standard for the other companies to meet.

Pielke argued that there should be a carbon tax of, say, $5/ton (or whatever is the "highest price politically possible"), with the collected funds (that would raise about $700B/year) used to promote innovation in energy efficiency.

If we find that we're stabilizing at 635 ppm, we may want to "brute force" some removal of carbon from the atmosphere (e.g., geoengineering).

In the Q&A session, Clark Miller questioned Pielke about the impossibility of replacing our energy infrastructure quickly--if it costs $2.61B for a 1400 MW nuclear plant, we'd need 65 of them (fewer than Pielke's number, he assumed smaller plants) at a cost of $260B. Since there is capital floating around causing asset bubbles in the trillions, and the energy industry is expected to become a $15T industry, surely there would be some drive to build them if they're going to become profitable. (Not to mention peak oil as a driver.) He agreed that it would take longer to construct these, but asked what the upshot would be if this was done by, say, 2075.

Also in the Q&A, Pielke pointed out that in a previous presentation of this talk, a philosophy professor had suggested that the population variable could be affected by handing out cyanide pills. (Or by promoting the growth of the Church of Euthanasia.) What I didn't mention above was that Pielke also briefly discussed improvements to human lifespan, and in his other talk (summary to come), he talked about how the IPCC's projections assume that we will not try to eradicate malaria...

ADDENDUM (November 7, 2009): I've seen estimates that U.S. carbon emissions will be about 6% lower in 2009 as a result of the recession, which amounts to considerable progress towards the Boxer-Kerry target. Projections of an economic recovery in 2010 strike me as overly optimistic; in my opinion there's a strong possibility that we haven't hit bottom yet and there's worse to come. Still, though, I think Pielke's probably right that energy consumption will go right back up again unless the recession becomes a depression and results in significant changes in consumption habits.

My summary of Pielke's lecture on climate change adaptation is here.

ADDENDUM (November 9, 2009): It should be noted that Roger Pielke, Jr. is a somewhat controversial figure in the climate change debate, believed by many in the climate change blogosphere to be in the climate change skeptic camp, or at least to be biased towards it in terms of where he levels his criticisms. A post titled "Who Framed Roger Pielke?" from the Only In It For the Gold blog links to a number of opinions expressing these views.

UPDATE (February 5, 2010): A post titled "The Honest Joker" at Rabett Run argues that Pielke Jr.'s stance as an "honest broker" is a sham.

UPDATE (August 28, 2010): A talk by Pielke that appears to have some similarity to this one may be found here.

UPDATE (July 13, 2014): An updated version of the information in this talk is Ch. 3 of Pielke Jr.'s book, The Climate Fix (2010, Basic Books).

Charles Phoenix's retro slide show--in Phoenix


Tonight and tomorrow night at 8 p.m., Charles Phoenix will bring his Retro Slide Show Tour to the Phoenix Center for the Arts at 1202 N. 3rd St.

I've not seen his show before, but I've enjoyed his blog's slide-of-the-week feature and plan to go see this.

Here's the official description:

A laugh-out-loud funny celebration of '50s and '60s road trips, tourist traps, theme parks, world's fairs, car fashion fads, car culture and space age suburbia, will also include a selection of vintage images of the Valley of the Sun.

Click the above link for more details or to buy tickets.