
Sunday, August 04, 2024

Tips on using OpenBSD's pledge and unveil in perl scripts

OpenBSD 5.9 (current as of this post is 7.5) introduced the "pledge" system call, and 6.4 introduced the "unveil" system call; together they give processes a way to enforce least privilege by restricting their own access to the system. When a program calls "pledge", it provides a list of categories of system calls (called "promises") that it plans to make during the life of the running process (children have to make their own pledges and are not restricted); an attempt to make a call outside of those categories causes the call to be blocked and the process to be killed. Additional calls to pledge cannot add new promises, but they can remove them, so access can become more restrictive but never less restrictive.

  "Unveil," by contrast, selectively exposes parts of the file system, by file path, with specific access, and the rest of the file system is correspondingly "veiled" or blocked from access. Successive calls to unveil can expand or override previous ones, expanding access to the file system, adding write and create permissions where there was previously read only, but only until unveil is called with no arguments, which locks the current state in place. Further attempts to call unveil after that result in a violation.

Violations of pledges or attempts to access file paths that are not unveiled show up in process accounting logs for the process with the process flags "P" or "U", respectively.  (My "reportnew" log monitoring script knows how to monitor process accounting logs and can be easily set up to report on such violations.)

Perl scripts on OpenBSD can also use pledge and unveil, with two modules provided in the base operating system, "OpenBSD::Pledge" and "OpenBSD::Unveil".  I've been adding this functionality to several of my commonly used scripts and have learned a few tips that I'd like to share.
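Here's a minimal sketch of what the two modules look like in a script (the promises, paths, and permissions below are illustrative, not taken from any particular script):

#!/usr/bin/perl
use strict;
use warnings;
use OpenBSD::Pledge;
use OpenBSD::Unveil;

# Expose only what the script needs, then lock the view of the file system.
unveil("/etc/myscript.conf", "r") or die "unveil: $!";   # hypothetical config file
unveil("/var/myscript", "rwc") or die "unveil: $!";      # hypothetical work directory
unveil() or die "unveil lock: $!";

# "stdio" is always included; the "unveil" promise would only be needed if
# unveil() were going to be called again after this point.
pledge(qw( rpath wpath cpath )) or die "pledge: $!";

# ... the rest of the script now runs with reduced privileges ...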

Pledge:

* Check your call to pledge for errors.  If you typo the name of a promise (category of system calls), or you pass pledge a single string of comma-separated promises instead of an array or list, the call will fail and nothing will be pledged (see the sketch after this list).

* If you don't have any idea what promises are required, just use "error".  With the error promise, instead of blocking the system call and killing the process, the result is logged to /var/log/messages and you can see what promises are required.

* The "stdio" promise is always included with OpenBSD::Pledge, so you don't need to list it.

* The "unveil" promise is required if you intend to use OpenBSD::Unveil.

* Calls to exec or system require the "proc" and "exec" promises; the new processes created as a result are not restricted and need to make their own use of pledge and unveil.  (Note: this means that if you are calling a system command that writes to a file, but your script doesn't otherwise write to files, you do not need to pledge the "wpath" promise in your script.)

* If you otherwise fork a child process (e.g., explicitly using "fork" or Parallel::ForkManager or implicitly forking a child process using "open" to read from or write to a command), the promises pledged by the parent process are carried over to the child, which can then restrict them further. (Hat tip to Bryan Steele, who pointed this out on Bluesky without specifically referring to the Perl context.)

* If you use the DBI perl module with mariadb and are accessing a database through a named pipe on the same server, you'll need to pledge the "unix", "inet", and "prot_exec" promises. (This works even from a chroot jail if the named pipe or socket is a hard link from the jail.)

* This isn't a tip, but an observation: if you promise "proc" but not "exec," your system call will fail but your process will not be killed and the script will continue running.
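Here's a short sketch illustrating a few of the pledge tips above (the promises and the command being run are illustrative):

use OpenBSD::Pledge;

# Pass a list of promises and check the result; a single comma-separated
# string like "rpath,proc,exec" would fail, leaving nothing pledged.
pledge(qw( rpath proc exec )) or die "pledge failed: $!";

# While still working out which promises a script needs, add "error" so that
# violations are logged to /var/log/messages rather than killing the process:
# pledge(qw( error rpath )) or die "pledge failed: $!";

# system() needs both "proc" and "exec"; the child process is unrestricted
# and is responsible for its own pledge and unveil calls.
system("/usr/local/bin/somecommand", "-v") == 0
    or warn "somecommand exited with $?";    # hypothetical command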

Unveil:

* If you make use of other perl modules in your code with "use", they are loaded prior to your call to unveil and so you don't need to unveil directories like /usr/libdata/perl5 in order to use them. The exception is perl modules that include compiled shared objects (".so"), or which use "require" on other modules (loading them at runtime), in which case you do need to unveil such directories, but only with "r" permission (see the sketch after this list).

* If you use the DBI perl module with mariadb, you will need to unveil /var/run/mysql with "rw" and /usr/lib and /usr/local/lib with "rx".

* If you use calls to "system" or "open" which use pipes, globs, or file redirection, you need to unveil "/bin/sh" with "x" permission. You may be able to rewrite your code to avoid the requirement--can you call "system" with a command name and list of arguments rather than a string, and do any processing you need in your program instead of with the shell?

* If you use calls to "system" to execute system commands, you need to unveil them with "x" permission but in most cases you don't need to include "r".

* It is often much easier to unveil a directory rather than individual files; if you plan to check for the existence of a file and then create it if it doesn't exist, you need "rwc" on the containing directory.

* One of the biggest challenges is sometimes finding the source of an unveil violation; unveiling "/" with various permissions to see whether the violation goes away, then removing that and testing individual directories under the root directory by trial and error, can help track it down. That's how I first found the need to unveil "/bin/sh".
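Here's a sketch pulling several of these unveil tips together (aside from the paths named above, the paths and commands are illustrative):

use OpenBSD::Pledge;
use OpenBSD::Unveil;

# "unveil" is pledged because unveil() is called after pledge() here.
pledge(qw( rpath wpath cpath proc exec unveil )) or die "pledge: $!";

# Modules pulled in with "use" are already loaded, but modules with compiled
# .so objects (or that "require" others at runtime) need their directory readable:
unveil("/usr/libdata/perl5", "r") or die "unveil: $!";

# DBI with mariadb over the local named pipe:
unveil("/var/run/mysql", "rw") or die "unveil: $!";
unveil("/usr/lib", "rx") or die "unveil: $!";
unveil("/usr/local/lib", "rx") or die "unveil: $!";

# Commands run via system() need "x" but usually not "r"; pipes, globs, or
# shell redirection also require /bin/sh:
unveil("/usr/bin/logger", "x") or die "unveil: $!";    # hypothetical command
unveil("/bin/sh", "x") or die "unveil: $!";

# A directory is often easier than individual files; "rwc" covers checking
# for a file and creating it if it doesn't exist:
unveil("/var/db/myscript", "rwc") or die "unveil: $!";    # hypothetical directory

unveil() or die "unveil lock: $!";    # no arguments: lock the current state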


Finally, if you are writing perl modules it's helpful to document which promises need to be pledged and which files and directories need to be unveiled by the calling scripts in order for them to function. It would be inappropriate to pledge or unveil within the module itself except in a context like a forked child process. I've done this with my Signify.pm wrapper for the OpenBSD "signify" command for signing and verifying files with detached signatures or gzip archives with embedded signatures in the gzip header comments.

If you've made use of pledge and unveil--in perl scripts or otherwise--what lessons have you learned?


Wednesday, April 19, 2023

Google Bard on file system immutability flags in OpenBSD

 Unlike ChatGPT, Bard pretty much nailed it on the initial prompt. (However, Bard refused to answer my questions about the Mars Effect like the ones I gave to ChatGPT, even when reworded to be clear I was asking for descriptions of what the effect is and what explanations are offered rather than asking it to identify causes itself.)

It does lose points for grammar in the second sentence, though!

write a few paragraphs explaining immutable flags in OpenBSD including the commands for setting them and the difference between system and user immutable flags

Sure. Here is a few paragraphs explaining immutable flags in OpenBSD, including the commands for setting them and the difference between system and user immutable flags.

Immutable Flags

Immutable flags are a security feature in OpenBSD that prevent files from being modified, moved, or deleted. They can be set on any file by the superuser, or on files owned by the current user with the chflags command.

To set the immutable flag on a file, use the following command:

chflags schg filename

To unset the immutable flag, use the following command:

chflags nouchg filename

System and User Immutable Flags

There are two types of immutable flags: system immutable and user immutable. System immutable flags are set by the superuser and can only be cleared by the superuser at security level 0 or -1. User immutable flags are set by the owner of the file and can be cleared by the owner of the file at any time.

Commands for Setting Immutable Flags

The following commands can be used to set immutable flags:

  • chflags schg filename: Sets the system immutable flag on filename.
  • chflags uappnd filename: Sets the user append-only flag on filename.

Difference Between System and User Immutable Flags

The main difference between system and user immutable flags is that system immutable flags can only be cleared by the superuser at security level 0 or -1, while user immutable flags can be cleared by the owner of the file at any time.

Example

The following example shows how to set the immutable flag on a file and then clear it:

# Set the immutable flag on the file /etc/passwd
chflags schg /etc/passwd

# Clear the immutable flag on the file /etc/passwd
chflags nouchg /etc/passwd

Benefits of Using Immutable Flags

Immutable flags can be used to improve the security of your system by preventing unauthorized changes to critical files. For example, you could use immutable flags to protect system configuration files, such as /etc/passwd and /etc/shadow, from being modified by attackers.

Disadvantages of Using Immutable Flags

There are a few disadvantages to using immutable flags. First, they can make it difficult to troubleshoot problems with your system. If a file that is set to immutable is corrupted, you will not be able to modify it to fix the problem. Second, immutable flags can slow down your system performance, especially if a large number of files are set to immutable.

Conclusion

Immutable flags are a powerful security feature that can be used to protect your system from unauthorized changes. However, it is important to weigh the benefits and disadvantages of using immutable flags before deciding whether or not to use them on your system.

Friday, January 01, 2021

Books read in 2020

Not much blogging going on here still, but here's my annual list of books read for 2020.
  • Nicholson Baker, Baseless: My Search for Secrets in the Ruins of the Freedom of Information Act
  • John Bolton, The Room Where It Happened: A White House Memoir
  • Ben Buchanan, The Hacker and the State: Cyber Attacks and the New Normal of Geopolitics
  • Susannah Cahalan, The Great Pretender: The Undercover Mission That Changed Our Understanding of Madness
  • Michael Cohen, Disloyal: The True Story of the Former Personal Attorney to President Donald J. Trump
  • Myke Cole, Legion versus Phalanx: The Epic Struggle for Infantry Supremacy in the Ancient World
  • Libby Copeland, The Lost Family: How DNA Testing Is Upending Who We Are
  • Barton Gellman, Dark Mirror: Edward Snowden and the Surveillance State
  • Fiona Hill and Clifford G. Gaddy, Mr. Putin: Operative in the Kremlin (2012)
  • James W. Johnson, Arizona Politicians: The Noble and the Notorious (2002)
  • Gene Kim, The Unicorn Project: A Novel about Developers, Digital Disruption, and Thriving in the Age of Data
  • Maria Konnikova, The Biggest Bluff: How I Learned to Pay Attention, Master Myself, and Win
  • Talia Lavin, Culture Warlords: My Journey Into the Dark Web of White Supremacy
  • Carol D. Leonnig and Philip Rucker, A Very Stable Genius: Donald J. Trump's Testing of America
  • Ben Macintyre, The Spy and the Traitor: The Greatest Espionage Story of the Cold War (2018)
  • Nancy MacLean, Democracy in Chains: The Deep History of the Radical Right's Stealth Plan for America (2017)
  • H. Keith Melton and Robert Wallace, with Henry R. Schlesinger, Spy Sites of New York City: A Guide to the Region's Secret History (2020)
  • Jefferson Morley, Morley v. CIA: My Unfinished JFK Investigation
  • Bastian Obermayer and Frederik Obermaier, The Panama Papers: Breaking the Story of How the Rich & Powerful Hide Their Money
  • Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare
  • Brad Smith and Carol Anne Browne, Tools and Weapons: The Promise and Peril of the Digital Age
  • Mary Trump, Too Much and Never Enough: How My Family Created the World's Most Dangerous Man
  • Robert Wallace and H. Keith Melton with Henry R. Schlesinger, Spy Sites of Washington, DC: A Guide to the Capital Region's Secret History (2017)
  • Anna Wiener, Uncanny Valley: A Memoir
  • Isabel Wilkerson, Caste: The Origins of Our Discontents
    Top for 2020: Copeland, Macintyre, Cahalan, Smith and Browne, Buchanan, Obermayer and Obermaier, Gellman, Rid.

    I started the following books I expect to finish in 2021 (yes, I also said that about LaFeber and Wilson last year--I'm well into LaFeber's book and thought I might finish before the end of the year, but had only read Wilson's intro so it's barely started):

    William Dalrymple, The Anarchy: The East India Company, Corporate Violence, and the Pillage of an Empire
    Walter LaFeber, Inevitable Revolutions: The United States in Central America (2nd edition)
    Peter H. Wilson, The Holy Roman Empire: A Thousand Years of Europe's History

    I've also pre-ordered and am looking forward to reading:

    Nicole Perlroth, This Is How They Tell Me the World Ends: The Cyberweapon Arms Race (due to be published on February 9)

    (Previously: 2019, 2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005.)

    Wednesday, January 01, 2020

    Books read in 2019

    Not much blogging going on here still, but here's my annual list of books read for 2019.
    • Graham T. Allison, Destined for War: Can America and China Escape Thucydides's Trap?
    • Ross Anderson, Security Engineering (3rd edition, draft chapters)
    • Herbert Asbury, The Barbary Coast: An Informal History of the San Francisco Underworld
    • Heidi Blake, From Russia with Blood: The Kremlin's Ruthless Assassination Program and Vladimir Putin's Secret War on the West
    • Rutger Bregman, Utopia for Realists: How We Can Build the Ideal World
    • Oliver Bullough, Moneyland: The Inside Story of the Crooks and Kleptocrats Who Rule the World
    • Bryan Caplan and Zach Weinersmith, Open Borders: The Science and Ethics of Immigration
    • C.J. Chivers, The Fighters: Americans in Combat
    • Sefton Delmer, Black Boomerang
    • Nina J. Easton, Gang of Five: Leaders at the Center of the Conservative Crusade (bio of Bill Kristol, Ralph Reed, Clint Bolick, Grover Norquist, and David McIntosh)
    • Ronan Farrow, Catch and Kill: Lies, Spies, and a Conspiracy to Protect Predators
    • Ronan Farrow, War on Peace: The End of Diplomacy and the Decline of American Influence
    • Ian Frisch, Magic is Dead: My Journey into the World's Most Secretive Society of Magicians
    • Anand Giridharadas, Winners Take All: The Elite Charade of Changing the World
    • Reba Wells Grandrud, Sunnyslope (Images of America series)
    • Andy Greenberg, Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin's Most Dangerous Hackers
    • Jodi Kantor and Megan Twohey, She Said: Breaking the Sexual Harassment Story That Helped Ignite a Movement
    • Stephen Kinzer, Overthrow: America's Century of Regime Change From Hawaii to Iraq
    • Michael Lewis, Flash Boys: A Wall Street Revolt
    • Jonathan Lusthaus, Industry of Anonymity: Inside the Business of Cybercrime
    • Ben MacIntyre, A Spy Among Friends: Kim Philby and the Great Betrayal
    • Joseph Menn, Cult of the Dead Cow: How the Original Hacking Supergroup Might Just Save the World
    • Anna Merlan, Republic of Lies: American Conspiracy Theorists and Their Surprising Rise to Power
    • Jefferson Morley, Our Man in Mexico: Winston Scott and the Hidden History of the CIA
    • Sarah T. Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media
    • Hans Rosling, with Ola Rosling and Anna Rosling Rönnlund, Factfulness: Ten Reasons We're Wrong About the World--and Why Things Are Better Than You Think
    • Russell Shorto, Amsterdam: A History of the World's Most Liberal City
    • Alexander Stille, The Sack of Rome: Media + Money + Celebrity = Power = Silvio Berlusconi
    • Jamie Susskind, Future Politics: Living Together in a World Transformed by Tech
    • Erik Van De Sandt, Deviant Security: The Technical Computer Security Practices of Cyber Criminals (Ph.D. thesis)
    • Tom Wolfe, The Right Stuff
    • Tim Wu, The Attention Merchants: The Epic Scramble to Get Inside Our Heads
    Top for 2019: Bullough, Farrow (Catch and Kill), Wu, Chivers, Rosling, Greenberg, Blake, Allison, Caplan and Weinersmith, Kinzer, Delmer.

    I started the following books I expect to finish in early 2020:

    Myke Cole, Legion versus Phalanx: The Epic Struggle for Infantry Supremacy in the Ancient World
    Walter LaFeber, Inevitable Revolutions: The United States in Central America (2nd edition)
    Brad Smith and Carol Anne Browne, Tools and Weapons: The Promise and Peril of the Digital Age
    Peter H. Wilson, The Holy Roman Empire: A Thousand Years of Europe's History

    Two books I preordered and look forward to reading in 2020:

    Anna Wiener, Uncanny Valley: A Memoir (due out January 14)
    Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare (due out April 21)

    (Previously: 2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005.)

    Monday, November 23, 2015

    A few thoughts on OpenBSD 5.8

    I've been using OpenBSD since way back at release 2.3 in 1998, so I've gone through upgrades that took a fair amount of work due to incompatible changes, like the switch from ipf to pf for host firewalling or the change to ELF binaries. The upgrade from 5.7 to 5.8 was a pretty smooth and easy one, for the most part. The two most painful changes for me were the replacement of sudo with doas and the dropping of support in the rc.conf for the pf_rules variable.  While sudo is still available as a package, I like the idea of reducing attack surface with a simpler program, so I made the switch. The two things I miss most about sudo are the ability to authenticate for a period of time and the ability to have a single config file across a whole set of servers. The former I'm just living with, the latter I've adjusted to by having a single config file that has lines commented out depending on which server it's on. I did have one moment of concern about the quality of doas when it incorrectly reported the line number on which I had a syntax error in the config file--fortunately, this was just a failure to increment the line count on continuation lines (ending with a "\") which is fixed in the -current release.
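    As a rough illustration of the shared-config approach, a doas.conf along these lines (the group and command rules are hypothetical) can be pushed to every server, with lines commented in or out per host:

    # /etc/doas.conf -- same file on every server
    permit nopass keepenv root as root
    permit :wheel
    # only enabled on the web servers:
    #permit nopass jim as root cmd /usr/sbin/rcctl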

    The removal of the pf_rules variable support from rc.conf was a bigger issue--I used to just put the default pf.conf rules file in place with each release and upgrade, and keep my changes in a pf.conf.local file that was specified in the pf_rules variable. The effect was that, from the upgrade until I noticed the change, my systems were using the default rules and thus were more exposed than they were supposed to be. This wasn't the first time an incompatible change decreased my level of security--the removal of tcpwrappers support from SSH was another. I used to use a combination of pf rules and hosts.allow as additional layers of protection on my SSH access, and had a set of tools that allowed me to easily add IP addresses to or remove them from my hosts.allow files. Had it still existed, that would have been a layer of defense remaining in place despite the loss of my pf rules. Fortunately, I also have SSH on a non-standard port and only allow SSH key logins, not user/password logins, and most of my systems can't be reached on any port without first making a VPN connection, which requires two-factor authentication.
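    One way to keep that separation under the new scheme is pf.conf's include directive; a sketch of the idea, using the stock default rules:

    # /etc/pf.conf -- stock defaults, with local rules pulled in at the end
    set skip on lo
    block return
    pass
    include "/etc/pf.conf.local"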

    A minor annoying change that was made in 5.8 was putting the file /var/unbound/db/root.key into /etc/changelist, so that the file gets checked daily by the security script. The issue with this is that if you are actually using unbound with DNSSEC, this file changes daily, though only in the comments. My "reportnew" log monitoring tool has a feature that allows you to be notified if files that are expected to change on some periodic schedule do not change, and that would be more appropriate than getting daily notifications that yes, the autotrust anchor file has been updated yet again. But what would really be ideal here would be a check that the non-comment components have not changed. (Others have also complained about this.)

    A final issue I've run into with OpenBSD 5.8 is not a new issue, but it's one that still hasn't been fixed with pf. That is that pf logs certain traffic (IGMP in particular) when it matches a rule that does not call for logging. This appears to be the same issue that was fixed earlier this year in pfsense, which is derived from an older fork of pf.

    Wednesday, October 30, 2013

    How to use Google Authenticator with OpenBSD, OpenSSH, and OpenVPN--and why you might not want to

    I thought that Google Authenticator might be a quick and easy two-factor authentication solution for VPN access to my personal network, so I did some Google searches to see if that were so.  I found quite a few sources describing how to set it up with systems that use Linux Pluggable Authentication Modules (PAM), but very little about using it with BSD Authentication on OpenBSD.

    The most promising link I came across was to an implementation of Google Authentication for OpenBSD that was last updated in early 2013, based on Google's PAM code, but I couldn't get it to work.  It compiled and installed, and the googleauth code for generating a secret (and a very insecure way of generating a QR code to use to import it into the Google Authenticator application) worked fine, but I couldn't successfully use it for console login, OpenSSH login, or OpenVPN login.

    I also found the standard OpenBSD port for openvpn_bsdauth, which compiled, installed, and worked successfully for password authentication by adding these lines to my OpenVPN configuration:
    script-security 2
    tmp-dir <path to dir writable only by _openvpn user>
    auth-user-pass-verify /usr/local/libexec/openvpn_bsdauth via-file

    This also requires that the authenticating user be put into the _openvpnusers group.

    I was unable to get the via-env method to work, however.

    I next tried the standard OpenBSD port of login_oath, which implements the OATH toolkit, which uses the same time-based TOTP protocol that Google Authenticator uses.  This turned out to do the trick.  Once installed, you create a secret key that the server authentication will check against and store it in your home directory (one thing I liked about googleauth is that it stores the shared secret in a system directory to which the user doesn't have access; better still is the suggestion of keeping the secrets on an auth server as totp-cgi does).  The documentation recommends creating the secret (which the user doesn't need to know except for the initial configuration of the Google Authenticator client application) by doing:
    openssl rand -hex 20 > ~/.totp-key
    I then needed to convert this from hex to base32, which is simple enough to do with the method the documentation recommends: the perl module Convert::Base32 (OpenBSD port p5-Convert-Base32) and a short script like:
    #!/usr/bin/perl
    use strict;
    use warnings;
    use Convert::Base32;
    # Read the hex secret created with "openssl rand -hex 20" above.
    open (my $fh, '<', '/home/vpnuser/.totp-key') or die "can't open .totp-key: $!";
    my $secret = <$fh>;
    close ($fh);
    chomp $secret;                    # drop the trailing newline before packing
    my $code = pack ('H*', $secret);  # hex string to raw bytes
    print encode_base32($code) . "\n";
    The resulting code can be manually entered into Google Authenticator.

    To use Google Authenticator as a login method, I updated the login class for the user I wanted to use in /etc/login.conf so that its last two lines were:
    :auth=-totp,passwd:\
    :tc=default:
    This allows either Google Authenticator or password authentication at the console, but only Google Authenticator via OpenSSH or OpenVPN as I configured them.  Instead of using "-totp" you can also use "-totp-and-pwd" which requires the entry of both your Google Authenticator code and your password (in that order, with a slash in between them) in order to authenticate.

    For OpenSSH, I added the following lines to my sshd_config:
    Match User <vpnuser>
         PasswordAuthentication yes
         AuthenticationMethods publickey,password:bsdauth
    I don't allow password authentication at all for other users; for this user, an SSH public key must first be used, then Google Authenticator must also be used before a successful login. [Updated 1 Nov 2013 to add:  After a reboot, this ssh config failed with a log message of "fatal: auth2_update_methods_lists: method not in AuthenticationMethods".  Removing the ":bsdauth" made it work again (it works since the "password" authentication method will use BSD Authentication by default), but this looks like an SSH bug.]

    So why might you not want to do this?  While Google Authenticator ensures that what is used over the network as a password is better than a typical user-selected password, it effectively stores a shared secret in plaintext at both ends of the connection, which is far less secure than SSH public key authentication.  If the device where Google Authenticator is present gets compromised, that secret is compromised.  And as the above link about totp-cgi points out, if you use Google Authenticator with the same secret across multiple machines, that secret is only as secure as the least secure host it's stored on, and using different secrets for different machines doesn't scale very well with the application.  A password safe with randomly generated passwords, stored in encrypted form, is probably a better solution in most cases. [Updated 2 November 2013: Authy uses the same TOTP mechanism as Google Authenticator, but encrypts the secret key on the client side.  That encryption is really more obfuscation than encryption since the key is based on phone attributes and can potentially be reverse engineered.]

    As I've set it up, I'm still relying on SSH public key authentication for SSH logins, and on certificate authentication for VPN logins, in addition to Google Authenticator.  For the case of logging into my VPN from my laptop and having Google Authenticator on a separate mobile device, it does seem to be a security improvement (though I welcome anyone to show me that the gains are illusory).

    UPDATE (July 31, 2019): Note that you should make the .totp-key file in the user's home directory owned by and only readable by root, or else you're effectively permitting that user to do passwordless doas/sudo, since passworded doas/sudo will use the TOTP mechanism for authentication. That won't stop the user from removing the .totp-key file and replacing it with their own, but at least that action becomes detectible. To prevent removal, on OpenBSD you can set the file to be immutable (schg flag) and run at securelevel=2. But a better solution would really be to put those secrets somewhere outside of the individual user's home directory.
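    A sketch of that hardening (the path matches the example above; at securelevel=2 the schg flag can't be removed without dropping the securelevel via a reboot):

    chown root:wheel /home/vpnuser/.totp-key
    chmod 400 /home/vpnuser/.totp-key
    chflags schg /home/vpnuser/.totp-key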

    UPDATE (October 22, 2019): The OpenVPN authentication with 2FA is broken in OpenBSD 6.6, it now leads to user/password authentication failures. Not sure why yet.

    UPDATE (October 22, 2019 #2): Looks like it may have been user error, it works now, though I did update my _openvpnusers group to the new number (811) from the old one (596), but the number itself shouldn't be hardcoded in openvpn_bsdauth, so that shouldn't have had an impact.

    UPDATE (30 October 2022): Also see Solene Rapenne's blog post on this same topic.

    UPDATE (18 June 2024): Note that Authy (and probably any other authenticator app) treats manual secrets entry as case-sensitive even though base32 is not, and secrets must be entered in uppercase.

    Saturday, February 11, 2012

    Miscellanea

    I recently had a few opportunities on a plane to catch up on some reading and podcasts.  A few of the more interesting things I came across:

    A bunch of interesting articles in The Economist for the past few weeks:

    January 28-February 3, 2012:

    "Saving Lives: Scattered Saviors" -- harnessing social media and mobile devices to deploy first aid faster than an ambulance can arrive (United Hatzalah in Israel believes it will be able to have first responders on the scene within 90 seconds).

    "China's new tribes: Ant tribes and mortgage slaves" -- a new vocabulary in Mandarin describing emerging social groups in China.  (Reminds me of Cory Doctorow's Eastern Standard Tribe.)

    "Affinity fraud: Fleecing the flock" -- the rise in affinity fraud, especially religious affinity fraud, during the economic downturn, and why it works so effectively.  (Also see my blog post from 2008 and another on the same topic from the Secular Outpost in 2006.)  Briefly mentioned is the Baptist Foundation of Arizona affinity fraud, which victimized my step-grandfather by stealing most of his retirement savings.

    "Visible-light communication: Tripping the light fantastic" -- an update on where we stand with Li-Fi (using LED lighting as a mechanism for data transmission).

    February 4-10, 2012:

    "Synaesthesia: Smells like Beethoven" -- A new study finds correlations between odors and sounds, even among people who are not synaesthetes.

    "Scientific publishing: The price of information" -- On the boycott of Elsevier by scientists tired of excessive charges for journals, and the competition from arXiv and PLoS.

    "Biomimetics: Not a scratch" -- lessons from the microstructure of scorpion armor for reducing wear rates on aircraft engines and helicopter rotors.

    Podcasts:

    Philosophy Bites interview with Alain de Botton on Atheism 2.0: de Botton, author of Religion for Atheists, argues that there are good and useful components of religion which can be secularized, and that it is as legitimate to borrow things we like from religion while discarding what we don't as it is to prefer different kinds of art and music.  (Also see the Token Skeptic interview with de Botton and watch his TED talk.)  I think his picture of religion, like that of Scott Atran (In Gods We Trust) and Pascal Boyer (Religion Explained) makes more sense than the way some atheists talk about it as though fundamentalist religion is the essence of religion, and should be discarded completely (which doesn't seem likely to happen as long as we live in social communities).

    Rationally Speaking interview with Joseph Heath: Heath, author of Economics without Illusions: Debunking the Myths of Modern Capitalism (Canadian title: Filthy Lucre: Economics for People who Hate Capitalism, which the publishers decided wouldn't sell in the U.S.), talks about misunderstandings of economics on both the right and the left.  (Also see this BloggingHeads TV interview of Heath by Will Wilkinson, who writes: "The section on right-wing fallacies is largely on the money and a great challenge for rote libertarians and conservatives. The section of left-wing fallacies is terrific, and it would be terrific if more folks on the left were anywhere near as economically literate as Heath.")  Heath's "Rationally Speaking pick" also sounds fascinating, Janos Kornai's The Socialist System: The Political Economy of Communism, which explains the creative but ultimately futile ways that human beings tried to replace markets with planning and design.)

    Saturday, June 25, 2011

    Arizona Department of Public Safety's security breach

    LulzSec breached the security of the Arizona Department of Public Safety (DPS) at some point in the past, and on June 23 around 4 p.m. Arizona time, posted some or all of what they had acquired.  This included the names, email addresses, and passwords of several DPS officers as well as a number of internal documents which appeared to have been obtained from email attachments or perhaps from the compromise of end user systems.  The documents included a PowerPoint presentation on gang tattoos that purported to be a way of identifying Islamic radicals, which was reminiscent of similar ludicrous law enforcement presentations from the 1980s about identifying Satanic cult members by their black clothing and occult symbols. (Some police departments still promote such nonsense, citing exposed fraud "Lauren Stratford" as a source).  The documents also included a bulletin which expresses concern about the "Cop Recorder" iPhone application.

    On June 24, DPS posted a press release responding to the attacks, accusing LulzSec of being a "cyber terrorist group"--a term better reserved for criminally disruptive activities intended to cause physical harm or disruption of critical infrastructure, not for embarrassing organizations that haven't properly secured themselves.  In the press release, DPS enumerates the steps they've taken to secure themselves and the safeguards they've put in place. It's an embarrassing list which suggests they've had, and continue to have, poor information security.

    First, their press release has a paragraph suggesting that the damage is limited, before they've probably had time to really determine that that's the case.  They write:

    There is no evidence the attack has breached the servers or computer systems of DPS, nor the larger state network. Likewise, there is no evidence that DPS records related to ongoing investigations or other sensitive matters have been compromised.

    Just because they have "no evidence" of something doesn't mean it didn't happen--what records did they review to make this determination?  Were they doing appropriate logging?  Have logs been preserved, or were they deleted in the breach?  Do they have centralized logging that is still secure?  When did the compromise take place, and when did DPS detect it?  The appearance is that they didn't detect the breach until it was exposed by the perpetrators.  What was the nature of the vulnerability exploited, and why wasn't it detected by DPS in a penetration test or vulnerability assessment?  LulzSec has complained about the number of SQL injection vulnerabilities they've found--was there one in DPS's web mail application?

    Next, they report what they've done in response, and again make statements about how "limited" the breach was:

    Upon learning that a limited number of agency e-mails had been disclosed, DPS took action. In addition to contacting other law enforcement agencies, the Arizona Counter Terrorism Information Center (ACTIC) has been activated. Remote e-mail access for DPS employees remains frozen for the time-being. The security of the seven DPS officers in question remains the agency’s top priority and, since a limited amount of personal information was publicly disclosed as part of this breach. Steps are being taken to ensure the officers’ safety and that of their families. 

    They've disabled the e-mail access that they believe was used in the breach--that's good.  Presumably the exposed officer passwords were discovered to be from this system.  Perhaps they will not re-enable the system until they have a more secure mechanism that requires VPN access and two-factor authentication--or at least intrusion prevention, a web application firewall, and effective security monitoring.  They've notified ACTIC--presumably in part because of their overblown claim that this breach constitutes "terrorism" and in part because there are some ACTIC personnel who have good knowledge of information security.  And they're doing something to protect the safety of officers whose personal information (including some home addresses) was exposed.


    In the final paragraph of the press release, they list some of the safeguards they have in place:

    - 24/7 monitoring of the state’s Internet gateway.
    - Industry-standard firewalls, anti-virus software and other capabilities.
    - IT security staff employed at each major state agency.
    - Close coordination between the State of Arizona and state, federal and private-sector authorities regarding cyber-security issues.

    This sounds like a less-than-minimal set of security controls.  Is that 24/7 monitoring just network monitoring for availability, or does it include security monitoring?  Do they have intrusion detection and prevention systems in place?  Do they have web application firewalls in front of web servers?  Do they have centralized logging and are those logs being monitored?  Are they doing event correlation?  How many full-time information security staff are there at DPS?  Are there any security incident response staff? Is there a CISO, and if so, why isn't that person being heard from?  Does DPS have an incident response plan?  Are they reviewing policy, process, and control gaps as part of their investigation of this incident?  Have they had any third-party assessments of their information security?  Have any past assessments, internal or external, recommended improvements that were not made?

    These are questions journalists should be asking, which DPS should certainly be asking itself internally, and which organizations that haven't had a publicized breach yet should be asking themselves.  Breaches are becoming inevitable (a recent Ponemon Institute survey says 90% of surveyed businesses have had a security breach in the last 12 months; CNet charts the recent major publicly known breaches), so having in place the capacities to respond and recover quickly is key.

    Here's how NOT to prepare:
    Depth Security, "How to Get Properly Owned"

    Here's how NOT to respond to a breach or vulnerability disclosure:
    SANS ISC, "How Not to Respond to a Security Incident"

    How to publicly disclose a breach:
    Technologizer, "How to Tell Me You Let Somebody Steal My Personal Information"

    Saturday, May 14, 2011

    My lousy Android experience

    I've been a holdout on upgrading to a smart phone, in part because I haven't paid over $100 for a mobile phone since they were the size of a brick.  But after finding that I could get a Droid 2 Global on Verizon for $20 via Amazon Wireless a couple of months ago, I made the leap.

    My initial experience was negative--Amazon sent me a phone with instructions to go to Verizon's web site to activate.  Verizon's website wanted me to enter a code from a Verizon invoice.  No such invoice was included, and none of the numbers from the Amazon invoice worked.  So I had to get through to a human being, at which point activation was fairly simple.  But one more hurdle arose when I had to log in to a Google account, which was an obstacle of my own creation--I use very long randomly generated passwords with special characters, and have independent Google accounts for different services, so I had to choose which one to use with the phone before I knew what all the implications would be.  (I chose my GMail account, which has worked out OK.)

    I wanted to set the phone up to use my own email servers, and to connect over VPN to gain access.  This proved to be an obstacle that took a few days to resolve, due to inadequacies and bugs in Droid applications.  The default VPN client doesn't support OpenVPN, so I had to gain root access to install an OpenVPN client.  This turned out to be the only reason I needed root access on the phone, and I managed to get that working without much difficulty.

    The Email application, however, refused to send outbound mail through my mail server, which allows outbound port 25 client connections from internal hosts with no authentication but requiring TLS.  This combination simply doesn't work--I ended up setting up port 587 (submission port) with username/password authentication via Dovecot.  Though I would have preferred using client certificate authentication, I couldn't get it to work.  I still run into periodic problems with Email refusing to send outbound messages for no apparent reason--and the server shows no attempts being made.  There doesn't seem to be a way to select an individual message in the outbox for an attempt to re-send.

    I managed to get contact and calendar synchronization working with my Mac, but I ended up exporting my iCal calendars to Google Calendar and using them as my primary calendars.  Most of the correlation of contacts in the phone from multiple sources (e.g., Facebook, LinkedIn, and my Address Book) worked fairly well, but some contacts are duplicated due to name variations.  Synchronization with LinkedIn is somewhat buggy, with first and last names showing up in contacts as "null null."  The Calendar app is even more buggy--I've created events on the phone that disappear, I've seen error messages in Portuguese and events with names that appear to be leftover debugging messages. I was also surprised to see that spelling correction was performed, without any prompts, on events I imported into the Calendar app from GMail (it incorrectly turned an acronym, "JAD," into the word "HAD").

    I've received an SMS text message from one person which was identified as being from another person--looking at the specific contact information showed that the telephone number of the sender was associated with the correct contact, yet the name and photo displayed on the phone was of a different contact that had no association with that telephone number.

    The phone's camera capability is pretty good, but when I connect the phone to my Mac, it launches iPhoto but doesn't find any photographs.  I have to import them manually by pointing iPhoto to the correct location on the SD card.

    I've seen the phone crash repeatedly, especially when using location services (Google Navigation, Maps, and Yelp have been repeat offenders).  There also seems to be some caching of location information that gets out of sync with other location information.  For example, I saw Yelp correctly show me nearby restaurants, but refuse to allow me to check in to the one I was sitting in because I was "too far away"--and Maps showed my location being somewhere else I had been earlier.  In one case, thousands of miles away--an attempted Yelp check-in after returning from a vacation in Hawaii showed my location on the map as still being in Hawaii.  In at least one case, I was unable to get my location to update for Yelp until I rebooted the phone.

    I've had issues doing things as simple as copying and pasting a URL from Firefox to Facebook or Twitter.  I copy the URL, verify that it's in the clipboard correctly, but when I go into Facebook or Twitter to paste it, it is truncated.

    The number of bugs I run into seems awfully high for very basic applications.  The problem is no doubt in part due to the way development occurs between Google, Motorola, and Verizon, and Linux development, which also seems to be an obstacle to fixing security vulnerabilities.  The May 2011 issue of CSO magazine reports that Coverity has done two scans of Android source code for the HTC Incredible, finding 359 defects (88 critical) on the first scan last November and 149 defects (106 unfixed from the previous scan) on a more recent scan.  Accountability for the code is distributed across the aforementioned groups.  (Also see this CNet story, or the Coverity report itself.)

    I wonder if I would run into problems like this with an iPhone.

    UPDATE (May 19, 2011): And now there's a security vulnerability identified in version 2.3.3 of Android and earlier (I'm on 2.2, and can't update until Verizon pushes an update), which potentially exposes contacts, calendar events, pictures, and other items stored in Google-hosted services, if users access those services via unencrypted WiFi.  Although the connections to those services are over SSL-encrypted HTTP, there is a returned authToken that can be intercepted and used for subsequent logins to those services.  I've never used my Droid on unencrypted WiFi networks, but I'll now take extra care to make sure that I don't.  Version 2.3.4 fixes the problem for contacts and calendars but not for Picasa photos.

    UPDATE (November 16, 2011): It's still been a horrible experience, and I still see regular crashes, particularly when using map and location-related applications.  A new discovery today while traveling is that the World Clock widget does not know when Daylight Saving Time occurs--the option labeled "Daylight Savings[sic] Time: Adjust displayed time for Daylight Savings" appears to just set the clock forward one hour, not display the correct current time taking into account the date and whether Daylight Saving Time is in effect in the given location.  I traveled to the east coast and saw that my World Clock widget time for New York was one hour ahead of the actual time in New York.  It's utterly ridiculous that this widget requires the user to check and uncheck this option manually when Daylight Saving Time is in effect or not--that's exactly the sort of simple task that computers are equipped to do on our behalf.

    Thursday, January 06, 2011

    Global Crossing blogging

    I've joined the team of Global Crossing bloggers--please check out my initial post at Global Crossing blogs, "Forget passwords!"

    (BTW, my friend and colleague Glen Walker independently wrote a blog post making a very similar recommendation.)

    Saturday, June 05, 2010

    Abe Heward's new blog on software testing

    Veteran software tester Abe Heward has started up a blog on software testing, which I'm sure will also include many items of epistemological, economic, and skeptical interest.  He's already got posts on how the post hoc ergo propter hoc fallacy is relevant to software testing, why good testers aren't robots (and the flaws in one company's attempt to treat them as if they were), and on opportunity cost and testing automation.

    Check it out at www.abeheward.com.

    Tuesday, April 20, 2010

    Translating local knowledge into state-legible science

    James Scott’s Seeing Like a State (about which I've blogged previously) talks about how the state imposes standards in order to make features legible, countable, regulatable, and taxable. J. Stephen Lansing’s Perfect Order: Recognizing Complexity in Bali describes a case where the reverse happened. When Bali tried to impose a top-down system of scientifically designed order--a system of water management--on Balinese rice farmers, in the name of modernization in the early 1970s, the result was a brief increase in productivity followed by disaster. Rather than lead to more efficient use of water and continued improved crop yields, it produced pest outbreaks which destroyed crops. An investment of $55 million in Romijn gates to control water flow in irrigation canals had the opposite of the intended effect. Farmers removed the gates or lifted them out of the water and left them to rust, upsetting the consultants and officials behind the project. Pesticides delivered to farmers resulted in brown leafhoppers becoming resistant to pesticides, and supplied fertilizers washed into the rivers and killed coral reefs at the mouths of the rivers.

    Lansing was part of a team sponsored by the National Science Foundation in 1983 that evaluated the Balinese farmers’ traditional water management system to understand how it worked. The farmers of each village belong to subaks, or organizations that manage rice terraces and irrigation systems, which are referred to in Balinese writings going back at least a thousand years. Lansing notes that “Between them, the village and subak assemblies govern most aspects of a farmer’s social, economic, and spiritual life.”

    Lansing’s team found that the Balinese system of water temples, religious ritual, and irrigation managed by the subaks would synchronize fallow periods of contiguous segments of terraces, so that long segments could be kept flooded after harvest, killing pests by depriving them of habitat. But their attempt and that of the farmers to persuade the government to allow the traditional system to continue fell upon deaf ears, and the modernization scheme continued to be pushed.

    In 1987, Lansing worked with James Kremer to develop a computer model of the Balinese water temple system, and ran a simulation using historical rainfall data. This translation of the traditional system into scientific explanation showed that the traditional system was more effective than the modernized system, and government officials were persuaded to allow and encourage a return to the traditional system.

    The Balinese system of farming is an example of how local knowledge can develop and become embedded in a “premodern” society by mechanisms other than conscious and intentional scientific investigation (in this case, probably more like a form of evolution), and be invisible to the state until it is specifically studied. It’s also a case where the religious aspects of the traditional system may have contributed to its dismissal by the modern experts.

    What I find of particular interest here is to what extent the local knowledge was simply embedded into the practices, and not known by any of the participants--were they just doing what they've "always" done (with practices that have evolved over the last 1,000 years), in a circumstance where the system as a whole "knows," but no individual had an understanding until Lansing and Kremer built and tested a model of what they were doing?

    [A slightly different version of the above was written for my Human and Social Dimensions of Science and Technology core seminar. Thanks to Brenda T. for her comments.  More on Lansing's work in Bali may be found online here.]

    Thursday, April 15, 2010

    Winner's techne and politeia, 22 years later

    Chapter 3 of Langdon Winner’s The Whale and the Reactor (1988) is titled “TechnĂ© and Politeia,” a discussion of the relationship of technology and politics that draws upon Plato, Rousseau, and Thomas Jefferson to recount historical views before turning to the “modern technical constitution.”  The contemporary “interconnected systems of manufacturing, communications, transportation” and so forth that have arisen have a set of five features that Winner says “embody answers to age-old political questions ... about membership, power, authority, order, freedom, and justice” (p. 47).

    The five features are (pp. 47-48):
    1. “the ability of technologies of transportation and communication to facilitate control over events from a single center or small number of centers.”
    2. “a tendency for new devices and techniques to increase the most efficient or effective size of organized human associations.”
    3. “the way in which the rational arrangement of socio-technical systems has tended to produce its own distinctive forms of hierarchical authority.”
    4. “the tendency of large, centralized, hierarchically arranged sociotechnical entities to crowd out and eliminate other varieties of human activity.”
    5. “the various ways that large sociotechnical organizations exercise power to control the social and political influences that ostensibly control them.” (e.g., regulatory capture)
    Winner states that the adoption of systems with these features implicitly provides answers to political questions without our thinking about it, questions such as “Should power be centralized or dispersed? What is the best size for units of social organization? What constitutes justifiable authority in human associations? Does a free society depend on social uniformity or diversity? What are appropriate structures and processes of public deliberation and decision making?” (p. 49)  Where the founding fathers of the United States considered these questions explicitly in formulating our political constitution, the developers of technological systems--which have become socio-technical systems, with social practices surrounding the use of technology--have typically failed to do so, being more concerned with innovation, profit, and organizational control rather than broader social implications (p. 50).

    While there are widely accepted criteria for placing regulatory limits on technology--Winner notes five (threats to health and safety, exhaustion of a vital resource, degrading environmental quality, threats to natural species and wilderness, and causing “social stresses and strains of an exaggerated kind,” pp. 50-51)--he suggests that these are insufficient.  He cites a study by colleagues of electronic funds transfer (EFT) which suggested that it “would make possible a shift of power from smaller banks to larger national and international institutions” and create problems of data protection and individual privacy.  But those problems don’t seem to fall under his five criteria, so he suggested, ironically, that “their research try to show that under conditions of heavy, continued exposure, EFT causes cancer in laboratory animals” (p. 51).  Although I’d be surprised to find that EFT by itself had the effect Winner suggests, the recent global financial crisis has shown problems with allowing financial institutions to become “too big to fail” and motivated financial reform proposals (e.g., Sen. Dodd’s bill that would create new regulatory power over institutions with more than $50 billion in assets, including the ability to force such institutions into liquidation--“death panels” for large financial institutions).

    In the 22 years since Winner’s book was published, most of his five features seem to continue to be relevant to developments such as the Internet.  With respect to (2),(3), and (4) the Internet has greatly reduced the costs of organizing and allowed for social (non-market) production of goods.  But the mechanisms which ease the creation of small, geographically dispersed groups have also facilitated the creation of larger groups, new kinds of hierarchical authority, and new kinds of centralization and monitoring (e.g., via applications used by hundreds of millions of people, provided by companies like Google, Facebook, and Twitter).  It’s also allowed for new forms of influence by the same old powers-that-be, via techniques like astroturfing and faux amateur viral videos.

    [A slightly different version of the above was written as a comment for my Human and Social Dimensions of Science and Technology core seminar. Thanks to Tim K. for his comments (though I declined to move the paragraph you suggested).]

    Tuesday, November 24, 2009

    Wikileaks to release over 500K text pager intercepts from 9/11

    Wikileaks is releasing over 500,000 U.S. national text pager intercepts from September 11, 2001, over the next two days:
    From 3AM on Wednesday November 25, 2009, until 3AM the following day (New York Time), WikiLeaks will release over half a million US national text pager intercepts. The intercepts cover a 24 hour period surrounding the September 11, 2001 terrorist attacks in New York and Washington.

    The first message, corresponding to 3AM September 11, 2001, five hours before the first attack, will be released at 3AM November 25, 2009 and the last, corresponding to 3AM September 12, 2001 at 3AM November 26, 2009.

    Text pagers are mostly carried by persons operating in an official capacity. Messages in the collection range from Pentagon and New York Police Department exchanges, to computers reporting faults to their operators as the World Trade Center collapsed.
    This is a significant and completely objective record of the defining moment of our time. We hope that its entry into the historical record will lead to a deeper and more nuanced understanding of how this tragedy and its aftermath may have been prevented.

    While we are obligated by to protect our sources, it is clear that the information comes from an organization which has been intercepting and archiving national US telecommunications since prior to 9/11.
    The Transparent Society getting closer, it appears...

    What would be more horrifying than "locked-in" syndrome?


    Numerous mass media outlets and blogs are reporting on the misdiagnosis of Rom Houben as comatose for 23 years when he was really conscious, according to Belgian neurologist Steven Laureys, who has claimed for years to be able to treat patients allegedly in a persistent vegetative state with electric shocks and find that they were really in a minimally conscious state. Videos of Houben show him allegedly communicating via a keyboard which is pressed by a single finger on one hand--but his hand is being held by a facilitator, and he's not even looking at the keyboard. Some still photos show the facilitator looking intently at the keyboard, while Houben's eyes are closed.

    James Randi observes that this looks just like the self-deception of Facilitated Communication that was promoted as a way to communicate with severely autistic people, and Marshall Brain at How Stuff Works seconds that conclusion.

    I think it's a bit too fast to conclude that Houben's not conscious--brain scans could indeed have provided good evidence that he is. But what would be worse than having "locked-in syndrome"? Having somebody else purporting to speak for you with ideomotor-driven Facilitated Communication, while you were helpless to do anything about it.

    I'd like to see some double-blind tests of Houben, where he's asked questions about events that occur when the facilitator isn't present, as well as fMRI results during the process of facilitation (since there are brain activation differences between active and passive movements, which have been used to study such things as the perception of involuntariness during hypnosis--hypnotically suggested movement shows features of both active and passive movement). I'd also like to see further opinion on Laureys' methodology and diagnosis--it seems he has significant self-interest in promoting this case.

    UPDATE: Brandon Keim at Wired Science has finally asked the questions that those who have reported this in the mainstream media should have been asking.

    Here's a 2001 review of the scientific literature on facilitated communication.

    UPDATE: The video on this story shows the facilitator typing for him while his eyes are closed and he appears to be asleep.

    UPDATE: A Times Online story claims that Houben's facilitator, Linda Wouters, spent the last three years working with Houben to learn to feel tiny muscle movements in his finger, and that Dr. Laureys did tests to validate the technique:

    The spectacle is so incredible that even Steven Laureys, the neurologist who discovered Mr Houben’s potential, had doubts about its authenticity. He decided to put it to the test.

    “I showed him objects when I was alone with him in the room and then, later, with his aide, he was able to give the right answers,” Professor Laureys said. “It is true.”

    and

    Mr Houben’s “rebirth” took many painstaking months. “We asked him to try and blink but he couldn’t; we asked him to move his cheek but he couldn’t; we asked him to move his hand and he couldn’t,” Mrs Wouters said.

    “Eventually, someone noticed that when we talked to him he moved his toe so we started to try and communicate using his toe to press a button.”

    It was a breakthrough but much more was to come when a fellow speech therapist discovered that it was possible to discern minuscule movements in his right forefinger.

    Mrs Wouters, 42, was assigned to Mr Houben and they began to learn the communication technique that he is now using to write a book about his life and thoughts. “I thought it was a miracle — it actually worked,” she said.

    The method involves taking Mr Houben by the elbow and the right hand while he is seated at a specially adapted computer and feeling for minute twitches in his forefinger as his hand is guided over the letters of the alphabet. Mrs Wouters said that she could feel him recoil slightly if the letter was wrong. After three years of practice the words now come tumbling out, she said.

    This still seems hard to reconcile with the video footage of the typing occurring while he's apparently asleep. Mrs. Wouters admits the possibility of "tak[ing] over" for him:
    “The tension increases and I feel he wants to go so I move his hand along the screen and if it is a mistake he pulls back. As a facilitator, you have to be very careful that you do not take over. You have to follow him.”
    UPDATE (November 25, 2009): Neurologist Steven Novella has weighed in. He suggests that Houben may have recovered some brain function and be conscious, but that the facilitated communication in the videos is positively bogus.
    I've noted on the discussion page of Dr. Steven Laureys' Wikipedia entry that the paper in BMC Neurology that purportedly included Houben as a subject claims that all patients in the study were in a minimally conscious state (MCS) but had been misdiagnosed as being in a persistent vegetative state (PVS). The criteria of the study say that those who recovered and emerged from MCS were excluded, which seems at odds with claims that Houben's brain function is "almost normal." A story in Nature 443, 132-133 (14 September 2006) by Mike Hopkin, "'Vegetative' patient shows signs of conscious thought," which quotes Laureys, is about a different patient, in a persistent vegetative state, who showed some signs of minimal consciousness. When asked to visualize herself playing tennis, for example, she showed corresponding brain activity. But, as that article noted, that kind of neural response isn't necessarily a sign of consciousness:

    But what that 'awareness' means is still up for debate. For example, Paul Matthews, a clinical neuroscientist at Imperial College London, argues that the brain imaging technique used cannot evaluate conscious thought; fMRI lights up regions of brain activity by identifying hotspots of oxygen consumption by neurons. "It helps us identify regions associated with a task, but not which regions are necessary or sufficient for performing that task," he says.

    Matthews argues that the patient's brain could have been responding automatically to the word 'tennis', rather than consciously imagining a game. He also points out that in many vegetative cases, the patient's motor system seems to be undamaged, so he questions why, if they are conscious, they do not respond physically. "They are simply not behaving as if they are conscious," he says.

    Owen counters that an automatic response would be transient, lasting for perhaps a few seconds before fading. He says his patient's responses lasted for up to 30 seconds, until he asked her to stop. He believes this demonstrates strong motivation.

    He does admit, however, that it is impossible to say whether the patient is fully conscious. Although in theory it might be possible to ask simple 'yes/no' questions using the technique, he says: "We just don't know what she's capable of. We can't get inside her head and see what the quality of her experience is like."

    But then again, as someone who's been reading a lot of the literature on automaticity and voluntary action lately, I think it's likely that a lot of our normal actions are automatic, the product of unconsciously driven motor programs of routine behavior.

    Laureys is quoted in the article with a note of skepticism:

    "Family members should not think that any patient in a vegetative state is necessarily conscious and can play tennis," says co-author Steven Laureys of the University of Liège, Belgium."It's an illustration of how the evaluation of consciousness, which is a subjective and personal thing, is very tricky, especially with someone who cannot communicate."

    The article goes on to note that this woman, who is possibly somewhere between PVS and MCS, "seems to have been much less severely injured than the permanently vegetative Terri Schiavo" (as the report from her Guardian Ad Litem (PDF) made clear).

    If Houben is in a minimally conscious state--which he apparently was in order to be included in the 2009 Laureys paper that, according to his Wikipedia page, published the Houben case--that appears to contradict news claims that Houben's brain function is "nearly normal," unless he has recovered further function since that paper was written.

    UPDATE (November 26, 2009): This footage of Houben and Mrs. Wouters from Belgian (Dutch-language) state television seems to be the most extensive footage of the facilitation process, and while it starts out looking slightly more plausible, it also clearly shows fairly rapid typing while his eyes are closed (and the camera zooms in on his face).

    UPDATE (November 28, 2009): Dr. Laureys and Dr. Novella have had some interaction, which demonstrates that Laureys doesn't get it.

    UPDATE (February 15, 2010): Dr. Laureys almost gets it now, and has done additional tests, which have shown that the communications are coming from the facilitator, not Houben.

    UPDATE (February 20, 2010): David Gorski at the Science-Based Medicine blog has a bit more from the Belgian Skeptics, who were involved in the test.

    Thursday, November 19, 2009

    Joel Garreau on radical evolution

    Yesterday I heard Joel Garreau speak again at ASU, as part of a workshop on Plausibility put on by the Consortium for Science, Policy, and Outcomes (CSPO). I previously posted a summary of his talk back in August on the future of cities. This talk was based on his book, Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies--and What It Means to Be Human.

    Garreau was introduced by Paul Berman, Dean of the Sandra Day O'Connor School of Law at ASU, who also announced that Garreau will be joining the law school faculty beginning this spring, as the Lincoln Professor for Law, Culture, and Values.

    He began by saying that we're at a turning point in history [has there ever been a time when we haven't thought that, though?], and he's going to present some possible scenarios for the next 2, 3, 5, 10, or 20 years, and that his book is a roadmap. The main feature of this turning point is that rather than transforming our environment, we'll be increasingly transforming ourselves, and we're the first species to take control of its own evolution, and it's happening now.

    At some point in the not-too-distant future, he said, your kid may come home from school in tears about how he can't compete with the other kids who are more intelligent, more athletic, more attractive, more attentive, and so forth--because you haven't invested in the human enhancement technologies coming on the market. Your possible reactions will be to suck it up [somebody's still gotta do the dirty jobs in society?], remortgage the house again to make your kid competitive, or try to get the enhanced kids thrown out of school. What you can't do is ignore it.

    He then asked people to raise their hands if they could remember when the following were still prevalent:
    • The Sony Walkman
    • When computer screens were black and white. (An audience member said "green and black!")
    • Rotary dial phones
    • Mimeograph machines and the smell of their fluid
    • Polio
    This shows, he said, that we're going through a period of exponential change.

    His talk then had a small amount of overlap with his previous talk, in his explanation of Moore's Law--that we've had 32 doublings of computer firepower since 1959, so that a dollar now buys about 2 billion times the computing power it did then, and an iPhone has more computing power than all of NORAD had in 1965. Such doublings change our expectations of the future, so that the last 20 years isn't a guide to the next 20, but to the next 8; the last 50 years is a guide to the next 14. He pulled out a handkerchief and said this is essentially the sort of display we'll have in the future for reading a book or newspaper.
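
    [A quick arithmetic check of my own, not part of the talk: the factor of improvement after n doublings is just 2**n, so 32 doublings works out closer to 4.3 billion, and the "about 2 billion" figure corresponds more nearly to 31 doublings. A minimal Perl sketch:]

        #!/usr/bin/perl
        # My own back-of-the-envelope check, not Garreau's: the factor of
        # improvement after n doublings is simply 2**n.
        use strict;
        use warnings;

        for my $doublings (31, 32) {
            printf "%2d doublings => a factor of about %.1f billion\n",
                $doublings, 2 ** $doublings / 1e9;
        }
        # Prints about 2.1 billion for 31 doublings and about 4.3 billion for 32.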

    He then followed Ray Kurzweil in presenting some data points to argue that exponential change has been going on since the beginning of life on earth (see P.Z. Myers' "Singularly Silly Singularity" for a critique):

    It took 400 million years (My) to go from organisms to mammals, and
    • 150My to monkeys
    • 30My to chimps
    • 16My to bipedality
    • 4My to cave paintings
    • 10,000 years to first settlements
    • 4,000 years to first writing
    At this point, culture comes into the picture, which causes even more rapid change (a point also made by Daniel Dennett in his talk at ASU last February).
    • 4,000 years to Roman empire
    • 1800 years to industrial revolution
    • 100 years to first flight
    • 66 years to landing on the moon
    And now we're in the information age, which Garreau identified as a third kind of evolution, engineered or radical evolution, where we're in control. [It seems to me that such planned changes are subject to the limits of human minds, unless we can build either AI or enhancement technologies that improve our minds, and I think the evidence for that possibility really has yet to be demonstrated--I see it as possible, but I place no bets on its probability and think there are reasons for skepticism.]

    Garreau spent a year at DARPA (the Defense Advanced Research Projects Agency), the organization that invented the Internet (then the ARPANet), which is now in the business of creating better humans, better war fighters. [DARPA was also a subject of yesterday's Law, Science, and Technology class. It's a highly funded organization that doesn't accept grant proposals; rather, it seeks out people it thinks are qualified and gives them funding for its projects. It has become rather more secretive as a result of embarrassment about its Total Information Awareness and terrorism futures ideas that got negative press in 2003.]

    Via DARPA, Garreau learned about their project at Duke University with an owl monkey named Belle, which he described as a monkey that can control physical objects at long distances with her mind. Belle was trained to play a video game with a joystick, initially for a juice reward and then because she enjoyed it. They then drilled a hole in her head and attached fine electrodes (single-unit recording electrodes like the sort used to discover mirror neurons), identified the active regions of her brain when she operated the joystick, and then disconnected the joystick. She became proficient at playing the game under the direct control of her brain. They then connected the system to a robotic arm at MIT, which duplicated the movements of her arm with the joystick.

    Why did they do this? Garreau said there's an official reason and a real reason. The official reason is that an F-35 jet fighter is difficult to control with a joystick, and wouldn't it be better to control it with your mind, and send information sensed by the equipment directly into the mind? The real reason is that the DARPA defense sciences office is run by Michael Goldblatt, whose daughter Gina Marie (who recently graduated from the University of Arizona) has cerebral palsy and is supposed to spend the rest of her life in a wheelchair. If machines can be controlled with the mind, machines in her legs could be controlled with her mind, and there's the possibility that she could walk.

    Belle first moved the robotic arm 9 years ago, Garreau said, and this Christmas you'll be able to buy the first toy mind-machine interface from Mattel at Walmart for about $100. It's just a cheap EEG device and not much of a game--it lets you levitate a ping pong ball with your mind--but there's obviously more to come.

    Garreau said that Matthew Nagle was the first person to send emails using his thoughts (back in 2006), and DARPA is interested in moving this technology out to people who want to control robots. [This, by the way, is the subject of the recent film "Sleep Dealer," which postulates a future in which labor is outsourced to robots operated by Mexicans, so that they can do work in the U.S. without immigrating.]

    This exposure to DARPA was how Garreau got interested in these topics, which he called the GRIN technologies--Genetics, Robotics, Information science, and Nanotechnology, which he identified as technologies enabled by Moore's Law.

    He showed a slide of Barry Bonds, and said that steroids are sort of a primitive first-generation human enhancement, and noted that the first uses of human enhancement tend to occur in sports and the military, areas where you have the most competition.

    Garreau went over a few examples of each of the GRIN technologies that already exist or are likely on the way.

    Genetics
    Dolly the cloned sheep. "Manipulating and understanding life at the most primitive and basic level."

    "Within three years, memory pills, originally aimed at Alzheimer's patients, will then move out to the needy well, like 78 million baby boomers who can't remember where they left their car, then out to the merely ambitious." He said there's already a $36.5 billion grey market for drugs like Ritalin and Provigil (midafonil), and asked, "Are our elite schools already filling up with the enhanced?" [There's some evidence, however, that the enhancement of cognitive function (as opposed to staying awake) is minimal for people who already operate at high ability, with the greatest enhancement effect for those who don't--i.e., it may have something of an egalitarian equalizing effect.]

    He said DARPA is looking at ways to end the need for sleep--whales and dolphins don't sleep, or they'd drown, but they do something like sleeping with one half of the brain at a time.

    DARPA is also looking at ways to turn off hunger signals. Special forces troops burn 12,000 calories per day, but can't carry huge amounts of food. The body carries extra calories in fat that are ordinarily inaccessible unless you're starving, at which point they get burned. If that switch to start burning fat could be turned on and off at will, that could be handy for military use. He observed that DARPA says "the civilian implications of this have not eluded us."

    Sirtris Pharmaceuticals, started by David Sinclair of the Harvard Medical School, aims to have a drug to reverse aging based on resveratrol, an ingredient from grapes found in red wine. [Though Quackwatch offers some skepticism.]

    Garreau looks forward to cures for obesity and addiction. He mentioned Craig Venter's plan to create an organism that "eats CO2 and poops gasoline" by the end of this year, that will simultaneously "end [the problems in] the Middle East and climate change." [That seems overly optimistic to me, but ExxonMobil has given Venter $600 million for this project.]

    He said there are people at ASU in the hunt, trying to create life forms like this as well. [Though for some reason ASU doesn't participate in the iGEM synthetic biology competition.]

    Robotics
    Garreau showed a photo of a Predator drone, and said, "Ten years ago, flying robots were science fiction, now it's the only game in town for the Air Force." He said this is the first year that more Air Force personnel were being trained to operate drones than to be pilots. 2002 was the first year that a robot killed a human being, when a Predator drone launched a Hellfire missile to kill al Qaeda members in an SUV in Yemen. He said, "while there's still a human in the loop, philosophical discussions about homicidal robots could be seen as overly fine if you were one of the guys in the SUV."

    "We're acquiring the superpowers of the 1930s comic book superheroes," he said, and went on to talk about a Berkeley exoskeleton that allows you to carry a 180-pound pack like it weighs four pounds, like Iron Man's suit. He asked the engineers who built it, "Could you leap over a tall building in a single bound?" They answered, "yes, but landing is still a problem."

    Functional MRI (fMRI) is being used at the University of Pennsylvania to try to determine when people are lying. Garreau: "Then you're like the Shadow who knows what evil lurks in the hearts of men."

    Cochlear implants to give hearing to people for whom hearing aids do nothing, connecting directly to the auditory nerve. Ocular implants to allow the blind to have some vision. Brain implants to improve memory and cognition. Garreau asked, "If you could buy an implant that would allow you to be fluent in Mandarin Chinese, would you do it?" About half the room raised their hands. [I didn't hear a price or safety information, so didn't raise my hand.]

    Information
    He showed a photo of a camera phone and said, "Fifteen years ago, a machine like this that can fit in your pocket, with a camera, GPS, and MP3 player, and can send email, was science fiction. Now it's a bottom-of-the-line $30 Nokia."

    He asked, "Does anyone remember when music players were three big boxes that you put on your bookshelves? Now they're jewelry. Soon they'll be earrings, then implants."

    Close behind, he said, are universal translators. "Google has pretty good universal translation on the web, and sees it as moving out to their Droid phones." He observed that Sergey Brin was talking in 2004 about having all of the world's information directly attached to your brain, or having a version of Google on a chip implanted in your brain. [I won't get one unless they address network security issues...]

    Nanotechnology
    Garreau said, "Imagine anything you want, one atom or molecule at a time. Diamonds, molecularly accurate T-bone steaks." He said this is the least developed of the four GRIN technologies, "so you can say anything you want about it, it might be true." It's estimated to become a $1 trillion/year market in the next 10 years. There may be nanobots you can inject into your bloodstream by the thousands to monitor for things about to go wrong [see this video for the scenario I think he's describing], hunter-killers that kill cancer cells. "When you control matter at a fundamental level, you get a feedback loop between the [four] technologies."

    At this point, Garreau said he's really not all that interested in the "boys and their toys" so much as he is the implications--"where does this take culture and society and values?" He presented three possible scenarios, emphasizing that he's not making predictions. He called his three scenarios Heaven, Hell, and Prevail.

    Heaven
    He showed a chart of an exponential curve going up (presumably something like technological capacity on the y axis and time on the x axis).

    He said that at the NIH Institute on Aging, there's a bet that the first person to live to 150 is already alive today. He mentioned Ray Kurzweil, said that he pops 250 pills a day and is convinced that he's immortal, and is "not entirely nuts." [I am very skeptical that 250 pills a day is remotely sensible or useful.]

    For the last 160 years, human life expectancy has increased at about 1/4 of a year every year. He asked us to imagine that that rate improves to one year per year, or more--at that point, "if you have a good medical plan you're effectively immortal." [I questioned this in the Q&A, below.]
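
    [A minimal Perl sketch of my own, not Garreau's model, just to illustrate the arithmetic; the starting age of 30, the life expectancy of 80, and the function name are hypothetical. At a quarter-year gain per calendar year you still eventually run out of years; at a year-per-year gain the horizon recedes as fast as you approach it.]

        #!/usr/bin/perl
        # Toy illustration (mine, not Garreau's): how long a hypothetical
        # 30-year-old with a life expectancy of 80 has left, under different
        # annual gains in life expectancy.
        use strict;
        use warnings;

        sub years_remaining {
            my ($age, $expectancy, $gain_per_year) = @_;
            my $years = 0;
            while ($age < $expectancy) {
                $age++;                        # one calendar year passes
                $expectancy += $gain_per_year; # medicine pushes the horizon out
                $years++;
                # if expectancy rises as fast as we age, we never catch up to it
                return "effectively unbounded" if $years > 1000;
            }
            return "$years more years";
        }

        printf "gain of 0.25 yr/yr: %s\n", years_remaining(30, 80, 0.25);
        printf "gain of 1.00 yr/yr: %s\n", years_remaining(30, 80, 1.00);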

    Hell
    He showed a chart that was an x-axis mirror of the Heaven one, and described this as a case where technology "gets into the hands of madmen or fools." He described the Australian mousepox incident, where researchers in Australia found a way to genetically alter mousepox so that it becomes 100% fatal, destroying the immune system, so that there's no possible vaccine or prevention. This was published in a paper available to anyone, and the same thing could be done to smallpox to wipe out human beings with no defense. He said the optimistic version is something that wipes out all human life; the pessimistic version is something that wipes out all life on earth. [In my law school class, we discussed this same topic yesterday in more detail, along with a similar U.S. paper that showed how to reconstruct the polio virus.]

    The problem with both of these scenarios for Garreau is that they are both "techno-deterministic," assuming that technology is in control and we're "just along for the ride."

    Prevail
    He showed a chart with a line going in a wacky, twisty pattern. The y-axis may have been technological capacity of some sort, but the x-axis in this case couldn't have been time, unless there's time travel involved.

    Garreau said, if you were in the Dark Ages, surrounded by marauding hordes and plagues, you'd think there wasn't a good future. But in 1450 came the printing press--"a new way of storing, sharing, collecting, and distributing information," which led to the Renaissance, enlightenment, science, democracy, etc. [Some of those things were rediscoveries of advancements previously made, as Richard Carrier has pointed out. And the up-and-down of this chart and example of the Dark Ages seems to be in tension, if not in conflict, with his earlier exponential curve, though perhaps it's just a matter of scale. At the very least, however, they are reason to doubt continued growth in the short term, as is our current economic climate.]

    Garreau called the Prevail scenario more of a co-evolution scenario, where we face challenges hitting us in rapid succession, to which we quickly respond, which creates new challenges. He expressed skepticism of top-down organizations having any capacity to deal with such challenges, and instead suggested that bottom-up group behavior by humans not relying on leaders is where everything interesting will happen. He gave examples of eBay ("100 million people doing complex things without leaders"), YouTube ("no leaders there"), and Twitter ("I have no idea what it's good for, but if it flips out the Iranian government, I'm for it.") [These are all cases of bottom-up behavior facilitated by technologies that are operated by top-down corporations and subject to co-option by other top-down institutions in various ways. I'm not sure how good the YouTube example is considering that it is less profitable per dollar spent than Hulu--while some amateur content bubbles to the top and goes viral, there still seems to be more willingness to pay for professional content. Though it does get cheaper to produce professional content and there are amateurs that produce professional-quality content. And I'll probably offer to help him "get" Twitter.]

    The Prevail scenario, he said, is "a bet on humans being surprising, coming together in unpredicted ways and being unpredictably clever."

    He ended by asking, "Why have we been looking for intelligent life in the universe for decades with no success? I wonder if every intelligent species gets to the point where they start controlling their own destiny and what it means to be as good as they can get. What if everybody else has flunked? Let's not flunk. Thanks."

    Q&A
    I asked the first question, which was whether there is really so much ground for optimism about extending human lifespan when our gains have increased the median lifespan but not made recent progress on the top end--the longest-lived person on record, Jeanne Calment, died at 122 in 1997, and no one else has reached that age. He answered that this was correct, that past improvements have come from nutrition, sanitation, reducing infant mortality, and so forth, but now that we spent $15 billion to sequence the first human genome, the cost of sequencing a complete human genome is approaching $1,000, and personalized medicine is coming along, he suspects we'll find the causes of aging and have the ability to reverse it through genetic engineering.

    Prof. David Guston of CSPO asked "What's the relation between your Prevail scenario and the distribution of the success of the good stuff from GRIN technologies?" Looking at subgroups like males in post-Soviet Russia and adults in Africa, he said, things seem to be going in the wrong direction. Garreau answered that this is one of the nightmare scenarios--that humans split into multiple species, such as enhanced, naturals, and the rest. The enhanced are those that keep upgrading every six months. The naturals are those with access to enhancements that "choose not to indulge, like today's vegetarians who are so because of ethical or aesthetic reasons." The rest are those who don't have access to enhancements, and have envy for and despise those who do. "When you have more than one species competing for the same ecological niche," he said, "that ends up badly for somebody." But, he said, that's assuming a rich-get-richer, poor-get-poorer belief, "a hallmark of the industrial age." Suppose that instead of distributing scarcity, we are distributing abundance. He said that transplanted hearts haven't become cheap because they aren't abundant, but if we can create new organs in the body or in the lab in a manner that would benefit from mass production, it could become cheap. He pointed out that cell phones represent "the fastest update of technology in human history," going from zero to one phone for every two people in 26 years, and adapted to new uses in the developing world faster than in the developed world. He brought up the possibility of the developing world "leapfrogging" the developed world, "the way Europeans leapfrogged the Arab world a thousand years ago, when they were the leaders in science, math, and everything else." [I think this is a very interesting possibility--the lack of sunk costs in existing out-of-date infrastructure, the lack of stable, firmly established institutions are, I think, likely to make the developing world a chaotic experimental laboratory for emerging technologies.]

    Prof. Gary Marchant of the Center for the Study of Law, Science, and Technology then said, "I'm worried about the bottom-up--it also gave us witch trials, Girls Gone Wild, and the Teabaggers." Garreau said his Prevail scenario shows "a shocking faith in human nature--a belief in millions of small miracles," but again said "I'm not predicting it, but I'm rooting for it."

    Prof. Farzad Mahootian and Prof. Cynthia Selin of CSPO asked a pair of related questions about work on public deliberation and trying to extend decision-making to broader audiences: what did Garreau think about "DARPA driving this or being porous to any kind of public deliberation or extended decision-making?" Garreau responded that "The last thing in the world that I want to do is leave this up to DARPA. The Hell scenario could happen. Top-down hierarchical decision-making is too slow. Anyone waiting for the chairman of the House finance committee to save us is pathetic. Humans in general have been pulling ashes out of the fire by the skin of their teeth for quite a while; and Americans in particular have been at the forefront of change for 400 years and have cultural optimism about change." [I think these questions seemed to presuppose top-down thinking in a way that Garreau is challenging.]

    He said he had reported a few years ago about the maquiladoras in Mexico and called it a "revolution," to which he got responses from Mexicans saying, "we're not very fond of revolutions, it was very messy and we didn't like it," and asking him to use a different word. By contrast, he said, "Americans view revolutions fondly, and think they're cool, and look forward to it." [Though there's also a strange conservatism here that looks fondly upon a nonexistent ideal past.] With respect to governance, he said he's interested in looking for alternate forms of governance because "Washington D.C. can't conceivably respond fast enough. We've got a long way to go and a short time to get there." [Quoting the 'Smokey and the Bandit' theme song.]

    He went on to say, "I don't necessarily think that all wisdom is based here in America. Other places will come up with dramatically different governance." He talked about the possibility of India, which wants to get cheaper drugs out to the masses, taking an approach different from FDA-style regulation (he called the FDA "a hopelessly dysfunctional organization that takes forever to produce abysmal results"). "Let's say the people of India were willing to accept a few casualties to produce a faster, better, cheaper cure for malaria, on the Microsoft model--get a 'good enough' version, send it out and see how many computers die. Suppose you did that with drugs, and were willing to accept 10,000 or 100,000 casualties if the payoff was curing malaria once and for all among a billion people. That would be an interesting development." By contrast, he said, "The French are convinced they can do it the opposite way, with top-down governance. Glad to see somebody's trying that. I'll be amazed if it works." His view, he said, was "try everything, see what sticks, and fast." [This has historically been the intent of the U.S. federal system, to allow the individual states to experiment with different rules to see what works before or in lieu of federal rules. Large corporations that operate across states, however, which have extensive lobbying power, push for federal regulations to pre-empt state rules, so that they don't have to deal with the complexity.]

    There were a few more questions, one of which was whether anyone besides DARPA was doing things like this. Garreau said certainly, and pointed to both conventional pharmaceutical companies and startups working to try to cure addiction and obesity, as well as do memory enhancement, like Eric Kandel's Memory Pharmaceuticals. He talked about an Israeli company that has built a robotic arm which provides touch feedback, with the goal of being able to replace whatever functionality someone has lost, including abilities like throwing a Major League fastball or playing the piano professionally.

    Prof. Selin reported a conversation she had with people at the law school about enhancement and whether it would affect application procedures. They indicated that it wouldn't, that enhancement was no different to them than giving piano lessons to children or their having the benefit of a good upbringing. Garreau commented that his latest client is the NFL, and observed that body building has already divided into two leagues, the tested and the untested. The tested have to be free of drugs, untested is anything goes. He asked, "can you imagine this bifurcation in other sports? How far back do you want to back out technology to get to 'natural'? Imagine a shoeless football league." He noted that one person suggested that football minus technology is rugby. [This reminded me of the old Saturday Night Live skit about the "All Drug Olympics."]

    All in all, it was an interesting talk that had some overlap with things that I'm very interested in pursuing in my program, especially regarding top-down vs. bottom-up organizational structures. Afterward, I spoke briefly with Garreau about how bottom-up skeptical organizations are proliferating and top-down skeptical organizations are trying to capitalize on it, and I wondered to what extent the new creations of bottom-up organizations tend to get co-opted and controlled by top-down organizations in the end. In that regard, I also talked to him a bit about Robert Neuwirth's work on "shadow cities" and the Kowloon Walled City, where new forms of regulatory order arise in jurisdictional no-man's-lands (I could also have mentioned pirate codes). Those cases fall between the cracks for geographical reasons, while the cases that are occurring with regard to GRIN technologies fall between the cracks for temporal reasons, but it seems to me there's still the possibility that the old-style institutions will catch up and take control.

    UPDATE: As a postscript, I recently listened to the episode of the Philosophy Bites podcast on human enhancement with philosopher Allen Buchanan, who was at the University of Arizona when I went to grad school there. Good stuff.