
Sunday, August 04, 2024

Tips on using OpenBSD's pledge and unveil in perl scripts

OpenBSD 5.9 (the current release as of this post is 7.5) introduced the "pledge" system call, and 6.4 introduced the "unveil" system call; together they provide more granular control over the system access of running processes, enforcing least privilege.  When a program calls "pledge", it provides a list of categories of system calls (called "promises") that it plans to make during the life of the running process (children have to make their own pledges and are not restricted), and any attempt to make a call outside of those categories is blocked and the process is killed. Subsequent calls to pledge cannot add new categories, only remove them, so access can become more restrictive but never less.

  "Unveil," by contrast, selectively exposes parts of the file system, by file path, with specific access, and the rest of the file system is correspondingly "veiled" or blocked from access. Successive calls to unveil can expand or override previous ones, expanding access to the file system, adding write and create permissions where there was previously read only, but only until unveil is called with no arguments, which locks the current state in place. Further attempts to call unveil after that result in a violation.

Violations of pledges or attempts to access file paths that have not been unveiled show up in the process accounting logs with the process flags "P" or "U", respectively.  (My "reportnew" log monitoring script knows how to monitor process accounting logs and can easily be set up to report on such violations.)

Perl scripts on OpenBSD can also use pledge and unveil, with two modules provided in the base operating system, "OpenBSD::Pledge" and "OpenBSD::Unveil".  I've been adding this functionality to several of my commonly used scripts and have learned a few tips that I'd like to share.
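
Here's a minimal sketch of what using the two modules together looks like; the file paths and promise list are illustrative placeholders rather than anything from my actual scripts:

#!/usr/bin/perl
# Minimal sketch of OpenBSD::Pledge and OpenBSD::Unveil usage; the paths and
# promise list are illustrative placeholders.
use strict;
use warnings;
use OpenBSD::Pledge;
use OpenBSD::Unveil;

# "stdio" is implied; "unveil" is needed because we call unveil() below.
pledge(qw(rpath wpath cpath unveil)) or die "pledge: $!";

unveil('/etc/myapp.conf', 'r') or die "unveil: $!";
unveil('/var/myapp', 'rwc')    or die "unveil: $!";
unveil()                       or die "unveil lock: $!";  # lock the filesystem view

# Promises can be dropped but not added, so shed "unveil" now that we're done with it.
pledge(qw(rpath wpath cpath)) or die "pledge: $!";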

Pledge:

* Check your call to pledge for errors (see the sketch after this list).  If you typo the name of a promise (category of system calls), or you pass pledge a single string of comma-separated promises instead of an array or list, the call will fail and nothing will be pledged.

* If you don't have any idea what promises are required, just use "error".  With the error promise, instead of blocking the system call and killing the process, the result is logged to /var/log/messages and you can see what promises are required.

* The "stdio" promise is always included with OpenBSD::Pledge, so you don't need to list it.

* The "unveil" promise is required if you intend to use OpenBSD::Unveil.

* Calls to exec or system require the "proc" and "exec" promises; the new processes created as a result are not restricted and need to make their own use of pledge and unveil.  (Note: this means that if you are calling a system command that writes to a file, but your script doesn't otherwise write to files, you do not need to pledge the "wpath" promise in your script.)

* If you otherwise fork a child process (e.g., explicitly using "fork" or Parallel::ForkManager or implicitly forking a child process using "open" to read from or write to a command), the promises pledged by the parent process are carried over to the child, which can then restrict them further. (Hat tip to Bryan Steele, who pointed this out on Bluesky without specifically referring to the Perl context.)

* If you use the DBI perl module with mariadb and are accessing a database through a named pipe on the same server, you'll need to pledge the "unix", "inet", and "prot_exec" promises. (This works even from a chroot jail if the named pipe or socket is a hard link from the jail.)

* This isn't a tip but an observation: if you promise "proc" but not "exec," your call to system will fail, but your process will not be killed and the script will continue running.
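
Here's the sketch referred to in the first tip above, illustrating both the error checking and the "error" promise; the promise lists are just examples:

# Illustration of the pledge error-handling tips above; the promises are examples.
use OpenBSD::Pledge;

# Wrong: this passes one unknown promise named "rpath,wpath", so pledge
# fails and nothing at all is pledged -- always check the return value.
pledge("rpath,wpath") or warn "pledge failed: $!";

# Right: pass a list.  "stdio" is implied and doesn't need to be listed.
# While working out what a script needs, include "error" so violations are
# logged to /var/log/messages instead of killing the process; drop it once
# the promise list has settled down.
pledge(qw(error rpath wpath)) or die "pledge: $!";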

Unveil:

* If you make use of other perl modules in your code with "use", they are loaded prior to your call to unveil and so you don't need to unveil directories like /usr/libdata/perl5 in order to use them. The exception is perl modules that include compiled shared objects (".so"), or which use "require" on other modules (loading them at runtime), in which case you do need to unveil such directories, but only with "r" permission.

* If you use the DBI perl module with mariadb, you will need to unveil /var/run/mysql with "rw" and /usr/lib and /usr/local/lib with "rx".

* If you use calls to "system" or "open" which use pipes, globs, or file redirection, you need to unveil "/bin/sh" with "x" permission. You may be able to rewrite your code to avoid the requirement--can you call "system" with a command name and list of arguments rather than a string, and do any processing you need in your program instead of with the shell?

* If you use calls to "system" to execute system commands, you need to unveil them with "x" permission but in most cases you don't need to include "r".

* It is often much easier to unveil a directory than individual files; if you plan to check for the existence of a file and then create it if it doesn't exist, you need "rwc" on the containing directory (see the sketch after this list).

* One of the biggest challenges is sometimes finding the source of an unveil violation. Unveiling "/" with various permissions to see whether the violation goes away, then removing that and testing individual directories under the root directory by trial and error, can help track it down. That's how I first found the need to unveil "/bin/sh".
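
And here's the sketch referred to above, pulling several of these unveil tips together; the paths and the choice of gzip as the external command are illustrative assumptions, not from a real script:

# Illustrative unveil setup; the paths and the gzip command are examples only.
use OpenBSD::Pledge;
use OpenBSD::Unveil;

pledge(qw(rpath wpath cpath proc exec unveil)) or die "pledge: $!";

unveil('/var/myapp', 'rwc')  or die "unveil: $!";  # check-then-create needs "rwc" on the directory
unveil('/usr/bin/gzip', 'x') or die "unveil: $!";  # external command: "x" only, no "r" needed
unveil('/bin/sh', 'x')       or die "unveil: $!";  # only needed if system() is given a shell string
unveil()                     or die "unveil lock: $!";

# The list form avoids the shell entirely, so the /bin/sh unveil above could then be dropped.
system('/usr/bin/gzip', '-9', '/var/myapp/data') == 0 or warn "gzip failed: $?";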


Finally, if you are writing perl modules, it's helpful to document which promises need to be pledged and which files and directories need to be unveiled in the calling scripts in order for them to function. It would be inappropriate to pledge or unveil within the module itself, except in a context like a forked child process. I've done this with my Signify.pm wrapper for the OpenBSD "signify" command, which signs and verifies files with detached signatures or gzip archives with embedded signatures in the gzip header comments.
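
For instance, a module's POD could carry a short section along these lines (a hypothetical sketch, not the actual documentation from Signify.pm):

=head1 PLEDGE AND UNVEIL REQUIREMENTS

Calling scripts that use pledge/unveil will need something like:

    pledge(qw(rpath proc exec))           # the module runs an external command
    unveil('/usr/bin/some-command', 'x')  # hypothetical path to that command
    unveil($input_file, 'r')              # any file handed to the module

=cut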

If you've made use of pledge and unveil--in perl scripts or otherwise--what lessons have you learned?


Thursday, June 15, 2023

AI vs. CEOs on Destroying the World

 After seeing a CNN headline "Exclusive: 42% of CEOs say AI could destroy humanity in five to ten years," I thought it was only fair to get an AI opinion in response. Turns out, 50% of AIs consulted said CEOs of large corporations could destroy the world in five to ten years.

Google Bard:

How likely is it that CEOs of large corporations will destroy the world in the next five to ten years?

The likelihood of CEOs of large corporations destroying the world in the next five to ten years is low, but not impossible. The biggest threats to the planet come from climate change, pollution, and resource depletion. These are all problems that are caused by the activities of large corporations, and if they are not addressed, they could lead to catastrophic consequences.

However, there are also many CEOs who are committed to sustainability and environmental protection. These CEOs are working to develop new technologies, products, and practices that will help to reduce our impact on the planet. If these efforts are successful, it is possible to avoid the worst-case scenarios and create a more sustainable future.

Ultimately, the fate of the planet will depend on the actions of both CEOs and consumers. CEOs have the power to make decisions that will have a big impact on the environment. Consumers have the power to choose products and services that are produced in a sustainable way. If we all work together, we can create a future where the planet is healthy and prosperous for all.

Here are some of the things that CEOs can do to help protect the planet:

  • Invest in renewable energy and energy efficiency
  • Reduce their reliance on fossil fuels
  • Eliminate waste
  • Protect forests and other natural resources
  • Promote sustainable agriculture
  • Support policies that protect the environment

Here are some of the things that consumers can do to help protect the planet:

  • Choose products and services that are produced in a sustainable way
  • Reduce their consumption of energy and resources
  • Recycle and compost
  • Support sustainable businesses
  • Get involved in environmental activism

By working together, we can create a more sustainable future for the planet.

ChatGPT:

How likely is it that CEOs of large corporations will destroy the world in the next five to ten years?

It is highly unlikely that CEOs of large corporations will intentionally "destroy the world" in the next five to ten years or at any other time. While it is true that some corporations and business leaders have engaged in unethical practices or made decisions that have had negative consequences for the environment or society, it is important to note that the vast majority of CEOs strive to run their companies responsibly and ethically.

In recent years, there has been an increasing focus on corporate social responsibility and sustainability, with many companies adopting measures to reduce their environmental impact and contribute positively to society. Additionally, regulatory bodies and public scrutiny play a role in holding corporations accountable for their actions.

It is essential to encourage responsible business practices, promote transparency, and hold corporations accountable for any harmful actions. However, it is overly simplistic and unfair to attribute the potential destruction of the world solely to CEOs of large corporations. Addressing global challenges requires collaborative efforts from governments, businesses, communities, and individuals working together towards sustainable solutions.

Saturday, January 21, 2023

Some nonsense from ChatGPT about file system immutability flags in OpenBSD

 I was thinking about writing a blog post about file system immutability flags in OpenBSD, and thought I'd ask ChatGPT to write it for me. It appears to have gleaned its content from OpenBSD man pages, but it is highly misleading and inaccurate, misrepresenting the key distinction between system and user immutability flags.

What it got right is that only the superuser (root) can set and unset the system immutability flag (schg/noschg) while users (if they own the file) can set and unset the user immutability flag (uchg/nouchg). But either flag can be set or unset on any kind of file.

The user immutability flag can be unset (nouchg) by either the superuser or the owner of the file regardless of the system security run level (-1 to 2), but the system immutability flag can only be unset (noschg) by the superuser when the system security run level is 0 (insecure mode) or -1 (permanently insecure mode).

During boot, the system normally starts in single-user mode at security run level 0. The init process raises the run level to 1 (secure mode) when booting to multi-user mode, and to 2 (highly secure mode) if the rc.securelevel config file so specifies. The system secure level can also be raised if the superuser changes it with the sysctl command, setting the kern.securelevel variable. The permanently insecure mode (-1) can be set with sysctl only while in insecure mode, which will cause the init process to not raise the security level to 1 during boot to multi-user mode.

The man page for securelevel gives an overview of the restrictions at each security level. I run my systems at secure level 2, highly secure mode. Secure mode (secure level = 1) and above prevent lowering the secure level except by the init process (when the system is shutdown back to single user mode), do not allow /dev/mem or /dev/kmem to be opened, make raw disk devices read only, and disallow unsetting of system immutability flags (or unsetting the related system append-only flags, sappnd, using nosappnd), along with a few other restrictions. Highly secure mode (securelevel = 2) also prevents setting the time backward or close to overflow, and prevents changing host firewall rules (pf filter and NAT rules), though IP addresses may still be added or removed from tables. The most significant difference between the system and user immutability flags is not what files they are set on, but that the user immutability flags can be turned off at any time by the file owner or superuser, while the system immutability flags can only be changed while in insecure modes, i.e., after a shutdown from multi-user operation.
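
To make that distinction concrete, here is a minimal sketch (my own illustration, not from any of my scripts; the file path is a made-up example) of how the flags behave, driven from perl with the standard chflags and sysctl commands:

#!/usr/bin/perl
# Sketch of the uchg/schg distinction; /etc/example.conf is a made-up path.
use strict;
use warnings;

my $file = '/etc/example.conf';

# Either flag can be set on any file, given sufficient privilege.
system('chflags', 'uchg', $file);   # owner or root
system('chflags', 'schg', $file);   # root only

# The owner or root can drop uchg at any securelevel.
system('chflags', 'nouchg', $file);

# Dropping schg only works at securelevel 0 or -1.
chomp(my $level = `sysctl -n kern.securelevel`);
if ($level >= 1) {
    print "securelevel=$level: chflags noschg $file will be refused\n";
} else {
    system('chflags', 'noschg', $file);
}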

I use system file immutability flags on most configuration files, system binaries, and system libraries as an obstacle to an attacker in the event of a compromise and escalation of privileges. If an attacker has console access to a system, they can bypass these flags by shutting down the system and unsetting the flags. If I left certain config files without immutability flags set, an attacker could modify one of the config files that leads to execution during system startup, inserting commands to unset the flags, and then reboot the system so that those commands execute while it is in insecure mode.

These flags can get in the way and cause problems without an efficient way to manage them, as they need to be unset in order to do system patches and updates--it's bad news if you try to do a system upgrade while many files are set with immutability flags. They also block features like OpenBSD's kernel address randomized link (KARL) and library reordering address space layout randomization (ASLR) features, which relink the kernel and certain key libraries on reboot to make blind return-oriented programming (BROP) attacks more difficult. I use scripts called syslock/sysunlock to manage the immutability flags. These scripts have a config file (kept immutable, of course) that specifies which files and directory contents to set immutable, and defines groups so that I can unlock what I need at a particular time with a single command. When I want to allow KARL and library ASLR to occur, I do a sysunlock -s (unlock system files) while in single user mode and then boot. I've also defined groups for things like "everything in /etc", "everything related to DNS", etc.

My rc.securelevel config file contains a command to sleep for a period of time, then do a syslock -s after raising the system security level, giving the system time to do the relinking, so I don't need to issue a command to re-lock those files. This is a process I also follow when installing kernel patches, as syspatch fails to do kernel relinking when in single-user mode. That is, I boot to single-user mode, unlock the system files with "sysunlock -s", then exit back to multi-user mode and the KARL process (/usr/libexec/reorder_kernel) and library ASLR (part of /etc/rc) run without error.

syslock/sysunlock may be found at https://www.discord.org/lippard/software/ and on Github at https://github.com/lippard661/syslock

Anyway, after that long introduction, here's what ChatGPT had to say about all this; my prompts are in red:

write a few paragraphs explaining immutable flags in OpenBSD including the commands for setting them and the difference between system and user immutable flags

In OpenBSD, immutable flags are used to protect files from being modified or deleted. These flags can be set on both system and user files, and are useful for preventing accidental or malicious changes to important system files.

To set an immutable flag on a file, the "chflags" command can be used. The command takes the form "chflags immutable [filename]" and can be used to set the flag on a single file or multiple files at once. To remove the immutable flag, the command "chflags noschg [filename]" can be used.

The difference between system and user immutable flags is the level of protection they provide. System immutable flags are set on system files and are only modifiable by the superuser. User immutable flags are set on user files and can be modified by the owner of the file. This allows users to protect their own files, while also allowing the system administrator to protect important system files.

It's important to note that while immutable flags provide a level of protection, they should not be considered as a replacement for proper file permissions and security practices. It's still important to use appropriate permissions to control access to files and to keep your system updated to protect against known vulnerabilities.

Can I set the user immutable flag on a system file or the system immutable flag on a user file?

In OpenBSD, the system immutable flag can only be set on system files, and the user immutable flag can only be set on user files. Attempting to set a user immutable flag on a system file or a system immutable flag on a user file will result in an error.

This is because the system immutable flag is intended for use on system files that should not be modified by anyone other than the superuser. Setting this flag on a user file would not provide any additional security, as the user already has full control over their own files. Similarly, the user immutable flag is intended for use on user files that should not be modified by the owner. Setting this flag on a system file would not provide any additional security, as the superuser already has the ability to modify system files.

It's important to use the appropriate immutable flag for the type of file you are working with to ensure that the file is protected as intended.

[UPDATE, 22 Jan 2023:]

That is incorrect. The system immutable flag can be set on user files and the user immutable flag can be set on system files. The difference is not what files they can be used on, but that the system immutable flag can only be unset by the superuser when the system is in insecure mode, while the user immutable flag can be unset by the file owner or superuser at any system security run level.


I apologize, you are correct. The system immutable flag can be set on user files, and the user immutable flag can be set on system files. The difference between the two flags is not the type of file they can be applied to, but rather the level of protection they provide and who can unset them.

The system immutable flag provides a higher level of protection and can only be unset by the superuser when the system is in insecure mode, This means that once the system flag is set, the file cannot be modified or deleted without booting in insecure mode.

On the other hand, the user immutable flag can be unset by the file owner or the superuser at any system security run level. This means that the file can be modified or deleted by the owner of the file or superuser at any time.

Again, I apologize for the confusion in my previous answer. Thank you for bringing this to my attention.

Wednesday, January 01, 2020

Books read in 2019

Not much blogging going on here still, but here's my annual list of books read for 2019.
  • Graham T. Allison, Destined for War: Can America and China Escape Thucydides's Trap?
  • Ross Anderson, Security Engineering (3rd edition, draft chapters)
  • Herbert Asbury, The Barbary Coast: An Informal History of the San Francisco Underworld
  • Heidi Blake, From Russia with Blood: The Kremlin's Ruthless Assassination Program and Vladimir Putin's Secret War on the West
  • Rutger Bregman, Utopia for Realists: How We Can Build the Ideal World
  • Oliver Bullough, Moneyland: The Inside Story of the Crooks and Kleptocrats Who Rule the World
  • Bryan Caplan and Zach Weinersmith, Open Borders: The Science and Ethics of Immigration
  • C.J. Chivers, The Fighters: Americans in Combat
  • Sefton Delmer, Black Boomerang
  • Nina J. Easton, Gang of Five: Leaders at the Center of the Conservative Crusade (bio of Bill Kristol, Ralph Reed, Clint Bolick, Grover Norquist, and David McIntosh)
  • Ronan Farrow, Catch and Kill: Lies, Spies, and a Conspiracy to Protect Predators
  • Ronan Farrow, War on Peace: The End of Diplomacy and the Decline of American Influence
  • Ian Frisch, Magic is Dead: My Journey into the World's Most Secretive Society of Magicians
  • Anand Giridharadas, Winners Take All: The Elite Charade of Changing the World
  • Reba Wells Grandrud, Sunnyslope (Images of America series)
  • Andy Greenberg, Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin's Most Dangerous Hackers
  • Jodi Kantor and Megan Twohey, She Said: Breaking the Sexual Harassment Story That Helped Ignite a Movement
  • Stephen Kinzer, Overthrow: America's Century of Regime Change From Hawaii to Iraq
  • Michael Lewis, Flash Boys: A Wall Street Revolt
  • Jonathan Lusthaus, Industry of Anonymity: Inside the Business of Cybercrime
  • Ben MacIntyre, A Spy Among Friends: Kim Philby and the Great Betrayal
  • Joseph Menn, Cult of the Dead Cow: How the Original Hacking Supergroup Might Just Save the World
  • Anna Merlan, Republic of Lies: American Conspiracy Theorists and Their Surprising Rise to Power
  • Jefferson Morley, Our Man in Mexico: Winston Scott and the Hidden History of the CIA
  • Sarah T. Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media
  • Hans Rosling, with Ola Rosling and Anna Rosling Rönnlund, Factfulness: Ten Reasons We're Wrong About the World--and Why Things Are Better Than You Think
  • Russell Shorto, Amsterdam: A History of the World's Most Liberal City
  • Alexander Stille, The Sack of Rome: Media + Money + Celebrity = Power = Silvio Berlusconi
  • Jamie Susskind, Future Politics: Living Together in a World Transformed by Tech
  • Erik Van De Sandt, Deviant Security: The Technical Computer Security Practices of Cyber Criminals (Ph.D. thesis)
  • Tom Wolfe, The Right Stuff
  • Tim Wu, The Attention Merchants: The Epic Scramble to Get Inside Our Heads
Top for 2019: Bullough, Farrow (Catch and Kill), Wu, Chivers, Rosling, Greenberg, Blake, Allison, Caplan and Weinersmith, Kinzer, Delmer.

I started the following books I expect to finish in early 2020:

Myke Cole, Legion versus Phalanx: The Epic Struggle for Infantry Supremacy in the Ancient World
Walter LaFeber, Inevitable Revolutions: The United States in Central America (2nd edition)
Brad Smith and Carol Anne Browne, Tools and Weapons: The Promise and Peril of the Digital Age
Peter H. Wilson, The Holy Roman Empire: A Thousand Years of Europe's History

Two books I preordered and look forward to reading in 2020:

Anna Wiener, Uncanny Valley: A Memoir (due out January 14)
Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare (due out April 21)

(Previously: 2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005.)

Sunday, March 12, 2017

Rep. Tom Graves' Active Cyber Defense Certainty Act

Rep. Tom Graves (R-GA14) has circulated a draft bill, the "Active Cyber Defense Certainty Act" (or ACDC Act), which amends the Computer Fraud and Abuse Act (18 USC 1030) to legalize certain forms of "hacking back" for the purposes of collecting information about an attacker in order to facilitate criminal prosecution or other countermeasures.

The bill as it currently stands is not a good bill, for the following reasons:

1. It ignores the recommendations in a recent report, "Into the Gray Zone: Active Defense by the Private Sector Against Cyber Threats," from the Center for Cyber & Homeland Security at the George Washington University. This report distinguishes between low-risk active defense activities within the boundaries of the defender's own network, such as the use of deceptive technology (honeypots, honeynets, tarpitting), the use of beaconing technology to provide notifications in case of intrusions, and research in deep and dark web underground sites, on the one hand, and higher-risk active defense activities such as botnet takedowns, sanctions and indictments, white-hat ransomware, and rescue missions to recover stolen assets, on the other. One of the report's key questions for an active defense measure is "is the active defense measure authorized, whether by an oversight body, law enforcement, or the owner of the affected network?"  This bill creates no mechanism for providing particular authorizations (also see points 2 and 3, below).

The "Into the Gray Zone" report also suggests that if a decision is made to authorize the accessing of a remote system (an attacker's system is almost always the system of another victim) for information collection purposes, it should be limited to cases in which a defender can "assert a positive identification of the hostile actor with near certainty, relying on multiple credible attribution methods." This, however, seems too strict a condition to impose.

Finally, however, this report advises that, even without a change in the law, DOJ "should exercise greater discretion in choosing when to enforce the CFAA and other relevant laws, and should provide clarity about how it intends to exercise such discretion. Companies engaging in activities that may push the limits of the law, but are intended to defend corporate data or end a malicious attack against a private server should not be prioritized for investigation or prosecution." (p. 28) The report cites active defense activity by Google in response to hacking from China as an example where there was no prosecution or sanction for accessing remote systems being used by attackers. This proposal seems to me a wiser course of action than adopting this bill. (Also see point 5, below.)

2. It disregards the recommendations from the Center for Strategic and International Studies Cyber Policy Task Force on the subject of active defense. The CSIS Cyber Policy Task Force report contains a short three-paragraph section on active defense (p. 14) which throws cold water on the idea, calling active defense "at best a stopgap measure, intended to address companies’ frustration over the seeming impunity of transborder criminals" and affirming that only governments should be authorized to engage in activities on the high-risk side, and that it is their responsibility to coordinate and engage in such activity. It does offer up a possibility for a proposal that allows accessing remote systems by private parties in its last sentence: "Additionally, the administration could consider measures, carried out with the prior approval of federal law enforcement agencies (most likely requiring a warrant to enter a third-party network) to recover or delete stolen data stored on servers or networks under U.S. jurisdiction." This bill does not require approval from federal law enforcement agencies or a warrant for accessing remote systems or networks, and jurisdiction is only implicit.

3. While the bill resembles a proposal made for the Mercatus Center at George Mason University by Anthony Glosson, it adopts the carrot element of that proposal while neglecting the stick. Glosson's proposal, like this bill, is that private parties should be permitted to access remote attacking systems in order to collect information ("observation and access"), but not to engage in "disruption and destruction." However, Glosson suggests three requirements that must be met for such access and information collection to be permissible, with "stiff statutory damages" imposed when they are not. The bill omits any statutory damages and imposes only one of Glosson's three requirements (though a previous version of the bill included the second). Glosson's three requirements are (1) that the defender's actions are limited to observation and access, (2) that the attacker was routing traffic through the defender's network at the time of the active defense action, and (3) that obtaining the cooperation of the owner of the attacking system at the time of the attack was impractical.  This third criterion is a critical one, and a good way to see the undesirability of this bill is to imagine that you are the owner of an intermediary system used by the attacker to go after a third party--what would you want that third party to be able to do with your system without your permission or consent?

4. The bill appears to have been somewhat hastily written and sloppily updated, failing to update a persistent typographical error ("the victim' [sic] own network") through its revisions, and the current version seems to be somewhat incoherent. In its current form it is unlikely to meet its short title objective of encouraging certainty.

The current version of the bill makes it legal for a victim of a "persistent unauthorized intrusion" to access "without authorization the computer of the attacker to the victim' [sic] own network to gather information in order to establish attribution of criminal activity to share with law enforcement or to disrupt continued unauthorized activity against the victim's own network," so long as this does not destroy information on the system, cause physical injury, or create a threat to public health or safety.

The phrase "without authorization the computer of the attacker to the victim's own network" doesn't make sense [it should say "attacker of" or "attacker against"], and appears to be the result of poor editing from the prior version of the bill, which made permissible accessing "without authorization a computer connected to the victim' [sic] own network", with the rest of the text remaining the same. This prior wording apparently attempted to thread the needle of the GWU "Into the Gray Zone" report by defining the accessing of a remote system as being within the boundaries of the defender's own network, and thus on the low-risk side of the equation. However, the wording "connected to the victim's own network" is ambiguous and unclear--does it mean directly connected (e.g., to a WiFi access point or LAN port on a switch), in which case this is much less useful, or does it mean any active session flow of packets over the Internet into the victim's network (similar to Glosson's second requirement)? The latter is the more reasonable and charitable interpretation, but it should be made more explicit and could perhaps be too strict--what happens if the attacker disconnects just moments before the active defense activity begins?

Left unsaid in the bill is what can be done with information collected from the attacking system, which might include information belonging to other victims, the exposure of which could cause harm. Presumably other remedies from other statutes would exist if a defender engaged in such exposure, but it seems to me that this bill would be improved by making the parameters of permissible action more explicit and restrictive. Perhaps the current wording limits actions to information sharing with law enforcement and reconfiguration of one's own defensive systems based on the collected information, but "to disrupt continued unauthorized activity against the victim's own network" is a purpose that could be achieved by a much broader set of actions, which could cause harm to other victims.

5. It's not clear that the bill is necessary, given that security researchers are today (as they have been for years) taking steps to access infrastructure used by malicious cyber threat actors in order to monitor their activity and collect intelligence information. They are already making legal and regulatory risk decisions which incorporate the existing CFAA, and deciding to proceed anyway.

If this bill is to move forward, it needs some additional work.

(News story on the bill: Michael Mimoso, "Active Defense Bill Raises Concerns of Potential Consequences," ThreatPost.
Further reading: Paul Rosenzweig, "A Typology for Evaluating Active Cyber Defenses," Lawfare blog)

UPDATE (March 14, 2017): Robert Chesney wrote a good critique of the bill at the Lawfare blog, "Legislative Hackback: Notes on the Active Cyber Defense Certainty Act discussion draft," in which he points out that the word "persistent" is undefined and vague, notes that "intrusion" excludes distributed denial of service attacks from permissible cases of response under this bill, and wisely notes that there may be multiple computers in an attack chain used by the attacker, while the bill is written as though there is only one.  (It is also noteworthy that an attacking IP could be a firewall in front of an attacking machine, and a response attempting to connect to that IP could be redirected to a completely different system.)  Chesney also questions whether destroying information is the right limit on responsive activity, as opposed to altering information (such as system configurations). He also notes that the restrictions for destruction, physical injury, and threats to public health and safety are probably insufficient, noting as I did above that there could be other forms of harm from disseminating confidential information discovered on the attacking system.

I think a more interesting bill that would create incentives for companies to invest in security and to appropriately share information about attacks (rather than trying to hide it) would be a bill that created a safe harbor or liability limits for a company whose systems are used to attack third parties, if they have taken certain precautionary measures (such as having patched all known vulnerabilities more than 30 days old, and having a continuous monitoring program) and if they also share in a timely manner information about their breach.

UPDATE (May 25, 2017): Rep. Graves has released a version 2.0 of his bill which is vastly improved, addressing almost all of my concerns above. The new Sec. 2 of the bill puts the use of beaconing technology on a sound legal footing, consistent with the recommendations of the "Into the Gray Zone" report. The new Sec. 4 of the bill requires notification of the FBI; while that isn't notification of, or deferral to, organizations that have their own cyber defense teams to protect and investigate their own compromised infrastructure, it might effectively serve the same purpose, and it also provides a deterrent to irresponsible active defense.  The core of the former bill, Sec. 3, has been revised to limit what can be done, so that taking or exposing content on the attacker's machine that belongs to other parties would no longer be permissible. And there is also a new Sec. 5 of the bill, which sunsets it after two years. I cautiously support the new bill as a potentially useful experiment.

UPDATE (October 14, 2017): A new version of the bill was released this week which has further improvements. Instead of just creating an exemption to the CFAA, it creates a defense to a criminal charge, and makes clear that it is not a defense for civil liability. This means if you are within the bounds of the new rules accessing the systems of a third party which is another victim of the attacker, you won't go to jail for it, but you could still be successfully sued for damages by that third party. The new version of the bill also lists a few more things which you are NOT permitted to do in order to use the defense, and it requires that the FBI create a program for receiving advance notices from individuals and organizations that intend to use these measures, as well as a requirement for an annual assessment of this legislation's effectiveness.

UPDATE (February 2, 2018): There are still a few issues with the current version of the Graves bill. (1) It doesn't require defenders to document and disclose actions taken against systems not owned by the attacker to the owners of those systems. (2) It places no limits on what vulnerabilities may be exploited on intermediary or attacker systems. (3) It allows destructive actions against information which belongs to the defender, as well as against any information or system which belongs to the attacker. (4) It does not limit the targets to systems within U.S. jurisdiction, nor does it require any judicial approval. Attacks on systems outside U.S. jurisdiction could result in state-sponsored blowback. (5) The exception to permitted activity for any action which "intentionally results in intrusive or remote access into an intermediary's computer" seems at odds with the overall proposal, since 90%+ of the time the systems used by attackers will belong to an intermediary. (6) Sec. 5's requirement that the FBI be notified and presented with various pieces of information prior to the active defense seems both too strict and too loose. Too strict in that it doesn't allow pre-certification and must occur in the course of an attack; too loose in that it requires only that the FBI acknowledge receipt before proceeding, not any actual approval or certification, and there's a loophole in one of the required pieces of information to be given to the FBI, which is any other information requested by the FBI for the purposes of oversight. Since all the active defender requires is acknowledgment of receipt, if the FBI doesn't request any such further information as part of that acknowledgement, the defender is good to go immediately at that point before any further information is provided. Sec. 5 is kind of a fake certification process--there is no actual certification or validation process that must occur.

Thursday, February 16, 2017

Confusing the two Trump cybersecurity executive orders

In Andy Greenberg's Wired article on February 9, 2017, "Trump Cybersecurity Chief Could Be a 'Voice of Reason'," he writes:
But when Trump’s draft executive order on cybersecurity emerged last week, it surprised the cybersecurity world by hewing closely to the recommendations of bipartisan experts—including one commission assembled by the Obama administration.
The described timing and the link both refer to the original draft cybersecurity executive order, which does not at all resemble the recommendations of Obama's Commission on Enhancing National Cybersecurity or the recommendations of the Center for Strategic and International Studies Cyber Policy Task Force, both of which included input from large numbers of security experts. Contrary to what Greenberg says, the executive order he refers to was widely criticized on a number of grounds, including that it was incredibly vague and high-level, that it specified an extremely short time frame for its reviews, and that it seemed to think it was a good idea to collect information about major U.S. vulnerabilities and defenses into one place and put it into the hands of then-National Security Advisor Michael T. Flynn. That original version of the executive order resembled the Trump campaign's website policy proposal on cybersecurity.

The positive remarks, instead, were for a revised version of the cybersecurity executive order which was verbally described to reporters on the morning of January 31, the day that the signing of the order was expected to happen at 3 p.m., after Trump met for a listening session with security experts. The signing was cancelled, and the order has not yet been issued, but a draft subsequently got some circulation later in the week and was made public at the Lawfare blog on February 9.

This executive order contains recommendations consistent with both the Cybersecurity Commission report and the CSIS Cyber Policy Task Force report, mandating the use of the NIST Cybersecurity Framework by federal agencies, putting the Office of Management and Budget (OMB) in charge of enterprise risk assessment across agencies, promoting IT modernization and the adoption of cloud and shared-services infrastructure, and directing DHS and other agency heads to work with private sector critical infrastructure owners on defenses.

One key thing it does not do, which was recommended by both reports, is elevate the White House cybersecurity coordinator role (a role which the Trump administration has not yet filled, which was held by Michael Daniel in the Obama administration) to an Assistant to the President, reflecting the importance of cybersecurity. Greenberg's piece seems to assume that Thomas Bossert is in the lead cybersecurity coordinator role, but his role is Homeland Security Advisor (the role previously held by Lisa Monaco in the Obama administration), with broad responsibility for homeland security and counterterrorism, not cybersecurity-specific.

Despite Greenberg's error confusing the two executive orders being pointed out to him on Twitter on February 9, the article hasn't been corrected as of February 16.

Monday, November 23, 2015

A few thoughts on OpenBSD 5.8

I've been using OpenBSD since way back at release 2.3 in 1998, so I've gone through upgrades that took a fair amount of work due to incompatible changes, like the switch from ipf to pf for host firewalling or the change to ELF binaries. The upgrade from 5.7 to 5.8 was a pretty smooth and easy one, for the most part. The two most painful changes for me were the replacement of sudo with doas and the dropping of support in the rc.conf for the pf_rules variable.  While sudo is still available as a package, I like the idea of reducing attack surface with a simpler program, so I made the switch. The two things I miss most about sudo are the ability to authenticate for a period of time and the ability to have a single config file across a whole set of servers. The former I'm just living with, the latter I've adjusted to by having a single config file that has lines commented out depending on which server it's on. I did have one moment of concern about the quality of doas when it incorrectly reported the line number on which I had a syntax error in the config file--fortunately, this was just a failure to increment the line count on continuation lines (ending with a "\") which is fixed in the -current release.

The removal of the pf_rules variable support from rc.conf was a bigger issue--I used to just put the default pf.conf rules file in place with each release and upgrade, and keep my changes in a pf.conf.local file that was specified in the pf_rules variable. The effect was that from the upgrade until I noticed the change, my systems were using the default rules and were thus more exposed than they were supposed to be. This wasn't the first time an incompatible change decreased my level of security--the removal of tcpwrappers support from SSH was another. I used to use a combination of pf rules and hosts.allow as additional layers of protection on my SSH access, and had a set of tools that allowed me to easily add IP addresses to or remove them from my hosts.allow files. Had it still existed, that would have been a layer of defense still in place after the loss of my pf rules. Fortunately, I also have SSH on a non-standard port and only allow SSH key logins, not user/password logins, and most of my systems can't be reached on any port without first making a VPN connection, which requires two-factor authentication.

A minor but annoying change in 5.8 was the addition of the file /var/unbound/db/root.key to /etc/changelist, so that the file gets checked daily by the security script. The issue with this is that if you are actually using unbound with DNSSEC, this file changes daily, though only in the comments. My "reportnew" log monitoring tool has a feature that allows you to be notified if files that are expected to change on some periodic schedule do not change, which would be more appropriate than getting daily notifications that yes, the autotrust anchor file has been updated yet again. But what would really be ideal here is a check that the non-comment content has not changed. (Others have also complained about this.)
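
Here's a rough sketch of the kind of check I have in mind--my own illustration, not part of OpenBSD's security script, and the state-file path is a made-up example:

#!/usr/bin/perl
# Sketch: flag /var/unbound/db/root.key only when its non-comment content changes.
use strict;
use warnings;
use Digest::SHA qw(sha256_hex);

my $file  = '/var/unbound/db/root.key';
my $state = '/var/db/root.key.sha256';   # hypothetical state file

open my $fh, '<', $file or die "$file: $!";
my $content = join '', grep { !/^\s*;/ } <$fh>;   # drop comment lines
close $fh;
my $digest = sha256_hex($content);

my $previous = '';
if (open my $st, '<', $state) { chomp($previous = <$st> // ''); close $st; }

print "non-comment content of $file changed\n"
    if $previous && $previous ne $digest;

open my $out, '>', $state or die "$state: $!";
print $out "$digest\n";
close $out;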

A final issue I've run into with OpenBSD 5.8 is not a new one, but it still hasn't been fixed in pf: pf logs certain traffic (IGMP in particular) when it matches a rule that does not call for logging. This appears to be the same issue that was fixed earlier this year in pfSense, which is derived from an older fork of pf.

Wednesday, October 30, 2013

How to use Google Authenticator with OpenBSD, OpenSSH, and OpenVPN--and why you might not want to

I thought that Google Authenticator might be a quick and easy two-factor authentication solution for VPN access to my personal network, so I did some Google searches to see if that were so.  I found quite a few sources describing how to set it up with systems that use Linux Pluggable Authentication Modules (PAM), but very little about using it with BSD Authentication on OpenBSD.

The most promising link I came across was to an implementation of Google Authentication for OpenBSD that was last updated in early 2013, based on Google's PAM code, but I couldn't get it to work.  It compiled and installed, and the googleauth code for generating a secret (and a very insecure way of generating a QR code to use to import it into the Google Authenticator application) worked fine, but I couldn't successfully use it for console login, OpenSSH login, or OpenVPN login.

I also found the standard OpenBSD port for openvpn_bsdauth, which compiled, installed, and worked successfully for password authentication by adding these lines to my OpenVPN configuration:
script-security 2
tmp-dir <path to dir writable only by _openvpn user>
auth-user-pass-verify /usr/local/libexec/openvpn_bsdauth via-file

This also requires that the authenticating user be put into the _openvpnusers group.

I was unable to get the via-env method to work, however.

I next tried the standard OpenBSD port of login_oath, which implements the OATH toolkit, which uses the same time-based TOTP protocol that Google Authenticator uses.  This turned out to do the trick.  Once installed, you create a secret key that the server authentication will check against and store it in your home directory (one thing I liked about googleauth is that it stores the shared secret in a system directory to which the user doesn't have access; better still is the suggestion of keeping the secrets on an auth server as totp-cgi does).  The documentation recommends creating the secret (which the user doesn't need to know except for the initial configuration of the Google Authenticator client application) by doing:
openssl rand -hex 20 > ~/.totp-key
I then needed to convert this from hex to base32, which is simple enough to do with the method the documentation recommends: the perl module Convert::Base32 (OpenBSD port p5-Convert-Base32) and a short script like:
#!/usr/bin/perl
use strict;
use warnings;
use Convert::Base32;

open (my $fh, '<', '/home/vpnuser/.totp-key') or die "cannot open .totp-key: $!";
chomp (my $secret = <$fh>);  # strip the trailing newline so only the hex digits get packed
close ($fh);

my $code = pack ('H*', $secret);
print encode_base32($code)."\n";
The resulting code can be manually entered into Google Authenticator.

To use Google Authenticator as a login method, I updated the login class for the user I wanted to use in /etc/login.conf so that its last two lines were:
:auth=-totp,passwd:\
:tc=default:
This allows either Google Authenticator or password authentication at the console, but only Google Authenticator via OpenSSH or OpenVPN as I configured them.  Instead of using "-totp" you can also use "-totp-and-pwd" which requires the entry of both your Google Authenticator code and your password (in that order, with a slash in between them) in order to authenticate.

For OpenSSH, I added the following lines to my sshd_config:
Match User <vpnuser>
     PasswordAuthentication yes
     AuthenticationMethods publickey,password:bsdauth
I don't allow password authentication at all for other users; for this user, an SSH public key must first be used, then Google Authenticator must also be used before a successful login. [Updated 1 Nov 2013 to add:  After a reboot, this ssh config failed with a log message of "fatal: auth2_update_methods_lists: method not in AuthenticationMethods".  Removing the ":bsdauth" made it work again (it works since the "password" authentication method will use BSD Authentication by default), but this looks like an SSH bug.]

So why might you not want to do this?  While Google Authenticator ensures that what is used over the network as a password is better than a typical user-selected password, it effectively stores a shared secret in plaintext at both ends of the connection, which is far less secure than SSH public key authentication.  If the device where Google Authenticator is present gets compromised, that secret is compromised.  And as the above link about totp-cgi points out, if you use Google Authenticator with the same secret across multiple machines, that secret is only as secure as the least secure host it's stored on, and using different secrets for different machines doesn't scale very well with the application.  A password safe with randomly generated passwords, stored in encrypted form, is probably a better solution in most cases. [Updated 2 November 2013: Authy uses the same TOTP mechanism as Google Authenticator, but encrypts the secret key on the client side.  That encryption is really more obfuscation than encryption since the key is based on phone attributes and can potentially be reverse engineered.]

As I've set it up, I'm still relying on SSH public key authentication for SSH logins, and on certificate authentication for VPN logins, in addition to Google Authenticator.  For the case of logging into my VPN from my laptop and having Google Authenticator on a separate mobile device, it does seem to be a security improvement (though I welcome anyone to show me that the gains are illusory).

UPDATE (July 31, 2019): Note that you should make the .totp-key file in the user's home directory owned by and only readable by root, or else you're effectively permitting that user to do passwordless doas/sudo, since passworded doas/sudo will use the TOTP mechanism for authentication. That won't stop the user from removing the .totp-key file and replacing it with their own, but at least that action becomes detectible. To prevent removal, on OpenBSD you can set the file to be immutable (schg flag) and run at securelevel=2. But a better solution would really be to put those secrets somewhere outside of the individual user's home directory.

UPDATE (October 22, 2019): The OpenVPN authentication with 2FA is broken in OpenBSD 6.6, it now leads to user/password authentication failures. Not sure why yet.

UPDATE (October 22, 2019 #2): Looks like it may have been user error; it works now. I did update my _openvpnusers group to the new number (811) from the old one (596), but the number itself shouldn't be hardcoded in openvpn_bsdauth, so that shouldn't have had an impact.

UPDATE (30 October 2022): Also see Solene Rapenne's blog post on this same topic.

UPDATE (18 June 2024): Note that Authy (and probably any other authenticator app) treats manual secrets entry as case-sensitive even though base32 is not, and secrets must be entered in uppercase.

Saturday, September 22, 2012

Capitalist vs. socialist bombs

While reading Ross Anderson's massive tome, Security Engineering: A Guide to Building Dependable Systems (second edition), I came across this paragraph in section 19.7 on "Directed Energy Weapons" (p. 584):
Western concern about EMP grew after the Soviet Union started a research program on non-nuclear EMP weapons in the mid-80s.  At the time, the United States was deploying 'neutron bombs' in Europe--enhanced radiation weapons that could kill people without demolishing buildings.  The Soviets portrayed this as a 'capitalist bomb' which would destroy people while leaving property intact, and responded by threatening a 'socialist bomb' to destroy property (in the form of electronics) while leaving the surrounding people intact.
This reminded me of a science fiction story I read in Omni magazine at about the time in question, which Google reveals was "Returning Home" by Ian Watson in the December 1982 issue.  In the story, the Americans and the Soviets attacked each other, the Americans using neutron bombs which killed all of the Soviets, and the Soviets using some kind of bomb which destroyed essentially everything except the people.  The ending twist was that the surviving Americans ended up migrating to the Soviet Union and adopting the Soviet culture.

Sunday, July 03, 2011

TSA security loophole exploited

As this blog has reported on multiple prior occasions (in 2006, 2008, and 2009, at the very least), U.S. airport security separates the TSA's checking of the boarding pass against ID from the later use of a boarding pass to board, which makes it easy to get through security with a boarding pass that matches your ID while flying on a boarding pass from a ticket purchased in a different name.

Now, as The Economist (July 2, 2011) reports, Olajide Oluwaseun Noibi, a 24-year-old Nigerian American, has been arrested after successfully doing something along these lines to fly around the country, apparently on multiple occasions.  Only Noibi wasn't even using boarding passes valid for the flights he was on--he was caught with a boarding pass in another person's name for a flight from a day prior.  And he wasn't caught because the boarding pass was detected at check-in--he had already successfully boarded the flight and was seated.  He was only caught because of his extreme body odor and a fellow passenger complained, which led to his boarding pass being checked and found to be invalid.

Saturday, June 25, 2011

Arizona Department of Public Safety's security breach

LulzSec breached the security of the Arizona Department of Public Safety (DPS) at some point in the past, and on June 23 around 4 p.m. Arizona time, posted some or all of what they had acquired.  This included the names, email addresses, and passwords of several DPS officers as well as a number of internal documents which appeared to have been obtained from email attachments or perhaps from the compromise of end user systems.  The documents included a PowerPoint presentation on gang tattoos that purported to be a way of identifying Islamic radicals, which was reminiscent of similar ludicrous law enforcement presentations from the 1980s about identifying Satanic cult members by their black clothing and occult symbols. (Some police departments still promote such nonsense, citing exposed fraud "Lauren Stratford" as a source).  The documents also included a bulletin which expresses concern about the "Cop Recorder" iPhone application.

On June 24, DPS posted a press release responding to the attacks, accusing LulzSec of being a "cyber terrorist group"--a term better reserved for criminally disruptive activities intended to cause physical harm or disruption of critical infrastructure, not for embarrassing organizations that haven't properly secured themselves.  In the press release, DPS enumerates the steps it has taken to secure itself and the safeguards it has put in place. It's an embarrassing list, one which suggests DPS has had poor information security and continues to have poor information security.

First, their press release has a paragraph suggesting that the damage is limited, before they've probably had time to really determine whether that's the case.  They write:

There is no evidence the attack has breached the servers or computer systems of DPS, nor the larger state network. Likewise, there is no evidence that DPS records related to ongoing investigations or other sensitive matters have been compromised.

Just because they have "no evidence" of something doesn't mean it didn't happen--what records did they review to make this determination?  Were they doing appropriate logging?  Have logs been preserved, or were they deleted in the breach?  Do they have centralized logging that is still secure?  When did the compromise take place, and when did DPS detect it?  The appearance is that they didn't detect the breach until it was exposed by the perpetrators.  What was the nature of the vulnerability exploited, and why wasn't it detected by DPS in a penetration test or vulnerability assessment?  LulzSec has complained about the number of SQL injection vulnerabilities they've found--was there one in DPS's web mail application?

Next, they report what they've done in response, and again make statements about how "limited" the breach was:

Upon learning that a limited number of agency e-mails had been disclosed, DPS took action. In addition to contacting other law enforcement agencies, the Arizona Counter Terrorism Information Center (ACTIC) has been activated. Remote e-mail access for DPS employees remains frozen for the time-being. The security of the seven DPS officers in question remains the agency’s top priority and, since a limited amount of personal information was publicly disclosed as part of this breach. Steps are being taken to ensure the officers’ safety and that of their families. 

They've disabled the e-mail access that they believe was used in the breach--that's good.  Presumably the exposed officer passwords were discovered to be from this system.  Perhaps they will not re-enable the system until they have a more secure mechanism that requires VPN access and two-factor authentication--or at least intrusion prevention, a web application firewall, and effective security monitoring.  They've notified ACTIC--presumably in part because of their overblown claim that this breach constitutes "terrorism" and in part because there are some ACTIC personnel who have good knowledge of information security.  And they're doing something to protect the safety of officers whose personal information (including some home addresses) was exposed.


In the final paragraph of the press release, they list some of the safeguards they have in place:

- 24/7 monitoring of the state’s Internet gateway.
- Industry-standard firewalls, anti-virus software and other capabilities.
- IT security staff employed at each major state agency.
- Close coordination between the State of Arizona and state, federal and private-sector authorities regarding cyber-security issues.

This sounds like a less-than-minimal set of security controls.  Is that 24/7 monitoring just network monitoring for availability, or does it include security monitoring?  Do they have intrusion detection and prevention systems in place?  Do they have web application firewalls in front of web servers?  Do they have centralized logging and are those logs being monitored?  Are they doing event correlation?  How many full-time information security staff are there at DPS?  Are there any security incident response staff? Is there a CISO, and if so, why isn't that person being heard from?  Does DPS have an incident response plan?  Are they reviewing policy, process, and control gaps as part of their investigation of this incident?  Have they had any third-party assessments of their information security?  Have any past assessments, internal or external, recommended improvements that were not made?

These are questions journalists should be asking, which DPS should certainly be asking itself internally, and which organizations that haven't yet had a publicized breach should be asking themselves.  Breaches are becoming inevitable (a recent Ponemon Institute survey says 90% of surveyed businesses have had a security breach in the last 12 months; CNet charts the recent major publicly known breaches), so having the capacity in place to respond and recover quickly is key.

Here's how NOT to prepare:
Depth Security, "How to Get Properly Owned"

Here's how NOT to respond to a breach or vulnerability disclosure:
SANS ISC, "How Not to Respond to a Security Incident"

How to publicly disclose a breach:
Technologizer, "How to Tell Me You Let Somebody Steal My Personal Information"

Saturday, May 14, 2011

My lousy Android experience

I've been a holdout on upgrading to a smart phone, in part because I haven't paid over $100 for a mobile phone since they were the size of a brick.  But after finding that I could get a Droid 2 Global on Verizon for $20 via Amazon Wireless a couple of months ago, I made the leap.

My initial experience was negative--Amazon sent me a phone with instructions to go to Verizon's web site to activate.  Verizon's website wanted me to enter a code from a Verizon invoice.  No such invoice was included, and none of the numbers from the Amazon invoice worked.  So I had to get through to a human being, at which point activation was fairly simple.  But one more hurdle arose when I had to log in to a Google account, which was an obstacle of my own creation--I use very long, randomly generated passwords with special characters, and have independent Google accounts for different services, so I had to choose which one to use with the phone before I knew what all the implications would be.  (I chose my GMail account, which has worked out OK.)

I wanted to set the phone up to use my own email servers, and to connect over VPN to gain access.  This proved to be an obstacle that took a few days to resolve, due to inadequacies and bugs in Droid applications.  The default VPN client doesn't support OpenVPN, so I had to gain root access to install an OpenVPN client.  This turned out to be the only reason I needed root access on the phone, and I managed to get that working without much difficulty.

The Email application, however, refused to send outbound mail through my mail server, which allows port 25 client connections from internal hosts with no authentication but requires TLS.  The Email app simply can't handle that combination--I ended up setting up port 587 (the submission port) with username/password authentication via Dovecot.  Though I would have preferred client certificate authentication, I couldn't get it to work.  I still run into periodic problems with Email refusing to send outbound messages for no apparent reason--and the server shows no attempts being made.  There doesn't seem to be a way to select an individual message in the outbox and retry sending it.
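
(For the curious, here's a rough sketch of the client behavior the working setup expects--connect to the submission port, upgrade the connection with STARTTLS, then authenticate--written in Python using the standard smtplib module.  The hostname, account name, and addresses are placeholders rather than my actual configuration, and the Droid's Email app obviously isn't implemented this way; this is just the protocol sequence it needs to get right.)

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "me@example.org"          # placeholder addresses
    msg["To"] = "you@example.org"
    msg["Subject"] = "Test via submission port"
    msg.set_content("Sent over port 587 with STARTTLS and authentication.")

    # Port 587 (submission): start in plaintext, upgrade to TLS, then authenticate.
    with smtplib.SMTP("mail.example.org", 587) as smtp:
        smtp.starttls()                     # upgrade the connection to TLS
        smtp.login("username", "password")  # verified server-side (Dovecot SASL in my setup)
        smtp.send_message(msg)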

I managed to get contact and calendar synchronization working with my Mac, but I ended up exporting my iCal calendars to Google Calendar and using them as my primary calendars.  Most of the correlation of contacts in the phone from multiple sources (e.g., Facebook, LinkedIn, and my Address Book) worked fairly well, but some contacts are duplicated due to name variations.  Synchronization with LinkedIn is somewhat buggy, with first and last names showing up in contacts as "null null."  The Calendar app is even more buggy--I've created events on the phone that disappear, I've seen error messages in Portuguese, and I've seen events with names that appear to be leftover debugging messages.  I was also surprised to see that spelling correction was performed, without any prompts, on events I imported into the Calendar app from GMail (it incorrectly turned an acronym, "JAD," into the word "HAD").

I've received an SMS text message from one person that was identified as being from another person--looking at the specific contact information showed that the sender's telephone number was associated with the correct contact, yet the name and photo displayed on the phone were those of a different contact with no association to that telephone number.

The phone's camera capability is pretty good, but when I connect the phone to my Mac, it launches iPhoto but doesn't find any photographs.  I have to import them manually by pointing iPhoto to the correct location on the SD card.

I've seen the phone crash repeatedly, especially when using location services (Google Navigation, Maps, and Yelp have been repeat offenders).  There also seems to be some caching of location information that gets out of sync with other location information.  For example, I saw Yelp correctly show me nearby restaurants, but refuse to allow me to check in to the one I was sitting in because I was "too far away"--and Maps showed my location being somewhere else I had been earlier.  In one case, thousands of miles away--an attempted Yelp check-in after returning from a vacation in Hawaii showed my location on the map as still being in Hawaii.  In at least one case, I was unable to get my location to update for Yelp until I rebooted the phone.

I've had issues doing things as simple as copying and pasting a URL from Firefox to Facebook or Twitter.  I copy the URL, verify that it's in the clipboard correctly, but when I go into Facebook or Twitter to paste it, it is truncated.

The number of bugs I run into seems awfully high for very basic applications.  The problem is no doubt due in part to the way development is split among Google, Motorola, Verizon, and the upstream Linux work, which also seems to be an obstacle to fixing security vulnerabilities.  The May 2011 issue of CSO magazine reports that Coverity has done two scans of the Android source code for the HTC Incredible, finding 359 defects (88 critical) on the first scan last November and 149 defects (106 unfixed from the previous scan) on a more recent scan.  Accountability for the code is distributed across the aforementioned groups.  (Also see this CNet story, or the Coverity report itself.)

I wonder if I would run into problems like this with an iPhone.

UPDATE (May 19, 2011): And now there's a security vulnerability identified in Android version 2.3.3 and earlier (I'm on 2.2, and can't update until Verizon pushes an update), which potentially exposes contacts, calendar events, pictures, and other items stored in Google-hosted services if users access those services via unencrypted WiFi.  Although authentication to those services happens over SSL-encrypted HTTP, the returned authToken is subsequently transmitted in the clear, where it can be intercepted and reused for later logins to those services.  I've never used my Droid on unencrypted WiFi networks, but I'll now take extra care to make sure that I don't.  Version 2.3.4 fixes the problem for contacts and calendars but not for Picasa photos.

UPDATE (November 16, 2011): It's still been a horrible experience, and I still see regular crashes, particularly when using map and location-related applications.  A new discovery today while traveling is that the World Clock widget does not know when Daylight Saving Time occurs--the option labeled "Daylight Savings[sic] Time: Adjust displayed time for Daylight Savings" appears to just set the clock forward one hour, rather than display the correct current time taking into account the date and whether Daylight Saving Time is in effect in the given location.  I traveled to the east coast and saw that my World Clock widget time for New York was one hour ahead of the actual time in New York.  It's utterly ridiculous that this widget requires the user to check and uncheck this option manually depending on whether Daylight Saving Time is in effect--that's exactly the sort of simple task that computers are equipped to do on our behalf.
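
(To show how little the widget would have to do, here's a toy sketch using Python's standard zoneinfo timezone database--available in the standard library as of Python 3.9; the cities and dates are arbitrary examples.  Given a date and a place, the timezone database already knows whether Daylight Saving Time applies, so no manual checkbox is needed.)

    from datetime import datetime
    from zoneinfo import ZoneInfo  # standard library in Python 3.9+

    phoenix = ZoneInfo("America/Phoenix")    # Arizona: no DST observed
    new_york = ZoneInfo("America/New_York")  # DST observed part of the year

    # The offset between the two cities changes with the calendar date,
    # and the timezone database accounts for that automatically.
    for month in (1, 7):  # January (standard time) vs. July (DST in New York)
        local = datetime(2011, month, 15, 12, 0, tzinfo=phoenix)
        print(local.strftime("%b %d %H:%M %Z"), "->",
              local.astimezone(new_york).strftime("%H:%M %Z"))
    # Jan 15 12:00 MST -> 14:00 EST
    # Jul 15 12:00 MST -> 15:00 EDT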

Wednesday, March 30, 2011

Information security threat models, folk & expert

I've written a pair of blog posts for Global Crossing's "Defense in Depth Security" blog based on recent work by Rick Wash and by multiple people at Intel, including Timothy Casey, about modeling the agents behind information security threats.  The first post is about non-expert home computer users' "folk models" of the threats from viruses and hackers, which makes the point that seemingly irrational decisions about security may in fact be completely rational based on users' conceptual understanding of the threat they believe they are combatting.  Only by changing their understanding of the threat, which requires not just information but appropriately salient information and the right incentives, are we likely to see changes in user behavior.  I point out an example of a recent news story that might help provide both elements with regard to one type of vulnerability, open wireless access points.

The second blog post, which will appear tomorrow, is about expert models of threat agents--the Intel Threat Agent Library.  Intel created a large set of attacker personas and identified their attributes, for use in matching against vulnerabilities and prioritizing controls as part of a broader risk assessment process.

I'm happy to discuss these further either here or at the Global Crossing blogs.

Thursday, January 06, 2011

Global Crossing blogging

I've joined the team of Global Crossing bloggers--please check out my initial post at Global Crossing blogs, "Forget passwords!"

(BTW, my friend and colleague Glen Walker independently wrote a blog post making a very similar recommendation.)

Saturday, November 07, 2009

Robert B. Laughlin on "The Crime of Reason"

The 2009 Hogan and Hartson Jurimetrics Lecture in honor of Lee Loevinger was given on the afternoon of November 5 at Arizona State University's Sandra Day O'Connor College of Law by Robert B. Laughlin.  Laughlin, the Anne T. and Robert M. Bass Professor of Physics at Stanford University and winner of the 1998 Nobel Prize in Physics (along with Horst L. Stormer and Daniel C. Tsui), spoke about his recent book, The Crime of Reason.

He began with a one-sentence summary of his talk: "A consequence of entering the information age is probably that we're going to lose a human right that we all thought we had but never did ..." The sentence went on but I couldn't keep up with him in my notes to get it verbatim, and I am not sure I could identify precisely what his thesis was after hearing the entire talk and Q&A session. The main gist, though, was that he thinks that a consequence of allowing manufacturing to go away and being a society based on information is that "Knowledge is dear, therefore there has to be less of it--we must prevent others from knowing what we know, or you can't make a living from it." And, he said, "People who learn on their own are terrorists and thieves," which I think was intentional hyperbole. I think his talk was loaded with overgeneralizations, some of which he retracted or qualified during the Q&A.

It certainly doesn't follow from knowledge being valuable that there must be less of it. Unlike currency, knowledge isn't a fungible commodity, so different bits of knowledge have different value to different people. There are also different kinds of knowledge--know-how vs. knowledge that, and making the latter freely available doesn't necessarily degrade the value of the former, which is why it's possible to have a business model that gives away software for free but makes money from consulting services. Further, the more knowledge there is, the more valuable it is to know where to find the particular bits of knowledge that are useful for a given purpose, and the less it is possible for a single person to be an expert across many domains. An increasing amount of knowledge means there's increasing value in various kinds of specializations, and more opportunities for individuals to develop forms of expertise in niches that aren't already full of experts.

Laughlin said that he is talking about "the human rights issue of the 21st century," that "learning some things on your own is stealing from people.  What we think of as our rights are in conflict with the law, just as slavery is in conflict with human rights."  He said that Jefferson was conflicted on this very issue, saying on the one hand that "knowledge is like fire--divinely designed to be copyable like a lit taper--I can light yours with mine, which in no way diminishes my own."  This is the non-rival quality of information: one person copying information from another doesn't deprive the other of their use of it, though it certainly may have an impact on the commercial market for the first person to sell their information.

"On the other hand," said Laughlin, "economics involves gambling. [Jefferson] favored legalized gambling. Making a living involves bluff and not sharing knowledge." He said that our intellectual property laws derive from English laws that people on the continent "thought ... were outrageous--charging people to know things."

He put up a photo of a fortune from a fortune cookie, that said "The only good is knowledge, and the only evil ignorance." He said this is what you might tell kids in school to get them to study, but there's something not right about it. He then put up a drawing of Dr. Frankenstein and his monster (Laughlin drew most of the slides himself). He said, we're all familiar with the Frankenstein myth. "The problem with open knowledge is that some of it is dangerous. In the U.S. some of it is off-limits, you can't use it in business or even talk about it. It's not what you do with it that's exclusive, but that you have it at all."

His example was atomic bomb secrets and the Atomic Energy Act of 1954, which makes it a federal felony to reveal "nuclear data" to the public--a category that has been defined very broadly in the courts and includes numbers and principles of physics.

Laughlin returned to his fortune cookie example, and said there's another problem. He put up a drawing of a poker game. "If I peeked at one guy's cards and told everyone else, the poker game would stop. It involves bluffing, and open access to knowledge stops the game." He suggested that this is what happened last year with the world financial sector--that the "poker game in Wall Street stopped, everyone got afraid to bet, and the government handled it by giving out more chips and saying keep playing, which succeeded." While I agree that this was a case where knowledge--specifically knowledge of the growing amounts of "toxic waste" in major world banks--caused things to freeze up, it wasn't the knowledge that was the ultimate cause; it was the fact that banks engaged in incredibly risky behavior that they shouldn't have. More knowledge earlier--and better oversight and regulation--could have prevented the problem.

Laughlin said "Economics is about bluff and secrecy, and open knowledge breaks it." I don't think I agree--what makes markets function is that price serves as a public signal about knowledge. There's always going to be local knowledge that isn't shared, not necessarily because of bluff and secrecy, but simply due to the limits of human capacities and the dynamics of social transactions. While trading on private knowledge can result in huge profits, trading the private knowledge itself can be classified as insider trading and is illegal. (Though perhaps it shouldn't be, since insider trading has the potential for making price signals more accurate more quickly to the public.)

Laughlin showed a painting of the death of Socrates (by Jacques-Louis David, not Laughlin this time), and said that in high school, you study Plato, Aristotle, and Descartes, and learn that knowledge is good. But, "as you get older, you learn there's a class system in knowledge." Plato etc. is classified as good, but working class technical knowledge, like how to build a motor, is not, he claimed. He went on to say, "If you think about it, that's exactly backwards." I'm not sure anyone is ever taught that technical knowledge is not valuable, especially these days, where computer skills seem to be nearly ubiquitous--and I disagree with both extremes. From my personal experience, I think some of my abstract thinking skills that I learned from studying philosophy have been among the most valuable skills I've used in both industry and academia, relevant to both theoretical and practical applications.

Laughlin said that "engines are complicated, and those who would teach you about it don't want to be clear about it. It's sequestered by those who own it, because it's valuable. The stuff we give away in schools isn't valuable, that's why we give it away." In the Q&A, a questioner observed that he can easily obtain all sorts of detailed information about how engines work, and that what makes it difficult to understand is the quantity and detail. Laughlin responded that sometimes the best way to hide things is to put them in plain sight (the Poe "purloined letter" point), as needles in a haystack. But I think that's a rather pat answer to something that is contradictory to his claim--the information really is freely available and easy to find, but the limiting factor is that it takes time to learn the relevant parts to have a full understanding. The limit isn't the availability of the knowledge or that some of it is somehow hidden. I'd also challenge his claim that the knowledge provided in schools is "given away." It's still being paid for, even if it's free to the student, and much of what's being paid for is the know-how of the educator, not just the knowledge-that of the specific facts, as well as special kinds of knowledge-that--the broader frameworks into which individual facts fit.

Laughlin went on to say, "You're going to have to pay to know the valuable information. Technical knowledge will disappear and become unavailable. The stuff you need to make a living is going away." He gave as examples defense-related technologies, computers, and genetics. He said that "people in the university sector are facing more and more intense moral criticism" for sharing information. "How life works--would we want that information to get out? We might want to burn those books. The 20th century was the age of physics, [some of which was] so dangerous we burned the books. It's not in the public domain. The 21st century is the age of biology. We're in the end game of the same thing. In genetics--e.g., how disease organisms work. The genetic structure of Ebola or polio." Here, Laughlin seems to be just wrong. The gene sequences of Ebola and polio have apparently been published (Sanchez, A., et al. (1993) "Sequence analysis of the Ebola virus genome: organization, genetic elements and comparison with the genome of Marburg virus," Virus Research 29, 215-240 and Stanway, G., et al. (1983) "The nucleotide sequence of poliovirus type 3 leon 12 a1b: comparison with poliovirus type 1," Nucleic Acids Res. 11(16), 5629-5643). (I don't claim to be knowledgeable about viruses, in the former case I am relying on the statement that "Sanchez et al (1993) has published the sequence of the complete genome of Ebola virus" from John Crowley and Ted Crusberg, "Ebola and Marburg Virus: Genomic Structure, Comparative and Molecular Biology."; in the latter case it may not be publication of the complete genome but is at least part.)

Laughlin talked about the famous issue of The Progressive magazine which featured an article by Howard Morland titled "How H-Bombs Work."  He showed the cover of the magazine, which read, "The H-Bomb Secret--How we got it--why we're telling it."  Laughlin said that the DoJ obtained an injunction preventing the magazine from publishing the article and took the issue into secret hearings.  The argument was that the article was a threat to national security and a violation of the Atomic Energy Act.  The judge said that the rule against prior restraint didn't apply because the material was so dangerous that "no jurist in their right mind would put free speech above safety."  Laughlin said, "Most people think the Bill of Rights protects you, but this case shows that it doesn't."  After the judge forbade publication, the article was leaked to a couple of "newspapers on the west coast," after which the DoJ dropped the case and the article was published.  According to Laughlin, this was strategic: he suspects they didn't pursue the case because the outcome would have been a finding that the AEA was unconstitutional, and dropping the case preserved the AEA as a potential weapon for future cases.  He said there have only been two prosecutions under the criminal provisions of the AEA in the last 50 years, but it is "inconceivable that it was only violated twice.  The country handles its unconstitutionality by not prosecuting."  The U.S., he said, is like a weird hybrid of Athens and Sparta, favoring both being open and being war-like and secretive.  These two positions have never been reconciled, so we live in an unstable situation that favors both.

He also discussed the case of Wen Ho Lee, a scientist from Taiwan who worked at Los Alamos National Laboratory, who took home items that were classified as "PARD" (protect as restricted data), even though everyone is trained repeatedly that you "Don't take PARD home."  According to Laughlin, when Lee was caught he said "I didn't know it was wrong" and "I thought they were going to fire me, so I took something home to sell."  The latter sounds like an admission of guilt.  He was put into solitary confinement for a year (actually 9 months), and then the case against him--59 counts under the AEA and related statutes--was dropped in exchange for a guilty plea to a single count.  Laughlin characterized this as "extralegal punishment," and said "we abolish due process with respect to nuclear data."  (Wen Ho Lee won a $1.6 million settlement from the U.S. government and five media organizations in 2006, before the Supreme Court could hear his case.  Somehow, this doesn't seem to me to be a very effective deterrent.)

Laughlin said that we see a tradeoff between risk and benefit, not an absolute danger.  The risk of buildings being blown up is low enough to allow diesel fuel and fertilizer to be legal.  Bombs made from ammonium nitrate and diesel fuel are very easy to make, and our protection isn't the hiding of technical knowledge, but the fact that people just don't do it.  But nuclear weapons are so much more dangerous that the technical details are counted as absolutely dangerous: no amount of benefit could possibly be enough.  He said that he's writing a book about energy and "the possible nuclear renaissance unfolding" (as a result of the need for non-carbon-emitting energy sources).  He said the U.S. and Germany are both struggling with this legal morass around nuclear information.  (Is the unavailability of nuclear knowledge really the main or even a significant issue in nuclear plant construction in the United States?  General Electric (GE Energy) builds nuclear plants in other countries.)

Laughlin said that long, pointy knives could be dangerous, and that there's a movement in England to ban them.  Everybody deals with the technical issue of knowledge and where to draw lines.  (Is it really feasible to ban knives, and does such a ban constitute a ban on knowledge?  How hard is it to make a knife?)

At this point he moved on to biology, and showed a photograph of a fruit fly with legs for antennae. He said, "so maybe antennae are related to legs, and a switch in development determines which you get. The control machinery is way too complicated to understand right now." (Really?) "What if this was done with a dog, with legs instead of ears. Would the person who did that go to Stockholm? No, they'd probably lose their lab and be vilified. In the life sciences there are boundaries like we see in nuclear--things we shouldn't know." (I doubt that there is a switch that turns dog ears into legs, and this doesn't strike me as plausibly being described as a boundary on knowledge, but rather an ethical boundary on action.) He said, "There are so many things researchers would like to try, but can't, because funders are afraid." Again, I suspect that most of these cases are ethical boundaries about actions rather than knowledge, though of course there are cases where unethical actions might be required to gain certain sorts of knowledge.

He turned to stem cells. He said that the federal government effectively put a 10-year moratorium on stem cell research for ethical reasons. Again, these were putatively ethical reasons regarding treatment of embryos, but the ban was on federally funded research rather than any research at all. It certainly stifled research, but didn't eliminate it.

Next he discussed the "Millennium Digital Copyright Act" (sic). He said that "people who know computers laugh at the absurdity" of claiming that computer programs aren't formulas and are patentable. He said that if he writes a program that "has functionality or purpose similar to someone else's my writing it is a violation of the law." Perhaps in a very narrow case where there's patent protection, yes, but certainly not in general. If he was arguing that computer software patents are a bad idea, I'd agree. He said "Imagine if I reverse-engineered the latest Windows and then published the source code. It would be a violation of law." Yes, in that particular example, but there are lots of cases of legitimate reverse engineering, especially in the information security field. The people who come up with the signatures for anti-virus and intrusion detection and prevention do this routinely, and in some cases have actually released their own patches to Microsoft vulnerabilities because Microsoft was taking too long to do it themselves.

He said of the Microsoft Word and PDF formats that they "are constantly morphing" because "if you can understand it you can steal it."  But there are legal open source and competing proprietary software solutions that understand both of the formats in question--OpenOffice, Apple's Pages and Preview, Foxit Reader, etc.  Laughlin said, "Intentional bypassing of encryption is a violation of the DMCA."  That's true only if the bypassed encryption is "a technological measure that effectively controls access to" copyrighted material and the circumvention is not done for the purposes of security research, which has a big exception carved out in the law.  Arguably, breakable encryption doesn't "effectively control access," though the law has certainly been used to prosecute people who broke really poor excuses for encryption.

Laughlin put up a slide of the iconic smiley face, and said it has been patented by Unisys. "If you use it a lot, you'll be sued by Unisys." I'm not sure how you could patent an image, and while there are smiley face trademarks that have been used as a revenue source, it's by a company called SmileyWorld, not Unisys.

He returned to biology again, to talk briefly about gene patenting, which he says "galls biologists" but has been upheld by the courts. (Though perhaps not for many years longer, depending on how the Myriad Genetics case turns out.) Natural laws and discoveries aren't supposed to be patentable, so it's an implication of these court decisions that genes "aren't natural laws, but something else." The argument is that isolating them makes them into something different than what they are when they're part of an organism, which somehow constitutes an invention. I think that's a bad argument that could only justify patenting the isolation process, not the sequence.

Laughlin showed a slide of two photos, the cloned dog Snuppy and its mother on the left, and a Microsoft Word Professional box on the right. He said that Snuppy was cloned when he was in Korea, and that most Americans are "unhappy about puppy clones" because they fear the possibility of human clones. I thought he was going to say that he had purchased the Microsoft Word Professional box pictured in Korea at the same time, and that it was counterfeit, copied software (which was prevalent in Korea in past decades, if not still), but he had an entirely different point to make. He said, about the software, "The thing that's illegal is not cloning it. If I give you an altered version, I've tampered with something I'm not supposed to. There's a dichotomy between digital knowledge in living things and what you make, and they're different [in how we treat them?]. But they're manifestly not different. Our legal system['s rules] about protecting these things are therefore confused and mixed up." I think his argument and distinction was rather confused, and he didn't go on to use it in anything he said subsequently. It seems to me that the rules are pretty much on a par between the two cases--copying Microsoft Word Professional and giving it to other people would itself be copyright infringement; transforming it might or might not be a crime depending on what you did. If you turned it into a piece of malware and distributed that, it could be a crime. But if you sufficiently transformed it into something useful that was no longer recognizable as Microsoft Word Professional, that might well be fair use of the copyrighted software. In any case in between, I suspect the only legally actionable offense would be copyright infringement, in which case the wrongdoing is the copying, not the tampering.

He put up a slide of Lady Justice dressed in a clown suit, and said that "When you talk to young people about legal constraints on what they can do, they get angry, like you're getting angry at this image of Lady Law in a clown suit. She's not a law but an image, a logos. ... [It's the] root of our way of relating to each other. When you say logos is a clown, you've besmirched something very fundamental about who you want to be. ... Legal constraints on knowledge is part of the price we've paid for not making things anymore." (Not sure what to say about this.)

He returned to his earlier allusion to slavery. He said that was "a conflict between Judeo-Christian ethics and what you had to do to make a living. It got shakier and shakier until violence erupted. War was the only solution. I don't think that will happen in this case. [The] bigger picture is the same kind of tension. ... Once you make Descartes a joke, then you ask, why stay?" He put up a slide of a drawing of an astronaut on the moon, with the earth in the distance. "Why not go to the moon? What would drive a person off this planet? You'd have to be a lunatic to leave." (I thought he was going to make a moon-luna joke, but he didn't, unless that was it.) "Maybe intellectual freedom might be that thing. It's happened before, when people came to America." He went on to say that some brought their own religious baggage with them to America. Finally, he said that when he presents that moon example to graduate students, he always has many who say "Send me, I want to go."

And that's how his talk ended.  I was rather disappointed--the talk seemed disjointed and rambling, and it made lots of tendentious claims.  It wasn't at all what I expected from a Nobel prizewinner.

The first question in the Q&A was one very much like I would have asked, about how he explains the free and open source software movement. Laughlin's answer was that he was personally a Linux user and has been since 1997, but that students starting software companies are "paranoid about having stuff stolen," and "free things, even in software, are potentially pernicious," and that he pays a price for using open source in that it takes more work to maintain it and he's constantly having to upgrade to deal with things like format changes in PDF and Word. There is certainly such a tradeoff for some open source software, but some of it is just as easy to maintain as commercial software, and there are distributions of Linux that are coming closer to the ease of use of Windows. And of course Mac OS X, based on an open source, FreeBSD-derived operating system, is probably easier for most people to use than Windows.

I think there was a lot of potentially interesting and provocative material in his talk, but it just wasn't formulated into a coherent and persuasive argument. If anyone has read his book, is it more tightly argued?