
Sunday, August 04, 2024

Tips on using OpenBSD's pledge and unveil in perl scripts

OpenBSD 5.9 (current as of this post is 7.5) introduced the "pledge" system call, and 6.4 introduced the "unveil" system call; together they give processes a granular way to restrict their own access to the system, enforcing least privilege. When a program calls "pledge", it provides a list of categories of system calls (called "promises") that it plans to make during the life of the running process; attempts to make calls outside of those categories cause the call to be blocked and the process to be killed. (Children are not restricted and have to make their own pledges.) Additional calls to pledge can remove promises but not add new ones, so access can become more restrictive but not less restrictive.

  "Unveil," by contrast, selectively exposes parts of the file system, by file path, with specific access, and the rest of the file system is correspondingly "veiled" or blocked from access. Successive calls to unveil can expand or override previous ones, expanding access to the file system, adding write and create permissions where there was previously read only, but only until unveil is called with no arguments, which locks the current state in place. Further attempts to call unveil after that result in a violation.

Violations of pledges or attempts to access file paths that have not been unveiled show up in the process accounting logs with the flags "P" or "U", respectively. (My "reportnew" log monitoring script knows how to monitor process accounting logs and can easily be set up to report on such violations.)

Perl scripts on OpenBSD can also use pledge and unveil, with two modules provided in the base operating system, "OpenBSD::Pledge" and "OpenBSD::Unveil".  I've been adding this functionality to several of my commonly used scripts and have learned a few tips that I'd like to share.
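
To make the shape of this concrete, here's a minimal sketch of a script using both modules (the paths and promises are hypothetical, chosen for illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use OpenBSD::Pledge;
    use OpenBSD::Unveil;

    # "stdio" is always included by OpenBSD::Pledge; the "unveil"
    # promise is needed because we call unveil below.
    pledge(qw(rpath wpath cpath unveil)) or die "pledge: $!";

    # Expose only the paths this script needs (hypothetical examples):
    # a read-only config file and a read/write/create spool directory.
    unveil('/etc/myscript.conf', 'r')    or die "unveil: $!";
    unveil('/var/spool/myscript', 'rwc') or die "unveil: $!";
    unveil() or die "unveil: $!";  # lock the current unveil state

    # ... the rest of the script runs under these restrictions ...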

Pledge:

* Check your call to pledge for errors.  If you typo the name of a promise (category of system calls), or you pass pledge a single string of comma-separated promises instead of an array or list, it will fail and nothing will be pledged. (See the sketch after this list.)

* If you have no idea what promises are required, just use "error".  With the error promise, instead of the system call being blocked and the process killed, the violation is logged to /var/log/messages, so you can see which promises are required.

* The "stdio" promise is always included with OpenBSD::Pledge, so you don't need to list it.

* The "unveil" promise is required if you intend to use OpenBSD::Unveil.

* Calls to exec or system require the "proc" and "exec" promises; the new processes created as a result are not restricted and need to make their own use of pledge and unveil.  (Note: this means that if you are calling a system command that writes to a file, but your script doesn't otherwise write to files, you do not need to pledge the "wpath" promise in your script.)

* If you otherwise fork a child process (e.g., explicitly using "fork" or Parallel::ForkManager or implicitly forking a child process using "open" to read from or write to a command), the promises pledged by the parent process are carried over to the child, which can then restrict them further. (Hat tip to Bryan Steele, who pointed this out on Bluesky without specifically referring to the Perl context.)

* If you use the DBI perl module with mariadb and are accessing a database through a named pipe on the same server, you'll need to pledge the "unix", "inet", and "prot_exec" promises. (This works even from a chroot jail if the named pipe or socket is a hard link from the jail.)

* This isn't a tip, but an observation: if you promise "proc" but not "exec," your system call will fail but your process will not be killed and the script will continue running.
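
Putting several of these tips together, here's a sketch of a script that runs a system command (the command and file names are hypothetical):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use OpenBSD::Pledge;

    # Pass promises as a list (not one comma-separated string) and
    # check for errors; a typo'd promise means nothing gets pledged.
    # While debugging, adding the "error" promise logs violations to
    # /var/log/messages instead of killing the process.
    pledge(qw(proc exec)) or die "pledge failed: $!";

    # proc + exec cover the system() call; the child process is not
    # restricted, so even if the command writes to a file, this
    # script doesn't need to pledge wpath.
    system('/usr/local/bin/mycommand', '/tmp/output.log') == 0
        or warn "mycommand failed: $?";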

Unveil:

* If you make use of other perl modules in your code with "use", they are loaded prior to your call to unveil, so you don't need to unveil directories like /usr/libdata/perl5 in order to use them. The exception is perl modules that include compiled shared objects (".so"), or which use "require" on other modules (loading them at runtime), in which case you do need to unveil such directories, but only with "r" permission.

* If you use the DBI perl module with mariadb, you will need to unveil /var/run/mysql with "rw" and /usr/lib and /usr/local/lib with "rx".

* If you use calls to "system" or "open" which use pipes, globs, or file redirection, you need to unveil "/bin/sh" with "x" permission. You may be able to rewrite your code to avoid the requirement--can you call "system" with a command name and list of arguments rather than a string, and do any processing you need in your program instead of with the shell?

* If you use calls to "system" to execute system commands, you need to unveil them with "x" permission but in most cases you don't need to include "r".

* It is often much easier to unveil a directory than individual files; if you plan to check for the existence of a file and then create it if it doesn't exist, you need "rwc" on the containing directory (see the sketch after this list).

* One of the biggest challenges is sometimes finding the source of an unveil violation. Unveiling "/" with various permissions to see whether the violation goes away, then removing that and testing individual directories under the root directory by trial and error, can help track things down. That's how I first found the need to unveil "/bin/sh".
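
Here's the sketch referenced above, combining several of these unveil tips (all paths hypothetical except /bin/sh):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use OpenBSD::Pledge;
    use OpenBSD::Unveil;

    pledge(qw(rpath wpath cpath proc exec unveil)) or die "pledge: $!";

    # Unveiling the containing directory with "rwc" permits checking
    # for a file's existence and creating it if absent.
    unveil('/var/db/myapp', 'rwc') or die "unveil: $!";

    # Commands run via system() need "x"; they usually don't need "r".
    unveil('/usr/local/bin/mycommand', 'x') or die "unveil: $!";

    # system() calls using pipes, globs, or redirection go through
    # the shell, which requires unveiling /bin/sh with "x".
    unveil('/bin/sh', 'x') or die "unveil: $!";

    unveil() or die "unveil: $!";  # lock; later unveils are violations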


Finally, if you are writing perl modules, it's helpful to document which promises need to be pledged and which files and directories need to be unveiled in the calling scripts in order for the module to function. It would be inappropriate to pledge or unveil within the module itself except in a context like a forked child process. I've done this with my Signify.pm wrapper for the OpenBSD "signify" command, which signs and verifies files with detached signatures or gzip archives with embedded signatures in the gzip header comments.
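
For instance, a module's POD could carry a section along these lines (an illustrative sketch, not Signify.pm's actual requirements):

    =head1 PLEDGE AND UNVEIL

    Calling scripts must pledge "rpath", "proc", and "exec", and must
    unveil the signify binary with "x" and any files to be verified
    with "r". This module does not call pledge or unveil itself.

    =cut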

If you've made use of pledge and unveil--in perl scripts or otherwise--what lessons have you learned?


Saturday, January 21, 2023

Some nonsense from ChatGPT about file system immutability flags in OpenBSD

 I was thinking about writing a blog post about file system immutability flags in OpenBSD, and thought I'd ask ChatGPT to write it for me. It appears to have gleaned its content from OpenBSD man pages, but it is highly misleading and inaccurate, misrepresenting the key distinction between system and user immutability flags.

What it got right is that only the superuser (root) can set and unset the system immutability flag (schg/noschg) while users (if they own the file) can set and unset the user immutability flag (uchg/nouchg). But either flag can be set or unset on any kind of file.

The user immutability flag can be unset (nouchg) by either the superuser or the owner of the file regardless of the system security run level (-1 to 2), but the system immutability flag can only be unset (noschg) by the superuser when the system security run level is 0 (insecure mode) or -1 (permanently insecure mode).
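
To make that distinction concrete, here's a minimal sketch in Perl, shelling out to chflags(1) (the file path is hypothetical; the flag semantics are as described above):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $file = '/home/jdoe/notes.txt';  # hypothetical; either flag can go on any file

    # The owner (or root) can set and unset the user immutable flag
    # at any securelevel.
    system('chflags', 'uchg', $file)   == 0 or die 'chflags uchg failed';
    system('chflags', 'nouchg', $file) == 0 or die 'chflags nouchg failed';

    # Root can set the system immutable flag at any securelevel...
    system('chflags', 'schg', $file)   == 0 or die 'chflags schg failed';

    # ...but unsetting it succeeds only at securelevel 0 or -1 (e.g.,
    # after shutting down to single-user mode); at 1 or 2 it fails.
    system('chflags', 'noschg', $file) == 0 or die 'chflags noschg failed';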

During boot, the system normally starts in single-user mode at security run level 0. The init process raises the run level to 1 (secure mode) when booting to multi-user mode, and to 2 (highly secure mode) if the rc.securelevel config file so specifies. The system secure level can also be raised if the superuser changes it with the sysctl command, setting the kern.securelevel variable. The permanently insecure mode (-1) can be set with sysctl only while in insecure mode, which will cause the init process to not raise the security level to 1 during boot to multi-user mode.

The man page for securelevel gives an overview of the restrictions at each security level. I run my systems at securelevel 2, highly secure mode. Secure mode (securelevel 1) and above prevent lowering the securelevel except by the init process (when the system is shut down back to single-user mode), do not allow /dev/mem or /dev/kmem to be opened, make raw disk devices read-only, and disallow unsetting system immutability flags (or unsetting the related system append-only flags, sappnd, using nosappnd), along with a few other restrictions. Highly secure mode (securelevel 2) also prevents setting the time backward or close to overflow, and prevents changing host firewall rules (pf filter and NAT rules), though IP addresses may still be added or removed from tables. The most significant difference between the system and user immutability flags is not which files they are set on, but that the user immutability flags can be turned off at any time by the file owner or superuser, while the system immutability flags can only be changed while in the insecure modes, i.e., after a shutdown from multi-user operation.

I use system file immutability flags on most configuration files, system binaries, and system libraries as an obstacle to an attacker in the event of a compromise and escalation of privileges. An attacker with console access to a system can bypass these flags by shutting down the system and unsetting them. And if I left certain config files without immutability flags set, an attacker could modify a config file that leads to execution during system startup, inserting commands to unset the flags, and then reboot the system so that those commands execute while the system is in insecure mode.

These flags can get in the way and cause problems without an efficient way to manage them, as they need to be unset in order to do system patches and updates--it's bad news if you try to do a system upgrade while many files are set with immutability flags. They also block features like OpenBSD's kernel address randomized link (KARL) and library-reordering address space layout randomization (ASLR), which relink the kernel and certain key libraries on reboot to make blind return-oriented programming (BROP) attacks more difficult.

I use scripts called syslock/sysunlock to manage the immutability flags. These scripts have a config file (kept immutable, of course) that specifies which files and directory contents to set immutable, and defines groups so that I can unlock what I need at a particular time with a single command. When I want to allow KARL and library ASLR to occur, I do a "sysunlock -s" (unlock system files) while in single-user mode and then boot. I've also defined groups for things like "everything in /etc", "everything related to DNS", etc.

My rc.securelevel config file contains a command to sleep for a period of time, then do a syslock -s after raising the system security level, giving the system time to do the relinking, so I don't need to issue a command to re-lock those files. This is a process I also follow when installing kernel patches, as syspatch fails to do kernel relinking when in single-user mode. That is, I boot to single-user mode, unlock the system files with "sysunlock -s", then exit back to multi-user mode and the KARL process (/usr/libexec/reorder_kernel) and library ASLR (part of /etc/rc) run without error.

syslock/sysunlock may be found at https://www.discord.org/lippard/software/ and on GitHub at https://github.com/lippard661/syslock

Anyway, after that long introduction, here's what ChatGPT had to say about all this; my prompts are in red:

write a few paragraphs explaining immutable flags in OpenBSD including the commands for setting them and the difference between system and user immutable flags

In OpenBSD, immutable flags are used to protect files from being modified or deleted. These flags can be set on both system and user files, and are useful for preventing accidental or malicious changes to important system files.

To set an immutable flag on a file, the "chflags" command can be used. The command takes the form "chflags immutable [filename]" and can be used to set the flag on a single file or multiple files at once. To remove the immutable flag, the command "chflags noschg [filename]" can be used.

The difference between system and user immutable flags is the level of protection they provide. System immutable flags are set on system files and are only modifiable by the superuser. User immutable flags are set on user files and can be modified by the owner of the file. This allows users to protect their own files, while also allowing the system administrator to protect important system files.

It's important to note that while immutable flags provide a level of protection, they should not be considered as a replacement for proper file permissions and security practices. It's still important to use appropriate permissions to control access to files and to keep your system updated to protect against known vulnerabilities.

Can I set the user immutable flag on a system file or the system immutable flag on a user file?

In OpenBSD, the system immutable flag can only be set on system files, and the user immutable flag can only be set on user files. Attempting to set a user immutable flag on a system file or a system immutable flag on a user file will result in an error.

This is because the system immutable flag is intended for use on system files that should not be modified by anyone other than the superuser. Setting this flag on a user file would not provide any additional security, as the user already has full control over their own files. Similarly, the user immutable flag is intended for use on user files that should not be modified by the owner. Setting this flag on a system file would not provide any additional security, as the superuser already has the ability to modify system files.

It's important to use the appropriate immutable flag for the type of file you are working with to ensure that the file is protected as intended.

[UPDATE, 22 Jan 2023:]

That is incorrect. The system immutable flag can be set on user files and the user immutable flag can be set on system files. The difference is not what files they can be used on, but that the system immutable flag can only be unset by the superuser when the system is in insecure mode, while the user immutable flag can be unset by the file owner or superuser at any system security run level.


I apologize, you are correct. The system immutable flag can be set on user files, and the user immutable flag can be set on system files. The difference between the two flags is not the type of file they can be applied to, but rather the level of protection they provide and who can unset them.

The system immutable flag provides a higher level of protection and can only be unset by the superuser when the system is in insecure mode, This means that once the system flag is set, the file cannot be modified or deleted without booting in insecure mode.

On the other hand, the user immutable flag can be unset by the file owner or the superuser at any system security run level. This means that the file can be modified or deleted by the owner of the file or superuser at any time.

Again, I apologize for the confusion in my previous answer. Thank you for bringing this to my attention.

Friday, January 01, 2021

Books read in 2020

Not much blogging going on here still, but here's my annual list of books read for 2020.
  • Nicholson Baker, Baseless: My Search for Secrets in the Ruins of the Freedom of Information Act
  • John Bolton, The Room Where It Happened: A White House Memoir
  • Ben Buchanan, The Hacker and the State: Cyber Attacks and the New Normal of Geopolitics
  • Susannah Cahalan, The Great Pretender: The Undercover Mission That Changed Our Understanding of Madness
  • Michael Cohen, Disloyal: The True Story of the Former Personal Attorney to President Donald J. Trump
  • Myke Cole, Legion versus Phalanx: The Epic Struggle for Infantry Supremacy in the Ancient World
  • Libby Copeland, The Lost Family: How DNA Testing Is Upending Who We Are
  • Barton Gellman, Dark Mirror: Edward Snowden and the Surveillance State
  • Fiona Hill and Clifford G. Gaddy, Mr. Putin: Operative in the Kremlin (2012)
  • James W. Johnson, Arizona Politicians: The Noble and the Notorious (2002)
  • Gene Kim, The Unicorn Project: A Novel about Developers, Digital Disruption, and Thriving in the Age of Data
  • Maria Konnikova, The Biggest Bluff: How I Learned to Pay Attention, Master Myself, and Win
  • Talia Lavin, Culture Warlords: My Journey Into the Dark Web of White Supremacy
  • Carol D. Leonnig and Philip Rucker, A Very Stable Genius: Donald J. Trump's Testing of America
  • Ben Macintyre, The Spy and the Traitor: The Greatest Espionage Story of the Cold War (2018)
  • Nancy MacLean, Democracy in Chains: The Deep History of the Radical Right's Stealth Plan for America (2017)
  • H. Keith Melton and Robert Wallace, with Henry R. Schlesinger, Spy Sites of New York City: A Guide to the Region's Secret History (2020)
  • Jefferson Morley, Morley v. CIA: My Unfinished JFK Investigation
  • Bastian Obermayer and Frederik Obermaier, The Panama Papers: Breaking the Story of How the Rich & Powerful Hide Their Money
  • Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare
  • Brad Smith and Carol Anne Browne, Tools and Weapons: The Promise and Peril of the Digital Age
  • Mary Trump, Too Much and Never Enough: How My Family Created the World's Most Dangerous Man
  • Robert Wallace and H. Keith Melton with Henry R. Schlesinger, Spy Sites of Washington, DC: A Guide to the Capital Region's Secret History (2017)
  • Anna Wiener, Uncanny Valley: A Memoir
  • Isabel Wilkerson, Caste: The Origins of Our Discontents
Top for 2020: Copeland, Macintyre, Cahalan, Smith and Browne, Buchanan, Obermayer and Obermaier, Gellman, Rid.

I started the following books that I expect to finish in 2021 (yes, I also said that about LaFeber and Wilson last year--I'm well into LaFeber's book and thought I might finish it before the end of the year, but I had only read Wilson's introduction, so it's barely started):

William Dalrymple, The Anarchy: The East India Company, Corporate Violence, and the Pillage of an Empire
Walter LaFeber, Inevitable Revolutions: The United States in Central America (2nd edition)
Peter H. Wilson, The Holy Roman Empire: A Thousand Years of Europe's History

I've also pre-ordered and am looking forward to reading:

Nicole Perlroth, This Is How They Tell Me the World Ends: The Cyberweapon Arms Race (due to be published on February 9)

(Previously: 2019, 2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005.)

Wednesday, January 01, 2020

Books read in 2019

Not much blogging going on here still, but here's my annual list of books read for 2019.
  • Graham T. Allison, Destined for War: Can America and China Escape Thucydides's Trap?
  • Ross Anderson, Security Engineering (3rd edition, draft chapters)
  • Herbert Asbury, The Barbary Coast: An Informal History of the San Francisco Underworld
  • Heidi Blake, From Russia with Blood: The Kremlin's Ruthless Assassination Program and Vladimir Putin's Secret War on the West
  • Rutger Bregman, Utopia for Realists: How We Can Build the Ideal World
  • Oliver Bullough, Moneyland: The Inside Story of the Crooks and Kleptocrats Who Rule the World
  • Bryan Caplan and Zach Weinersmith, Open Borders: The Science and Ethics of Immigration
  • C.J. Chivers, The Fighters: Americans in Combat
  • Sefton Delmer, Black Boomerang
  • Nina J. Easton, Gang of Five: Leaders at the Center of the Conservative Crusade (bio of Bill Kristol, Ralph Reed, Clint Bolick, Grover Norquist, and David McIntosh)
  • Ronan Farrow, Catch and Kill: Lies, Spies, and a Conspiracy to Protect Predators
  • Ronan Farrow, War on Peace: The End of Diplomacy and the Decline of American Influence
  • Ian Frisch, Magic is Dead: My Journey into the World's Most Secretive Society of Magicians
  • Anand Giridharadas, Winners Take All: The Elite Charade of Changing the World
  • Reba Wells Grandrud, Sunnyslope (Images of America series)
  • Andy Greenberg, Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin's Most Dangerous Hackers
  • Jodi Kantor and Megan Twohey, She Said: Breaking the Sexual Harassment Story That Helped Ignite a Movement
  • Stephen Kinzer, Overthrow: America's Century of Regime Change From Hawaii to Iraq
  • Michael Lewis, Flash Boys: A Wall Street Revolt
  • Jonathan Lusthaus, Industry of Anonymity: Inside the Business of Cybercrime
  • Ben MacIntyre, A Spy Among Friends: Kim Philby and the Great Betrayal
  • Joseph Menn, Cult of the Dead Cow: How the Original Hacking Supergroup Might Just Save the World
  • Anna Merlan, Republic of Lies: American Conspiracy Theorists and Their Surprising Rise to Power
  • Jefferson Morley, Our Man in Mexico: Winston Scott and the Hidden History of the CIA
  • Sarah T. Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media
  • Hans Rosling, with Ola Rosling and Anna Rosling Rönnlund, Factfulness: Ten Reasons We're Wrong About the World--and Why Things Are Better Than You Think
  • Russell Shorto, Amsterdam: A History of the World's Most Liberal City
  • Alexander Stille, The Sack of Rome: Media + Money + Celebrity = Power = Silvio Berlusconi
  • Jamie Susskind, Future Politics: Living Together in a World Transformed by Tech
  • Erik Van De Sandt, Deviant Security: The Technical Computer Security Practices of Cyber Criminals (Ph.D. thesis)
  • Tom Wolfe, The Right Stuff
  • Tim Wu, The Attention Merchants: The Epic Scramble to Get Inside Our Heads
Top for 2019: Bullough, Farrow (Catch and Kill), Wu, Chivers, Rosling, Greenberg, Blake, Allison, Caplan and Weinersmith, Kinzer, Delmer.

I started the following books I expect to finish in early 2020:

Myke Cole, Legion versus Phalanx: The Epic Struggle for Infantry Supremacy in the Ancient World
Walter LaFeber, Inevitable Revolutions: The United States in Central America (2nd edition)
Brad Smith and Carol Anne Browne, Tools and Weapons: The Promise and Peril of the Digital Age
Peter H. Wilson, The Holy Roman Empire: A Thousand Years of Europe's History

Two books I preordered and look forward to reading in 2020:

Anna Wiener, Uncanny Valley: A Memoir (due out January 14)
Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare (due out April 21)

(Previously: 2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005.)

Sunday, March 12, 2017

Rep. Tom Graves' Active Cyber Defense Certainty Act

Rep. Tom Graves (R-GA14) has circulated a draft bill, the "Active Cyber Defense Certainty Act" (or ACDC Act), which amends the Computer Fraud and Abuse Act (18 USC 1030) to legalize certain forms of "hacking back" for the purposes of collecting information about an attacker in order to facilitate criminal prosecution or other countermeasures.

The bill as it currently stands is not a good bill, for the following reasons:

1. It ignores the recommendations in a recent report, "Into the Gray Zone: Active Defense by the Private Sector Against Cyber Threats," from the Center for Cyber & Homeland Security at the George Washington University. This report distinguishes between low-risk active defense activities within the boundaries of the defender's own network, such as the use of deceptive technology (honeypots, honeynets, tarpitting), the use of beaconing technology to provide notifications in case of intrusions, and research in deep and dark web underground sites, on the one hand, and higher-risk active defense activities such as botnet takedowns, sanctions and indictments, white-hat ransomware, and rescue missions to recover stolen assets, on the other. One of the report's key questions for an active defense measure is "is the active defense measure authorized, whether by an oversight body, law enforcement, or the owner of the affected network?" This bill creates no mechanism for providing particular authorizations (also see points 2 and 3, below).

The "Into the Gray Zone" report also suggests that if a decision is made to authorize the accessing of a remote system (an attacker's system is almost always the system of another victim) for information collection purposes, it should be limited to cases in which a defender can "assert a positive identification of the hostile actor with near certainty, relying on multiple credible attribution methods." This, however, seems too strict a condition to impose.

Finally, however, this report advises that, even without a change in the law, DOJ "should exercise greater discretion in choosing when to enforce the CFAA and other relevant laws, and should provide clarity about how it intends to exercise such discretion. Companies engaging in activities that may push the limits of the law, but are intended to defend corporate data or end a malicious attack against a private server should not be prioritized for investigation or prosecution." (p. 28) The report cites active defense activity by Google in response to hacking from China as an example where there was no prosecution or sanction for accessing remote systems being used by attackers. This proposal seems to me a wiser course of action than adopting this bill. (Also see point 5, below.)

2. It disregards the recommendations from the Center for Strategic and International Studies Cyber Policy Task Force on the subject of active defense. The CSIS Cyber Policy Task Force report contains a short three-paragraph section on active defense (p. 14) which throws cold water on the idea, calling active defense "at best a stopgap measure, intended to address companies' frustration over the seeming impunity of transborder criminals" and affirming that only governments should be authorized to engage in activities on the high-risk side, and that it is their responsibility to coordinate and engage in such activity. It does offer up a possibility for a proposal that allows accessing remote systems by private parties in its last sentence: "Additionally, the administration could consider measures, carried out with the prior approval of federal law enforcement agencies (most likely requiring a warrant to enter a third-party network) to recover or delete stolen data stored on servers or networks under U.S. jurisdiction." This bill does not require approval from federal law enforcement agencies or a warrant for accessing remote systems or networks, and jurisdiction is only implicit.

3. While the proposal in the bill resembles a proposal made in a Mercatus Center at George Mason University paper by Anthony Glosson, it adopts the carrot element of the proposal while neglecting the stick. Glosson's proposal is that, like this bill, private parties should be permitted to access remote attacking systems in order to collect information ("observation and access"), but not to engage in "disruption and destruction." However, Glosson suggests three requirements be present to make such access and information collection permissible, and if those requirements are not present, that there be "stiff statutory damages" imposed. The bill omits any statutory damages, and imposes only one of Glosson's three requirements (though a previous version of the bill included the second). Glosson's three requirements are (1) that the defender's actions are limited to observation and access, (2) that the attacker was routing traffic through the defender's network at the time of the active defense action, and (3) that obtaining the cooperation of the attacking system's owner at the time of the attack was impractical. This third criterion is a critical one, and a good way to see the undesirability of this bill is to imagine that you are the owner of the intermediary system used by the attacker to go after a third party--what would you want that third party to be able to do with your system without your permission or consent?

4. The bill appears to have been somewhat hastily written and sloppily updated, failing to correct a persistent typographical error ("the victim' [sic] own network") through its revisions, and the current version seems to be somewhat incoherent. In its current form it is unlikely to meet its short title objective of encouraging certainty.

The current version of the bill makes it legal for a victim of a "persistent unauthorized intrusion" to access "without authorization the computer of the attacker to the victim' [sic] own network to gather information in order to establish attribution of criminal activity to share with law enforcement or to disrupt continued unauthorized activity against the victim's own network," so long as this does not destroy information on the system, cause physical injury, or create a threat to public health or safety.

The phrase "without authorization the computer of the attacker to the victim's own network" doesn't make sense [it should say "attacker of" or "attacker against"], and appears to be the result of poor editing from the prior version of the bill, which made permissible accessing "without authorization a computer connected to the victim' [sic] own network", with the rest of the text remaining the same. This prior wording apparently attempted to thread the needle of the GWU "Into the Gray Zone" report by defining the accessing of a remote system as being within the boundaries of the defender's own network, and thus on the low-risk side of the equation. However, the wording "connected to the victim's own network" is ambiguous and unclear--does it mean directly connected (e.g., to a WiFi access point or LAN port on a switch), in which case this is much less useful, or does it mean any active session flow of packets over the Internet into the victim's network (similar to Glosson's second requirement)? The latter is the more reasonable and charitable interpretation, but it should be made more explicit and could perhaps be too strict--what happens if the attacker disconnects just moments before the active defense activity begins?

Left unsaid in the bill is what can be done with information collected from the attacking system, which might include information belonging to other victims, the exposure of which could cause harm. Presumably other remedies from other statutes would exist if a defender engaged in such exposure, but it seems to me that this bill would be improved by making the parameters of permissible action more explicit and restrictive. Perhaps the current wording limits actions to information sharing with law enforcement and reconfiguration of one's own defensive systems based on the collected information, but "to disrupt continued unauthorized activity against the victim's own network" is a purpose that could be achieved by a much broader set of actions, which could cause harm to other victims.

5. It's not clear that the bill is necessary, given that security researchers are today (as they have been for years) taking steps to access infrastructure used by malicious cyber threat actors in order to monitor their activity and collect intelligence information. They are already making legal and regulatory risk decisions which incorporate the existing CFAA, and deciding to proceed anyway.

If this bill is to move forward, it needs some additional work.

(News story on the bill: Michael Mimoso, "Active Defense Bill Raises Concerns of Potential Consequences," ThreatPost.
Further reading: Paul Rosenzweig, "A Typology for Evaluating Active Cyber Defenses," Lawfare blog)

UPDATE (March 14, 2017): Robert Chesney wrote a good critique of the bill at the Lawfare blog, "Legislative Hackback: Notes on the Active Cyber Defense Certainty Act discussion draft," in which he points out that the word "persistent" is undefined and vague, notes that "intrusion" excludes distributed denial of service attacks from permissible cases of response under this bill, and wisely notes that there may be multiple computers in an attack chain used by the attacker, while the bill is written as though there is only one. (It is also noteworthy that an attacking IP could be a firewall in front of an attacking machine, and a response attempting to connect to that IP could be redirected to a completely different system.) Chesney also questions whether destroying information is the right limit on responsive activity, as opposed to altering information (such as system configurations). He also notes that the restrictions for destruction, physical injury, and threats to public health and safety are probably insufficient, noting as I did above that there could be other forms of harm from disseminating confidential information discovered on the attacking system.

I think a more interesting bill that would create incentives for companies to invest in security and to appropriately share information about attacks (rather than trying to hide it) would be a bill that created a safe harbor or liability limits for a company whose systems are used to attack third parties, if they have taken certain precautionary measures (such as having patched all known vulnerabilities more than 30 days old, and having a continuous monitoring program) and if they also share in a timely manner information about their breach.

UPDATE (May 25, 2017): Rep. Graves has released a version 2.0 of his bill which is vastly improved, addressing almost all of my concerns above. The new Sec. 2 of the bill puts the use of beaconing technology on a sound legal footing, consistent with the recommendations of the GWU "Into the Gray Zone" report. The new Sec. 4 of the bill requires notification of the FBI; while that isn't notification of or deferral to organizations which have their own cyber defense teams to protect and investigate their own compromised infrastructure, it might effectively serve the same purpose, and it also provides a deterrent to irresponsible active defense. The core of the former bill, Sec. 3, has been revised to limit what can be done, so that now taking or exposing content on the attacker machine belonging to other parties would not be permissible. And there is also a new Sec. 5 of the bill, which sunsets it after two years. I cautiously support the new bill as a potentially useful experiment.

UPDATE (October 14, 2017): A new version of the bill was released this week which has further improvements. Instead of just creating an exemption to the CFAA, it creates a defense to a criminal charge, and makes clear that it is not a defense against civil liability. This means that if you stay within the bounds of the new rules while accessing the systems of a third party which is another victim of the attacker, you won't go to jail for it, but you could still be successfully sued for damages by that third party. The new version of the bill also lists a few more things which you are NOT permitted to do in order to use the defense, and it requires that the FBI create a program for receiving advance notices from individuals and organizations that intend to use these measures, as well as a requirement for an annual assessment of this legislation's effectiveness.

UPDATE (February 2, 2018): There are still a few issues with the current version of the Graves bill. (1) It doesn't require defenders to document and disclose actions taken against systems not owned by the attacker to the owners of those systems. (2) It places no limits on what vulnerabilities may be exploited on intermediary or attacker systems. (3) It allows destructive actions against information which belongs to the defender, as well as against any information or system which belongs to the attacker. (4) It does not limit the targets to systems within U.S. jurisdiction, nor does it require any judicial approval. Attacks on systems outside U.S. jurisdiction could result in state-sponsored blowback. (5) The exception to permitted activity for any action which "intentionally results in intrusive or remote access into an intermediary's computer" seems at odds with the overall proposal, since 90%+ of the time the systems used by attackers will belong to an intermediary. (6) Sec. 5's requirement that the FBI be notified and presented with various pieces of information prior to the active defense seems both too strict and too loose. Too strict in that it doesn't allow pre-certification and must occur in the course of an attack; too loose in that it requires only that the FBI acknowledge receipt before proceeding, with no actual approval or certification, and there's a loophole in one of the required pieces of information to be given to the FBI, which is any other information requested by the FBI for the purposes of oversight. Since all the active defender requires is acknowledgment of receipt, if the FBI doesn't request any such further information as part of that acknowledgment, the defender is good to go immediately at that point, before any further information is provided. Sec. 5 is kind of a fake certification process--there is no actual certification or validation process that must occur.

Thursday, February 16, 2017

Confusing the two Trump cybersecurity executive orders

In Andy Greenberg's Wired article of February 9, 2017, "Trump Cybersecurity Chief Could Be a 'Voice of Reason,'" he writes:
But when Trump’s draft executive order on cybersecurity emerged last week, it surprised the cybersecurity world by hewing closely to the recommendations of bipartisan experts—including one commission assembled by the Obama administration.
The described timing and the link both refer to the original draft cybersecurity executive order, which does not at all resemble the recommendations of Obama's Commission on Enhancing National Cybersecurity or those of the Center for Strategic and International Studies Cyber Policy Task Force, both of which included input from large numbers of security experts. Contrary to what Greenberg says, the executive order he refers to was widely criticized on a number of grounds: it was incredibly vague and high-level, it specified an extremely short time frame for its reviews, and it proposed collecting information about major U.S. vulnerabilities and defenses into one place, in the hands of then-National Security Advisor Michael T. Flynn. That original version of the executive order resembled the Trump campaign's website policy proposal on cybersecurity.

The positive remarks, instead, were for a revised version of the cybersecurity executive order which was verbally described to reporters on the morning of January 31, the day that the signing of the order was expected to happen at 3 p.m., after Trump met for a listening session with security experts. The signing was cancelled, and the order has not yet been issued, but a draft subsequently got some circulation later in the week and was made public at the Lawfare blog on February 9.

This executive order contains recommendations consistent with both the Cybersecurity Commission report and the CSIS Cyber Policy Task Force report: mandating the use of the NIST Cybersecurity Framework by federal agencies, putting the Office of Management and Budget (OMB) in charge of enterprise risk assessment across agencies, promoting IT modernization and cloud and shared-services infrastructure, and directing DHS and other agency heads to work with private sector critical infrastructure owners on defenses.

One key thing it does not do, which was recommended by both reports, is elevate the White House cybersecurity coordinator role (held by Michael Daniel in the Obama administration, and not yet filled by the Trump administration) to an Assistant to the President, reflecting the importance of cybersecurity. Greenberg's piece seems to assume that Thomas Bossert is in the lead cybersecurity coordinator role, but his role is Homeland Security Advisor (the role previously held by Lisa Monaco in the Obama administration), with broad responsibility for homeland security and counterterrorism, not cybersecurity specifically.

Although Greenberg's error confusing the two executive orders was pointed out to him on Twitter on February 9, the article hasn't been corrected as of February 16.