Rep. Tom Graves (R-GA14) has circulated a draft bill, the "Active Cyber Defense Certainty Act" (or ACDC Act), which amends the Computer Fraud and Abuse Act (18 USC 1030) to legalize certain forms of "hacking back" for the purposes of collecting information about an attacker in order to facilitate criminal prosecution or other countermeasures.
The bill as it currently stands is not a good bill, for the following reasons:
1. It ignores the recommendations in a recent report, "Into the Gray Zone: The Private Sector and Active Defense Against Cyber Threats," from the Center for Cyber & Homeland Security at the George Washington University. This report distinguishes between low-risk active defense activities within the boundaries of the defender's own network--such as the use of deceptive technology (honeypots, honeynets, tarpitting), the use of beaconing technology to provide notifications in case of intrusions, and research in deep and dark web underground sites--on the one hand, and higher-risk active defense activities such as botnet takedowns, sanctions and indictments, white-hat ransomware, and rescue missions to recover stolen assets, on the other. (A minimal sketch of one such low-risk deceptive measure appears after this list.) One of the report's key questions for an active defense measure is "is the active defense measure authorized, whether by an oversight body, law enforcement, or the owner of the affected network?" This bill creates no mechanism for providing such authorizations (also see points 2 and 3, below).
The "Into the Gray Zone" report also suggests that if a decision is made to authorize the accessing of a remote system (an attacker's system is almost always the system of another victim) for information collection purposes, it should be limited to cases in which a defender can "assert a positive identification of the hostile actor
with near certainty, relying on multiple credible attribution
methods." This, however, seems too strict a condition to impose.
Finally, the report advises that, even without a change in the law, DOJ "should exercise greater discretion in choosing when to enforce the CFAA and other relevant laws, and should provide clarity about how it intends to exercise such discretion. Companies engaging in activities that may push the limits of the law, but are intended to defend corporate data or end a malicious attack against a private server should not be prioritized for investigation or prosecution." (p. 28) The report cites active defense activity by Google in response to hacking from China as an example where there was no prosecution or sanction for accessing remote systems being used by attackers. This proposal seems to me a wiser course of action than adopting this bill. (Also see point 5, below.)
2. It disregards the recommendations of the Center for Strategic and International Studies Cyber Policy Task Force on the subject of active defense. The CSIS Cyber Policy Task Force report contains a short three-paragraph section on active defense (p. 14) which throws cold water on the idea, calling active defense "at best a stopgap measure, intended to address companies’ frustration over the seeming impunity of transborder criminals" and affirming that only governments should be authorized to engage in activities on the high-risk side, and that it is their responsibility to coordinate and engage in such activity. The report does, however, offer one possibility for allowing private parties to access remote systems in its last sentence: "Additionally, the administration could consider measures, carried out with the prior approval of federal law enforcement agencies (most likely requiring a warrant to enter a third-party network) to recover or delete stolen data stored on servers or networks under U.S. jurisdiction." This bill does not require approval from federal law enforcement agencies or a warrant for accessing remote systems or networks, and jurisdiction is only implicit.
3. While the bill resembles a proposal by Anthony Glosson published by the Mercatus Center at George Mason University, it adopts the carrot element of that proposal while neglecting the stick. Like this bill, Glosson's proposal would permit private parties to access remote attacking systems in order to collect information ("observation and access"), but not to engage in "disruption and destruction." However, Glosson suggests three requirements that must be met for such access and information collection to be permissible, with "stiff statutory damages" imposed when they are not. The bill omits any statutory damages and imposes only one of Glosson's three requirements (though a previous version of the bill included the second). Glosson's three requirements are (1) that the defender's actions are limited to observation and access, (2) that the attacker was routing traffic through the defender's network at the time of the active defense action, and (3) that obtaining the cooperation of the attacking system's owner at the time of the attack was impractical. This third criterion is a critical one, and a good way to see the undesirability of this bill is to imagine that you are the owner of the intermediary system used by the attacker to go after a third party--what would you want that third party to be able to do with your system without your permission or consent?
4. The bill appears to have been somewhat hastily written and sloppily updated, failing to correct a persistent typographical error ("the victim' [sic] own network") through its revisions, and the current version is somewhat incoherent. In its current form it is unlikely to deliver the certainty promised by its short title.
The current version of the bill makes it legal for a victim of a "persistent unauthorized intrusion" to access "without authorization the computer of the attacker to the victim' [sic] own network to gather information in order to establish attribution of criminal activity to share with law enforcement or to disrupt continued unauthorized activity against the victim's own network," so long as this does not destroy information on the system, cause physical injury, or create a threat to public health or safety.
The phrase "without authorization the computer of the attacker to the victim's own network" doesn't make sense [it should say "attacker of" or "attacker against"], and appears to be the result of poor editing from the prior version of the bill, which made permissible accessing "without authorization a computer connected to the victim' [sic] own network", with the rest of the text remaining the same. This prior wording apparently attempted to thread the needle of the GWU "Into the Gray Zone" report by defining the accessing of a remote system as being within the boundaries of the defender's own network, and thus on the low-risk side of the equation. However, the wording "connected to the victim's own network" is ambiguous and unclear--does it mean directly connected (e.g., to a WiFi access point or LAN port on a switch), in which case this is much less useful, or does it mean any active session flow of packets over the Internet into the victim's network (similar to Glosson's second requirement)? The latter is the more reasonable and charitable interpretation, but it should be made more explicit and could perhaps be too strict--what happens if the attacker disconnects just moments before the active defense activity begins?
Left unsaid in the bill is what can be done with information collected from the attacking system, which might include information belonging to other victims, the exposure of which could cause harm. Presumably other remedies from other statutes would exist if a defender engaged in such exposure, but it seems to me that this bill would be improved by making the parameters of permissible action more explicit and restrictive. Perhaps the current wording limits actions to information sharing with law enforcement and reconfiguration of one's own defensive systems based on the collected information, but "to disrupt continued unauthorized activity against the victim's own network" is a purpose that could be achieved by a much broader set of actions, which could cause harm to other victims.
5. It's not clear that the bill is necessary, given that security researchers are today (as they have been for years) taking steps to access infrastructure used by malicious cyber threat actors in order to monitor their activity and collect intelligence information. They are already making legal and regulatory risk decisions which incorporate the existing CFAA, and deciding to proceed anyway.
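As a concrete illustration of the kind of low-risk deceptive technology the "Into the Gray Zone" report places within the boundaries of the defender's own network (the sketch promised in point 1 above), here is a minimal, hypothetical honeypot-style listener in Python. The port, log file, and behavior are my own illustrative assumptions, not anything drawn from the report or the bill; a real deployment would use purpose-built tooling.

```python
# Minimal honeypot sketch (hypothetical, for illustration only): a fake TCP
# listener on an unused port inside the defender's own network. It provides
# no real service; it just records who connects and what they send.
import socket
import datetime

LISTEN_ADDR = ("0.0.0.0", 2222)   # hypothetical unused port on the defender's network
LOG_FILE = "honeypot.log"         # hypothetical log destination

def log_event(message: str) -> None:
    timestamp = datetime.datetime.utcnow().isoformat()
    with open(LOG_FILE, "a") as f:
        f.write(f"{timestamp} {message}\n")

def run_honeypot() -> None:
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(LISTEN_ADDR)
    server.listen(5)
    log_event(f"honeypot listening on {LISTEN_ADDR[0]}:{LISTEN_ADDR[1]}")
    while True:
        conn, (src_ip, src_port) = server.accept()
        log_event(f"connection from {src_ip}:{src_port}")
        conn.settimeout(10)
        try:
            data = conn.recv(4096)          # capture whatever the scanner/attacker sends
            if data:
                log_event(f"{src_ip} sent {data!r}")
        except socket.timeout:
            log_event(f"{src_ip} connected but sent nothing")
        finally:
            conn.close()                    # never provide a real service

if __name__ == "__main__":
    run_honeypot()
```

The point of the sketch is that everything happens on the defender's own systems and no remote system is touched, which is what keeps this kind of measure on the low-risk side of the report's dividing line.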
If this bill is to move forward, it needs some additional work.
(News story on the bill: Michael Mimoso, "Active Defense Bill Raises Concerns of Potential Consequences," ThreatPost.
Further reading: Paul Rosenzweig, "A Typology for Evaluating Active Cyber Defenses," Lawfare blog)
UPDATE (March 14, 2017): Robert Chesney wrote a good critique of the bill at the Lawfare blog, "Legislative Hackback: Notes on the Active Cyber Defense Certainty Act discussion draft," in which he points out that the word "persistent" is undefined and vague, that "intrusion" excludes distributed denial of service attacks from permissible cases of response under this bill, and that there may be multiple computers in an attack chain used by the attacker, while the bill is written as though there is only one. (It is also noteworthy that an attacking IP could be a firewall in front of an attacking machine, and a response attempting to connect to that IP could be redirected to a completely different system.) Chesney also questions whether destroying information is the right limit on responsive activity, as opposed to altering information (such as system configurations). And he argues that the restrictions on destruction, physical injury, and threats to public health and safety are probably insufficient, noting as I did above that there could be other forms of harm from disseminating confidential information discovered on the attacking system.
I think a more interesting bill, one that would create incentives for companies to invest in security and to appropriately share information about attacks (rather than trying to hide it), would be one that created a safe harbor or liability limits for a company whose systems are used to attack third parties, provided the company has taken certain precautionary measures (such as having patched all known vulnerabilities more than 30 days old and having a continuous monitoring program) and shares information about its breach in a timely manner.
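To make the precautionary criteria in that hypothetical safe harbor concrete, here is a minimal sketch of an eligibility check, assuming a simple inventory of known vulnerabilities with disclosure and patch dates; the data model and CVE identifiers are illustrative assumptions, not anything drawn from existing or proposed legislation.

```python
# Illustrative sketch only: a simple eligibility check for the hypothetical
# safe harbor described above. The data model (disclosure date, patch date,
# monitoring flag) is an assumption, not anything drawn from actual legislation.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

PATCH_GRACE_PERIOD = timedelta(days=30)  # "patched all known vulnerabilities more than 30 days old"

@dataclass
class KnownVulnerability:
    cve_id: str
    disclosed: date
    patched_on: Optional[date]  # None if still unpatched

def safe_harbor_eligible(vulns: list[KnownVulnerability],
                         has_continuous_monitoring: bool,
                         as_of: date) -> bool:
    """True if every vulnerability older than 30 days has been patched
    and a continuous monitoring program is in place."""
    if not has_continuous_monitoring:
        return False
    for v in vulns:
        overdue = as_of - v.disclosed > PATCH_GRACE_PERIOD
        if overdue and v.patched_on is None:
            return False
    return True

# Example: one old, unpatched vulnerability disqualifies the company.
vulns = [
    KnownVulnerability("CVE-0000-0001", date(2018, 1, 1), None),            # hypothetical IDs
    KnownVulnerability("CVE-0000-0002", date(2017, 12, 1), date(2017, 12, 10)),
]
print(safe_harbor_eligible(vulns, has_continuous_monitoring=True, as_of=date(2018, 2, 2)))  # False
```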
UPDATE (May 25, 2017): Rep. Graves has released a version 2.0 of his bill which is vastly improved, addressing almost all of my concerns above. The new Sec. 2 of the bill puts the use of beaconing technology on a sound legal footing, consistent with the recommendations of the GWU "Into the Gray Zone" report. The new Sec. 4 requires notification of the FBI; while that isn't the same as notification of or deferral to organizations that have their own cyber defense teams to protect and investigate their own compromised infrastructure, it might effectively serve the same purpose, and it also provides a deterrent to irresponsible active defense. The core of the former bill, Sec. 3, has been revised to limit what can be done, so that taking or exposing content on the attacker's machine that belongs to other parties would not be permissible. And there is also a new Sec. 5, which sunsets the bill after two years. I cautiously support the new bill as a potentially useful experiment.
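For readers unfamiliar with beaconing technology, here is a minimal, hypothetical sketch of the general idea: the defender embeds a reference to a unique URL in a decoy document on their own network, and if the document is exfiltrated and opened elsewhere, the resulting request to that URL reveals when, and from what IP address, it was opened. The host, port, and token below are illustrative assumptions only.

```python
# Hypothetical beaconing sketch: a tiny web server that logs any request to a
# unique "canary" URL which the defender has embedded in a decoy document on
# their own network. If the document is stolen and opened, the viewer's
# request to this URL tips off the defender.
import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

CANARY_TOKEN = "/beacon/7f3a2c.png"   # hypothetical unique path embedded in the decoy document

class BeaconHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == CANARY_TOKEN:
            timestamp = datetime.datetime.utcnow().isoformat()
            print(f"{timestamp} beacon fired from {self.client_address[0]} "
                  f"(User-Agent: {self.headers.get('User-Agent', 'unknown')})")
        # Always return an innocuous response (a 1x1 image would be typical;
        # an empty 200 keeps the sketch short).
        self.send_response(200)
        self.send_header("Content-Type", "image/png")
        self.end_headers()

    def log_message(self, format, *args):
        pass  # suppress default request logging; beacon hits are logged above

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), BeaconHandler).serve_forever()
```

Note that the beacon only reports back to the defender; it is the kind of notification measure the "Into the Gray Zone" report classes as low-risk, which is why putting it on a sound legal footing is a sensible first step.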
UPDATE (October 14, 2017): A new version of the bill was released this week with further improvements. Instead of just creating an exemption to the CFAA, it creates a defense to a criminal charge, and makes clear that it is not a defense against civil liability. This means that if you stay within the bounds of the new rules while accessing the systems of a third party which is another victim of the attacker, you won't go to jail for it, but you could still be successfully sued for damages by that third party. The new version also lists a few more things which you are NOT permitted to do in order to use the defense, requires that the FBI create a program for receiving advance notices from individuals and organizations that intend to use these measures, and requires an annual assessment of this legislation's effectiveness.
UPDATE (February 2, 2018): There are still a few issues with the current version of the Graves bill. (1) It doesn't require defenders to document and disclose actions taken against systems not owned by the attacker to the owners of those systems. (2) It places no limits on what vulnerabilities may be exploited on intermediary or attacker systems. (3) It allows destructive actions against information which belongs to the defender, as well as against any information or system which belongs to the attacker. (4) It does not limit the targets to systems within U.S. jurisdiction, nor does it require any judicial approval; attacks on systems outside U.S. jurisdiction could result in state-sponsored blowback. (5) The exception to permitted activity for any action which "intentionally results in intrusive or remote access into an intermediary's computer" seems at odds with the overall proposal, since 90%+ of the time the systems used by attackers will belong to an intermediary. (6) Sec. 5's requirement that the FBI be notified and presented with various pieces of information prior to the active defense seems both too strict and too loose: too strict in that it doesn't allow pre-certification and must occur in the course of an attack; too loose in that it requires only that the FBI acknowledge receipt before proceeding, not that it actually approve or certify anything, and there's a loophole in one of the required pieces of information to be given to the FBI, namely any other information requested by the FBI for the purposes of oversight. Since all the active defender requires is acknowledgment of receipt, if the FBI doesn't request any such further information as part of that acknowledgment, the defender is good to go immediately, before any further information is provided. Sec. 5 is in effect a fake certification process--there is no actual certification or validation that must occur.
Sunday, March 12, 2017
Thursday, February 16, 2017
Confusing the two Trump cybersecurity executive orders
In Andy Greenberg's Wired article on February 9, 2017, "Trump Cybersecurity Chief Could Be a 'Voice of Reason'," he writes:
"But when Trump’s draft executive order on cybersecurity emerged last week, it surprised the cybersecurity world by hewing closely to the recommendations of bipartisan experts—including one commission assembled by the Obama administration."

The described timing and the link both refer to the original draft cybersecurity executive order, which does not at all resemble the recommendations of Obama's Commission on Enhancing National Cybersecurity or the recommendations of the Center for Strategic and International Studies Cyber Policy Task Force, both of which included input from large numbers of security experts. Contrary to what Greenberg says, the executive order he refers to was widely criticized on a number of grounds, including that it is incredibly vague and high-level, that it specifies an extremely short time frame for its reviews, and that it seemed to think it was a good idea to collect information about major U.S. vulnerabilities and defenses into one place and put it into the hands of then-National Security Advisor Michael T. Flynn. That original version of the executive order resembled the Trump campaign's website policy proposal on cybersecurity.
The positive remarks, instead, were for a revised version of the cybersecurity executive order which was verbally described to reporters on the morning of January 31, the day that the signing of the order was expected to happen at 3 p.m., after Trump met for a listening session with security experts. The signing was cancelled, and the order has not yet been issued, but a draft subsequently got some circulation later in the week and was made public at the Lawfare blog on February 9.
This executive order contains recommendations consistent with both the Cybersecurity Commission report and the CSIS Cyber Policy Task Force report, mandating the use of the NIST Cybersecurity Framework by federal agencies, putting the Office of Management and Budget (OMB) in charge of enterprise risk assessment across agencies, promoting IT modernization and the adoption of cloud and shared services infrastructure, and directing DHS and other agency heads to work with private sector critical infrastructure owners on defenses.
One key thing it does not do, which was recommended by both reports, is elevate the White House cybersecurity coordinator role (a role the Trump administration has not yet filled; it was held by Michael Daniel in the Obama administration) to an Assistant to the President, reflecting the importance of cybersecurity. Greenberg's piece seems to assume that Thomas Bossert is in the lead cybersecurity coordinator role, but his role is Homeland Security Advisor (the role previously held by Lisa Monaco in the Obama administration), with broad responsibility for homeland security and counterterrorism, not cybersecurity-specific.
Although Greenberg's error confusing the two executive orders was pointed out to him on Twitter on February 9, the article hasn't been corrected as of February 16.
Sunday, January 01, 2017
Books read in 2016
Not much blogging going on here still, but here's my annual list of books read for 2016. Items with hyperlinks are linked directly to the item online (usually a PDF; some of these are reports rather than books), with no paywall or fee.
- Andreas Antonopoulos, The Internet of Money
- Herbert Asbury, The Gangs of New York: An Informal History of the Underworld
- Rob Brotherton, Suspicious Minds: Why We Believe Conspiracy Theories
- Center for Cyber & Homeland Security, Into the Gray Zone: The Private Sector and Active Defense Against Cyber Threats
- Michael D'Antonio, Never Enough: Donald Trump and the Pursuit of Success
- Henning Diedrich, Ethereum: Blockchains, Digital Assets, Smart Contracts, Decentralized Autonomous Organizations
- Martin Ford, Rise of the Robots: Technology and the Threat of a Jobless Future
- Emma A. Jane and Chris Fleming, Modern Conspiracy: The Importance of Being Paranoid
- Roger Z. George and James B. Bruce, editors, Analyzing Intelligence: Origins, Obstacles, and Innovations
- Peter Gutmann, Engineering Security
- House Homeland Security Committee, Going Dark, Going Forward: A Primer on the Encryption Debate
- Dr. Rob Johnston, Analytic Culture in the U.S. Intelligence Community: An Ethnographic Study
- R.V. Jones, Most Secret War
- Fred Kaplan, Dark Territory: The Secret History of Cyber War
- Maria Konnikova, The Confidence Game: Why We Fall for It...Every Time
- Adam Lee, hilarious blog commentary on Atlas Shrugged
- Deborah Lipstadt, Denying the Holocaust: The Growing Assault on Truth and Memory
- Dan Lyons, Disrupted: My Misadventure in the Startup Bubble
- Geoff Manaugh, A Burglar's Guide to the City
- Felix Martin, Money: The Unauthorized Biography--From Coinage to Cryptocurrencies
- Nathaniel Popper, Digital Gold: Bitcoin and the Inside Story of the Misfits and Millionaires Trying to Reinvent Money
- John Allen Paulos, A Numerate Life: A Mathematician Explores the Vagaries of Life, His Own and Probably Yours
- Mary Roach, Grunt: The Curious Science of Humans at War
- Jon Ronson, The Elephant in the Room: A Journey into the Trump Campaign and the "Alt-Right"
- Oliver Sacks, On the Move: A Life
- Luc Sante, Low Life: Lures and Snares of Old New York
- Adam Segal, The Hacked World Order: How Nations Fight, Trade, Maneuver, and Manipulate in the Digital Age
- Steve Silberman, NeuroTribes: The Legacy of Autism and the Future of Neurodiversity
- Richard Stiennon, There Will Be Cyberwar: How the Move to Network-Centric War Fighting Has Set the Stage for Cyberwar
- Russell G. Swenson, editor, Bringing Intelligence About: Practitioners Reflect on Best Practices
- U.S. Army Special Operations Command, "Little Green Men": A Primer on Modern Russian Unconventional Warfare, Ukraine, 2013-2014
- Joseph E. Uscinski and Joseph M. Parent, American Conspiracy Theories
- Paul Vigna and Michael J. Casey, The Age of Crypto Currency: How Bitcoin and the Blockchain Are Challenging the Global Economic Order
I made progress on a few other books (first four from 2016, one from 2015, next three from 2014, next three from 2013, last two still not finished from 2012--I have trouble with e-books, especially very long nonfiction e-books):
- Andreas Antonopoulos, Mastering Bitcoin: Unlocking Digital Cryptocurrencies
- Robert M. Gates, Duty: Memoirs of a Secretary at War
- Jocelyn Godwin, Upstate Cauldron: Eccentric Spiritual Movements in Early New York State
- Thomas Rid, Rise of the Machines: A Cybernetic History
- John Searle, Making the Social World
- Andrew Jaquith, Security Metrics: Replacing Fear, Uncertainty, and Doubt
- Massimo Pigliucci and Maarten Boudry, Philosophy of Pseudoscience: Reconsidering the Demarcation Problem
- Steven Pinker, The Sense of Style: The Thinking Person's Guide to Writing in the 21st Century
- Richard Bejtlich, The Practice of Network Security Monitoring
- James Grimmelmann, Internet Law: Cases & Problems (v2; v3 is out now)
- Douglas Hofstadter and Emmanuel Sander, Surfaces and Essences: Analogy as the Fuel and Fire of Thinking
- Mark Dowd, John McDonald, and Justin Schuh, The Art of Software Security Assessment: Identifying and Avoiding Software Vulnerabilities
- Michal Zalewski, The Tangled Web: A Guide to Securing Modern Web Applications
(Previously: 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005.)