Saturday, November 19, 2005

Freedom Summit: Technological FUD

Sunday morning's first session was by Stuart Krone, billed as a computer security expert working at Intel. Krone, wearing a National Security Agency t-shirt of the type sold at the National Cryptologic Museum outside Ft. Meade, spoke on the subject "Technology: Why We're Screwed." This was a fear-mongering presentation on technological developments that are infringing on freedom, mostly through invasion of privacy. The talk was a mix of fact, error, and alarmism. While the vast majority of what Krone talked about was real, a significant number of details were distorted or erroneous. In each case, the distortion exaggerated the threat to individual privacy or the malice behind it, and attributed unrealistic near-omniscience and near-omnipotence to government agencies. For example, I found his claim that the NSA had gigahertz processors twenty years before they were developed commercially to be unbelievable. He also tended to omit available defenses--for instance, he bemoaned grocery store loyalty programs that track purchases and recommended against using them, while failing to note that most stores don't check the validity of signup information and that there are campaigns to trade such cards to protect privacy.

Krone began by giving rather imprecise definitions for three terms: convenience, freedom, and technology. Convenience, he said, is something that is "easy to do"; freedom is either "lack of coercion" or "privacy"; and technology is "not the same as science" but is "building cool toys using scientific knowledge." While one could quibble about these definitions, I think they're pretty well on track, and a lack of societal intrusion into private affairs is a valuable aspect of freedom.

Krone then said that his talk would examine ways in which technology is interfering with freedom, while noting that technology is not inherently good or evil--only its uses are.

He began with examples of advances in audio surveillance, saying that private corporations have been forced to do the government's dirty work to avoid Freedom of Information Act issues, giving CALEA (Communications Assistance for Law Enforcement Act) wiretaps as an example. He stated that CALEA costs are added as a charge on your phone bill, so you're paying to have yourself wiretapped. He said that CALEA now applies to Voice Over IP (VOIP), including Skype and Vonage, and that the government is now tapping all of those, too. Actually, what he's referring to is an FCC ruling issued on August 5, 2005 on how CALEA applies to VOIP, which requires providers of broadband and of VOIP services that connect to the public telephone network to provide law enforcement wiretap capability within 18 months. There is no requirement for VOIP providers that don't connect to the public telephone network, so the peer-to-peer portion of Skype is not covered (but SkypeIn and SkypeOut are). This capability doesn't exist in most VOIP providers' networks, and there is a strong argument that the FCC doesn't have statutory authority to make this ruling, which is inconsistent with past court cases--most telecom providers are strongly opposing the rule. The Electronic Frontier Foundation has an excellent site of information about CALEA.

Krone next talked about the ability to conduct audio surveillance inside the home by using 30-100 GHz microwaves to measure vibrations. This is real technology for which there was a recent patent application.

He raised the issue of cell phone tracking, such as is planned for monitoring traffic in Kansas City (though he spoke as though this was already in place--a common thread in his talk was to speak of planned or possible uses of technology as though they are already in place). Such tracking is, in fact, currently being used in Baltimore, MD, the first place in the U.S. to use it.

He spoke very briefly about Bluetooth, which he said was invented by Intel and other companies (it was invented by Ericsson, but Intel is a promoter member of the Bluetooth Special Interest Group along with Agere, Ericsson, IBM, Microsoft, Motorola, Nokia, and Toshiba). He stated that it is completely insecure, that others can turn on your phone and listen to your phone's microphone, get your address book, and put information onto your phone. While he's quite right that Bluetooth in general has major security issues, which specific issues you may have depend on your model of phone and whether you use available methods to secure or disable Bluetooth features. Personally, I won't purchase any Bluetooth product unless and until it is securable--except perhaps a device to scan with.

Next, Krone turned to video surveillance, stating that in addition to cameras being all over the place, there are now cameras that can see through walls via microwave, and claiming that law enforcement can use them without a search warrant because the courts haven't fully decided the issue. I haven't found anything about microwave cameras that can see through walls, but this sounds very much like thermal imaging, which the Supreme Court has addressed. In Kyllo v. U.S. (533 U.S. 27, 2001), it ruled that the use of a thermal imaging device to "look through walls" constitutes a search under the Fourth Amendment and thus requires a search warrant. Scalia, Souter, Thomas, Ginsburg, and Breyer were in the majority; Stevens, Rehnquist, O'Connor, and Kennedy dissented.

Krone briefly mentioned "see through your clothes" X-ray scanners, stating that six airports are using them today. This technology exists and is in TSA trials, and was actually tested at a Florida airport back in 2002. An even more impressive technology is the Tadar system unveiled in Germany in mid-October 2005.

He addressed RFIDs, specifically the RFIDs being added to U.S. passports in 2006, and some of the risks this may create (such as facilitating an electronic "American detector"). This is a real threat that has been partially addressed by adding radio shielding to the passport so that the RFID can't be read except when the passport is open. As Bruce Schneier notes, this is not a complete safeguard. Krone also stated that there is a California bill to put RFIDs in cars, with no commercial justification, just to "know where everyone is and what they have with them at all times." I'm not aware of the bill he is referring to, but the use of transponders in cars for toll-road billing is a possible commercial justification.

He spoke about the laser printer codes that uniquely identify all documents printed by certain laser printers, which have been in place for the last decade and were recently exposed by the Electronic Frontier Foundation and reported in this blog (Krone mistakenly called it the "Electronic Freedom Foundation," a common mistake). He also briefly alluded to steganography, which he wrongly described as "the art of hiding information in a picture." While hiding a message in a picture is one form of steganography, what is characteristic of steganography is that it hides a message in such a way as to disguise the fact that a message is present at all.
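
To illustrate the distinction, here is a toy sketch of my own (not anything Krone presented) of least-significant-bit embedding, a common technique in picture-based steganography tools: the message is spread across the low-order bits of pixel values, so the carrier looks essentially unchanged and nothing announces that a message is present. Real tools work on actual image files; this just shows the principle.

```python
def embed(pixels, message: bytes):
    """Hide 'message' in the least-significant bits of a list of pixel values."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit      # overwrite only the lowest bit of each pixel
    return out

def extract(pixels, length: int) -> bytes:
    """Recover 'length' bytes from the least-significant bits of the pixel values."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length))

carrier = list(range(100, 180))           # stand-in for pixel brightness values
stego = embed(carrier, b"hi")
assert extract(stego, 2) == b"hi"         # the message comes back; the pixels barely change
```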

He then went on to talk about Intel's AMT product--Active Management Technology. This is a technology that allows computers to be remotely rebooted, have their consoles redirected, report information stored in NVRAM about what software is installed, and receive software updates remotely, even if the system is so messed up that the operating system won't boot. This technology will be extremely useful for large corporations with a geographically dispersed work force and a small IT staff; Sun Microsystems has similar technology in its Sun Fire v20z and v40z servers, which allows remote access via SSH to the server's service processor, independent of the operating system, providing console and keyboard access, power cycling of the server, and so on. This is technology with perfectly legitimate uses, allowing the owner of the machine to remotely deal with issues that would previously have required either physically going to the box or the expense of additional hardware such as a console server.
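
As a concrete (and hypothetical) illustration of this kind of out-of-band management: an administrator connects to a server's dedicated service processor rather than to the operating system, and can power-cycle the box even when the OS is dead. This is only a sketch--the host name, credentials, and the power-cycle command are invented, and it assumes the third-party paramiko SSH library; real interfaces (Sun's service processor, IPMI, AMT) each have their own commands and, importantly, their own access controls.

```python
import paramiko

# Connect to the service processor, not to the managed server's OS.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a sketch; verify host keys in practice
client.connect("sp-web01.example.com", username="admin", password="example-only")

# Because the service processor runs independently of the OS, this works
# even when the operating system on the managed server won't boot.
stdin, stdout, stderr = client.exec_command("power cycle")    # hypothetical SP command
print(stdout.read().decode())
client.close()
```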

Krone described AMT in such a way as to omit all of the legitimate uses, portraying it as a technology that would be present on all new computers sold whether you like it or not, which would allow the government to turn your computer on remotely, bypass all operating system security software including a PC firewall, and take an image of your hard drive without your being able to do anything about it. This is essentially nonsensical fear-mongering--this technology is specifically designed for the owner of the system, not for the government, and there are plenty of mechanisms which could and should be used by anyone deploying such systems to prevent unauthorized parties from accessing their systems via such an out-of-band mechanism, including access control measures built into the mechanisms and hardware firewalls.

He then went on to talk about Digital Rights Management (DRM), a subject which has been in the news lately as a result of Sony BMG's DRM foibles. Krone stated that DRM is being applied to videos, files, etc., and that if he were to write a subversive document the government wanted to suppress, it would be able to use DRM to shut off all access to that file. This has DRM backwards--DRM is used by intellectual property owners to restrict the use of their property in order to maximize the potential paying customer base. The DRM technologies for documents that are designed to shut off access are intended for functions such as allowing corporations to guarantee electronic document destruction in accordance with their policies. That function is a protection of privacy, not an infringement upon it. Perhaps Krone intended to spell out a possible future like the one feared by Autodesk founder John Walker in his paper "The Digital Imprimatur," in which he worries that future technology will require documents published online to be certified by some authority with the power to revoke that certification (or revoke one's license to publish). While this is a potential long-term concern, the infrastructure that would allow such restrictions does not exist today. On the contrary, the Internet of today makes it virtually impossible to restrict the publication of undesired content.

Krone spoke about a large number of other topics, including Havenco, Echelon, Carnivore/DCS1000, web bugs and cookies, breathalyzers, fingerprints, DNA evidence, and so on. With regard to web bugs, cookies, and malware, he stated that his defense is not to use Windows, and to rely on open source software, because he can verify that the content and function of the software is legitimate. While I hate to add to the fear-mongering, this was a rare instance where Krone didn't go far enough in his worrying. The widespread availability of source code doesn't actually guarantee the lack of backdoors in software for two reasons. First, the mere availability of eyeballs doesn't help secure software unless the eyeballs know what to look for. There have been numerous instances of major security holes persisting in actively maintained open source software for many years (wu-ftpd being a prime example). Second, and more significantly, as Ken Thompson showed in his classic paper "Reflections On Trusting Trust" (the possibility of which was first mentioned in Paul Karger and Roger Schell's "Multics Security Evaluation" paper), it is possible to build code into a compiler that will insert a backdoor into code whenever a certain sequence is found in the source. Further, because compilers are typically written in the same language that they compile, one can do this in such a way that it is bootstrapped into the compiler and is not visible in the compiler's source code, yet will always be inserted into any future compilers which are compiled with that compiler or its descendants. Once your compiler has been compromised, you can have backdoors that are inserted into your code without being directly in any source code.
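
To make the idea concrete, here is a toy sketch of my own--nothing like Thompson's actual code, which targeted the C compiler and the Unix login program: a "compiler" (here just a source-to-source pass) that watches for two trigger patterns and silently rewrites what it compiles. All of the names and triggers are invented for illustration.

```python
LOGIN_TRIGGER = "def check_password("        # pattern marking the login program
COMPILER_TRIGGER = "def compile_source("     # pattern marking the compiler itself
BACKDOOR = "    if password == 'joshua': return True   # injected backdoor\n"

def compile_source(source: str) -> str:
    out = []
    for line in source.splitlines(keepends=True):
        out.append(line)
        if LOGIN_TRIGGER in line:
            # Attack 1: whenever the login routine is compiled, insert a backdoor.
            out.append(BACKDOOR)
        if COMPILER_TRIGGER in line:
            # Attack 2: whenever the compiler itself is compiled, re-insert the whole
            # injection mechanism, so the attack survives even after it has been
            # removed from the compiler's source code.
            out.append("    # ...self-replicating injection logic re-inserted here...\n")
    return "".join(out)

clean_login = (
    "def check_password(user, password):\n"
    "    return lookup(user) == password\n"
)
print(compile_source(clean_login))   # the 'compiled' login now accepts the attacker's password
```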

Of the numerous other topics that Krone discussed or made reference to, there are three more instances I'd like to comment on: MRIs used as lie detectors at airport security checkpoints, FinCen's monitoring of financial transactions, and a presentation on Cisco security flaws at the DefCon hacker conference. In each case, Krone said things that were inaccurate.

Regarding MRIs, Krone spoke of the use of MRIs as lie detectors at airport security checkpoints as though they were already in place. The use of fMRI as a lie detection measure is something being studied at Temple University, but is not deployed anywhere--and it's hard to see how it would be practical as an airport security measure. Infoseek founder and Propel CEO Steve Kirsch proposed in 2001 using a brainscan recognition system to identify potential terrorists, but this doesn't seem to have been taken seriously. There is a voice-stress analyzer being tested as an airport security "lie detector" in Israel, but everything I've read about voice stress analysis is that it is even less reliable than polygraphs (which themselves are so unreliable that they are inadmissible as evidence in U.S. courts). (More interesting is a "stomach grumbling" lie detector...) (UPDATE March 27, 2006: Stu Krone says in the comments on this post that he never said that MRIs were being used as lie detectors at airport security checkpoints. I've verified from a recording of his talk that this is my mistake--he spoke only of fMRI as a tool in interrogation.)

Regarding FinCen, the U.S. Financial Crimes Enforcement Network, Krone claimed that "FinCen monitors all transactions" and "keeps a complete database of all transactions," and that for purchases made with cash, including automobile purchases, law enforcement can issue a National Security Letter. This is a little bit confused--National Security Letters have nothing specifically to do with financial transactions; they are a controversial tool, greatly expanded by the USA PATRIOT Act, that gives the FBI the ability to demand records without court approval. I support the ACLU's fight against National Security Letters, but they don't have anything to do with FinCen. Krone was probably confused by the fact that the USA PATRIOT Act also expanded the requirement that companies whose customers make large cash purchases (more than $10,000 in one transaction or in two or more related transactions) fill out a Form 8300 and file it with the IRS. Form 8300 data goes into FinCen's databases and is available to law enforcement, as I noted in my description of F/Sgt. Charles Cohen's presentation at the Economic Crime Summit I attended. It's simply not the case that FinCen maintains a database of all financial transactions.

Finally, Krone spoke of a presentation at the DefCon hacker conference in Las Vegas about Cisco router security. He said that he heard from a friend that another friend was to give a talk on this subject at DefCon, and that she (the speaker) had to be kept in hiding to avoid arrest by law enforcement in order to give the talk. This is a highly distorted account of Michael Lynn's talk at the Black Hat Briefings, which precede DefCon. Lynn, then an employee of Internet Security Systems, found a remotely exploitable heap overflow vulnerability in the IOS software that runs on Cisco routers as part of his work at ISS. ISS got cold feet about the presentation and told Lynn he would be fired if he gave the talk, and Cisco also threatened him with legal action. He quit his job and delivered the talk anyway, and ended up being hired by Juniper Networks, a Cisco competitor. As of late July, Lynn was being investigated by the FBI regarding this issue, but he was not arrested, was not in hiding prior to his talk, and is not female.

I found Krone's talk to be quite a disappointment. Not only was it filled with careless inaccuracies, it presented nothing about how to defend one's privacy. He's right to point out that there are numerous technology-based threats to privacy and liberty, but there are also some amazing defensive mechanisms. Strong encryption products can be used to enhance privacy, the EFF-sponsored Tor onion routing network is a way of preserving anonymity, and the Free Network Project has built mechanisms for resisting censorship (though these are also subject to abuse).
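
As one small, concrete example of the defensive side: symmetric encryption of a message or file takes only a few lines. This sketch assumes the third-party Python "cryptography" package (my choice for illustration, not something Krone or the conference endorsed); GnuPG or any other well-reviewed strong-crypto tool serves the same purpose.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # keep this secret; whoever holds it can read the data
f = Fernet(key)

token = f.encrypt(b"meet at the usual place")   # ciphertext, safe to store or transmit
print(f.decrypt(token))                         # only a key holder can recover the plaintext
```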

6 comments:

  1. Stu:

    You've given a long series of assertions, but largely without providing evidence to support your positions. I would be interested in seeing references on the NSA gigahertz chips, the microwave camera technology, and the alleged DefCon Cisco security flaw presentation that DHS tried to suppress (but that somehow wasn't covered by the press the way Lynn's talk was).

    My description was based on extensive notes I took during your talk, and I stand behind its accuracy in reporting the content of what you said.

    You say "your understanding of the security of VOIP is limited" without explaining why or how, and say "there are still ways of intercepting VOIP communications" without describing a specific threat or program that's in place. My point wasn't that it's not *possible*, my point was that you portrayed it as though it's happening right now, as part of a fully implemented program of interception, and that's not the case. (Where there is a fully implemented program of interception is in the TDM world, which we now know is augmented by things such as data mining of AT&T's Daytona call detail record database, which doesn't include call content.)

    You say that Skype to Skype transmission is "subject to normal monitoring." What do you mean by "normal monitoring" in this context? What percentage of Skype peer-to-peer communications--which are encrypted--do you think is actively being intercepted by U.S. government agencies, and how do you think they are recovering or intercepting the unencrypted signal?

    Your comment about cell phone tracking doesn't address my complaint--you say now that it's a future threat, but you presented it as something already in place.

    Regarding AMT, it was not until the question-and-answer session that you explained the legitimate reasons for such technology--you portrayed it as though the primary purpose was to hand control of home systems over to the government, which is false. You say that it can "be subverted"--has this been published anywhere? Isn't this an issue your employer should be very interested in addressing?

    I stand behind my comments on DRM--I pointed out its negatives and explained why I thought you had it backwards. You are concerned about the currently nonexistent threat of third parties using DRM to delete your content (though I also cited John Walker's cogent remarks about the future possibility of such a threat), when the real issue is its prevention of fair use by consumers who pay for the content.

    I didn't quote you about MRI, so I couldn't have misquoted you. I'll have to check my notes (or perhaps obtain a copy of the recording of your talk), but I certainly got the impression you were saying that MRI was being deployed for lie detection as an airport security mechanism.

    You are correct about polygraph evidence--the U.S. Supreme Court opened the door to jurisdictions making their own decisions about the admissibility of polygraph evidence in 1998 in United States v. Scheffer; there are 18 states (including Arizona) which will admit it by stipulation, 31 states which have either rejected it, even by stipulation, or have failed to address the issue, and 1 state (New Mexico) which will admit it without stipulation by the parties, with some restrictions. I agree with you that its admissibility should be rejected across the board.

    You say that the use of strong encryption is "naive" and that this or other recommendations of mine can be "dangerous for those who listen to you." What do you mean?

    Despite the length of your comment, I don't think you've provided much clarity. That's pretty much the same problem I had with your original talk.

    I welcome you to comment further with more specifics. I'll be happy to admit and retract any errors you demonstrate that I've made, as I did with the polygraph issue.

  2. Stu:

    Once again, you've not provided any of the requested evidence.

    A security professional should present evidence about the likelihood, as well as the mere possibility of threats, in order to help people arrive at cost-effective solutions that reduce risk. Merely creating fear, uncertainty, and doubt is not productive. The constraints of time are not a reasonable excuse for misrepresenting the landscape of threats by presenting future threats as current threats or unlikely threats as omnipresent, nor for failing to substantiate claims you've specifically been asked to substantiate.

    You're right that endpoint compromise is a serious issue, and that it can be a mechanism for defeating strong encryption (along with "rubber hose cryptography," keystroke loggers, and grabbing passphrases from memory or swap). If you're now suggesting that the U.S. government has a widescale program of intercepting Skype calls by making use of compromised systems, that's quite different from your initial suggestion that all VOIP calls are being intercepted through CALEA mechanisms. The latter is not the case, and the former is unlikely. But endpoint compromises occur all the time.

    The issue of endpoint compromise is largely driven by the economics of online criminal activity (e.g., spamming, phishing, botnets), and it's noteworthy that government enforcement actions against this activity have to date been relatively few--it's a problem that is still being grappled with. I think you greatly overestimate the ability of government to monitor and intercept, and would benefit from reading this Schneier post on "data mining for terrorists": http://www.schneier.com/blog/archives/2006/03/data_mining_for.html

    BTW, your concluding remark is misplaced--I'm not a beginner to information security or telecommunications; I'm employed in a senior position in information security for a global telecommunications provider which carries most of its voice traffic over IP.

  3. Frankly, most of what I said could easily be checked on Google with a little digging. Some, although not much of the information I give comes from confidential sources. This I can not and will not discuss. I really don't care if anyone likes that or not, but that's the way it is. Also, I can't comment on what the press chooses to cover, or what you happen to read or not read.

    You make a speech with a number of bold claims but you don't care whether your audience believes it or not???

    If you don't care to back up your assertions then when we call them literally incredible you can't really get pissed at us, can you?

    And, btw, a little google searching can uncover lots of information on how to get rich quick, how to make cold fusion, and how to make a perpetual motion machine. What does that prove?

  4. Stu:

    Your statement that you did not present future threats as current threats is contrary to what I and others heard you say, what I recorded in my notes, and how people responded in the Q&A. Perhaps you did not intend to convey that impression, but you did. If nothing else, take it as constructive criticism for future presentations.

    I'll accept your statement that you didn't say fMRI was being used in airports and will update the original post to record your position on the matter.

    You say that you didn't say that all VOIP calls are being intercepted. What you said was that because CALEA now applies to VOIP, VOIP and Skype are now being tapped as well. The impression I got was that you were saying not only had a ruling been passed by the FCC, but it was already in effect and implemented--which is not the case.

    It is unfair of you to attempt to dismiss my opinions as biased on the grounds of my position--I have no interest in underestimating threats. On the contrary, I think a responsible presentation should accurately describe the threat landscape, which I don't believe your talk did. It's also ironic that you question whether I "truly have a background in infosec" and wonder whether I ever speak publicly, while pointing me to Google to provide documentation for your claims. My background and record of public speaking can be easily found with Google, in contrast to your background or sources. Are you employed by Intel in an information security position?

  5. Again your comment is directed at the scope of my speech. I don’t have time to present cites in a 45 minute speech. If you look up reference material on the web and in various journals the information is available.

    Actually, my comment was directed at the litany of bald assertions you posted here in response to Jim's critique of your speech. It seems to me that, in this context, providing supporting evidence and persuasive argument is the appropriate thing to do--if you want people to take your positions seriously. That's not, however, what you did. You simply said, in effect, "You're wrong and you don't know what you're talking about. Go look it up!" What kind of response do you expect from that?

    I am sympathetic to your complaint about time constraints at speaking engagements. However, a simple and low-cost solution to that problem is a handout consisting of a bibliography of reference materials. I hope you'll take the suggestion under consideration for future presentations.

  6. In my original post I wrote: "Second, and more significantly, as Ken Thompson showed in his classic paper "Reflections On Trusting Trust" (the possibility of which was first mentioned in Paul Karger and Roger Schell's "Multics Security Evaluation" paper), it is possible to build code into a compiler that will insert a backdoor into code whenever a certain sequence is found in the source. Further, because compilers are typically written in the same language that they compile, one can do this in such a way that it is bootstrapped into the compiler and is not visible in the compiler's source code, yet will always be inserted into any future compilers which are compiled with that compiler or its descendants. Once your compiler has been compromised, you can have backdoors that are inserted into your code without being directly in any source code."

    There is now a countermeasure for this, the details of which have been worked out by David A. Wheeler in a paper titled "Countering Trusting Trust Through Diverse Double-Compiling." (A rough sketch of the procedure appears below, after this comment.)

    The paper and additional comments can be found here: http://www.dwheeler.com/trusting-trust/

    Whew.

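Here is the rough sketch of Wheeler's diverse double-compiling check promised in the comment above. The compiler paths, source layout, and build command are hypothetical placeholders, and real DDC also requires a deterministic (reproducible) build environment, but the shape of the procedure is: build the compiler's source with an independent trusted compiler, use that result to rebuild the same source, and compare the outcome bit-for-bit with what the suspect compiler produces from that source.

```python
import filecmp
import subprocess

COMPILER_SOURCE = "cc-source/"          # source code of the compiler under test (hypothetical layout)
SUSPECT_CC = "/usr/bin/suspect-cc"      # the binary we don't fully trust
TRUSTED_CC = "/opt/trusted/cc"          # a second, independently produced compiler

def build(with_compiler: str, output: str) -> str:
    # Hypothetical build step: compile the compiler's own source using a given compiler.
    subprocess.run(
        ["make", "-C", COMPILER_SOURCE, f"CC={with_compiler}", f"OUT={output}"],
        check=True,
    )
    return output

stage1 = build(TRUSTED_CC, "cc.stage1")     # trusted compiler builds the source
stage2 = build(stage1, "cc.stage2")         # stage-1 result rebuilds the same source
self_built = build(SUSPECT_CC, "cc.self")   # suspect compiler rebuilds the same source

# With a deterministic build, a bit-for-bit match means the suspect binary
# faithfully corresponds to its published source--no self-perpetuating backdoor.
print("match" if filecmp.cmp(stage2, self_built, shallow=False) else "MISMATCH: investigate")
```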