
Tuesday, November 13, 2007

Multics source code released

The full source code to the last official release of the Multics operating system has been released to the general public (though full source was always made available to all customers, except for specific "unbundled" applications). Multics, the predecessor system to Unix (and in a number of ways still its superior), was a general purpose commercial operating system best known for its security.

That final release, Multics MR12.5 (MR = "Multics Release"), shipped to customers in November 1992. The last Multics system was shut down in 2000.

The software can be downloaded from a website at MIT, though it requires specialized hardware, so don't expect to be able to run it. My name appears a few times throughout the software, as I worked as a Multics software developer from 1983 to 1988. The MIT site incorrectly states that Multics development was ended by Bull in 1985--that may have been when Bull decided to pull the plug, but there was still development (though primarily bug fixing) going on in 1988 when I left.

One of the pieces I wrote was a rewrite of the interactive message facility, in some ways a predecessor of instant messaging (except that it operated on a single timesharing host rather than over a network between hosts).

Most of the software is in the "ldd" hierarchy (for library directory directory, the directory of directories of libraries). The source is stored in Multics "archive" format, which is similar to Unix tar files. The message facility software is in /ldd/sss/source/bound_msg_facility_.s.archive.

Kudos to Groupe Bull, the copyright holder of Multics, for making the software open source. Bull purchased Multics as part of its acquisition of Honeywell's Large Computer Products Division in the mid-eighties.

Wednesday, September 19, 2007

Lessons for information security from Multics

Bruce Schneier brings attention to a 2002 paper by Paul Karger and Roger Schell (PDF) about lessons learned from Multics security that are still relevant today, and Multicians come out of the woodwork in the comments.

Karger and Schell were part of the Air Force "tiger team" that ran penetration attacks against Multics in the 1970s. They were successful, which ultimately led to a Multics security enhancement project, the result of which was that Multics was the first commercial operating system to obtain a B2 security rating from the National Computer Security Center. I played a small part in that project, fixing some bugs and helping to run tests of Multics' Trusted Computing Base (TCB).

Sunday, December 11, 2005

Internet History

I've been reading back issues of 2600: The Hacker Quarterly, and just read the April 1985 issue. They are fascinating historical documents. The last two pages of that issue contain the ARPANet hosts file as of September 27, 1984, listing the hosts by geographic location. This was shortly after the ARPANet/MILNET split and about the time of the introduction of the domain name system. The ARPANet hosts used the 10 network (which is now private IP space--it's not publicly routed and can be used by any individual or organization for internal numbering) and MILNET used the 26 network (26.0.0.0/8 is still assigned to DISA, the Defense Information Systems Agency).

Arizona at that time had two hosts: YUMA-SW (26.3.0.75) and YUMA-TAC (26.2.0.75), both on MILNET. The TACs (Terminal Access Controllers) were systems that allowed telephone dialup access to the network; they essentially played the role of a terminal server. The MILNET TACs developed a system for user authentication called the TAC Access Control System, or TACACS, which allowed a user to authenticate to a given TAC without the actual credentials being stored on the TAC. This protocol was enhanced by Cisco into XTACACS and then TACACS+, which is still used today, mainly on Cisco routers and switches. (The original deployment of TACACS meant that ARPANet users could not log in using MILNET TACs--which led to author and computer enthusiast Jerry Pournelle being kicked off the ARPANet in 1985 when his account on MIT-MC was shut down.)

There were a number of Multics systems on the net, including MIT-MULTICS in Cambridge, Massachusetts (10.0.0.6, through which I got access to ARPANet mailing lists back then), HI-MULTICS (10.1.0.94, the only host in Minnesota, belonging to Honeywell), USGS2-MULTICS in Colorado (26.0.0.69, belonging to the U.S. Geological Survey), and RADC-MULTICS (26.0.0.18, at the Rome Air Development Center in Rome, NY). The only hosts outside of the United States were MINET-RDM-TAC (24.1.0.6, in the Netherlands), MINET-HLH-TAC (24.1.0.13, in Scotland), FRANKFURT-MIL-TAC (26.0.0.116, in Germany--along with about 10 other hosts in Germany), three hosts in Italy, two in England, and three in Korea--all on military bases.

Saturday, November 19, 2005

Freedom Summit: Technological FUD

Sunday morning's first session was by Stuart Krone, billed as a computer security expert working at Intel. Krone, wearing a National Security Agency t-shirt of the type sold at the National Cryptologic Museum outside Ft. Meade, spoke on the subject "Technology: Why We're Screwed." This was a fear-mongering presentation on technological developments that are infringing on freedom, mostly through invasion of privacy. The talk was a mix of fact, error, and alarmism. While the vast majority of what Krone talked about was real, a significant number of details were distorted or erroneous. In each case, the distortion exaggerated the threat to individual privacy or the malice behind it, and attributed unrealistic near-omniscience and near-omnipotence to government agencies. For example, I found unbelievable his claim that the NSA had gigahertz processors twenty years before they were commercially available. He also tended to omit available defenses--for instance, he bemoaned grocery store loyalty programs that track purchases and recommended against using them, while failing to note that most stores don't check the validity of signup information and that there are campaigns to trade such cards to protect privacy.

Krone began by giving rather imprecise definitions for three terms: convenience, freedom, and technology. Convenience, he said, is something that is "easy to do"; freedom is either "lack of coercion" or "privacy"; and technology is "not the same as science" but is "building cool toys using scientific knowledge." While one could quibble with these definitions, I think they're pretty well on track, and a lack of societal intrusion into private affairs is a valuable aspect of freedom.

Krone then said that the thesis of his talk was to discuss ways in which technology is interfering with freedom, while noting that technology is not inherently good or evil--only its uses are.

He began with examples of advancements in audio surveillance, saying that private corporations have been forced to do the government's dirty work so that it can avoid Freedom of Information Act issues, giving as an example CALEA (Communications Assistance for Law Enforcement Act) wiretaps. He stated that CALEA costs are added as a charge on your phone bill, so you're paying to have yourself wiretapped. He said that CALEA now applies to Voice Over IP (VOIP), including Skype and Vonage, and that the government is now tapping all of those, too. Actually, what he's referring to is an FCC ruling issued on August 5, 2005 on how CALEA applies to VOIP, which requires providers of broadband and VOIP services that connect to the public telephone network to provide law enforcement wiretap capability within 18 months. There is no requirement for VOIP providers that don't connect to the public telephone network, so the peer-to-peer portion of Skype is not covered (but SkypeIn and SkypeOut are). This capability doesn't exist in most VOIP providers' networks today, and there is a strong argument that the FCC doesn't have statutory authority to make this ruling, which is inconsistent with past court cases--most telecom providers are strongly opposing the rule. The Electronic Frontier Foundation has an excellent site of information about CALEA.

Krone next talked about the ability to conduct audio surveillance on the inside of the home using 30-100 GHz microwaves to measure vibrations inside the home. This is real technology for which there was a recent patent application.

He raised the issue of cell phone tracking, as is planned for monitoring traffic in Kansas City (though he spoke as though this was already in place--a common thread in his talk was to speak of planned or possible uses of technology as though they were already deployed). This is actually in use today in Baltimore, MD, the first place in the U.S. to use it.

He spoke very briefly about Bluetooth, which he said was invented by Intel and other companies (it was invented by Ericsson, but Intel is a promoter member of the Bluetooth Special Interest Group along with Agere, Ericsson, IBM, Microsoft, Motorola, Nokia, and Toshiba). He stated that it is completely insecure, that others can turn on your phone and listen to your phone's microphone, get your address book, and put information onto your phone. While he's quite right that Bluetooth in general has major security issues, the specific issues you face depend on your model of phone and on whether you use available methods to secure or disable Bluetooth features. Personally, I won't purchase any Bluetooth product unless and until it is securable--except perhaps a device to scan with.

Next, Krone turned to video surveillance, stating that in addition to cameras being all over the place, there are now cameras that can see through walls via microwave, and that whether law enforcement can use them without a search warrant hasn't been fully decided by the courts. I haven't found anything about microwave cameras that can see through walls, but this sounds very much like thermal imaging, which the Supreme Court has addressed. In Kyllo v. U.S. (533 U.S. 27, 2001) the Court ruled that the use of a thermal imaging device to "look through walls" constitutes a search under the Fourth Amendment and thus requires a search warrant. Scalia, Souter, Thomas, Ginsburg, and Breyer were in the majority; Stevens, Rehnquist, O'Connor, and Kennedy dissented.

Krone briefly mentioned the use of "see through your clothes" X-ray scanners, stating that six airports are using them today. This technology exists and is in TSA trials, and it was actually tested at a Florida airport back in 2002. A newer, even more impressive technology is the Tadar system unveiled in Germany in mid-October 2005.

He addressed RFIDs, specifically the RFIDs being added to U.S. passports in 2006, and some of the risks this may create (such as facilitating an electronic "American detector"). This is a real threat that has been partially addressed by adding radio shielding to the passport cover so that the RFID can't be read except when the passport is open. As Bruce Schneier notes, this is not a complete safeguard. Krone also stated that there is a California bill to put RFIDs in cars, with no commercial justification, just to "know where everyone is and what they have with them at all times." I'm not aware of the bill he is referring to, but the use of transponders in cars for billing purposes on toll roads is a possible commercial justification.

He spoke about the laser printer codes that uniquely identify all documents printed by certain laser printers, which have been in place for the last decade and were recently exposed by the Electronic Frontier Foundation and reported in this blog (Krone mistakenly called it the "Electronic Freedom Foundation," a common mistake). He also briefly alluded to steganography, which he wrongly described as "the art of hiding information in a picture." While hiding a message in a picture is one form of steganography, what characterizes steganography is that the message is hidden in such a way as to disguise the fact that any message is present at all.
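As a concrete illustration of that distinction, here is a minimal least-significant-bit sketch in Python (the hide and reveal helpers and the list-of-pixel-values carrier are invented for this example, not taken from any particular tool): each bit of the secret message overwrites only the lowest bit of one pixel, so the carrier looks essentially unchanged and nothing advertises that a message is present.

    def hide(pixels, message):
        """Embed each bit of message (LSB-first) into the low bit of one pixel."""
        bits = [(byte >> i) & 1 for byte in message for i in range(8)]
        if len(bits) > len(pixels):
            raise ValueError("carrier too small for message")
        out = list(pixels)
        for i, bit in enumerate(bits):
            out[i] = (out[i] & ~1) | bit   # overwrite only the least-significant bit
        return out

    def reveal(pixels, length):
        """Recover 'length' bytes of hidden message from the pixels' low bits."""
        bits = [p & 1 for p in pixels[:length * 8]]
        return bytes(sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length))

    carrier = list(range(200, 256)) * 4    # stand-in for image pixel data
    stego = hide(carrier, b"hello")
    assert reveal(stego, 5) == b"hello"    # carrier differs only in its low bits

A real tool would of course work on actual image or audio files and usually encrypt the message first, but the principle is the same: the goal is concealment of the message's existence, not just of its contents.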

He then went on to talk about Intel's AMT product--"Active Management Technology." This is a technology that allows computers to be remotely rebooted, have their consoles redirected, report information stored in NVRAM about what software is installed, and load software updates remotely, even if the system is so messed up that the operating system won't boot. This is a technology that will be extremely useful for large corporations with a geographically dispersed work force and a small IT staff; Sun Microsystems offers similar technology in its Sun Fire v20z and v40z servers, which allows remote SSH access to the server independent of the operating system, providing console and keyboard access, power cycling of the server, etc. This is technology with perfectly legitimate uses, allowing the owner of the machine to remotely deal with issues that would previously have required either physically going to the box or the expense of additional hardware such as a console server.

Krone described AMT in such a way as to omit all of the legitimate uses, portraying it as a technology that would be present on all new computers sold whether you like it or not, which would allow the government to turn your computer on remotely, bypass all operating system security software including a PC firewall, and take an image of your hard drive without your being able to do anything about it. This is essentially nonsensical fear-mongering--this technology is specifically designed for the owner of the system, not for the government, and there are plenty of mechanisms which could and should be used by anyone deploying such systems to prevent unauthorized parties from accessing their systems via such an out-of-band mechanism, including access control measures built into the mechanisms and hardware firewalls.

He then went on to talk about Digital Rights Management (DRM), a subject which has been in the news lately as a result of Sony BMG's DRM foibles. Krone stated that DRM is being applied to videos, files, etc., and claimed that if he were to write a subversive document that the government wanted to suppress, it would be able to use DRM to shut off all access to that file. This has DRM backwards--DRM is used by intellectual property owners to restrict the use of their property in order to maximize the potential paying customer base. The DRM technologies for documents that are designed to shut off access are intended for functions such as allowing corporations to guarantee electronic document destruction in accordance with their policies. This function is a protection of privacy, not an infringement upon it. Perhaps Krone intended to spell out a possible future like that feared by Autodesk founder John Walker in his paper "The Digital Imprimatur," in which he worries that future technology will require documents published online to be certified by some authority with the power to revoke that certification (or revoke one's license to publish). While this is a potential long-term concern, the infrastructure that would allow such restrictions does not exist today. On the contrary, the Internet of today makes it virtually impossible to restrict the publication of undesired content.

Krone spoke about a large number of other topics, including Havenco, Echelon, Carnivore/DCS1000, web bugs and cookies, breathalyzers, fingerprints, DNA evidence, and so on. With regard to web bugs, cookies, and malware, he stated that his defense is not to use Windows, and to rely on open source software, because he can verify that the content and function of the software is legitimate. While I hate to add to the fear-mongering, this was a rare instance where Krone didn't go far enough in his worrying. The widespread availability of source code doesn't actually guarantee the absence of backdoors in software, for two reasons. First, the mere availability of eyeballs doesn't help secure software unless the eyeballs know what to look for. There have been numerous instances of major security holes persisting in actively maintained open source software for many years (wu-ftpd being a prime example). Second, and more significantly, as Ken Thompson showed in his classic paper "Reflections On Trusting Trust" (the possibility of which was first mentioned in Paul Karger and Roger Schell's "Multics Security Evaluation" paper), it is possible to build code into a compiler that will insert a backdoor into code whenever a certain sequence is found in the source. Further, because compilers are typically written in the same language that they compile, one can do this in such a way that it is bootstrapped into the compiler and is not visible in the compiler's source code, yet will always be inserted into any future compilers that are compiled with that compiler or its descendants. Once your compiler has been compromised, backdoors can be inserted into your code without appearing directly in any source code.
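To illustrate the first part of the idea, here is a toy sketch in Python--not Thompson's actual code; the names compile_source and check_password are invented for the example--of a compromised "compiler" stage that quietly rewrites certain source code before compiling it:

    BACKDOOR = '    if password == "letmein":\n        return True   # injected backdoor\n'

    def compile_source(source):
        """Toy 'compiler' pass: returns the source it will actually compile."""
        out = []
        for line in source.splitlines(keepends=True):
            out.append(line)
            # Trigger: when the login routine is being compiled, silently add a
            # backdoor that accepts a hard-coded password.
            if line.startswith("def check_password("):
                out.append(BACKDOOR)
            # Thompson's second trigger (omitted here) fires when the compiler
            # compiles *itself*, re-inserting both triggers so the attack persists
            # even after the compiler's own source has been cleaned up.
        return "".join(out)

    clean = 'def check_password(user, password):\n    return stored[user] == password\n'
    print(compile_source(clean))   # the compiled login now accepts "letmein"

The point of the second trigger is that once a compiled compiler binary carries it, inspecting source code--even the compiler's own source--no longer proves the backdoor is absent.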

Of the numerous other topics that Krone discussed or made reference to, there are three more instances I'd like to comment on: MRIs used as lie detectors at airport security checkpoints, FinCen's monitoring of financial transactions, and a presentation on Cisco security flaws at the DefCon hacker conference. In each case, Krone said things that were inaccurate.

Regarding MRIs, Krone spoke of the use of MRIs as lie detectors at airport security checkpoints as though they were already in place. The use of fMRI as a lie-detection measure is something being studied at Temple University, but it is not deployed anywhere--and it's hard to see how it would be practical as an airport security measure. Infoseek founder and Propel CEO Steve Kirsch proposed in 2001 using a brainscan recognition system to identify potential terrorists, but this doesn't seem to have been taken seriously. There is a voice-stress analyzer being tested as an airport security "lie detector" in Israel, but everything I've read about voice stress analysis says it is even less reliable than polygraphs (which themselves are so unreliable that they are generally inadmissible as evidence in U.S. courts). (More interesting is a "stomach grumbling" lie detector...) (UPDATE March 27, 2006: Stu Krone says in the comments on this post that he never said that MRIs were being used as lie detectors at airport security checkpoints. I've verified from a recording of his talk that this is my mistake--he spoke only of fMRI as a tool in interrogation.)

Regarding FinCen, the U.S. Financial Crimes Enforcement Network, Krone claimed that "FinCen monitors all transactions" and "keeps a complete database of all transactions," and that for purchases made with cash, including purchases of automobiles, law enforcement can issue a National Security Letter. This is a little bit confused--National Security Letters have nothing specifically to do with financial transactions per se; they are a controversial tool, greatly expanded by the USA PATRIOT Act, that gives the FBI the ability to demand information without court approval. I support the ACLU's fight against National Security Letters, but they don't have anything to do with FinCen. Krone was probably confused by the fact that the USA PATRIOT Act also expanded the requirement that companies whose customers make large cash purchases (more than $10,000 in one transaction or in two or more related transactions) fill out a Form 8300 and file it with the IRS. Form 8300 data goes into FinCen's databases and is available to law enforcement, as I noted in my description of F/Sgt. Charles Cohen's presentation at the Economic Crime Summit I attended. It's simply not the case that FinCen maintains a database of all financial transactions.

Finally, Krone spoke of a presentation at the DefCon hacker conference in Las Vegas about Cisco router security. He said that he heard from a friend that another friend was to give a talk on this subject at DefCon, and that she (the speaker) had to be kept in hiding to avoid arrest by law enforcement in order to successfully give the talk. This is a highly distorted account of Michael Lynn's talk at the Black Hat Briefings, which precede DefCon. Lynn, who was an employee of Internet Security Systems, found a remotely exploitable heap overflow vulnerability in the IOS software that runs on Cisco routers as part of his work at ISS. ISS got cold feet about the presentation and told Lynn that he would be fired if he gave the talk, and Cisco also threatened him with legal action. He quit his job and delivered the talk anyway, and ended up being hired by Juniper Networks, a Cisco competitor. As of late July, Lynn was being investigated by the FBI regarding this issue, but he was not arrested nor in hiding prior to his talk, nor is he female.

I found Krone's talk to be quite a disappointment. Not only was it filled with careless inaccuracies, it presented nothing about how to defend one's privacy. He's right to point out that there are numerous threats to privacy and liberty that are based on technology, but there are also some amazing defensive mechanisms. Strong encryption products can be used to enhance privacy, the EFF's TOR onion routing mechanism is a way of preserving anonymity, and the Free Network Project has built mechanisms for preventing censorship (though these are also subject to abuse).