22 December 1998
Source: Hardcopy from National Academy Press, 243 pp.
This is the introduction to the full report: http://jya.com/tic.htm (882K)
Full report Zip-compressed: http://jya.com/tic.zip (302K)
September 29, 1998 Prepublication Copy
Subject to Further Editorial Corrections
Fred B. Schneider, Editor
Committee on Information Systems Trustworthiness
Computer Science and Telecommunications Board
Commission on Physical Sciences, Mathematics, and Applications
National Research Council
National Academy Press
Washington, D.C. 1998
Copyright 1998 by the National Academy of Sciences
COMMITTEE ON INFORMATION SYSTEMS TRUSTWORTHINESS
FRED B. SCHNEIDER, Cornell University, Chair
STEVEN M. BELLOVIN, AT&T Labs Research
MARTHA BRANSTAD, Trusted Information Systems Inc.
J. RANDALL CATOE, MCI Telecommunications Inc.
STEPHEN D. CROCKER, CyberCash Inc.
CHARLIE KAUFMAN, Iris Associates Inc.
STEPHEN T. KENT, BBN Corporation
JOHN C. KNIGHT, University of Virginia
STEVEN McGEADY, Intel Corporation
RUTH R. NELSON, Information System Security
ALLAN M. SCHIFFMAN, SPYRUS
GEORGE A. SPIX, Microsoft Corporation
DOUG TYGAR, University of California, Berkeley
W. EARL BOEBERT, Sandia National Laboratories
MARJORY S. BLUMENTHAL, Director
JANE BORTNICK GRIFFITH, Interim Director (1998)
HERBERT S. LIN, Senior Scientist
ALAN S. INOUYE, Program Officer
MARK BALKOVICH, Research Associate (until July 1998)
LISA L. SHUM, Project Assistant (until August 1998)
RITA A. GASKINS, Project Assistant
COMPUTER SCIENCE AND TELECOMMUNICATIONS BOARD
DAVID D. CLARK, Massachusetts Institute of Technology, Chair
FRANCES E. ALLEN, IBM T.J. Watson Research Center
JAMES CHIDDIX, Time Warner Cable
JOHN M. CIOFFI, Stanford University
W. BRUCE CROFT, University of Massachusetts, Amherst
A.G. FRASER, AT&T Corporation
SUSAN L. GRAHAM, University of California at Berkeley
JAMES GRAY, Microsoft Corporation
PATRICK M. HANRAHAN, Stanford University
JUDITH HEMPEL, University of California at San Francisco
BUTLER W. LAMPSON, Microsoft Corporation
EDWARD D. LAZOWSKA, University of Washington
DAVID LIDDLE, Interval Research
JOHN MAJOR, QUALCOMM Inc.
TOM M. MITCHELL, Carnegie Mellon University
DONALD NORMAN, Hewlett-Packard Company
RAYMOND OZZIE, Groove Networks
DAVID A. PATTERSON, University of California at Berkeley
DONALD SIMBORG, KnowMed Systems
LEE SPROULL, Boston University
LESLIE L. VADASZ, Intel Corporation
MARJORY S. BLUMENTHAL, Director
JANE BORTNICK GRIFFITH, Interim Director (1998)
HERBERT S. LIN, Senior Staff Officer
JERRY R. SHEEHAN, Program Officer
ALAN S. INOUYE, Program Officer
JON EISENBERG, Program Officer
JANET BRISCOE, Administrative Associate
NICCI DOWD, Project Assistant
RITA GASKINS, Project Assistant
DAVID PADGHAM, Project Assistant
COMMISSION ON PHYSICAL SCIENCES, MATHEMATICS, AND APPLICATIONS
ROBERT J. HERMANN, United Technologies Corporation, Co-chair
W. CARL LINEBERGER, University of Colorado, Co-chair
PETER M. BANKS, Environmental Research Institute of Michigan
WILLIAM BROWDER, Princeton University
LAWRENCE D. BROWN, University of Pennsylvania
RONALD G. DOUGLAS, Texas A&M University
JOHN E. ESTES, University of California at Santa Barbara
MARTHA P. HAYNES, Cornell University
L. LOUIS HEGEDUS, Elf Atochem North America Inc.
JOHN E. HOPCROFT, Cornell University
CAROL M. JANTZEN, Westinghouse Savannah River Company
PAUL G. KAMINSKI, Technovation, Inc.
KENNETH H. KELLER, University of Minnesota
KENNETH I. KELLERMANN, National Radio Astronomy Observatory
MARGARET G. KIVELSON, University of California at Los Angeles
DANIEL KLEPPNER, Massachusetts Institute of Technology
JOHN KREICK, Sanders, a Lockheed Martin Company
MARSHA I. LESTER, University of Pennsylvania
NICHOLAS P. SAMIOS, Brookhaven National Laboratory
CHANG-LIN TIEN, University of California at Berkeley
NORMAN METZGER, Executive Director
Experts have known for some time that networked information systems are not trustworthy and that the technology needed to make them trustworthy was, by and large, not at hand. Our nation is nevertheless becoming dependent on such systems for operating its critical infrastructures (e.g., transportation, communication, finance, and energy distribution). Over the past 2 years, the implications of this dependence -- vulnerability to attack and susceptibility to disaster -- have become a part of the national agenda. Concerns first voiced from within the defense establishment (under the rubric of "information warfare") led the executive branch to create the President's Commission on Critical Infrastructure Protection and, later, the Critical Infrastructure Assurance Office. The popular press embraced the issues, carrying them to a public already sensitized by direct and collateral experience with the failings of computing systems and networks. So a subject once discussed only in the technical literature is now regularly appearing on the front pages of newspapers and being debated in the Congress. And the present study, initiated at the request of the Defense Advanced Research Projects Agency (DARPA) and the National Security Agency (NSA) some 2 years ago, today informs a discussion of national significance. In particular, this study moves the focus of the discussion forward from matters of policy and procedure and from vulnerabilities and their consequences toward questions about the richer set of options that only new science and technology can provide.
The study committee was convened by the Computer Science and Telecommunications Board (CSTB) of the National Research Council (NRC) to assess the nature of information systems trustworthiness and the prospects for technologies that increase it. The committee was asked to examine, discuss, and report on interrelated issues associated with the research, development, and commercialization of technologies for trustworthy systems and to use its assessment to develop recommendations for research to enhance information systems trustworthiness (see Box P.1). This volume contains the results of that study: a detailed research agenda that examines the many dimensions of trustworthiness (e.g., correctness, security, reliability, safety, survivability), the state of the practice, and the available technology and science base. Since the economic and political context is critical to the successful deployment of new technologies, that too is discussed.
The alert reader will have noted that the volume's title, Trust in Cyberspace, admits two interpretations. This ambiguity was intentional. Parse "trust" as a noun (as in "confidence" or "reliance") and the title succinctly describes the contents of the volume -- technologies that help make networked information systems more trustworthy. Parse "trust" as a verb (as in "to believe") and the title is an invitation to contemplate a future where networked information systems have become a safe place for conducting parts of our daily lives.1 Whether "trust" is parsed as a noun or as a verb, more research is key for trust in cyberspace.
1 One reviewer, contemplating the present, suggested that a question mark be placed at the end of the title to raise questions about the trustworthiness of cyberspace today. And this is a question that the report does raise.
The study committee included experts on computing and communications systems from industry and academia whose expertise spanned computer and communications security, software engineering, fault tolerance, systems design and implementation, and networking (see Appendix A). The committee did its work through its own expert deliberations and by soliciting input and discussion from key officials in its sponsoring agencies, other government officials, academic experts, and representatives of a wide range of developers and users of information systems in industry (see Appendix B). The committee did not make use of classified information, believing that detailed knowledge of threats was not important to the task at hand.
The committee first met in June 1996 and eight times subsequently. Three workshops were held to obtain input from a broad range of experts in systems security, software, and networking drawn primarily from industry (see Appendixes C and D). Since information about the NSA R2 research program is less widely available than information about relevant programs at DARPA and other federal agencies, the entire committee visited NSA for a more in-depth examination of R2's research program; subsequent meetings involving NSA R2 personnel and a subset of the committee provided still further input to the study. Staff tracked the progress of relevant activities in the legislative and executive branches of government, including the President's Commission on Critical Infrastructure Protection, the Critical Infrastructure Assurance Office, and congressional hearings. Staff also sought input from other governmental and quasi-governmental organizations with relevant emphases. Additional inputs included perspectives from professional conferences, technical literature, and government reports gleaned by committee members and staff.
In April 1997, the committee released an interim report that outlined key concepts and known technologies. That report, subject to the NRC review process, generated a number of follow-up comments that helped to guide the committee in its later work.
The committee is grateful to the many thoughtful reviewers of its interim and final reports, and it appreciates the efforts of the review coordinator. The committee would like to acknowledge Thomas A. Berson (Anagram Laboratories), Dan Boneh (Stanford University), Eric A. Brewer (University of California, Berkeley), Dorothy Denning (Georgetown University), Bruce Fette (Motorola), John D. Gannon (University of Maryland), Li Gong (JavaSoft Inc., Sun Microsystems Inc.), Russ Housley (Spyrus Inc.), John C. Klensin (MCI Communications Corporation), Jimmy Kuo (McAfee Associates Inc.), Steven B. Lipner (Mitretek Systems), Keith Marzullo (University of California at San Diego), Alan J. McLaughlin (Massachusetts Institute of Technology), Robert Morris, Sr. (National Security Agency (retired)), Peter G. Neumann (SRI International), Jimmy Omura (Cylink Corporation), Stewart Personick (Drexel University), Roy Radner (New York University), Morteza Rahimi (Northwestern University), Jeffrey I. Schiller (Massachusetts Institute of Technology), Michael St. Johns (@Home Network), Joseph Sventek (Hewlett- Packard Laboratories), J. Marty Tenenbaum (CNgroup, Inc.), Abel Weinrib (Intel Corporation), Jeannette M. Wing (Carnegie Mellon University), and Mary Ellen Zurko (The Open Group Research Institute).
The committee appreciates the support of its sponsoring agencies, and especially the numerous inputs and responses to requests for information provided by Howard Frank and Teresa Lunt at DARPA, Robert Meushaw at NSA, and John Davis at NSA and the Critical Infrastructure Assurance Office. The support of K. David Nokes at Sandia National Laboratories was extremely helpful in facilitating this study and the preparation of this report.
In addition, the committee would like to thank Jeffrey Schiller for his valuable perspective on Internet standards-setting. The committee would also like to thank individuals who contributed their expertise to the committee's deliberations: Robert H. Anderson (RAND Corp.), Ken Birman (Cornell University), Chip Boylan (Hilb, Rogal, and Hamilton Co.), Robert L. Constable (Cornell University), Dale Drew (MCI Security Services), Bill Flanagan (Perot Systems Corporation), Fred Howard (Bell Atlantic Voice Operations), Keith Marzullo (University of California at San Diego), J.S. Moore (University of Texas at Austin), Peter G. Neumann (SRI International), John Pescatore (Trusted Information Systems), John Rushby (SRI International), Sami Saydjari (Defense Advanced Research Projects Agency), Dan Shoemaker (Bell Atlantic Data Operations), Steve Sigmond (Wessels Arnold Investment Banking), Gadi Singer (Intel), Steve Smaha (Haystack Inc.), Kevin Sullivan (University of Virginia), L. Nick Trefethen (Oxford University), and Werner Vogels (Cornell University).
Several members of the Computer Science and Telecommunications Board provided valuable guidance to the committee and were instrumental in the response to the review process. For these contributions, the committee would like to thank David D. Clark, Jim Gray, and Butler Lampson. The committee also acknowledges the helpful feedback from Board members Donald Norman and Ed Lazowska.
Special thanks are owed Steve Crocker for his seminal role in launching this study and in helping to shape the committee. The committee and the chairman especially benefited from Steve's involvement.
Finally, the committee would like to acknowledge all the hard work by the staff of the National Research Council. Marjory Blumenthal's role in the content and conduct of this study was pivotal. Not only was Marjory instrumental in moving the committee from its initial discussions through the production of an Interim Report and then to a first draft of this report, but her insights into the nontechnical dimensions of trustworthiness were critical in developing Chapter 6. This committee was truly fortunate to have the benefit of Marjory's insights concerning content and process; and this chairman was thankful to have such a master in the business as a teacher and advisor. Alan Inouye joined the project mid-stream. To him fell the enormous task of assembling this final report. Alan did a remarkable job, remaining unfailingly upbeat despite the long hours required and the frustrations that accompanied working to a deadline. First Leslie Wade and later Lisa Shum supported the logistics for the committee's meetings, drafts, and reviews in a careful yet cheery fashion. As a research associate, Mark Balkovich enthusiastically embraced a variety of research and fact-finding assignments. Thanks to Jane Bortnick Griffith for her support as the Interim Director of CSTB who inherited this challenging project mid-stream and did the right thing. Herb Lin was available when we needed him despite his numerous other commitments. The contributions of Laura Ost (editor-consultant) are gratefully acknowledged. Rita Gaskins, David Padgham, and Cris Banks also assisted in completing the report.
Fred B. Schneider, Chair
Committee on Information Systems Trustworthiness
Committee Composition and Process
1 INTRODUCTION
Trustworthy Networked Information Systems
What Erodes Trust
This Study in Context
Scope of This Study
2 PUBLIC TELEPHONE NETWORK AND INTERNET TRUSTWORTHINESS
The Public Telephone Network
Network Services and Design
Progress of a Typical Call
The Internet
Network Services and Design
Authentication (and other Security Protocols)
Progress of a Typical Connection
Network Failures and Fixes
Software and Hardware Failures
Attacks on the Telephone System
Attacks on the Internet
Name Server Attacks
Routing System Attacks
Protocol Design and Implementation Flaws
Is the Internet Ready for "Prime Time"?
3 SOFTWARE FOR NETWORKED INFORMATION SYSTEMS
The Role of Software
Development of an NIS
System Planning, Requirements, and Top-Level Design
Planning and Program Management
Requirements at the System Level
The System Requirements Document
Notation and Style
Where to Focus Effort in Requirements Analysis and Documentation
The Integration Plan
Project Structure, Standards, and Process
Barriers to Acceptance of New Software Technologies
Building and Acquiring Components
Component Design and Implementation
The Changing Role of COTS Software
General Problems with COTS Components
Interfacing Legacy Software
Review and Inspection
4 REINVENTING SECURITY
Evolution of Security Needs and Mechanisms
Access Control Policies
Shortcomings of Formal Policy Models
A New Approach
Identification and Authentication Mechanisms
Cryptography and Public-Key Infrastructure
The Key-Management Problem
Actual Large-Scale KDC and CA Deployments
Network Access Control Mechanisms
Closed User Groups
Virtual Private Networks
Limitations of Firewalls
Foreign Code and Application-Level Security
The ActiveX Approach
The Java Approach
Fine-Grained Access Control and Application Security
Language-Based Security: Software Fault Isolation and Proof Carrying Code
Denial of Service
5 TRUSTWORTHY SYSTEMS FROM UNTRUSTWORTHY COMPONENTS
Replication and Diversity
Monitor, Detect, Respond
Limitations in Detection
Response and Reconfiguration
Perfection and Pragmatism
Placement of Trustworthiness Functionality
Public Telephone Network
Minimum Essential Information Infrastructure
6 THE ECONOMIC AND PUBLIC POLICY CONTEXT
Nature of Consequences
Risk Management Strategies
Selecting a Strategy
Consumers and Trustworthiness
Issues Affecting Risk Management
Some Market Observations
Producers and Trustworthiness
The Larger Marketplace and the Trend Toward Homogeneity
Risks of Homogeneity
Producers and Their Costs
Costs of Integration and Testing
Identifying the Specific Costs Associated with Trustworthiness
Time to Market
The Market for Trustworthiness
Supply and Demand Considerations
Standards and Criteria
The Character and Context of Standards
Standards and Trustworthiness
Security-Based Criteria and Evaluation
Cryptography and Trustworthiness
Factors Inhibiting Widespread Cryptography Deployment
Cryptography and Confidentiality
Federal Government Interests in NIS Trustworthiness
The Changing Market-Government Relationship
The Roles of the NSA, DARPA, and other Federal Agencies in NIS Trustworthiness Research and Development
National Security Agency
Partnerships with Industry
Issues for the Future
Defense Advanced Research Projects Agency
Issues for the Future
7 CONCLUSIONS AND RESEARCH RECOMMENDATIONS
Protecting the Evolving Public Telephone Network
Meeting the Urgent Need for Software that Improves Trustworthiness
Reinventing Security for Computers and Communications
Building Trustworthiness from Untrustworthy Components
Social and Economic Factors that Inhibit the Deployment of Trustworthy Technology
Implementing Trustworthiness Research and Development: The Public Policy Role
A Study Committee Biographies
B Briefers to the Committee
C Workshop Participants and Agenda
D List of Position Papers Prepared for the Workshop
E Trends in Software
F Some Related Trustworthiness Studies
G Some Operating System Security Examples
H Types of Firewalls
I Secrecy of Design
J Research in Information System Security and Survivability Funded by the NSA and DARPA
This is the tale of the infosys folk:
Multics to UNIX to DOS.
We once had protection that wasn't a joke
Multics to UNIX to DOS.
Now hackers and crackers and similar nerds
Pass viruses, horses, and horrible words
Through access controls that are for the birds.
Multics to UNIX to DOS.
With apologies to Franklin P. Adams.
The nation's security and economy rely on infrastructures for communication, finance, energy distribution, and transportation -- all increasingly dependent on networked information systems. When these networked information systems perform badly or do not work at all, they put life, liberty, and property at risk. Interrupting service can threaten lives and property; destroying information or changing it improperly can disrupt the work of governments and corporations; and disclosing secrets can embarrass people or hurt organizations. The widespread interconnection of networked information systems allows outages and disruptions to spread from one system to others; it enables attacks to be waged anonymously and from a safe distance; and it compounds the difficulty of understanding and controlling these systems. With an expanding fraction of users and operators who are technologically unsophisticated, greater numbers can cause or fall victim to problems. Some see this as justification for alarm; others dismiss such fears as alarmist. Most agree that the trends warrant study and better understanding.
Recent efforts, such as those by the President's Commission on Critical Infrastructure Protection, have been successful in raising public awareness and advocating action. However, taking that action is constrained by available knowledge and technologies for ensuring that networked information systems perform properly. Research is needed, and this report gives, in its body, a detailed agenda for that research. Specifically, the report addresses how the trustworthiness of networked information systems can be enhanced by improving computing and communications technology. The intent is to create more choices for consumers and vendors and, therefore, for the government. The report also surveys technical and market trends, to better inform public policy about where progress is likely and where incentives could help. And the report discusses a larger nontechnical context -- public policy, procedural aspects of how networked information systems are used, how people behave -- because that context affects the viability of technical solutions as well as affecting actual risks and losses.
Benefits, Costs, and Context
Networked information systems (NISs) integrate computing systems, communication systems, people (both as users and operators), procedures, and more. Interfaces to other systems and control algorithms are their defining elements; communication and interaction are the currency of their operation. Increasingly, the information exchanged between NISs includes software (and, therefore, instructions to the systems themselves), often without users knowing what software has entered their systems, let alone what it can do or has done.
Trustworthiness of an NIS asserts that the system does what is required -- despite environmental disruption, human user and operator errors, and attacks by hostile parties -- and that it does not do other things. Design and implementation errors must be avoided, eliminated, or somehow tolerated. Addressing only some aspects of the problem is not sufficient. Moreover, achieving trustworthiness requires more than just assembling components that are themselves trustworthy.
Laudable as a goal, ab initio building of trustworthiness into an NIS has proved to be impractical. It is neither technically nor economically feasible for designers and builders to manage the complexity of such large artifacts or to anticipate all of the problems that an NIS will confront over its lifetime. Experts now recognize steps that can be taken to enhance trustworthiness after a system has been deployed. It is no accident that the market for virus detectors and firewalls is thriving. Virus detectors identify and eradicate attacks embedded in exchanged files, and firewalls hinder attacks by filtering messages between a trusted enclave of networked computers and its environment (from which attacks might originate). Both of these mechanisms work in specific contexts and address problems contemplated by their designers; but both are imperfect, with user expectations often exceeding what is prudent.
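The filtering a firewall performs can be sketched in a few lines. The rule format, the field names, and the default-deny policy below are illustrative assumptions for this sketch, not any vendor's interface:

```python
# A minimal sketch of rule-based message filtering at an enclave boundary.
# Rules are checked in order; the first match decides, and a packet that
# matches no rule is dropped (default deny), the conservative stance for
# a trusted enclave's perimeter.

def allowed(packet, rules):
    """Return True only if the packet matches an explicit 'allow' rule."""
    for rule in rules:
        if (rule["src"] in (packet["src"], "*")
                and rule["dst"] in (packet["dst"], "*")
                and rule["port"] in (packet["port"], "*")):
            return rule["action"] == "allow"
    return False  # no rule matched: default deny

# Hypothetical policy: outside hosts may reach the mail host on port 25 only.
rules = [
    {"src": "*", "dst": "mailhost", "port": 25, "action": "allow"},
    {"src": "*", "dst": "*", "port": "*", "action": "deny"},
]

print(allowed({"src": "outside", "dst": "mailhost", "port": 25}, rules))  # True
print(allowed({"src": "outside", "dst": "payroll", "port": 23}, rules))   # False
```

The sketch also shows why expectations can exceed what is prudent: the filter blocks only what its designer anticipated, and an attack carried inside an allowed message passes untouched.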
The costs of NIS trustworthiness are borne by the system's producers and consumers and sometimes by the public at large. So are the benefits, but they are often distributed differently from the costs. The market has responded best in areas, such as reliability, that are easy for consumers (and producers) to evaluate, as compared with other areas, such as security, which addresses exposures that are difficult to quantify or even fully articulate. Few have an incentive to worry about security problems since such problems rarely prevent work from getting done and publicizing them sometimes even tarnishes the reputation of the institution involved (as in the case of banks).
Market conditions today strongly favor the use of commercial off-the-shelf (COTS) components over custom-built solutions, in part because COTS technology is relatively inexpensive to acquire. The COTS market's earliest entrants can gain a substantial advantage, and so COTS producers are less inclined to include trustworthiness functionality, which they believe can cause delay. COTS producers are also reluctant to include in their products mechanisms to support trustworthiness (and especially security) that can make systems harder to configure or use. While today's market for system trustworthiness is bigger than that of a decade ago, the market remains small, reflecting present circumstances and perceptions: to date, publicized trustworthiness breaches have not been catastrophic, and consumers have been able to cope with or recover from the incidents. Thus, existing trustworthiness solutions -- though needed -- are not being widely deployed because often they cannot be justified.
Today's climate of deregulation will further increase NIS vulnerability in several ways. The most obvious is the new cost pressures on what had been regulated monopolies in the electric power and telecommunications industries. One easy way to cut costs is to reduce reserve capacity and eliminate rarely needed emergency systems; a related way is to reduce diversity (a potential contributor to trustworthiness) in the technology or facilities used. Producers in these sectors are now competing on the basis of features, too. New features invariably lead to more complex systems, which are liable to behave in unexpected and undesirable ways. Finally, deregulation leads to new interconnections, as some services are more cost-effectively imported from other providers into what once were monolithic systems. Apart from the obvious dangers of the increased complexity, the interconnections themselves create new weak points and interdependencies. Problems could grow beyond the annoyance level that characterizes infrastructure outages today, and the possibility of catastrophic incidents is growing.
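Why diversity contributes to trustworthiness can be made concrete with a back-of-the-envelope calculation. The failure probability and the majority-voting arrangement below are invented for the example, not drawn from the report:

```python
from math import comb

# Assumed probability that one implementation fails on a given demand.
p_flaw = 0.01

# Three identical replicas: a shared design flaw defeats all copies at once,
# so replication without diversity is no better than a single copy.
p_identical = p_flaw

# Three diverse, independently developed replicas behind a majority voter:
# the system fails only if at least two replicas fail on the same demand.
# (Full independence is an idealization; correlated faults weaken this.)
p_diverse = sum(comb(3, k) * p_flaw**k * (1 - p_flaw)**(3 - k) for k in (2, 3))

print(f"identical replicas: {p_identical:.6f}")  # 0.010000
print(f"diverse replicas:   {p_diverse:.6f}")    # 0.000298
```

Reducing diversity to cut costs collapses the second case toward the first, which is the vulnerability the paragraph describes.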
The role of government in protecting the public welfare implies an interest in promoting the trustworthiness of NISs. Contemporary examinations of issues ranging from information warfare to critical infrastructure have advanced hypotheses and assumptions about specific, substantial, and proactive roles for government. But their rationales are incomplete. Part of the problem stems from the difficulty of describing the appropriate scope for government action when the government's own NISs are creatures of private-sector components and services. The rise of electronic commerce and, more generally, growing publication and sharing of all kinds of content via NISs are generating a variety of different models for the role of government and the balance of public and private action. In all of these contexts, debates about cryptography policy and the alleged inhibition of the development and deployment of technology (encryption and authentication) that can advance many aspects of trustworthiness make discussion of government roles particularly sensitive and controversial. The necessary public debates have only just begun, and they are complicated by the underlying activity to redefine concepts of national and economic security.
Technology offers the opportunities and imposes the limits facing all sectors. Research and development changes technological options and the cost of various alternatives. It can provide new tools for individuals and organizations and better inform private and public choices and strategies. Once those tools have been developed, demands for trustworthiness could be more readily met. Due to the customary rapid rate of upgrade and replacement for computing hardware and software (at least for systems based on COTS products), upgrades embodying enhanced trustworthiness could occur over years rather than decades (impeded mostly by needs for backward compatibility). Moreover, the predominance of COTS software allows investments in COTS software that enhance trustworthiness to have broad impact, and current events, such as concern about the "year 2000" and the European Union monetary conversion, are causing older software systems to be replaced with new COTS software. Finally, communications infrastructures are likely to undergo radical changes in the coming years: additional players in the market, such as cable and satellite-based services, will not only lead to new pricing structures but will also likely force the introduction of new communications system architectures and services. Taken together, these trends imply that now is the time to take steps to develop and deploy better technology.
The goal of further research is to provide a science base and engineering expertise for building trustworthy NISs. Commercial and industrial software producers have been unwilling to pay for this research, doing the research will take time, and the construction of trustworthy NISs presupposes appropriate technology for which this research is needed. Therefore, the central recommendations of this study concern an agenda for research (outlined below). The recommendations are aimed at federal funders of relevant research -- in particular the Defense Advanced Research Projects Agency (DARPA) and the National Security Agency (NSA). But the research agenda should also be of interest to policy makers who, in formulating legislation and initiating other actions, will profit from knowing which technical problems do have solutions, which will have solutions if research is supported, and which cannot have solutions. Those who manage NISs can profit from the agenda in much the same way as policy makers. Product developers can benefit from the predictions of market needs and promising directions to address those needs.
Research to Identify and Understand NIS Vulnerabilities
Because a typical NIS is large and complex, few people are likely to have analyzed one, much less had an opportunity to study several. The result is a remarkably poor understanding today of design and engineering practices that foster NIS trustworthiness. Careful study of deployed NISs is needed to inform NIS builders of problems that they are likely to encounter, leading to more-intelligent choices about what to build and how to build it. The President's Commission on Critical Infrastructure Protection and other federal government groups have successfully begun this process by putting NIS trustworthiness on the national policy agenda. The next step is to provide specific technical guidance for NIS designers, implementers, and managers. A study of existing NISs can help determine what problems dominate NIS architecture and software development, the interaction of different aspects of trustworthiness in design and implementation or use, and how to quantify the actual benefits of using proposed methods and techniques.
The public telephone network (PTN) and the Internet, both familiar NISs, figure prominently in this report. Both illustrate the scope and nature of the technical problems that will confront developers and operators of future NISs, and the high cost of building a global communications infrastructure from the ground up implies that one or both of these two networks are likely to furnish communications services for most other NISs. The trustworthiness and vulnerabilities of the PTN and the Internet are thus likely to have far-reaching implications. PTN trustworthiness, for example, would seem to be eroding as the PTN becomes increasingly dependent on complex software and databases for establishing calls and for providing new or improved services to customers. Protective measures need to be developed and implemented. Some Internet vulnerabilities are being eliminated by deploying improved protocols, but the Internet's weak quality-of-service guarantees, along with other routing-protocol inadequacies and dependence on a centralized naming-service architecture, remain sources of vulnerability; additional research will be needed to significantly improve the Internet's trustworthiness.
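The naming-service concern can be illustrated with a sketch of replicated lookup, where no single server is a point of failure. The `resolve` function, the replica names, and the toy `query` stand-in below are hypothetical, not the actual DNS protocol:

```python
def resolve(name, replicas, query):
    """Try each name-server replica in turn; one outage does not deny service."""
    for server in replicas:
        try:
            return query(server, name)
        except ConnectionError:
            continue  # this replica is down or unreachable; try the next
    raise LookupError(f"no replica could resolve {name!r}")

# Hypothetical stand-in for a network query: ns1 is down, ns2 answers.
table = {"ns2": {"www.example.org": "192.0.2.7"}}

def query(server, name):
    if server not in table:
        raise ConnectionError(server)
    return table[server][name]

print(resolve("www.example.org", ["ns1", "ns2"], query))  # 192.0.2.7
```

The converse is the vulnerability the paragraph names: with a single authoritative point in the architecture, the replica list collapses to one entry and its failure denies service to everything that depends on it.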
Operational errors today represent a major source of outages for both the PTN and the Internet. Today's methods and tools for facilitating an operator's understanding and control of an NIS of this scale and complexity are inadequate. Research and development is needed to produce conceptual models (and ultimately methods of control) that can allow human operators to grasp the state of an NIS and to initiate actions that will have predictable, desired consequences.
Research in Avoiding Design and Implementation Errors
The challenges of software engineering, so formidable for so many years, become especially urgent when designing and implementing an NIS. And new problems arise in connection with all facets of the system development process. System-level trustworthiness requirements must be transformed from informal notions into precise requirements that can be imposed on individual components, something that all too often is beyond the current state of the art. When an NIS is being built, subsystems spanning distributed networks must be integrated and tested despite limited visibility and control over their operation. Yet the trend has been for researchers to turn their attention away from such integration and testing questions -- a trend that needs to be reversed by researchers and by those who fund research. Even modest advances in testing methods can have a significant impact, because testing so dominates system development costs. Techniques for composing subsystems in ways that contribute directly to trustworthiness are also badly needed.
Although a large software system, such as an NIS, cannot be developed defect-free, it is possible to improve the trustworthiness of such a system by anticipating and targeting vulnerabilities. But to determine, analyze, and -- most importantly -- prioritize these vulnerabilities, a good understanding is required of how subsystems interact with each other and with the other elements of the larger system. Obtaining such an understanding is not possible without further research.
NISs today and well into the foreseeable future are likely to include large numbers of COTS components. The relationship between the use of COTS components and NIS trustworthiness is unclear -- does the increased use of COTS components enhance or detract from trustworthiness? And how can the trustworthiness of a COTS component be enhanced by its developers and (when needed) by its users? Moreover, more so than most other software systems, NISs are developed and deployed incrementally, significantly evolving in functionality and structure over the system's lifetime. Yet little is known about architectures that can support such growth and about development processes that facilitate it; additional research is required.
There are accepted processes for component design and implementation, although the novel characteristics of NISs raise questions about the utility of these processes. Modern programming languages include features that promote trustworthiness, such as compile-time checks and support for modularity and component integration, and the potential exists for further gains from research. The performance needs of NISs can be inconsistent with modular design, though, and this limits the applicability of many extant software development processes and tools.
Formal methods should be regarded as an important piece of technology for eliminating design errors in hardware and software; increased support for both fundamental research and demonstration exercises is warranted. Formal methods are particularly well suited for identifying errors that only become apparent in scenarios not likely to be tested or testable. Therefore, formal methods could be viewed as a complementary technology to testing. Research directed at the improved integration of testing and formal methods is likely to have payoffs for increasing assurance in trustworthy NISs.
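The complementarity of formal methods and testing can be illustrated with a toy example (not drawn from the report): an exhaustive state-space search, the essence of model checking, over a deliberately flawed two-process mutual-exclusion protocol. The defect surfaces only under one particular interleaving of the two processes -- exactly the kind of scenario that random testing is unlikely to exercise but that exhaustive exploration finds mechanically. The protocol and state encoding here are illustrative inventions.

```python
from collections import deque

# Toy model checker: exhaustively explore every interleaving of a
# (deliberately flawed) two-process mutual-exclusion protocol.
# State: (pc0, pc1, flag0, flag1); pc = 0: waiting, 1: entering, 2: critical.

def successors(state):
    pcs = [state[0], state[1]]
    flags = [state[2], state[3]]
    for i in (0, 1):
        pc, other = pcs[i], flags[1 - i]
        new_pcs, new_flags = pcs[:], flags[:]
        if pc == 0 and other == 0:       # check the other flag first (the bug)
            new_pcs[i] = 1
        elif pc == 1:                    # then raise our own flag and enter
            new_flags[i] = 1
            new_pcs[i] = 2
        elif pc == 2:                    # leave the critical section
            new_flags[i] = 0
            new_pcs[i] = 0
        else:
            continue                     # process i is blocked in this state
        yield (new_pcs[0], new_pcs[1], new_flags[0], new_flags[1])

def check(initial):
    """Breadth-first search of the full state space; return a violating state."""
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        if s[0] == 2 and s[1] == 2:      # both processes in the critical section
            return s
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return None

print(check((0, 0, 0, 0)))   # → (2, 2, 1, 1): the race is reachable
```

Because the search visits every reachable state, its verdict covers all interleavings, including those no finite test suite would provoke.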
New Approaches to Computer and Communications Security
Much security research during the past two decades has been based on formal policy models that focus on protecting information from unauthorized access by specifying which users should have access to data or other system resources. These formal policy models oversimplify: they do not completely account for malicious or erroneous software, they largely ignore denial-of-service attacks, and they are unable to represent defensive measures, such as virus scan software or firewalls -- mechanisms that in theory should not work or be needed but that, in practice, do hinder attacks. And the practical impacts of this "absolute security" paradigm have been largely disappointing. A new approach to security is needed, especially for environments (like NISs) where foreign and mobile code and COTS software cannot be ignored. The committee recommends that rather than being based on "absolute security," future security research be based on techniques for identifying vulnerabilities and making design changes to reposition those vulnerabilities in light of anticipated threats. By repositioning vulnerabilities, the likelihood and consequences of attacks can be made less severe.
Effective cryptographic authentication is essential for NIS security. But obstacles exist to more widespread deployment of key-management technology, and there has been little experience with public-key infrastructures -- especially large-scale ones. Issues related to the timely notification of revocation, recovery from the compromise of certificate authority private keys, and name-space management all require further attention. Most applications that make use of certificates have poor certificate-management interfaces for users and for system administrators. Research is also needed to support new cryptographic authentication protocols (e.g., for practical multicast communication authentication) and to support faster encryption and authentication/integrity algorithms to keep pace with rapidly increasing communication speeds. The use of hardware tokens holds promise for implementing authentication, although using personal identification numbers (PINs) constitutes a vulnerability (which might be somewhat mitigated through the use of biometrics).
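One elementary building block of the cryptographic authentication protocols discussed above -- shared-key message authentication -- can be sketched with standard-library primitives. This is only an ingredient, not a protocol: key distribution, revocation, and certificate management, the hard problems identified in the text, are all assumed away here.

```python
import hashlib
import hmac
import secrets

# Minimal sketch of shared-key message authentication.  The key is assumed
# to have been distributed out of band; the message is illustrative.

key = secrets.token_bytes(32)

def tag(message: bytes) -> bytes:
    """Compute an authentication tag (HMAC-SHA256) over the message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, t: bytes) -> bool:
    """Constant-time comparison guards against timing attacks."""
    return hmac.compare_digest(tag(message), t)

msg = b"transfer $100 to account 42"
t = tag(msg)
assert verify(msg, t)                                   # authentic: accepted
assert not verify(b"transfer $9999 to account 7", t)    # forgery: rejected
```

A receiver without the shared key can neither verify nor forge tags, which is why scaling this idea to many parties motivates the public-key infrastructures the text describes.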
Because NISs are distributed systems, network access control mechanisms, such as virtual private networks (VPNs) and firewalls, can play a central role in NIS security. VPN technology, although promising, is not today being used in larger-scale settings because of the proprietary protocols and simplistic key-management schemes found in products. Further work is needed before wholesale and flexible VPN deployments will become realistic. Firewalls, despite their limitations, will persist into the foreseeable future as a key defense mechanism. And, as support for VPNs is added, firewall enhancements will have to be developed for supporting sophisticated security management protocols, negotiation of traffic security policies across administratively independent domains, and management tools. The development of increasingly sophisticated network-wide applications will create a need for application-layer firewalls and a better understanding of how to define and enforce useful traffic policies at this level.
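The traffic policies that firewalls enforce reduce, at bottom, to ordered rule matching. The following sketch shows first-match packet filtering with a default-deny posture; the rule set, networks, and ports are invented for illustration and do not describe any particular product.

```python
from ipaddress import ip_address, ip_network

# Illustrative first-match packet filter with a default-deny final rule.
# Each rule: (action, permitted source network, destination port or None=any).

RULES = [
    ("allow", ip_network("10.0.0.0/8"), 22),    # internal hosts may use SSH
    ("allow", ip_network("0.0.0.0/0"), 80),     # anyone may reach the web server
    ("deny",  ip_network("0.0.0.0/0"), None),   # everything else is dropped
]

def filter_packet(src: str, dst_port: int) -> str:
    """Return the action of the first rule matching the packet."""
    for action, net, port in RULES:
        if ip_address(src) in net and (port is None or port == dst_port):
            return action
    return "deny"                                # fail closed if no rule matches

print(filter_packet("10.1.2.3", 22))     # allow: internal SSH
print(filter_packet("203.0.113.9", 23))  # deny: caught by the final rule
```

Defining and enforcing comparable policies at the application layer, across independently administered domains, is the open problem the text identifies.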
Operating system support for fine-grained access control would facilitate construction of systems that obey the principle of least privilege, which holds that users be accorded the minimum access that is needed to accomplish a task. This, in turn, would be an effective defense against a variety of attacks that might be delivered using foreign code or hidden in application programs. Enforcement of application-specific security policies is likely to be a responsibility shared between the application program and the operating system. Research is needed to determine how to partition this responsibility and which mechanisms are best implemented at what level. Attractive opportunities exist for programming language research to play a role in enforcing such security policies.
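The principle of least privilege can be made concrete with a small default-deny authorization check: each principal holds only the (resource, operation) pairs its task requires, so compromising one principal yields only that principal's narrow grants. The principals and resources here are hypothetical.

```python
# Sketch of fine-grained, least-privilege access control.  Grants are
# deny-by-default: anything not explicitly listed is refused.

GRANTS = {
    "backup-daemon": {("payroll.db", "read")},    # needs read access only
    "payroll-app":   {("payroll.db", "read"), ("payroll.db", "write")},
}

def authorize(principal: str, resource: str, operation: str) -> bool:
    """Permit only operations explicitly granted to this principal."""
    return (resource, operation) in GRANTS.get(principal, set())

assert authorize("backup-daemon", "payroll.db", "read")
assert not authorize("backup-daemon", "payroll.db", "write")  # least privilege
assert not authorize("unknown-code", "payroll.db", "read")    # default deny
```

The open question raised in the text is where such checks belong: a table like this could live in the operating system, in the application, or be enforced by the programming language, with different coverage in each case.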
Finally, defending against denial-of-service attacks can be critical for the security of an NIS, since availability is often an important system property. This dimension of security has received relatively little attention up to now, and research is urgently needed to identify ways to defend against such attacks.
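One simple availability defense against resource-exhaustion attacks -- not proposed by the report, but a standard illustration -- is per-client rate limiting with a token bucket, which admits short bursts while capping the sustained request rate an attacker can impose. Parameters below are illustrative.

```python
# Token-bucket rate limiter: a per-client cap on sustained request rate,
# one elementary defense against resource-exhaustion denial of service.

class TokenBucket:
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity        # maximum burst size
        self.tokens = capacity
        self.refill = refill_per_sec    # sustained admission rate
        self.last = 0.0

    def allow(self, now: float) -> bool:
        """Refill tokens for elapsed time; admit the request if one remains."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                    # shed load rather than exhaust resources

bucket = TokenBucket(capacity=3, refill_per_sec=1.0)
results = [bucket.allow(now=0.0) for _ in range(5)]
print(results)  # → [True, True, True, False, False]: burst admitted, flood shed
```

Such mechanisms bound damage from a single client but not from distributed attacks, which is one reason the text calls for further research in this area.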
Research in Building Trustworthy Systems from Untrustworthy Components
Even when it is possible to build them, highly trustworthy components are costly. Therefore, the goal of creating trustworthy NISs from untrustworthy components is attractive, and research should be undertaken that will enable the trustworthiness of components to be amplified by the architecture and by the methods used to integrate components.
Replication and diversity can be employed to build systems that amplify the trustworthiness of their components, and there are successful commercial products (e.g., hardware fault-tolerant computers) in the marketplace that do exactly this. However, the potential and limits of the approach are not understood. For example, research is needed to determine the ways in which diversity can be added to a set of software replicas, thereby improving their trustworthiness.
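The amplification idea can be sketched in a few lines: a majority vote over the outputs of diverse replicas masks any minority of faulty or compromised components, so the voted system is more trustworthy than any single replica. The replica outputs below are invented for illustration.

```python
from collections import Counter

# Trustworthiness amplification by replication: take the majority answer
# across diverse replicas so that a minority of faulty outputs is masked.

def voted_result(replica_outputs):
    """Return the strict-majority answer, or None if no majority exists."""
    value, count = Counter(replica_outputs).most_common(1)[0]
    return value if count > len(replica_outputs) // 2 else None

# Three diverse implementations of the same computation; one misbehaves.
assert voted_result([42, 42, 41]) == 42   # single faulty replica masked
assert voted_result([1, 2, 3]) is None    # no majority: fail detectably
```

The caveat in the text applies directly: voting helps only to the extent that replicas fail independently, which is why adding diversity to software replicas is posed as a research question.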
Trustworthiness functionality could be positioned at different places within an NIS. Little is known about the advantages and disadvantages of the various possible positionings and system architectures, and an analysis of existing NISs should prove instructive along these lines. One architecture that has been suggested is based on the idea of a broadly useful core minimum functionality -- a minimum essential information infrastructure (MEII). But building an MEII would be a misguided initiative, because it presumes that such a "core minimum functionality" could be identified, and that is unlikely to be the case.
Monitoring and detection can be employed to build systems that enhance the trustworthiness of their components. But limitations intrinsic in system monitoring and in technology to recognize incidents such as attacks and failures impose fundamental limits on the use of monitoring and detection for implementing trustworthiness. In particular, the limits and coverage of the various approaches to intruder and anomaly detection are necessarily imperfect; additional study is needed to determine their practicality.
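A toy statistical anomaly detector makes the coverage limitation concrete: flag observations that deviate sharply from a baseline of normal behavior. The baseline data and the three-standard-deviation threshold are illustrative choices, and the imperfection noted in the text is inherent -- attacks that mimic the baseline slip through, while legitimate bursts can trigger false alarms.

```python
from statistics import mean, stdev

# Toy anomaly detector: flag observations far from a learned baseline.
# Baseline figures (e.g., logins per hour) and the 3-sigma threshold are
# illustrative; real detectors face the coverage limits noted in the text.

baseline = [102, 98, 101, 99, 100, 103, 97]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(observation: float) -> bool:
    """Flag observations more than three standard deviations from the mean."""
    return abs(observation - mu) > 3 * sigma

assert not is_anomalous(104)   # within normal variation: passes unnoticed
assert is_anomalous(250)       # gross deviation: flagged
```

Determining how far such schemes, and their signature-based counterparts, can be pushed in practice is the study the text calls for.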
A number of other promising research areas merit investigation. For example, systems could be designed to respond to an attack or failure by reducing their functionality in a controlled, graceful manner. And a variety of research directions involving new types of algorithms -- self-stabilization, emergent behavior, biological metaphors -- may be useful in designing systems that are trustworthy. These directions are speculative, which makes them plausible topics for the longer-range research that should be pursued.
Research in NIS trustworthiness is supported by the U.S. government, primarily through DARPA and NSA, but also through other DOD and civilian agencies. Much of DARPA and NSA funding goes to industry research, in part because of the nature of the work (i.e., fostering the evaluation and deployment of research ideas) and, in part, because the academic base is relatively limited in areas relating to security. There is also industry-funded research and development work in NIS trustworthiness; that work understandably tends to have more direct relevance to existing or projected markets (it emphasizes development relative to research). A firm calibration of federal funding for trustworthiness research is difficult, both because of conventional problems in understanding how different projects are accounted for and because this is an area where some relevant work is classified. In addition, the nature of relevant research often implies a necessary systems-development component, and that can inflate associated spending levels.
DARPA's Information Technology Office (ITO) provides most of the government's external research funding for NIS trustworthiness. Increasingly, DOD is turning to COTS products, which means that DARPA can justifiably be concerned with a much broader region of the present-day computing landscape. But DARPA-funded researchers are being subjected to pressure to produce short-term research results and rapid transitions to industry -- so much so that the pursuit of high-risk theoretical and experimental investigations is seemingly discouraged. This influences what research topics get explored. Many of the research problems outlined above are deep and difficult, and expecting short-term payoff can only divert effort from the most critical areas. In addition, DARPA has deemphasized its funding of certain security-oriented topics (e.g., containment, defending against denial-of-service attacks, and the design of cryptographic infrastructures), which has caused researcher effort and interest to shift away from these key problems. Therefore, DARPA needs to increase its focus on information security and NIS trustworthiness research, especially with regard to long-term research efforts. DARPA's mechanisms for communicating and interacting with the research community are generally effective.
NSA funds information security research through R2 and other of its organizational units. The present study deals exclusively with R2. In contrast to DARPA, NSA R2 consumes a large portion of its budget internally, including significant expenditures on nonresearch activities. NSA's two missions -- protecting U.S. sensitive information and acquiring foreign intelligence information -- can confound its interactions with others in the promotion of trustworthiness. Its defensive mission makes knowing how to protect systems paramount; its offensive need to exploit system vulnerabilities can inhibit its sharing of knowledge. This tension is not new. What is relevant for future effort is the lingering distrust for the agency in the academic research community and some quarters of industry, which has had a negative impact on R2's efforts at outreach. The rise of NISs creates new needs for expertise in computer systems that NSA is challenged to develop internally and procure externally. R2's difficulty in recruiting and retaining highly qualified technical research staff is a reason for "outsourcing" research, when highly skilled research staff are available elsewhere. R2's effectiveness depends on better leveraging of talent both outside and inside the organization.
The committee believes that increased funding is warranted for both information security research in particular and NIS trustworthiness research in general. The appropriate level of increased funding should be based on a realistic assessment of the size and availability of the current population of researchers in relevant disciplines and projections of how this population of researchers may be increased in the coming years.
Cyberspace is no longer science fiction. Today, networked information systems transport millions of people there to accomplish routine as well as critical tasks. And the current trajectory is clear: increased dependence on networked information systems. Unless these systems are made trustworthy, such dependence may well lead to disruption and disaster. The aphorism "Where there's a will, there's a way" provides a succinct way to summarize the situation. The "way," which today is missing, will require basic components, engineering expertise, and an expanded science base necessary for implementing trustworthy networked information systems. This study articulates a research agenda so that there will be a way when there is a will.