A Roadmap for Cybersecurity Research


Full Title of Reference

A Roadmap for Cybersecurity Research

Full Citation

Department of Homeland Security, A Roadmap for Cybersecurity Research (2009). Web


Categorization

Key Words

Botnet, Civilian Participation, Computer Network Attack, COTS Software, Cyber Crime, Cyber Security as a Public Good, Cyber Terrorism, Department of Homeland Security, Honeypot, Interdependencies, Malware, National Security, Outreach and Collaboration, Privacy Law

Synopsis

This cybersecurity research roadmap attempts to define the national R&D agenda required to get ahead of our adversaries and produce the technologies that will protect our information systems and networks into the future. The research, development, test, evaluation, and other life cycle considerations required are far reaching—from technologies that secure individuals and their information to technologies that will ensure that our critical infrastructures are much more resilient. The R&D investments recommended in this roadmap must tackle the vulnerabilities of today and anticipate those of the future.

The intent of this document is to provide detailed research and development agendas for the future relating to 11 hard problem areas in cybersecurity, for use by agencies of the U.S. Government and other potential R&D funding sources. For each of the problems discussed, the roadmap examines some or all of the following:

  • The background of the problem
    • What is the problem being addressed?
    • What are the potential threats?
    • Who are the potential beneficiaries? What are their respective needs?
    • What is the current state of the practice?
    • What is the status of current research?
  • Future directions
    • On what categories can we subdivide the topic?
    • What are the major research gaps?
    • What are some exemplary problems for R&D on this topic?
    • What R&D is evolutionary, and what is more basic, higher risk, game changing?
    • What resources are required?
    • Measures of success
    • What needs to be in place for test and evaluation?
    • To what extent can we test real systems?


The 11 hard problems examined are:

Scalable trustworthy systems

Growing interconnectedness among existing systems results, in effect, in new composite systems at increasingly large scales. Existing hardware, operating system, networking, and application architectures do not adequately account for combined requirements for security, performance, and usability—confounding attempts to build trustworthy systems on them. As a result, the security of a system of systems today may be drastically lower than that of most of its components.

The primary focus of this topic area is scalability that preserves and enhances trustworthiness in real systems. The perceived order of importance for research and development in this topic area is as follows: (1) trustworthiness, (2) composability, and (3) scalability. Thus, the challenge addressed here is threefold: (a) to provide a sound basis for composability that can scale to the development of large and complex trustworthy systems; (b) to stimulate the development of the components, analysis tools, and testbeds required for that effort; and (c) to ensure that trustworthiness evaluations themselves can be composed.

This topic area interacts strongly with enterprise-level metrics (Section 2) and evaluation methodology (Section 3) to provide assurance of trustworthiness.

Enterprise-level metrics (ELMs)

Defining effective metrics for information security (and for trustworthiness more generally) has proven very difficult, even though there is general agreement that such metrics could allow measurement of progress in security and at least rough security comparisons between systems. Metrics underlie and quantify progress in all other roadmap topic areas. We cannot manage what we cannot measure, as the saying goes. However, general community agreement on meaningful metrics has been hard to achieve, partly because of the rapid evolution of information technology (IT), as well as the shifting locus of adversarial action.

Lack of effective ELMs leaves one in the dark about cyberthreats in general. With respect to enterprises as a whole, cybersecurity has lacked meaningful measurements and metrics throughout the history of information technology. (Some success has been achieved with specific attributes at the component level.) This lack seriously impedes the ability to make informed enterprise-wide decisions about how to effectively avoid or control innumerable known and unknown threats and risks at every stage of development and operation.
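
As a simple illustration of what an enterprise-level metric might look like in code, the following Python sketch aggregates hypothetical per-component risk scores into a single weighted figure; the components, weights, and 0-to-10 scale are invented for the example and are not prescribed by the roadmap.

    # Hypothetical sketch: aggregate per-component security scores into a
    # single enterprise-level metric. Components, weights, and the scoring
    # scale are illustrative assumptions, not part of the DHS roadmap.

    def enterprise_risk_score(component_scores, weights):
        """Weighted average of per-component risk scores (0 = best, 10 = worst)."""
        total_weight = sum(weights[name] for name in component_scores)
        return sum(score * weights[name]
                   for name, score in component_scores.items()) / total_weight

    scores = {"patch_latency": 6.0, "open_ports": 3.5, "auth_strength": 2.0}
    weights = {"patch_latency": 0.5, "open_ports": 0.3, "auth_strength": 0.2}
    print(f"enterprise risk: {enterprise_risk_score(scores, weights):.2f}")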

System evaluation life cycle

The security field lacks methods to systematically and cost-effectively evaluate its products in a timely fashion. Without realistic, precise evaluations, the field cannot gauge its progress toward handling security threats, and system procurement is seriously impeded. Evaluations that take longer than the lifetime of a particular system version are of minimal use. A suitable life cycle methodology would allow resources to be allocated in a more informed manner and enable consistent results across multiple developments and applications.

Systematic, realistic, easy-to-use, and standardized evaluation methods are needed to objectively quantify the performance of any security artifact (e.g., a protocol, device, architecture, or system) and the security of the environments where these artifacts are to be deployed, both before and after deployment, as well as the performance of proposed solutions. The evaluation techniques should objectively quantify security posture throughout the critical system life cycle. This evaluation should support research, development, and operational decisions, and maximize the impact of the investment.
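
The following Python sketch illustrates the idea of a repeatable, standardized evaluation: a fixed battery of checks is run against a system under test so that results are comparable across versions. The checks and configuration fields are invented for illustration.

    # Hypothetical sketch of a repeatable evaluation harness: run a fixed
    # battery of checks against a system under test and report the pass
    # rate, so results are comparable across versions. Checks are invented.

    def check_default_credentials(system):   # each check returns True on pass
        return system.get("default_password_disabled", False)

    def check_tls_version(system):
        return system.get("tls_min_version", 0) >= 1.2

    CHECK_SUITE = [check_default_credentials, check_tls_version]

    def evaluate(system):
        results = {check.__name__: check(system) for check in CHECK_SUITE}
        score = sum(results.values()) / len(results)
        return results, score

    system_v2 = {"default_password_disabled": True, "tls_min_version": 1.3}
    results, score = evaluate(system_v2)
    print(results, f"pass rate: {score:.0%}")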

Combatting insider threats

Cybersecurity measures are often focused on threats from outside an organization, rather than threats posed by untrustworthy individuals inside an organization. Experience has shown that insiders pose significant threats.

The insider threat today is addressed mostly with procedures such as awareness training, background checks, good labor practices, identity management and user authentication, limited audits and network monitoring, two-person controls, application-level profiling and monitoring, and general access controls. However, these procedures are not consistently and stringently applied because of high cost, low motivation, and limited effectiveness.

At a high level, opportunities exist to mitigate insider threats through aggressive profiling and monitoring of users of critical systems, “fishbowling” suspects, “chaffing” data and services for users who are not entitled to access them, and finally “quarantining” confirmed malevolent actors to contain damage and leaks while collecting actionable counterintelligence and legally acceptable evidence.
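
To make the profiling-and-monitoring idea concrete, here is a minimal Python sketch that flags a user whose daily activity deviates sharply from that user's own historical baseline; the feature (file accesses per day) and the threshold are illustrative assumptions.

    # Hypothetical sketch of user-activity profiling for insider-threat
    # detection: flag a user whose daily file-access count is far above
    # that user's historical mean. Threshold and feature are illustrative.
    from statistics import mean, stdev

    def is_anomalous(history, today, z_threshold=3.0):
        """True if today's count is more than z_threshold standard
        deviations above the user's historical mean."""
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return today > mu
        return (today - mu) / sigma > z_threshold

    baseline = [12, 15, 9, 14, 11, 13, 10]   # files accessed per day
    print(is_anomalous(baseline, today=80))  # True: candidate for "fishbowling"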

Combatting malware and botnets

Malware refers to a broad class of attack software or hardware that is loaded on machines, typically without the knowledge of the legitimate owner, and that compromises the machine to the benefit of an adversary. Present classes of malware include viruses, worms, Trojan horses, spyware, and bot executables. Malware infects systems via many vectors, including propagation from infected machines, tricking users into opening tainted files, and luring users to visit malware-propagating websites. The World Wide Web has become a major vector for malware propagation.

Beyond its nuisance impact, malware can have serious economic and national security consequences. Malware can enable adversary control of critical computing resources, which in turn may lead, for example, to information compromise, disruption and destabilization of infrastructure systems (“denial of control”), and manipulation of financial markets. The potential of malware to compromise confidentiality, integrity, and availability of the Internet and other critical information infrastructures is a serious concern.

Current detection and remediation approaches are losing ground, because it is relatively easy for an adversary (whether sophisticated or not) to alter malware to evade most existing detection approaches. Emerging approaches such as behavior-based detection and semantic malware descriptions have shown promise and are deployed in commercial antivirus (A/V) software. However, new techniques must be developed to keep pace with the development of malware.
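
A highly simplified Python sketch of behavior-based detection follows: an observed system-call trace is matched against a behavioral signature. Real detectors use far richer behavioral models; the signature and trace here are invented for illustration.

    # Hypothetical sketch of behavior-based detection: look for a known
    # malicious subsequence of system calls inside an observed trace.
    # The signature and trace are invented for illustration.

    def contains_subsequence(trace, signature):
        """True if all calls in `signature` appear in `trace` in order
        (not necessarily contiguously)."""
        it = iter(trace)
        return all(call in it for call in signature)

    RANSOMWARE_SIGNATURE = ["enumerate_files", "read_file", "encrypt",
                            "delete_file"]

    observed = ["open_socket", "enumerate_files", "read_file", "stat",
                "encrypt", "write_file", "delete_file"]
    print(contains_subsequence(observed, RANSOMWARE_SIGNATURE))  # True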

Global-scale identity management

Global-scale identity management concerns identifying and authenticating entities such as people, hardware devices, distributed sensors and actuators, and software applications when accessing critical information technology (IT) systems from anywhere. The term global-scale is intended to emphasize the pervasive nature of identities and implies the existence of identities in federated systems that may be beyond the control of any single organization. In this context, global-scale identity management encompasses the establishment of identities, management of credentials, oversight and accountability, scalable revocation, establishment and enforcement of relevant policies, and resolution of potential conflicts.
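
As a toy illustration of two of the functions named above, credential management and revocation, the following Python sketch issues and verifies a keyed credential and consults a revocation list. Real federated systems use public-key certificates and signed revocation data; the HMAC scheme, key, and identifiers here are stand-ins.

    # Hypothetical sketch of credential verification with revocation.
    # The shared key, credential IDs, and revocation set are illustrative.
    import hashlib
    import hmac

    ISSUER_KEY = b"issuer-secret"    # assumption: key held by the issuer
    REVOKED = {"cred-0042"}          # assumption: revocation list

    def issue(cred_id, subject):
        tag = hmac.new(ISSUER_KEY, f"{cred_id}:{subject}".encode(),
                       hashlib.sha256).hexdigest()
        return {"id": cred_id, "subject": subject, "tag": tag}

    def verify(cred):
        expected = hmac.new(ISSUER_KEY,
                            f"{cred['id']}:{cred['subject']}".encode(),
                            hashlib.sha256).hexdigest()
        return (hmac.compare_digest(cred["tag"], expected)
                and cred["id"] not in REVOKED)

    cred = issue("cred-0001", "alice@example.org")
    print(verify(cred))  # True: authentic and not revoked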

Our concern here is mainly the IT-oriented aspects of the broad problems of identity and credential management, including authentication, authorization, and accountability. However, we recognize that there will be many trade-offs and privacy implications that will affect identity management. In particular, global-scale identity management may require not only advances in technology, but also open standards, social norms, legal frameworks, and policies for the creation, use, maintenance, and audit of identities and privilege information (e.g., rights or authorizations). Clearly, managing and coordinating people and other entities on a global scale also raises many issues relating to international laws and regulations that must be considered. In addition, the question of when identifying information must be provided is fundamentally a policy question that can and should be considered. In all likelihood, any acceptable concept of global identity management will need to incorporate policies governing release of identifying information.

Survivability of time-critical systems

Survivability is the capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents. A time-critical system is a system for which faster-than-human reaction is required to avoid adverse mission consequences and/or system instability in the presence of attacks, failures, or accidents. Of particular interest here are systems for which impaired survivability would have large-scale consequences, particularly in terms of the number of people affected. Examples of such systems include electric power grids and other critical infrastructure systems, regional transportation systems, large enterprise transaction systems, and Internet infrastructure such as routing or DNS.

At present, IT systems attempt to maximize survivability through replication of components, redundancy of information (e.g., error-correcting coding), smart load sharing, journaling and transaction replay, automated recovery to a stable state, deferred committing of configuration changes, and manually maintained filters to block repeated bad requests. Toward the same goal, control systems today are supposedly disconnected from external networks (especially when attacks are suspected), although not consistently. Embedded systems typically have no real survivability protection against malicious attacks (apart from some physical security), even when external connections exist.
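
The following Python sketch illustrates one of these mechanisms, journaling and transaction replay: every committed operation is appended to a journal so that state can be rebuilt after corruption or a crash. The state model and operations are invented for the example.

    # Hypothetical sketch of journaling and transaction replay: committed
    # operations are appended to a journal so state can be rebuilt after a
    # crash or forced recovery. The counter state model is illustrative.

    class JournaledCounter:
        def __init__(self):
            self.journal = []   # in practice: durable, append-only storage
            self.value = 0

        def apply(self, delta):
            self.journal.append(delta)   # journal first ...
            self.value += delta          # ... then mutate state

        def recover(self):
            """Rebuild state by replaying the journal from a stable start."""
            self.value = 0
            for delta in self.journal:
                self.value += delta

    c = JournaledCounter()
    for d in (5, -2, 7):
        c.apply(d)
    c.value = -999   # simulate corruption
    c.recover()
    print(c.value)   # 10: state restored by replay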

The current metrics for survivability, availability, and reliability of time-critical systems are based on the probabilities of natural and random failures (e.g., mean time between failures, MTBF). These metrics typically ignore intentional attacks, cascading failures, and other correlated causes or effects. One often-cited reason is that we do not have many real-world examples of intentional, well-planned attacks against time-critical systems. However, because of the criticality of the systems considered here and because of the many confirmed vulnerabilities in such systems, we cannot afford to wait for such data to be gathered and analyzed.
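
For reference, the classical availability metric derived from MTBF and mean time to repair (MTTR) can be computed as follows; as the text notes, it models random failures only and says nothing about deliberate or correlated attacks.

    # The classical availability formula A = MTBF / (MTBF + MTTR), which
    # underlies the metrics criticized above: it captures random failures
    # only, not deliberate, well-planned, or correlated attacks.

    def availability(mtbf_hours, mttr_hours):
        return mtbf_hours / (mtbf_hours + mttr_hours)

    print(f"{availability(mtbf_hours=10_000, mttr_hours=4):.5f}")  # ~0.99960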

Countering significant advances in attacks on survivability may require research in new areas. Future research should be divided into three categories: understanding the mission and risks; survivability architectures, methods, and tools; and test and evaluation. For a subject this broad and all-encompassing (it depends on security, reliability, situational awareness and attack attribution, metrics, usability, life cycle evaluation, combatting malware and insider misuse, and many other aspects), it seems wise to be prepared to launch multiple efforts targeting this topic area.

Situational understanding and attack attribution

Situational understanding is information scaled to one’s level and areas of interest. It encompasses one’s role, environment, the adversary, mission, resource status, what is permissible to view, and which authorities are relevant. The challenges lie in the path from massive data to information to understanding: whether a system is under attack, who the attacker is, what the attacker’s intent is, how to defend against the attack, and how to prevent or deter such attacks in the future. Situational understanding includes the state of one’s own system from a defensive posture, irrespective of whether an attack is taking place. It is critical to understand system performance and behavior during non-attack periods, in that some attack indicators may be observable only as deviations from “normal behavior.”
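
The following Python sketch illustrates the "deviation from normal behavior" idea: an exponentially weighted moving average tracks a baseline for some metric and flags observations that exceed it by a wide margin. The metric, smoothing factor, and threshold are invented for the example.

    # Hypothetical sketch of tracking "normal behavior" with an
    # exponentially weighted moving average (EWMA) baseline, flagging
    # observations that deviate sharply from it. Metric, alpha, and
    # threshold are illustrative assumptions.

    def ewma_monitor(samples, alpha=0.2, ratio_threshold=3.0):
        """Yield (sample, alert) pairs; alert when a sample exceeds
        ratio_threshold times the current baseline."""
        baseline = samples[0]
        for s in samples:
            alert = s > ratio_threshold * baseline
            if not alert:   # don't let attack traffic poison the baseline
                baseline = alpha * s + (1 - alpha) * baseline
            yield s, alert

    requests_per_min = [100, 110, 95, 105, 400, 420, 98]
    for sample, alert in ewma_monitor(requests_per_min):
        print(sample, "ALERT" if alert else "ok")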

Attack attribution is defined as determining the identity or location of an attacker or an attacker’s intermediary. Accurate attribution supports improved situational understanding and is therefore a key element of research in this area. Appropriate attribution may often be possible only incrementally, as situational understanding becomes clearer through interpretation of available information.

Situational understanding of events within infrastructures spanning multiple domains may require significant coordination and collaboration on multiple fronts, such as decisions about when/whether to share data, how to depict the situation as understanding changes over time, and how to interpret or respond to the information. Attribution is a key element of this process, since it is concerned with who is doing what and what should be done in response. Of special concern are attacks on information systems with potentially significant strategic impact, such as wide-scale power blackouts or loss of confidence in the banking system. Attacks may come from insiders, from adversaries using false credentials, from botnets, or from other sources or a blend of sources. Understanding the attack is essential for defense, remediation, attribution to the true adversary or instigator, hardening of systems against similar future attacks, and deterring future attacks. Attribution should also encompass shell companies, such as rogue domain resellers whose business model is to provide an enabling infrastructure for malfeasance.

Provenance (relating to information, systems, and hardware)

Provenance refers to the chain of successive custody—including sources and operations—of computer-related resources such as hardware, software, documents, databases, data, and other entities. Provenance includes pedigree, which relates to the total directed graph of historical dependencies. It also includes tracking, which refers to the maintenance of distribution and usage information that enables determination of where resources went and how they may have been used.

Individuals and organizations routinely work with, and make decisions based on, data that may have originated from many different sources and also may have been processed, transformed, interpreted, and aggregated by numerous entities between the original sources and the consumers. Without good knowledge about the sources and intermediate processors of the data, it can be difficult to assess the data’s trustworthiness and reliability, and hence its real value to the decision-making processes in which it is used.
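
One way to make such custody information trustworthy is a tamper-evident chain, as in the following Python sketch, where each record embeds a hash of its predecessor so that any later alteration of the history is detectable. The record fields are illustrative assumptions, not a format from the roadmap.

    # Hypothetical sketch of a tamper-evident provenance chain: each
    # custody record includes the hash of the previous record, so later
    # alteration of the history is detectable. Fields are illustrative.
    import hashlib
    import json

    def add_record(chain, actor, operation):
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        record = {"actor": actor, "operation": operation, "prev": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        chain.append(record)

    def verify_chain(chain):
        prev = "0" * 64
        for record in chain:
            body = {k: v for k, v in record.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if record["prev"] != prev or record["hash"] != expected:
                return False
            prev = record["hash"]
        return True

    chain = []
    add_record(chain, "sensor-17", "created reading")
    add_record(chain, "aggregator", "averaged with 12 peers")
    print(verify_chain(chain))       # True
    chain[0]["actor"] = "mallory"    # tamper with history
    print(verify_chain(chain))       # False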

The granularity of provenance ranges from whole systems through multi-level security, files, paragraphs, and lines, even down to single bits. For certain applications (such as access control), the provenance of a single bit may be very important. Provenance itself may require meta-provenance, that is, provenance markings on the provenance information. The level of assurance provided by information provenance systems may be graded and lead to graded responses. Note that in some cases provenance information may be more sensitive, or more highly classified, than the underlying data. The policies for handling provenance information are complex and differ across applications and granularities.

Without trustworthy provenance tracking systems, there are threats to the data and to processes that rely on the data, including, for example, unattributed sources of software and hardware; unauthorized modification of data provenance; unauthorized exposure of provenance where it is presumably protected; and misattribution of provenance (intentional or otherwise).

Privacy-aware security

The goal of privacy-aware security is to enable users and organizations to better express, protect, and control the confidentiality of their private information, even when they choose to, or are required to, share it with others. Privacy-aware security encompasses several distinct but closely related topics, including anonymity, pseudo-anonymity, confidentiality, protection of queries, monitoring, and appropriate accessibility. It is also concerned with protecting the privacy of entities (such as individuals, corporations, and government agencies) that need to access private information.
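
As one narrow example of a privacy-aware technique, the following Python sketch pseudonymizes identifiers with a keyed hash so that records can still be linked for analysis without exposing the underlying identity; the key and record fields are invented for illustration.

    # Hypothetical sketch of pseudonymization: replace identifiers with
    # keyed hashes so records can be linked without exposing identity.
    # A keyed hash is used because an unkeyed hash of an identifier can
    # be reversed by guessing. Key and fields are illustrative.
    import hashlib
    import hmac

    PSEUDONYM_KEY = b"org-secret-key"   # assumption: held by the custodian

    def pseudonymize(identifier):
        return hmac.new(PSEUDONYM_KEY, identifier.encode(),
                        hashlib.sha256).hexdigest()[:16]

    record = {"user": "alice@example.org", "query": "benefits eligibility"}
    shared = {"user": pseudonymize(record["user"]), "query": record["query"]}
    print(shared)   # the same user always maps to the same pseudonym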

Threats to private information may be intrinsic or extrinsic to computer systems. Intrinsic computer security threats attributable to insiders include mistakes, accidental breaches, misconfiguration, and misuse of authorized privileges, as well as insider exploitations of internal security flaws. Intrinsic threats attributable to outsiders (e.g., intruders) include potential exploitations of a wide variety of intrusion techniques. Extrinsic threats arise once information has been viewed by users or made available to external media (via printers, e-mail, wireless emanations, and so on), at which point it falls largely outside the purview of authentication, computer access controls, audit trails, and other monitoring on the originating systems.

The central problem in privacy-aware security is the tension between competing goals in the disclosure and use of private information. This document takes no position on what goals should be considered legitimate or how the tension should be resolved. Rather, the goal of research in privacy-aware security is to provide the tools necessary to express and implement trade-offs between competing legitimate goals in the protection and use of private information.

Privacy-aware security involves a complex mix of legal, policy, and technological considerations. Work along all these dimensions has struggled to keep up with the pervasive information sharing that cyberspace has enabled. Although the challenges have long been recognized, progress on solutions has been slow, especially on the technology side. At present, there are no widely adopted, uniform frameworks for expressing and enforcing protection requirements for private information while still enabling sharing for legitimate purposes.

Usable security

Security policy making tends to be reactive in nature, developed in response to an immediate problem rather than planned in advance based on clearly elucidated goals and requirements, as well as thoughtful understanding and analysis of the risks. This reactive approach gives rise to security practices that compromise system usability, which in turn can compromise security — even to the point where intended improvements in a system’s security posture are negated. Typically, as the security of systems increases, the usability of those systems tends to decrease, because security enhancements are commonly introduced in ways that are difficult for users to comprehend and that increase the complexity of users’ interactions with systems. When the relationship between security controls and security risks is not clear, users may simply not understand how best to interact with the system to accomplish their main goals while minimizing risk. Even when there is some appreciation of the risks, frustration can lead users to disregard, evade, and disable security controls, thus negating the potential gains of security enhancements.

Security issues must be made as transparent as possible. For example, security mechanisms, policies, and controls must be intuitively clear and perspicuous to all users and appropriate for each user. In particular, the relationships among security controls and security risks must be presented to users in ways that can be understood in the context of system use. In addition, users must be considered as fundamental components of systems during all phases of the system life cycle. Different assumptions and requirements pertaining to users’ interactions with systems must be made explicit to each type of user—novices, intermittent users, experts, and system administrators, to name a few. In general, one-size-fits-all approaches are unlikely to succeed.

In the short term, the current situation can be significantly improved by R&D that focuses on making security technology work sensibly “out of the box”—ideally with no direct user intervention. More basic, higher-risk, game-changing research would be to identify fundamental system design principles for trustworthy systems that minimize direct user responsibility for trustworthy operation.
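
A minimal Python sketch of the "sensibly out of the box" idea: configuration starts from safe defaults, and weakening any of them requires an explicit, logged override. The setting names are invented for the example.

    # Hypothetical sketch of secure-by-default configuration: defaults
    # start from the safe setting, and loosening any of them requires an
    # explicit, logged override. Setting names are invented.
    import logging

    SECURE_DEFAULTS = {"tls_required": True, "auto_update": True,
                       "remote_admin": False}

    def load_config(overrides):
        config = dict(SECURE_DEFAULTS)
        for key, value in overrides.items():
            if key in config and value != config[key]:
                logging.warning("override weakens default: %s=%r", key, value)
            config[key] = value
        return config

    print(load_config({"remote_admin": True}))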

Additional Notes and Highlights

Expertise Required: Technology - Low

A 2005 version of the Hard Problem List, along with a discussion of changes from the 1997 original, is available at INFOSEC Research Council (2005) Hard Problem List.

This reference includes an appendix that examines the interdependencies among the 11 topic areas and assigns pairs of interdependent topics a grade of L[ow], M[edium], or H[igh] based on the extent to which the first topic can contribute to the success of the second, the extent to which the second can benefit from progress in the first, and whether the second may in some way depend on the trustworthiness of the first.