Publications

Designing agility and resilience into embedded systems

Summary

Cyber-Physical Systems (CPS) such as Unmanned Aerial Systems (UAS) sense and actuate their environment in pursuit of a mission. The attack surface of these remotely located, sensing and communicating devices is both large and exposed to adversarial actors, making mission assurance a challenging problem. While best-practice security policies should be followed, they are rarely enough to guarantee mission success, as not all components in the system may be trusted and the properties of the environment (e.g., the RF environment) may be under the control of the attacker. CPS must thus be built with a high degree of resilience to mitigate threats that security cannot alleviate. In this paper, we describe the Agile and Resilient Embedded Systems (ARES) methodology and metric set. The ARES methodology pursues cyber security and resilience (CSR) as high-level system properties to be developed in the context of the mission. An analytic process guides system developers in defining mission objectives, examining principal issues, applying CSR technologies, and understanding their interactions.

Towards a universal CDAR device: a high performance adapter-based inline media encryptor

Summary

As the rate at which digital data is generated continues to grow, so does the need to ensure that data can be stored securely. The use of an NSA-certified Inline Media Encryptor (IME) is often required to protect classified data, as its security properties can be fully analyzed and certified with minimal coupling to the environment in which it is embedded. However, these devices are historically purpose-built and must often be redesigned and recertified for each target system. This tedious and costly (but necessary) process limits the ability of an information system architect to leverage advances made in storage technology. Our universal Classified Data At Rest (CDAR) architecture represents a modular approach to reduce this burden and maximize interface flexibility. The core module is designed around NVMe, a high-performance storage interface built directly on PCIe. Support for non-NVMe interfaces such as SATA is provided by adapters that sit outside the certification boundary and can therefore be less costly and leverage rapidly evolving commercial technology. This work includes an analysis of both the functionality and security of this architecture. A prototype was developed with a peak throughput of 23.9 Gb/s at a power consumption of 8.5 W, making it suitable for a wide range of storage applications.
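
For context, a back-of-the-envelope calculation from the reported prototype figures (the only numbers assumed here are the 23.9 Gb/s and 8.5 W quoted above) gives the efficiency of the design:

```python
# Throughput-per-watt and energy-per-bit from the reported prototype figures.
throughput_gbps = 23.9   # peak throughput, Gb/s
power_w = 8.5            # power consumption, W

print(f"{throughput_gbps / power_w:.1f} Gb/s per watt")              # ~2.8
print(f"{power_w / (throughput_gbps * 1e9) * 1e9:.2f} nJ per bit")   # ~0.36
```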

Automated provenance analytics: a regular grammar based approach with applications in security

Published in:
9th Intl. Workshop on Theory and Practice of Provenance, TaPP, 22-23 June 2017.

Summary

Provenance collection techniques have been carefully studied in the literature, and there are now several systems to automatically capture provenance data. However, the analysis of provenance data is often left "as an exercise for the reader". The provenance community needs tools that allow users to quickly sort through large volumes of provenance data and identify records that require further investigation. By detecting anomalies in provenance data that deviate from established patterns, we hope to actively thwart security threats. In this paper, we discuss issues with current graph analysis techniques as applied to data provenance, particularly Frequent Subgraph Mining (FSM). Then we introduce Directed Acyclic Graph regular grammars (DAGr) as a model for provenance data and show how they can detect anomalies. These DAGr provide an expressive characterization of DAGs, and by using regular grammars as a formalism, we can apply results from formal language theory to learn the difference between "good" and "bad" provenance. We propose a restricted subclass of DAGr called deterministic Directed Acyclic Graph automata (dDAGa) that guarantees parsing in linear time. Finally, we propose a learning algorithm for dDAGa, inspired by Minimum Description Length for Grammar Induction.
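
As a rough illustration of the parsing idea (a simplified sketch, not the paper's formalism; the labels, transition table, and example graph below are hypothetical), a deterministic DAG automaton can assign a state to each provenance node in a single topological pass and reject any graph that falls outside the learned pattern:

```python
# Toy deterministic DAG automaton: each node's state is determined by its
# label and the (sorted) states of its parents; the graph is accepted only
# if every sink reaches an accepting state. Parsing is one linear pass.

from collections import defaultdict, deque

def topological_order(nodes, edges):
    """Kahn's algorithm over the provenance DAG."""
    indeg = {n: 0 for n in nodes}
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)
        indeg[child] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for c in children[n]:
            indeg[c] -= 1
            if indeg[c] == 0:
                queue.append(c)
    return order, children

def run_dag_automaton(nodes, labels, edges, delta, accepting):
    """Assign a state to every node in topological order; reject on anomaly."""
    order, children = topological_order(nodes, edges)
    parents = defaultdict(list)
    for parent, child in edges:
        parents[child].append(parent)
    state = {}
    for n in order:
        key = (labels[n], tuple(sorted(state[p] for p in parents[n])))
        if key not in delta:
            return False              # no transition defined -> anomalous graph
        state[n] = delta[key]
    sinks = [n for n in nodes if not children[n]]
    return all(state[n] in accepting for n in sinks)

# Hypothetical provenance pattern: a process reads a file and writes a socket.
nodes = ["f", "p", "s"]
labels = {"f": "file", "p": "process", "s": "socket"}
edges = [("f", "p"), ("p", "s")]
delta = {
    ("file", ()): "q_src",
    ("process", ("q_src",)): "q_proc",
    ("socket", ("q_proc",)): "q_ok",
}
print(run_dag_automaton(nodes, labels, edges, delta, {"q_ok"}))  # True
```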

SoK: cryptographically protected database search

Summary

Protected database search systems cryptographically isolate the roles of reading from, writing to, and administering the database. This separation limits unnecessary administrator access and protects data in the case of system breaches. Since protected search was introduced in 2000, the area has grown rapidly; systems are offered by academia, start-ups, and established companies. However, there is no best protected search system or set of techniques. Design of such systems is a balancing act between security, functionality, performance, and usability. This challenge is made more difficult by ongoing database specialization, as some users will want the functionality of SQL, NoSQL, or NewSQL databases. This database evolution will continue, and the protected search community should be able to quickly provide functionality consistent with newly invented databases. At the same time, the community must accurately and clearly characterize the tradeoffs between different approaches. To address these challenges, we provide the following contributions: (1) an identification of the important primitive operations across database paradigms; we find there are a small number of base operations that can be used and combined to support a large number of database paradigms; (2) an evaluation of the current state of protected search systems in implementing these base operations, describing the main approaches and tradeoffs for each base operation and putting protected search in the context of unprotected search to identify key gaps in functionality; (3) an analysis of attacks against protected search for different base queries; and (4) a roadmap and tools for transforming a protected search system into a protected database, including an open-source performance evaluation platform and initial user opinions of protected search.
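
As a toy illustration of one such base operation, the sketch below implements equality search over an HMAC-based encrypted keyword index (a minimal, stdlib-only sketch under assumed key handling, not any particular surveyed system; real protected search designs must also address the leakage and attacks analyzed in the paper):

```python
# Equality search against a server that only ever sees opaque tokens.
import hmac, hashlib, secrets

INDEX_KEY = secrets.token_bytes(32)   # held by the data owner / writer role

def keyword_token(keyword: str) -> bytes:
    """Deterministic search token; the plaintext keyword never leaves the client."""
    return hmac.new(INDEX_KEY, keyword.encode(), hashlib.sha256).digest()

def build_index(records):
    """Writer role: map tokens to record identifiers. Record payloads would be
    encrypted separately (e.g., with an AEAD) and stored as opaque blobs."""
    index = {}
    for record_id, keywords in records.items():
        for kw in keywords:
            index.setdefault(keyword_token(kw), []).append(record_id)
    return index

def equality_search(index, keyword):
    """Reader role: query by token only."""
    return index.get(keyword_token(keyword), [])

server_index = build_index({"r1": ["alice", "finance"], "r2": ["bob"]})
print(equality_search(server_index, "alice"))   # ['r1']
```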

Fabrication security and trust of domain-specific ASIC processors

Summary

Application-specific integrated circuits (ASICs) are commonly used to implement high-performance signal-processing systems for high-volume applications, but their high development costs and inflexible nature make ASICs inappropriate for algorithm development and low-volume DoD applications. In addition, the intellectual property (IP) embedded in the ASIC is at risk when fabricated in an untrusted foundry. Lincoln Laboratory has developed a flexible signal-processing architecture to implement a wide range of algorithms within one application domain, for example, radar signal processing. In this design methodology, common signal-processing kernels such as digital filters, fast Fourier transforms (FFTs), and matrix transformations are implemented as optimized modules, which are interconnected by a programmable wiring fabric that is similar to the interconnect in a field programmable gate array (FPGA). One or more programmable microcontrollers are also embedded in the fabric to sequence the operations. This design methodology, which has been termed a coarse-grained FPGA, has been shown to achieve a near-ASIC level of performance. In addition, since the signal-processing algorithms are expressed in firmware that is loaded at runtime, the important application details are protected from an unscrupulous foundry.
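
The sketch below illustrates the general idea in software terms (it is not the Laboratory's design; the kernel set and "firmware" format are invented for illustration): the hardware exposes only generic kernels, while the algorithm exists only as a wiring description loaded at runtime:

```python
# Generic kernels "in silicon"; the application is a runtime-loaded wiring list.
import numpy as np

KERNELS = {
    "fir": lambda x, taps: np.convolve(x, taps, mode="same"),
    "fft": lambda x: np.fft.fft(x),
    "mag": lambda x: np.abs(x),
}

# Hypothetical "firmware": an ordered wiring of kernels plus their parameters.
firmware = [
    ("fir", {"taps": np.ones(4) / 4}),   # 4-tap moving-average filter
    ("fft", {}),
    ("mag", {}),
]

def run_pipeline(samples, config):
    data = samples
    for kernel_name, params in config:
        data = KERNELS[kernel_name](data, **params)
    return data

spectrum = run_pipeline(np.random.randn(256), firmware)
print(spectrum.shape)   # (256,)
```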

Bounded-collusion attribute-based encryption from minimal assumptions

Published in:
IACR 20th Int. Conf. on Practice and Theory of Public Key Cryptography, PKC 2017, 28-31 March 2017.

Summary

Attribute-based encryption (ABE) enables encryption of messages under access policies so that only users with attributes satisfying the policy can decrypt the ciphertext. In standard ABE, an arbitrary number of colluding users, each without an authorized attribute set, cannot decrypt the ciphertext. However, all existing ABE schemes rely on concrete cryptographic assumptions such as the hardness of certain problems over bilinear maps or integer lattices. Furthermore, it is known that ABE cannot be constructed from generic assumptions such as public-key encryption using black-box techniques. In this work, we revisit the problem of constructing ABE that tolerates collusions of arbitrary but a priori bounded size. We present two ABE schemes secure against bounded collusions that require only semantically secure public-key encryption. Our schemes achieve significant improvement in the size of the public parameters, secret keys, and ciphertexts over the previous construction of bounded-collusion ABE from minimal assumptions by Gorbunov et al. (CRYPTO 2012). In fact, in our second scheme, the size of ABE secret keys does not grow at all with the collusion bound. As a building block, we introduce a multidimensional secret-sharing scheme that may be of independent interest. We also obtain bounded-collusion symmetric-key ABE (which requires the secret key for encryption) by replacing the public-key encryption with symmetric-key encryption, which can be built from the minimal assumption of one-way functions.
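
As a toy fragment of the general pattern such constructions build on (emphatically not this paper's scheme; the attribute names and message are made up), the payload can be secret-shared and each share encrypted under an attribute-specific key, so that only a user whose attributes cover an AND policy can reassemble it; resisting colluding users is precisely where the paper's multidimensional secret-sharing building block comes in:

```python
# XOR secret sharing across attribute-specific shares for a single AND policy.
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share(message: bytes, n: int):
    """Split `message` into n XOR shares; all n are needed to reconstruct."""
    shares = [secrets.token_bytes(len(message)) for _ in range(n - 1)]
    last = message
    for s in shares:
        last = xor(last, s)
    return shares + [last]

def reconstruct(shares):
    out = shares[0]
    for s in shares[1:]:
        out = xor(out, s)
    return out

policy = ["clearance:secret", "unit:alpha"]       # hypothetical AND policy
msg = b"launch window 0400"
shares = dict(zip(policy, share(msg, len(policy))))
# Each share would then be encrypted under that attribute's public key; a user
# holding secret keys for every attribute in the policy recovers all shares.
print(reconstruct([shares[a] for a in policy]) == msg)   # True
```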

Interactive synthesis of code-level security rules

Published in:
Thesis (M.S.)--Northeastern University, 2017.

Summary

Software engineers inadvertently introduce bugs into software during the development process, and these bugs can potentially be exploited once the software is deployed. As the size and complexity of software systems increase, it is important that we are able to verify and validate not only that the software behaves as it is expected to, but also that it does not violate any security policies or properties. One approach to reducing software vulnerabilities is to use a bug detection tool during the development process. Many bug detection techniques are limited by the burdensome and error-prone process of manually writing a bug specification. Other techniques are able to learn specifications from examples, but are limited in the types of bugs that they are able to discover. This work presents a novel, general approach for synthesizing security rules for C code. The approach combines human knowledge with an interactive logic programming synthesis system to learn Datalog rules for various security properties. The approach has been successfully used to synthesize rules for three intraprocedural security properties: (1) out-of-bounds array accesses, (2) return value validation, and (3) double-freed pointers. These rules have been evaluated on randomly generated C code and yield a 0% false positive rate and false negative rates of 0%, 20%, and 0%, respectively.
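
To make the setting concrete, the sketch below evaluates the kind of rule such a system might synthesize for property (2), return value validation, expressed over Datalog-style relations in Python (all relation names and facts are hypothetical; the thesis itself learns the rules in Datalog):

```python
# Datalog-style relations as sets of tuples over a toy program.
# assigned(var, line, fn): `var` receives the return value of `fn` at `line`
assigned = {("p", 10, "malloc"), ("q", 20, "malloc")}
# checked(var, line): `var` is compared against NULL at `line`
checked = {("p", 11)}
# used(var, line): `var` is dereferenced at `line`
used = {("p", 14), ("q", 22)}

# violation(V, U) :- assigned(V, C, _), used(V, U), C < U,
#                    not (checked(V, L), C < L < U).
violations = {
    (var, use_line)
    for (var, call_line, _fn) in assigned
    for (v2, use_line) in used
    if v2 == var and call_line < use_line
    and not any(c_var == var and call_line < c_line < use_line
                for (c_var, c_line) in checked)
}

print(violations)   # {('q', 22)}: return value used without a NULL check
```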

Resilience of cyber systems with over- and underregulation

Published in:
Risk Analysis, Vol. 37, No. 9, 2017, pp. 1644-51, DOI:10.1111/risa.12729.

Summary

Recent cyber attacks provide evidence of increased threats to our critical systems and infrastructure. A common reaction to a new threat is to harden the system by adding new rules and regulations. As federal and state governments request new procedures to follow, each of their organizations implements its own cyber defense strategies. This unintentionally increases the time and effort that employees spend on training and policy implementation and decreases the time and latitude to perform critical job functions, thus raising overall levels of stress. People's performance under stress, coupled with an overabundance of information, results in even more vulnerabilities for adversaries to exploit. In this article, we embed a simple regulatory model that accounts for cybersecurity human factors and an organization's regulatory environment in a model of a corporate cyber network under attack. The resulting model demonstrates the effect of under- and overregulation on an organization's resilience with respect to insider threats. Currently, there is a tendency to use ad hoc approaches to account for human factors rather than to incorporate them into cyber resilience modeling. It is clear that a systematic approach grounded in behavioral science, which already exists in cyber resilience assessment, would provide a more holistic view for decision makers.
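
A toy illustration of the under/overregulation effect (this is not the article's model; the functional forms and constants are assumptions) treats resilience as a saturating security benefit minus a growing human-factors burden, which yields an interior optimum:

```python
import math

def resilience(r, a=3.0, b=0.9):
    benefit = 1.0 - math.exp(-a * r)   # diminishing returns from added rules
    burden = b * r ** 2                # training, stress, and lost latitude grow
    return benefit - burden

levels = [i / 20 for i in range(21)]          # regulation level r in [0, 1]
best = max(levels, key=resilience)
print(f"peak resilience at r = {best:.2f}")   # interior optimum: both
                                              # under- and overregulation hurt
```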

Bootstrapping and maintaining trust in the cloud

Published in:
32nd Annual Computer Security Applications Conf., ACSAC 2016, 5-9 December 2016.

Summary

Today's infrastructure as a service (IaaS) cloud environments rely upon full trust in the provider to secure applications and data. Cloud providers do not offer the ability to create hardware-rooted cryptographic identities for IaaS cloud resources or sufficient information to verify the integrity of systems. Trusted computing protocols and hardware like the TPM have long promised a solution to this problem. However, these technologies have not seen broad adoption because of their complexity of implementation, low performance, and lack of compatibility with virtualized environments. In this paper we introduce keylime, a scalable trusted cloud key management system. keylime provides an end-to-end solution both for bootstrapping hardware-rooted cryptographic identities for IaaS nodes and for system integrity monitoring of those nodes via periodic attestation. We support these functions in both bare-metal and virtualized IaaS environments using a virtual TPM. keylime provides a clean interface that allows higher-level security services like disk encryption or configuration management to leverage trusted computing without being trusted-computing aware. We show that our bootstrapping protocol can derive a key in less than two seconds, that we can detect system integrity violations in as little as 110 ms, and that keylime can scale to thousands of IaaS cloud nodes.
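
For readers unfamiliar with the pattern, the sketch below shows a generic periodic-attestation exchange (this is not keylime's API or wire protocol; the key handling, allowlist, and the HMAC standing in for a TPM quote signature are all assumptions made for illustration):

```python
# Verifier challenges the node with a fresh nonce; the node returns a quote
# binding the nonce to its measurement list; the verifier checks the binding
# and the measurements against an allowlist of known-good values.
import hmac, hashlib, os, json

DEVICE_KEY = os.urandom(32)                        # stands in for a TPM-held key
ALLOWLIST = {"kernel": "a1b2", "initrd": "c3d4"}   # hypothetical good hashes

def node_quote(nonce: bytes, measurements: dict) -> dict:
    payload = nonce + json.dumps(measurements, sort_keys=True).encode()
    return {"measurements": measurements,
            "sig": hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()}

def verify(nonce: bytes, quote: dict) -> bool:
    payload = nonce + json.dumps(quote["measurements"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, quote["sig"]):
        return False                               # quote not bound to this nonce
    return quote["measurements"] == ALLOWLIST      # integrity check

nonce = os.urandom(16)
print(verify(nonce, node_quote(nonce, {"kernel": "a1b2", "initrd": "c3d4"})))  # True
print(verify(nonce, node_quote(nonce, {"kernel": "evil", "initrd": "c3d4"})))  # False
```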

Leveraging data provenance to enhance cyber resilience

Summary

Building secure systems used to mean ensuring a secure perimeter, but that is no longer the case. Today's systems are ill-equipped to deal with attackers that are able to pierce perimeter defenses. Data provenance is a critical technology for building resilient systems that can recover from attackers who manage to overcome the "hard-shell" defenses. In this paper, we provide background information on data provenance and detail provenance collection, analysis, and storage techniques and challenges. Data provenance is well situated to address the challenging problem of allowing a system to "fight through" an attack, and we help to identify the work necessary to ensure that future systems are resilient.