Publications


Timely rerandomization for mitigating memory disclosures

Published in:
22nd ACM Conf. on Computer and Communications Security, 12-16 October 2015.

Summary

Address Space Layout Randomization (ASLR) can increase the cost of exploiting memory corruption vulnerabilities. One major weakness of ASLR is that it assumes the secrecy of memory addresses and is thus ineffective in the face of memory disclosure vulnerabilities. Even fine-grained variants of ASLR have been shown to be ineffective against memory disclosures. In this paper we present an approach that synchronizes randomization with potential runtime disclosure. By applying rerandomization to the memory layout of a process every time it generates an output, our approach renders disclosures stale by the time they can be used by attackers to hijack control flow. We have developed a fully functioning prototype for x86_64 C programs by extending the Linux kernel, GCC, and the libc dynamic linker. The prototype operates on C source code and recompiles programs with a set of augmented information required to track pointer locations and support runtime rerandomization. Using this augmented information, we dynamically relocate code segments and update code pointer values during runtime. Our evaluation on the SPEC CPU2006 benchmark, along with other applications, shows that our technique incurs a very low performance overhead (2.1% on average).
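The timeliness property described above (a disclosed address is useless once the layout has moved) can be sketched with a toy model. The class and method names below are illustrative only; they are not part of the paper's actual kernel/GCC/linker prototype:

```python
import random

class RerandomizingProcess:
    """Toy model of a process that re-bases its code segment on every output."""

    def __init__(self):
        self.base = random.getrandbits(32)

    def emit_output(self):
        disclosed = self.base
        # Timely rerandomization: move to a fresh layout whenever output is
        # produced, so the disclosed address is stale immediately afterward.
        while self.base == disclosed:
            self.base = random.getrandbits(32)
        return disclosed

    def hijack(self, guessed_base):
        # A control-flow hijack only succeeds if the guess matches the live layout.
        return guessed_base == self.base

proc = RerandomizingProcess()
leak = proc.emit_output()   # attacker learns an address from the output...
assert not proc.hijack(leak)  # ...but the layout has already moved
```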

Enhancing the far-ultraviolet sensitivity of silicon complementary metal oxide semiconductor imaging arrays

Summary

We report our progress toward optimizing backside-illuminated silicon P-type intrinsic N-type complementary metal oxide semiconductor devices developed by Teledyne Imaging Sensors (TIS) for far-ultraviolet (UV) planetary science applications. This project was motivated by initial measurements at Southwest Research Institute of the far-UV responsivity of backside-illuminated silicon PIN photodiode test structures, which revealed a promising quantum efficiency (QE) in the 100 to 200 nm range. Our effort to advance the capabilities of thinned silicon wafers capitalizes on recent innovations in molecular beam epitaxy (MBE) doping processes. Key achievements to date include the following: (1) representative silicon test wafers were fabricated by TIS and set up for MBE processing at MIT Lincoln Laboratory; (2) preliminary far-UV detector QE simulation runs were completed to aid MBE layer design; (3) detector fabrication was completed through the pre-MBE step; and (4) initial testing of the MBE doping process was performed on monitoring wafers, with detailed quality assessments.

Enhanced signal processing algorithms for the ASR-9 Weather Systems Processor

Published in:
J. Atmos. Ocean. Technol., Vol. 32, No. 10, October 2015, pp. 1847-59.

Summary

New signal processing algorithms for the Airport Surveillance Radar-9 (ASR-9) Weather Systems Processor (WSP) are introduced. The Moving Clutter Spectral Processing for Uneven-Sampled Data with Dealiasing (MCSPUDD) algorithm suite removes isolated moving clutter targets and corrects aliased velocity values on a per-range-gate basis. The spectral differencing technique is applied to the low- and high-beam data to produce a dual-beam velocity estimate that is more accurate than the current autocorrelation-lag-1-based approach. Comparisons with Terminal Doppler Weather Radar (TDWR) data show that estimate errors are reduced by 8%, 15%, and 15% for the low-, high-, and dual-beam velocities, respectively.
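Velocity dealiasing in general works by shifting a folded measurement by multiples of the Nyquist co-interval toward a trusted reference estimate. The sketch below illustrates that generic idea; it is not the MCSPUDD suite itself, and the function name and parameters are hypothetical:

```python
def dealias(v_meas, v_ref, v_nyq):
    """Unfold an aliased Doppler velocity measurement.

    Shifts v_meas by integer multiples of the Nyquist co-interval
    (2 * v_nyq) so that it lands as close as possible to a reference
    velocity v_ref (e.g., from a neighboring gate or another radar).
    """
    k = round((v_ref - v_meas) / (2 * v_nyq))
    return v_meas + 2 * k * v_nyq

# A measurement folded past the +/-25 m/s Nyquist interval:
# true velocity ~30 m/s appears as -20 m/s; the reference recovers it.
assert dealias(-20, 28, 25) == 30
```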

Characterizing phishing threats with natural language processing

Published in:
2015 IEEE Conf. on Communications and Network Security (CNS), 28-30 September 2015.

Summary

Spear phishing is a widespread concern in the modern network security landscape, but there are few metrics that measure the extent to which reconnaissance is performed on phishing targets. Spear phishing emails closely match the expectations of the recipient, based on details of their experiences and interests, making them a popular propagation vector for harmful malware. In this work we use Natural Language Processing techniques to investigate a specific real-world phishing campaign and quantify attributes that indicate a targeted spear phishing attack. Our phishing campaign data sample comprises 596 emails - all containing a web bug and a Curriculum Vitae (CV) PDF attachment - sent to our institution from a foreign IP space. The campaign was found to exclusively target specific demographics within our institution. Performing a semantic similarity analysis between the senders' CV attachments and the recipients' LinkedIn profiles, we conclude with high statistical certainty (p < 10^-4) that the attachments contain targeted rather than randomly selected material. Latent Semantic Analysis further demonstrates that individuals who were a primary focus of the campaign received CVs that are highly topically clustered. These findings differentiate this campaign from one that leverages random spam.
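The core of a semantic similarity analysis can be illustrated with a minimal bag-of-words cosine similarity. The documents below are made up, and the paper's actual analysis uses Latent Semantic Analysis over real CV/profile pairs; this is only a sketch of the underlying comparison:

```python
from collections import Counter
import math

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between bag-of-words term-frequency vectors."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical documents: a targeted CV resembles the victim's profile,
# while random spam does not.
cv = "radar signal processing engineer doppler weather radar"
profile = "engineer working on doppler radar signal processing"
spam = "cheap pharmacy discount offer buy now"
assert cosine_similarity(cv, profile) > cosine_similarity(cv, spam)
```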

Very large graphs for information extraction (VLG) - detection and inference in the presence of uncertainty

Summary

In numerous application domains relevant to the Department of Defense and the Intelligence Community, data of interest take the form of entities and the relationships between them, and these data are commonly represented as graphs. Under the Very Large Graphs for Information Extraction effort--a one-year proof-of-concept study--MIT LL developed novel techniques for anomalous subgraph detection, building on tools in the signal processing research literature. This report documents the technical results of this effort. Two datasets--a snapshot of Thomson Reuters' Web of Science database and a stream of web proxy logs--were parsed, and graphs were constructed from the raw data. From the phenomena in these datasets, several algorithms were developed to model the dynamic graph behavior, including a preferential attachment mechanism with memory, a streaming filter to model a graph as a weighted average of its past connections, and a generalized linear model for graphs where connection probabilities are determined by additional side information or metadata. A set of metrics was also constructed to facilitate comparison of techniques. The study culminated in a demonstration of the algorithms on the datasets of interest, in addition to simulated data. Performance in terms of detection, estimation, and computational burden was measured according to the metrics. Among the highlights of this demonstration were the detection of emerging coauthor clusters in the Web of Science data, detection of botnet activity in the web proxy data after 15 minutes (which took 10 days to detect using state-of-the-practice techniques), and demonstration of the core algorithm on a simulated 1-billion-vertex graph using a commodity computing cluster.
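A "streaming filter to model a graph as a weighted average of its past connections" can be sketched as an exponentially weighted moving average over edge-weight snapshots. The implementation below is an illustrative guess at that behavior, not the report's code, and the smoothing parameter is arbitrary:

```python
def ewma_graph(snapshots, alpha=0.3):
    """Maintain a dynamic-graph model as an exponentially weighted
    average of past adjacency snapshots.

    Each snapshot is a dict {(u, v): weight}. Old edges decay by
    (1 - alpha) per step; new observations are mixed in with weight alpha.
    """
    state = {}
    for snap in snapshots:
        edges = set(state) | set(snap)
        state = {e: (1 - alpha) * state.get(e, 0.0) + alpha * snap.get(e, 0.0)
                 for e in edges}
    return state

history = [{("a", "b"): 1.0},          # edge (a,b) seen repeatedly
           {("a", "b"): 1.0},
           {("b", "c"): 1.0}]          # edge (b,c) appears only once, at the end
model = ewma_graph(history)
# the long-standing edge retains more accumulated weight than the new one
assert model[("a", "b")] > model[("b", "c")]
```

Against such a smoothed background, an edge whose current weight far exceeds its filtered average is a candidate anomaly.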

Cyber network mission dependencies

Published in:
MIT Lincoln Laboratory Report TR-1189

Summary

Cyber assets are critical to mission success in every arena of the Department of Defense. Because all DoD missions depend on cyber infrastructure, failure to secure network assets and assure the capabilities they enable will pose a fundamental risk to any defense mission. The impact of a cyber attack is not well understood by warfighters or leadership. It is critical that the DoD develop better cognizance of Cyber Network Mission Dependencies (CNMD). This report identifies the major drivers for mapping missions to network assets, introduces existing technologies in the mission-mapping landscape, and proposes directions for future development.

Evaluation of the baseline NEXRAD icing hazard project

Published in:
37th Conference on Radar Meteorology, 14-18 September 2015

Summary

MIT Lincoln Laboratory has developed an icing hazard product that is now operational throughout the NEXRAD network. This initial version of the Icing Hazard Levels (IHL) algorithm is predicated on the presence of graupel as determined by the NEXRAD Hydrometeor Classification Algorithm (HCA). Graupel indicates that rime accretion on ice crystal aggregates is present. It is inferred that the riming process occurs at the altitude at which HCA reports graupel, as well as to some vertical depth above. To capture some of that depth, temperature and relative humidity interest fields are computed from meteorological model data based on the technique used in the National Center for Atmospheric Research's Current Icing Potential Product and utilized within IHL as warranted. A critical aspect of the IHL development has focused on the verification of the presence of icing. Two methods are used. For the first, pilot reports of icing (PIREPs) are used to score the performance of IHL. Since PIREPs are provided with inherent time and space uncertainties, a buffer of influence is associated with each PIREP when scoring IHL. Results show that the IHL, as configured, is an effective indicator of a potential icing hazard when HCA graupel classifications are present. Results also show the importance of radar volume coverage pattern selection in detecting weak returns in winter weather. For the second, in situ icing missions were performed within range of a dual-polarization NEXRAD to provide quantitative data to identify the presence of supercooled liquid water. Comparisons of in situ data to HCA classifications show that HCA graupel indications do not fully expose the icing hazard, and these findings are being used to direct future IHL development. This paper will describe the verification method and performance assessment of the IHL initial capability.

The AFRL-MITLL WMT15 System: there's more than one way to decode it!

Published in:
Proc. 10th Workshop on Statistical Machine Translation, 17-18 September 2015, pp. 112-9.

Summary

This paper describes the AFRL-MITLL statistical MT systems and the improvements that were developed during the WMT15 evaluation campaign. As part of these efforts we experimented with a number of extensions to the standard phrase-based model that improve performance on the Russian to English translation task, creating three submission systems with different decoding strategies. Out-of-vocabulary words were addressed with named entity postprocessing.

Using a power law distribution to describe big data

Published in:
HPEC 2015: IEEE Conf. on High Performance Extreme Computing, 15-17 September 2015.

Summary

The gap between data production and user ability to access, compute, and produce meaningful results calls for tools that address the challenges associated with big data volume, velocity, and variety. One of the key hurdles is the inability to methodically remove expected or uninteresting elements from large data sets. This difficulty often wastes valuable researcher and computational time by expending resources on uninteresting parts of data. Social sensors, or sensors that produce data based on human activity, such as Wikipedia, Twitter, and Facebook, have an underlying structure that can be thought of as having a Power Law distribution. Such a distribution implies that few nodes generate large amounts of data. In this article, we propose a technique to take an arbitrary dataset and compute a power law distributed background model that bases its parameters on observed statistics. This model can be used to determine the suitability of a power law fit, or to automatically identify high-degree nodes for filtering, and it can be scaled to work with big data.
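Fitting a power-law exponent to observed degree statistics can be sketched with the standard discrete maximum-likelihood approximation; the paper's actual background-model parameterization may differ, and the sample degrees below are made up:

```python
import math

def fit_power_law_alpha(degrees, x_min=1):
    """Estimate the power-law exponent alpha for degree data x >= x_min,
    using the common discrete MLE approximation:
        alpha = 1 + n / sum(ln(x_i / (x_min - 0.5)))
    """
    xs = [x for x in degrees if x >= x_min]
    n = len(xs)
    return 1.0 + n / sum(math.log(x / (x_min - 0.5)) for x in xs)

# Hypothetical degree sequence: many low-degree nodes, a few heavy hitters,
# as the "few nodes generate large amounts of data" observation suggests.
degrees = [1] * 80 + [2] * 12 + [3] * 4 + [5] * 2 + [20, 50]
alpha = fit_power_law_alpha(degrees)
assert alpha > 1.0  # a valid power-law exponent
```

Once alpha is estimated, nodes whose degree greatly exceeds what the fitted background predicts are natural candidates for filtering.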

Parallel vectorized algebraic AES in MATLAB for rapid prototyping of encrypted sensor processing algorithms and database analytics

Published in:
HPEC 2015: IEEE Conf. on High Performance Extreme Computing, 15-17 September 2015.

Summary

The increasing use of networked sensor systems and networked databases has led to an increased interest in incorporating encryption directly into sensor algorithms and database analytics. MATLAB is the dominant tool for rapid prototyping of sensor algorithms and has extensive database analytics capabilities. The advent of high-level and high-performance Galois Field mathematical environments allows encryption algorithms to be expressed succinctly and efficiently. This work leverages the Galois Field primitives found in the MATLAB Communication Toolbox to implement a mode of the Advanced Encryption Standard (AES) based on first-principles mathematics. The resulting implementation requires 100x less code than standard AES implementations and delivers speed that is effective for many design purposes. The parallel version achieves speed comparable to native OpenSSL on a single node and is sufficient for real-time prototyping of many sensor processing algorithms and database analytics.
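The algebraic core of such an implementation is multiplication in GF(2^8) modulo the AES polynomial. The sketch below is in Python rather than MATLAB, but the field arithmetic is the same; the worked example comes from the FIPS-197 standard:

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) modulo the AES polynomial
    x^8 + x^4 + x^3 + x + 1 (0x11B), using shift-and-xor
    ("Russian peasant") multiplication.
    """
    p = 0
    for _ in range(8):
        if b & 1:           # if the low bit of b is set, add (xor) a
            p ^= a
        carry = a & 0x80    # will the shift overflow out of 8 bits?
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B       # reduce modulo the AES polynomial
        b >>= 1
    return p

# Worked example from FIPS-197: {57} * {83} = {C1}
assert gf_mul(0x57, 0x83) == 0xC1
```

Expressing AES this way keeps the whole cipher in terms of a handful of field operations, which is what makes the vectorized MATLAB formulation so compact.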