Publications

High performance, 3D-printable dielectric nanocomposites for millimeter wave devices

Summary

The creation of millimeter-wave, 3D-printable dielectric nanocomposites is demonstrated. Alumina nanoparticles were combined with styrenic block copolymers and solvent to create shear-thinning, viscoelastic inks that are printable at room temperature. Particle loadings of up to 41 vol% were achieved. Once dried, the highest-performing of these materials has a permittivity of 4.61 and a loss tangent of 0.00298 in the Ka band (26.5-40 GHz), a combination not previously demonstrated for 3D printing. These nanocomposite materials were used to print a simple resonator device with predictable pass-band features.
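
A quick mixing-rule check makes the relationship between the 41 vol% loading and the reported permittivity concrete. The sketch below applies the Lichtenecker logarithmic mixing rule with typical constituent permittivities; the rule and both constituent values are illustrative assumptions, not the authors' model or measurements.

# Illustrative only: the Lichtenecker logarithmic mixing rule is one common
# estimate for composite permittivity. The constituent values below are
# typical textbook figures (assumptions), not taken from the paper.
import math

eps_alumina = 9.8    # assumed relative permittivity of alumina
eps_polymer = 2.4    # assumed relative permittivity of a styrenic copolymer
v_filler = 0.41      # particle loading reported in the abstract

# ln(eps_eff) = v * ln(eps_filler) + (1 - v) * ln(eps_matrix)
eps_eff = math.exp(v_filler * math.log(eps_alumina)
                   + (1 - v_filler) * math.log(eps_polymer))
print(f"estimated effective permittivity: {eps_eff:.2f}")  # ~4.3 vs. reported 4.61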

LLTools: machine learning for human language processing

Summary

Machine learning methods in Human Language Technology have reached a stage of maturity where widespread use is both possible and desirable. The MIT Lincoln Laboratory LLTools software suite takes a step toward this goal by providing a set of easily accessible frameworks for incorporating speech, text, and entity resolution components into larger applications. For the speech processing component, the pySLGR (Speaker, Language, Gender Recognition) tool provides signal processing, standard feature analysis, speech utterance embedding, and machine learning modeling methods in Python. The text processing component in LLTools extracts semantically meaningful insights from unstructured data via entity extraction, topic modeling, and document classification. The entity resolution component in LLTools provides approximate string matching, author recognition, and graph-based methods for identifying and linking different instances of the same real-world entity. We show through two applications that LLTools can be used to rapidly create and train research prototypes for human language processing.
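
To illustrate the kind of approximate string matching an entity resolution component performs, the sketch below links entity mentions using Python's standard-library difflib. This shows the general technique only; it is not the LLTools/pySLGR API, and the threshold and example mentions are hypothetical.

# A minimal sketch of approximate string matching for entity resolution,
# using only the Python standard library. Not the LLTools API; the 0.6
# threshold and the mention strings are hypothetical.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized similarity in [0, 1] between two surface forms."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

mentions = ["MIT Lincoln Laboratory", "M.I.T. Lincoln Lab", "Lincoln Laboratory"]
canonical = "MIT Lincoln Laboratory"

# Link each mention to the canonical entity when similarity clears the threshold.
links = [(m, similarity(m, canonical)) for m in mentions]
print([(m, round(s, 2)) for m, s in links if s > 0.6])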

An overview of the DARPA Data Driven Discovery of Models (D3M) Program

Published in:
30th Conf. on Neural Information Processing Systems, NIPS 2016, 5-10 December 2016.

Summary

A new DARPA program called Data Driven Discovery of Models (D3M) aims to develop automated model discovery systems that can be used by researchers with specific subject matter expertise to create empirical models of real, complex processes. Two major goals of this program are to allow experts to create empirical models without the need for data scientists and to increase the productivity of data scientists via automation. The automated model discovery systems developed will be tested on real-world problems that become progressively harder over the course of the program. Toward the end of the program, problems will be both unsolved and underspecified in terms of data and desired outcomes. The program will emphasize creating and leveraging open source technology and architecture. Our presentation reviews the goals and structure of this program, which will begin in early 2017. Although the deadline for submitting proposals has passed, we welcome suggestions concerning challenge tasks, evaluations, or new open-source data sets to be included for system development and evaluation that would supplement data currently being curated from many sources.

Bootstrapping and maintaining trust in the cloud

Published in:
32nd Annual Computer Security Applications Conf., ACSAC 2016, 5-9 December 2016.

Summary

Today's infrastructure as a service (IaaS) cloud environments rely upon full trust in the provider to secure applications and data. Cloud providers do not offer the ability to create hardware-rooted cryptographic identities for IaaS cloud resources or sufficient information to verify the integrity of systems. Trusted computing protocols and hardware like the TPM have long promised a solution to this problem. However, these technologies have not seen broad adoption because of their complexity of implementation, low performance, and lack of compatibility with virtualized environments. In this paper we introduce keylime, a scalable trusted cloud key management system. keylime provides an end-to-end solution both for bootstrapping hardware-rooted cryptographic identities for IaaS nodes and for system integrity monitoring of those nodes via periodic attestation. We support these functions in both bare-metal and virtualized IaaS environments using a virtual TPM. keylime provides a clean interface that allows higher-level security services like disk encryption or configuration management to leverage trusted computing without being trusted computing aware. We show that our bootstrapping protocol can derive a key in less than two seconds, that we can detect system integrity violations in as little as 110 ms, and that keylime can scale to thousands of IaaS cloud nodes.
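
The sketch below conveys the flavor of periodic integrity monitoring: re-measure system state and flag deviations from known-good values. It is a conceptual illustration only, not keylime's protocol (keylime uses hardware-rooted TPM quotes rather than plain file hashing), and the path and hash are hypothetical placeholders.

# A conceptual sketch of periodic attestation-style integrity monitoring.
# NOT keylime's protocol; the monitored path and its known-good SHA-256
# below are hypothetical placeholders.
import hashlib

ALLOWLIST = {
    "/usr/bin/service":  # hypothetical monitored binary
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def measure(path: str) -> str:
    """Hash a file's current contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def attest(allowlist: dict) -> list:
    """Return paths whose current hash deviates from the known-good value."""
    return [p for p, good in allowlist.items() if measure(p) != good]

violations = attest(ALLOWLIST)
if violations:
    print("integrity violation detected:", violations)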

Predicting and analyzing factors in patent litigation

Published in:
30th Conf. on Neural Information Processing Systems, NIPS 2016, 5-10 December 2016.

Summary

Patent litigation is an expensive and time-consuming process. To minimize its impact on the participants in the patent lifecycle, automatic determination of litigation potential is a compelling machine learning application. In this paper, we consider preliminary methods for predicting whether a patent will be involved in litigation using metadata, content, and graph features. Metadata features are top-level, easily extractable features, e.g., assignee, number of claims, etc. The content feature performs lexical analysis of the claims associated with a patent. Graph features use relational learning to summarize patent references. We apply our methods to US patents using a labeled data set. Prior work has focused on metadata-only features, but we show that both graph and content features have significant predictive capability. Additionally, fusing all features results in improved performance. We also perform a preliminary examination of some of the qualitative factors that may have significant importance in patent litigation.
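
The fusion step is straightforward to prototype. The sketch below concatenates metadata, content, and graph feature matrices into a single design matrix for a logistic regression; the synthetic data, feature dimensions, and model choice are illustrative assumptions, not the paper's exact setup.

# A minimal sketch of early feature fusion for litigation prediction.
# All data here are synthetic; dimensions and classifier are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
X_meta = rng.normal(size=(n, 3))      # e.g., claim count, assignee features
X_content = rng.normal(size=(n, 10))  # e.g., lexical summary of claim text
X_graph = rng.normal(size=(n, 5))     # e.g., reference-graph features
y = rng.integers(0, 2, size=n)        # 1 = litigated, 0 = not litigated

# Early fusion: concatenate all feature families into one design matrix.
X = np.hstack([X_meta, X_content, X_graph])
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))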

Biomimetic sniffing improves the detection performance of a 3D printed nose of a dog and a commercial trace vapor detector

Published in:
Scientific Reports, Vol. 6 , art. no. 36876, December 2016. DOI: 10.1038/srep36876.

Summary

Unlike current chemical trace detection technology, dogs actively sniff to acquire an odor sample. Flow visualization experiments with an anatomically similar 3D printed dog's nose revealed the external aerodynamics during canine sniffing, where ventral-laterally expired air jets entrain odorant-laden air toward the nose, thereby extending the "aerodynamic reach" for inspiration of otherwise inaccessible odors. Chemical sampling and detection experiments quantified two modes of operation with the artificial nose (active sniffing and continuous inspiration) and demonstrated an increase in odorant detection by a factor of up to 18 for active sniffing. A 16-fold improvement in detection was demonstrated with a commercially available explosives detector by applying this bio-inspired design principle and making the device "sniff" like a dog. These lessons learned from the dog may benefit the next generation of vapor samplers for explosives, narcotics, pathogens, or even cancer, and could inform future bio-inspired designs for optimized sampling of odor plumes.

Terminal Flight Data Manager (TFDM) environmental benefits assessment

Published in:
MIT Lincoln Laboratory Report ATC-420

Summary

This work monetizes the environmental benefits of Terminal Flight Data Manager (TFDM) capabilities that reduce fuel burn and gaseous emissions, and in turn reduce climate change and air quality effects. A methodology is developed that converts TFDM "engines-on" taxi time savings into fuel and carbon dioxide (CO2) emissions savings, accounting for the aircraft fleet mix at each of 27 TFDM analysis airports over a 2016-2048 analysis timeframe. Total fuel reductions of approximately 300 million U.S. gallons are estimated, resulting in monetized benefits from all TFDM capabilities of $65m-$582m undiscounted and $23m-$310m discounted, depending on the Social Cost of CO2 (SCC) and discount rate used. A similar methodology is used to estimate the monetized benefits of reduced air quality emissions.
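
The core monetization arithmetic can be sketched in a few lines. The fuel density and CO2 emission factor below are standard approximate values, and the Social Cost of CO2 is a single illustrative figure; the report varies both the SCC and the discount rate across scenarios.

# A sketch of the gallons -> CO2 -> dollars chain. KG_PER_GALLON and
# CO2_PER_KG_FUEL are standard approximate constants; SCC_PER_TONNE is
# an illustrative assumption, not the report's scenario values.
GALLONS_SAVED = 300e6     # total fuel reduction estimated in the report
KG_PER_GALLON = 3.04      # approximate density of Jet A fuel (assumption)
CO2_PER_KG_FUEL = 3.16    # ~kg CO2 per kg jet fuel burned (ICAO factor)
SCC_PER_TONNE = 40.0      # illustrative Social Cost of CO2, $/tonne

kg_fuel = GALLONS_SAVED * KG_PER_GALLON
tonnes_co2 = kg_fuel * CO2_PER_KG_FUEL / 1000.0
benefit_usd = tonnes_co2 * SCC_PER_TONNE
# ~2.9M tonnes CO2 avoided -> ~$115M undiscounted, within the report's range
print(f"{tonnes_co2:,.0f} tonnes CO2 -> ${benefit_usd / 1e6:,.0f}M undiscounted")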

The role of master clock stability in quantum information processing

Published in:
npj Quantum Inf., Vol. 2, 8 November 2016. DOI: 10.1038/npjqi.2016.33.

Summary

Experimentalists seeking to improve the coherent lifetimes of quantum bits have generally focused on mitigating decoherence mechanisms through, for example, improvements to qubit designs and materials, and system isolation from environmental perturbations. In the case of the phase degree of freedom in a quantum superposition, however, the coherence that must be preserved is not solely internal to the qubit, but rather necessarily includes that of the qubit relative to the 'master clock' (e.g., a local oscillator) that governs its control system. In this manuscript, we articulate the impact of instabilities in the master clock on qubit phase coherence and provide tools to calculate the contributions to qubit error arising from these processes. We first connect standard oscillator phase-noise metrics to their corresponding qubit dephasing spectral densities. We then use representative lab-grade and performance-grade oscillator specifications to calculate operational fidelity bounds on trapped-ion and superconducting qubits with relatively slow and fast operation times. We discuss the relevance of these bounds for quantum error correction in contemporary experiments and future large-scale quantum information systems, and consider potential means to improve master clock stability.
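
The bookkeeping that connects an oscillator's spec sheet to qubit dephasing can be summarized by two standard relations; the filter-function normalization below follows one common convention and may differ from the paper's exact form. An oscillator's single-sideband phase noise $\mathcal{L}(f)$ (in dBc/Hz) corresponds to a phase power spectral density

\[ S_\phi(f) = 2 \times 10^{\mathcal{L}(f)/10} \quad [\mathrm{rad}^2/\mathrm{Hz}], \]

and, in the filter-function formalism, the qubit coherence under a control sequence with filter function $F(\omega\tau)$ decays as

\[ W(\tau) = e^{-\chi(\tau)}, \qquad \chi(\tau) = \frac{2}{\pi} \int_0^\infty \frac{S_\phi(\omega)}{\omega^2}\, F(\omega\tau)\, \mathrm{d}\omega . \]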

Covariance estimation in terms of Stokes parameters with application to vector sensor imaging

Published in:
2016 Asilomar Conf. on Signals, Systems and Computers, Asilomar 2016, 6-9 November 2016.

Summary

Vector sensor imaging presents a challenging problem in covariance estimation when allowing arbitrarily polarized sources. We propose a Stokes parameter representation of the source covariance matrix which is both qualitatively and computationally convenient. Using this formulation, we adapt the proximal gradient and expectation maximization (EM) algorithms and apply them in multiple variants to the maximum likelihood and least squares problems. We also show how EM can be cast as gradient descent on the Riemannian manifold of positive definite matrices, enabling a new accelerated EM algorithm. Finally, we demonstrate the benefits of the proximal gradient approach through comparison of convergence results from simulated data.
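
For concreteness, the sketch below builds a 2x2 polarization covariance (coherency) matrix from Stokes parameters. Sign conventions for S3 vary across the literature, so this follows one common choice and is not necessarily the paper's exact parameterization.

# Map Stokes parameters to a 2x2 Hermitian source covariance matrix.
# The s3 sign convention is one common choice (an assumption here).
import numpy as np

def coherency(s0, s1, s2, s3):
    """Stokes parameters -> 2x2 Hermitian covariance matrix."""
    return 0.5 * np.array([[s0 + s1, s2 + 1j * s3],
                           [s2 - 1j * s3, s0 - s1]])

# Positive semidefinite (physical) iff s0 >= sqrt(s1^2 + s2^2 + s3^2).
C = coherency(2.0, 0.5, 0.3, -0.2)
print("Hermitian:", np.allclose(C, C.conj().T))
print("eigenvalues:", np.linalg.eigvalsh(C))  # both nonnegative here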

Leveraging data provenance to enhance cyber resilience

Summary

Building secure systems used to mean ensuring a secure perimeter, but that is no longer the case. Today's systems are ill-equipped to deal with attackers who are able to pierce perimeter defenses. Data provenance is a critical technology for building resilient systems that can recover from attackers who manage to overcome "hard-shell" defenses. In this paper, we provide background on data provenance and detail provenance collection, analysis, and storage techniques and their challenges. Data provenance is well situated to address the challenging problem of allowing a system to "fight through" an attack, and we identify the work necessary to ensure that future systems are resilient.