Publications


Optical phased-array ladar

Published in:
Appl. Opt., Vol. 53, No. 31, 1 November 2014, pp. 7551-7555.

Summary

We demonstrate a ladar with 0.5-m-class range resolution, obtained by integrating a continuous-wave optical phased-array transmitter with a Geiger-mode avalanche photodiode receiver array. In contrast with conventional ladar systems, an array of continuous-wave sources effectively pulse-illuminates a target by electro-optically steering far-field fringes: from the reference frame of a point in the far field, a steered fringe appears as a pulse. Range information is thus obtained by measuring the arrival time of the pulse return from the target at a receiver pixel. This ladar system offers a number of benefits, including broad spectral coverage, high efficiency, small size, power scalability, and versatility.
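The time-of-flight relationships underlying the range measurement can be sketched briefly. This is a generic illustration, not the paper's processing chain; the numerical values are assumptions chosen only to show the scale of a 0.5-m-class system.

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_delay(t_round_trip_s: float) -> float:
    """Range from round-trip pulse delay: R = c * t / 2."""
    return C * t_round_trip_s / 2.0

def range_resolution(pulse_width_s: float) -> float:
    """Range resolution set by the effective pulse width: dR = c * tau / 2."""
    return C * pulse_width_s / 2.0

# An effective pulse width of ~3.3 ns corresponds to ~0.5-m-class resolution,
# and a ~6.7 us round trip corresponds to a target near 1 km.
print(range_resolution(3.3e-9))
print(range_from_delay(6.67e-6))
```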

Finding good enough: a task-based evaluation of query biased summarization for cross language information retrieval

Published in:
EMNLP 2014, Proc. of Conf. on Empirical Methods in Natural Language Processing, 25-29 October 2014, pp. 657-669.

Summary

In this paper we present a task-based evaluation of query-biased summarization for cross-language information retrieval (CLIR) using relevance prediction. We describe our 13 summarization methods, each drawn from one of four summarization strategies, and show how well they perform on Farsi text from the CLEF 2008 shared task, automatically translated to English. We report precision, recall, F1, accuracy, and time-on-task. We find that different summarization methods perform best on different evaluation metrics, but overall query-biased word clouds are the strongest summarization strategy. In our analysis, we demonstrate that applying the ROUGE metric to our sentence-based summaries cannot make the same kinds of distinctions that our evaluation framework does. Finally, we present recommendations for creating much-needed evaluation standards and databases.

Bayesian discovery of threat networks

Published in:
IEEE Trans. Signal Process., Vol. 62, No. 20, 15 October 2014, pp. 5324-5338.

Summary

A novel unified Bayesian framework for network detection is developed, under which a detection algorithm is derived based on random walks on graphs. The algorithm detects threat networks using partial observations of their activity, and is proved to be optimum in the Neyman-Pearson sense. The algorithm is defined by a graph, at least one observation, and a diffusion model for threat. A link to well-known spectral detection methods is provided, and the equivalence of the random walk and harmonic solutions to the Bayesian formulation is proven. A general diffusion model is introduced that utilizes spatio-temporal relationships between vertices, and is used for a specific space-time formulation that leads to significant performance improvements on coordinated covert networks. This performance is demonstrated using a new hybrid mixed-membership blockmodel introduced to simulate random covert networks with realistic properties.
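The random-walk diffusion idea at the heart of the abstract can be illustrated with a toy sketch. This is a generic demonstration of diffusing observed evidence over a graph, not the paper's algorithm; the graph, walk length, and scoring rule are all assumptions.

```python
import numpy as np

# Toy graph: adjacency matrix of 5 vertices; vertices 0-2 form a small clique.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

# Row-stochastic transition matrix for a random walk on the graph.
P = A / A.sum(axis=1, keepdims=True)

# "Threat" activity is observed at vertex 0; diffuse that evidence
# by propagating the walk distribution for a few steps.
p = np.zeros(5)
p[0] = 1.0
for _ in range(3):
    p = p @ P

# Vertices with high diffused probability are candidate network members;
# here the clique containing the observed vertex accumulates most of the mass.
print(np.round(p, 3))
```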

Increasing the coherence time in a magnetically-sensitive stimulated Raman transition in 85Rb

Published in:
FIO 2014: Frontiers in Optics, 14 October 2014.

Summary

We experimentally study the Ramsey, spin echo, and CPMG pulse sequences of a magnetically sensitive transition of a cold 85Rb gas. We can increase the coherence time by up to a factor of 10 by using CPMG pulse sequences as compared to Ramsey or spin echo.

Energy efficiency benefits of subthreshold-optimized transistors for digital logic

Published in:
2014 IEEE SOI-3D-Subthreshold Microelectronics Technology Unified Conf. (S3S), 6-9 October 2014.

Summary

The minimum energy point of an integrated circuit (IC) is defined as the supply voltage at which the energy per operation of the circuit is minimized. Several factors influence this voltage, including the topology of the circuit itself, the input activity factor, and the process technology in which the circuit is implemented. For application-specific ICs (ASICs), the minimum energy point usually occurs at a subthreshold supply voltage. Advances in subthreshold circuit design now permit correct circuit operation at, or even below, the minimum energy point. Since energy consumption is proportional to the square of the supply voltage, circuit design techniques and process technology choices that reduce the minimum energy point inherently improve the energy efficiency of ICs. Previous research has shown that optimizing process technology for subthreshold operation can improve IC energy efficiency. This, coupled with the energy efficiency advantages offered by fully-depleted silicon-on-insulator (FDSOI) processes, has led to the development of a subthreshold-optimized FDSOI process at MIT Lincoln Laboratory (MITLL) called xLP (Extreme Low Power). However, to date there has not been a quantitative estimate of the energy efficiency benefit of xLP or analogous technologies for complex digital circuits. This paper shows via simulation that the xLP process technology enables energy efficiency improvements that exceed those of process scaling by one generation. Specifically, at 0.3 V the process is shown to improve power-delay product by 57% vs. the IBM 90 nm low-power bulk process and by 9% vs. the IBM 65 nm low-power bulk technology.
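The minimum energy point concept can be sketched with a toy first-order model: dynamic energy scales as C·V², while leakage energy per operation grows as supply voltage drops into subthreshold because circuit delay increases exponentially. All constants below are illustrative assumptions, not data from xLP or any real process.

```python
import numpy as np

# Toy first-order model of energy per operation vs. supply voltage.
# All constants are illustrative assumptions, not process data.
C_eff = 1e-12      # effective switched capacitance per op (F)
I_leak0 = 1e-4     # nominal leakage current (A), exaggerated for illustration
V_t = 0.3          # threshold voltage (V)
n_vt = 0.036       # subthreshold slope factor times thermal voltage (V)

def energy_per_op(vdd):
    e_dyn = C_eff * vdd**2                      # dynamic energy ~ C * V^2
    # Delay grows exponentially as vdd drops below V_t (subthreshold region).
    t_delay = 1e-9 * np.exp(np.maximum(V_t - vdd, 0.0) / n_vt)
    e_leak = I_leak0 * vdd * t_delay            # leakage energy ~ I * V * t
    return e_dyn + e_leak

# Sweep the supply voltage and locate the minimum energy point.
vdd = np.linspace(0.1, 1.0, 901)
e = energy_per_op(vdd)
v_min = vdd[np.argmin(e)]
print(f"minimum energy point ~ {v_min:.2f} V")
```

The minimum arises from the tradeoff the abstract describes: lowering vdd shrinks dynamic energy but eventually inflates leakage energy per operation, so the optimum falls near subthreshold voltages.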

Quantitative evaluation of dynamic platform techniques as a defensive mechanism

Published in:
RAID 2014: 17th Int. Symp. on Research in Attacks, Intrusions, and Defenses, 17-19 September 2014.

Summary

Cyber defenses based on dynamic platform techniques have been proposed as a way to make systems more resilient to attacks. These defenses change the properties of the platforms in order to make attacks more complicated. Unfortunately, little work has been done on measuring the effectiveness of these defenses. In this work, we first measure the protection provided by a dynamic platform technique on a testbed. The counter-intuitive results obtained from the testbed guide us in identifying and quantifying the major effects contributing to the protection in such a system. Based on these abstract effects, we develop a generalized model of dynamic platform techniques that can be used to quantify their effectiveness. To verify and validate our results, we simulate the generalized model and show that the testbed measurements and the simulations match with a small amount of error. Finally, we enumerate a number of lessons learned in our work that can be applied to the quantitative evaluation of other defensive techniques.

Using deep belief networks for vector-based speaker recognition

Published in:
INTERSPEECH 2014: 15th Annual Conf. of the Int. Speech Communication Assoc., 14-18 September 2014.

Summary

Deep belief networks (DBNs) have become a successful approach for acoustic modeling in speech recognition. DBNs exhibit strong approximation properties, improved performance, and parameter efficiency. In this work, we propose methods for applying DBNs to speaker recognition. In contrast to prior work, our approach to DBNs for speaker recognition starts at the acoustic modeling layer. We use sparse-output DBNs trained with both unsupervised and supervised methods to generate statistics for use in standard vector-based speaker recognition methods, and we show that a DBN can replace a GMM UBM in this processing. Methods, qualitative analysis, and results are given on a NIST SRE 2012 task. Overall, DBNs achieve performance competitive with modern approaches in an initial implementation of our framework.

Talking Head Detection by Likelihood-Ratio Test

Published in:
Second Workshop on Speech, Language, Audio in Multimedia

Summary

Accurately detecting when a person whose face is visible in audio-visual media is the audible speaker is an enabling technology with a number of useful applications. The likelihood-ratio test formulation and feature signal processing employed here allow the use of high-dimensional feature sets in the audio and visual domains, and the approach appears to have good detection performance for AV segments as short as a few seconds.
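The likelihood-ratio test structure referenced in the abstract can be illustrated generically. This is a toy scalar-Gaussian version, not the paper's high-dimensional models; the feature values, means, and zero threshold are all assumptions.

```python
import math

# Toy likelihood-ratio test: is a scalar audio-visual agreement feature x
# better explained by the "talking" model H1 or the "not talking" model H0?
def gauss_logpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def log_likelihood_ratio(xs, mu1=1.0, mu0=0.0, sigma=1.0):
    """Sum over frames of log p(x|H1) - log p(x|H0)."""
    return sum(gauss_logpdf(x, mu1, sigma) - gauss_logpdf(x, mu0, sigma)
               for x in xs)

frames = [0.9, 1.2, 0.7, 1.1]          # per-frame features over a short segment
llr = log_likelihood_ratio(frames)
decision = "talking" if llr > 0 else "not talking"
print(llr, decision)
```

Summing per-frame log-ratios is what makes short segments workable: evidence accumulates over the segment before a single threshold comparison is made.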

A survey of cryptographic approaches to securing big-data analytics in the cloud

Published in:
HPEC 2014: IEEE Conf. on High Performance Extreme Computing, 9-11 September 2014.

Summary

The growing demand for cloud computing motivates the need to study the security of data received, stored, processed, and transmitted by a cloud. In this paper, we present a framework for such a study. We introduce a cloud computing model that captures a rich class of big-data use-cases and allows reasoning about relevant threats and security goals. We then survey three cryptographic techniques - homomorphic encryption, verifiable computation, and multi-party computation - that can be used to achieve these goals. We describe the cryptographic techniques in the context of our cloud model and highlight the differences in performance cost associated with each.
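The homomorphic-encryption idea the survey covers, computing on ciphertexts so the cloud never sees plaintext, can be demonstrated with a deliberately insecure toy: unpadded RSA with tiny primes, which is multiplicatively homomorphic. This is purely illustrative of the property; it is not any scheme from the paper, and real deployments use padded RSA (which is not homomorphic) or purpose-built schemes.

```python
# Toy multiplicative homomorphism via unpadded textbook RSA (INSECURE,
# illustration only). Tiny primes so every step is visible.
p, q = 61, 53
n = p * q                 # modulus
e = 17                    # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)       # private exponent (Python 3.8+ modular inverse)

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

a, b = 7, 12
# Multiplying ciphertexts corresponds to multiplying plaintexts:
# E(a) * E(b) mod n decrypts to a * b mod n.
product = dec(enc(a) * enc(b) % n)
print(product)  # 84
```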

Computing on masked data: a high performance method for improving big data veracity

Published in:
HPEC 2014: IEEE Conf. on High Performance Extreme Computing, 9-11 September 2014.

Summary

The growing gap between data and users calls for innovative tools that address the challenges posed by big-data volume, velocity, and variety. Along with these standard three V's of big data, an emerging fourth V is veracity, which addresses the confidentiality, integrity, and availability of the data. Traditional cryptographic techniques that ensure the veracity of data can have overheads too large to apply to big data. This work introduces a new technique called Computing on Masked Data (CMD), which improves data veracity by allowing computations to be performed directly on masked data while ensuring that only authorized recipients can unmask the data. Using the sparse linear algebra of associative arrays, CMD can be performed with significantly less overhead than other approaches while still supporting a wide range of linear algebraic operations on the masked data. Databases with strong support for sparse operations, such as SciDB or Apache Accumulo, are ideally suited to this technique. Examples are shown for the application of CMD to a complex DNA matching algorithm and to database operations over social media data.
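One concept behind computing on masked data, that a deterministic mask preserves equality, so lookups and joins still work on masked values, can be sketched generically. This illustrates the idea only, not the paper's CMD scheme; the keyed hash, table layout, and sample values are all assumptions.

```python
import hashlib
import hmac

KEY = b"secret masking key"  # held by the data owner, not the database

def mask(value: str) -> str:
    """Deterministic keyed mask: equal inputs yield equal masked outputs."""
    return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()

# Two toy tables masked before being loaded into an untrusted database.
users  = [("alice", mask("alice@example.com"))]
events = [(mask("alice@example.com"), "login")]

# The database can join on masked keys without ever seeing the plaintext.
joined = [(u, ev) for u, m1 in users for m2, ev in events if m1 == m2]
print(joined)  # [('alice', 'login')]
```

The tradeoff is the usual one for deterministic masking: equality leaks (identical values produce identical masks), which is the price paid for keeping equality-based operations usable on the masked data.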