2015–2016 Technical Seminar Series

Members of the technical staff at MIT Lincoln Laboratory are pleased to present these technical seminars to interested college and university groups. Costs related to the staff members' visits for these seminars will be assumed by the Laboratory.

To arrange a technical seminar, please contact

College Recruiting Program Administrator
Human Resources Department
MIT Lincoln Laboratory
244 Wood Street
Lexington, Massachusetts 02420-9108
781-981-2465
email: collegerecr@ll.mit.edu

and provide the following information:

  • A rank-ordered list of requested seminars
  • Preferred date/time options
  • A description of the target audience

Index of 2015–2016 Seminars

Air Traffic Control

Communication Systems

Homeland Protection

Optical Propagation and Technology

Radar and Signal Processing

Space Control Technology

Systems and Architectures

Solid State Devices, Materials, and Processes

Cyber Security and Information Sciences


SEMINAR ABSTRACTS

Air Traffic Control

Human-System Integration in Aeronautical Decision Support Systems

Dr. Hayley J. Davison Reynolds1
MIT Lincoln Laboratory

MIT Lincoln Laboratory has had a successful history of integrating decision support systems into the air traffic control domain, even though the introduction of new technologies often meets user resistance. Because of the nature of the work, air traffic controllers rely heavily on certain technologies and decision processes to maintain a safe operating environment while maximizing efficiency. This reliance on familiar technology and processes makes the introduction of a new tool into the environment a difficult task, even though the tool may ultimately improve the decision and raise levels of safety and/or efficiency of the operation. Poorly integrated systems can leave users disappointed by information they have no clear way to use or, more likely, can result in the system's going unused despite its presence in the array of available information systems; neither outcome yields the benefits for which the tool was designed.

In this seminar, a practical methodology for designing and fielding decision support systems that maximizes the potential of effective integration of the system into the users' operational context will be presented. Several examples of Federal Aviation Administration air traffic control decision support prototype systems designed by Lincoln Laboratory, including the Route Availability Planning Tool (RAPT) and the Tower Flight Data Manager (TFDM), will be described to demonstrate this process. Included in the presentation will be areas in which the designers ran into roadblocks in making the systems effective and the combination of qualitative and quantitative techniques used to eventually integrate the system well in the field, yielding measurable operational benefits.

1PhD, Aeronautical Systems and Applied Psychology, Massachusetts Institute of Technology



Integrating Unmanned Aircraft Systems Safely into the National Airspace System

Dr. Rodney E. Cole1
MIT Lincoln Laboratory

Unmanned aircraft systems (UAS) such as the Air Force's Global Hawk and Predator are increasingly employed by the military and the Department of Homeland Security in roles that require sharing airspace with civilian aircraft. Missions include pilot training, border patrol, highway and agricultural observation, and disaster management. Because of the pressure for widespread UAS access to the national airspace and the risk of collision with passenger aircraft, UAS operators must find a way to integrate with manned aircraft with a very high degree of safety. The key to safe integration of UAS into the national airspace is the development and assessment of "sense and avoid" (SAA) technologies to replace the manned aircraft pilot's ability to "see and avoid" other aircraft.

MIT Lincoln Laboratory is conducting research to address safe and flexible UAS integration with commercial and general aviation aircraft. Research areas include development of sophisticated computer models that simulate millions of encounters between UAS and civilian aircraft to characterize airspace hazards and collision rates. These models can be applied to assess the performance of SAA algorithms designed to maintain "well clear" separation between UAS and civilian aircraft while observing and adhering to established right-of-way rules. The Laboratory is also conducting groundbreaking research in the area of collision avoidance logic and is pursuing a probabilistic approach to collision avoidance that considers the uncertainty in pilot response to alerts and uncertainty in future states of the threat aircraft. This approach offers the potential to provide increased safety with decreased false alarms over conventional techniques and is a candidate for future Traffic Alert and Collision Avoidance System (TCAS) and UAS SAA applications.
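To give a flavor of how such encounter models work, the sketch below runs a toy Monte Carlo simulation; the miss-distance distribution, avoidance model, and thresholds are all invented for illustration and are not the Laboratory's actual models.

```python
import random

def simulate_encounters(n_trials, alert_range=1.0, evade_prob=0.9, seed=0):
    """Toy Monte Carlo encounter model (all numbers invented).

    Each trial draws a random horizontal miss distance.  If the threat
    penetrates the alert range, the sense-and-avoid logic attempts a
    maneuver that succeeds with probability evade_prob and adds
    separation.  Returns the fraction of trials that end inside a
    collision threshold.
    """
    rng = random.Random(seed)
    collisions = 0
    for _ in range(n_trials):
        miss = rng.uniform(0.0, 5.0)       # unmitigated miss distance
        if miss < alert_range and rng.random() < evade_prob:
            miss += 2.0                     # successful avoidance maneuver
        if miss < 0.1:                      # near-mid-air collision
            collisions += 1
    return collisions / n_trials

# A more capable SAA logic should drive the simulated collision rate down:
rate_weak = simulate_encounters(100_000, evade_prob=0.5)
rate_strong = simulate_encounters(100_000, evade_prob=0.95)
```

In a real assessment, the encounter geometry, pilot-response model, and sensor errors would all be drawn from validated statistical models rather than the placeholder distributions above.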

The Laboratory is also working with the Department of Defense (DoD) and Department of Homeland Security to develop ground-based sense and avoid (GBSAA) and airborne sense and avoid (ABSAA) surveillance architectures to satisfy the Federal Aviation Administration’s (FAA) requirement for replacing the onboard pilot's "see and avoid" function. Under DoD sponsorship, Lincoln Laboratory has deployed a service-oriented architecture GBSAA test bed that will be utilized in operational and simulation-over-live environments to collect data and operator feedback that can then be used to support future certification with the FAA. This seminar will provide a broad overview of the Laboratory's efforts in UAS airspace integration and next-generation aircraft collision avoidance algorithms and will provide an overview of the GBSAA test bed that is under development for the DoD.

1PhD, Mathematics, University of Colorado–Boulder



Machine Learning Applications in Aviation Weather and Traffic Management

Dr. Mark S. Veillette1
MIT Lincoln Laboratory

Adverse weather accounts for the majority of air traffic delays in the United States. When weather is expected to impact operations, air traffic managers (ATMs) are often faced with an overload of weather information required to make decisions. These data include multiple numerical weather model forecasts, satellite observations, Doppler radar, lightning detections, wind information, and other forms of meteorological data. Many of these data contain a great deal of uncertainty. Absorbing and utilizing these data in an optimal way is challenging even for experienced ATMs.

This talk will provide two examples of using machine learning to assist ATMs. In the first example, data from weather satellites are combined with global lightning detections and numerical weather models to create Doppler radar-like displays of precipitation in regions outside the range of weather radar. In the second example, multiple weather forecasts are used to refine the prediction of airspace capacity within selected regions of airspace. Examples and challenges of data collection, translation, modeling, and operational prototype evaluations will be discussed.
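As a deliberately tiny stand-in for the learning step, the sketch below fits a linear model from two invented predictors (a satellite brightness feature and a lightning-density feature) to a synthetic "reflectivity" target; the operational systems use far richer features and models.

```python
import random

def fit_linear(X, y, lr=0.1, epochs=3000):
    """Fit y ~ w.x + b by plain batch gradient descent.

    A toy stand-in for the much richer machine-learning models
    described above.
    """
    n_feat = len(X[0])
    m = len(X)
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_feat
        grad_b = 0.0
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) + b - yi
            for j in range(n_feat):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * g / m for wj, g in zip(w, grad_w)]
        b -= lr * grad_b / m
    return w, b

# Synthetic training set: a made-up "reflectivity" target driven by an
# invented brightness feature and lightning-density feature.
rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(100)]
y = [30.0 * bt + 10.0 * ltg + 5.0 for bt, ltg in X]
w, b = fit_linear(X, y)
```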

1PhD, Mathematics, Boston University



Radar Detection of Aviation Weather Hazards

Dr. John Y. N. Cho1
MIT Lincoln Laboratory

Bad weather is a factor in many aviation accidents and incidents. Microburst, hail, icing, lightning, fog, turbulence—these are atmospheric phenomena that can interfere with aircraft performance and a pilot’s ability to fly safely. Thus, for safe and efficient operation of the air traffic system, it is crucial to continuously observe meteorological conditions and accurately characterize phenomena hazardous to aircraft. Radar is the most important weather-sensing instrument for aviation. This seminar will discuss technical advances that led to today’s operational terminal wind-shear detection radars. An overview of recent and ongoing research to improve radar capability to accurately observe weather hazards to aviation will also be presented.

1PhD, Electrical Engineering, Cornell University



System Design in an Uncertain World: Decision Support
for Mitigating Thunderstorm Impacts on Air Traffic

Richard A. DeLaura1
MIT Lincoln Laboratory

Weather accounts for 70% of the cost of air traffic delays—about $28 billion annually—within the United States National Airspace System (NAS). Most weather-related delays occur during the summer months, when thunderstorms affect air traffic, particularly in the crowded Northeast. The task of air traffic management, complicated even in the best of circumstances, can become overwhelmingly complex as air traffic managers struggle to route traffic reliably through rapidly evolving thunderstorms. A new generation of air traffic management decision support tools promises to reduce air traffic delays by accounting for the potential effects of convective weather, such as thunderstorms, on air traffic flow. Underpinning these tools are models that translate high-resolution convective weather forecasts into estimates of impact on aviation operations.

This seminar will present the results of new research to develop models of pilot decision making and air traffic capacity in the presence of thunderstorms. The models will be described, initial validation will be presented, and sources of error and uncertainty will be discussed. Finally, some applications of these models and directions for future research will be briefly described.

1AB, Physics, Harvard University



Communication Systems

Diversity in Air-to-Ground Lasercom: The FOCAL Demonstration

Dr. Frederick G. Walther1
MIT Lincoln Laboratory

Laser communications (lasercom) provides significant advantages over radio-frequency (RF) communications, including a large, unregulated bandwidth and high beam directionality for free-space links. These advantages enable high (multi-Gb/s) data transfer rates; reduced terminal size, weight, and power; and a degree of physical link security against out-of-beam interferers or detectors. This seminar addresses the key components of lasercom system design, including modeling and simulation of atmospheric effects, link budget development, employment of spatial and temporal diversity techniques to mitigate signal fading due to scintillation, and requirements for acquisition and tracking system performance. Results from recent flight demonstrations show stable tracking, rapid reacquisition after cloud blockages, and high data throughput for a multi-Gb/s communications link out to 80 km. Potential technologies for further development include a compact optical gimbal that has low size, weight, and power, and more efficient modem and coding techniques to extend range and/or data rate.
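The link budget development mentioned above can be illustrated with a textbook received-power calculation; the gains and losses below are placeholder values, not those of any fielded system.

```python
import math

def lasercom_link_budget_db(p_tx_w, tx_gain_db, rx_gain_db,
                            wavelength_m, range_m, losses_db):
    """Received power [dBW] from a standard free-space link budget:
    Tx power + Tx/Rx gains - free-space path loss - other losses.
    """
    p_tx_dbw = 10.0 * math.log10(p_tx_w)
    # Free-space path loss relative to isotropic apertures
    fspl_db = 20.0 * math.log10(4.0 * math.pi * range_m / wavelength_m)
    return p_tx_dbw + tx_gain_db + rx_gain_db - fspl_db - losses_db

# Hypothetical 80 km link at 1550 nm; gains and losses are placeholders:
p_rx = lasercom_link_budget_db(
    p_tx_w=1.0, tx_gain_db=100.0, rx_gain_db=110.0,
    wavelength_m=1.55e-6, range_m=80.0e3, losses_db=6.0)
```

A real design adds terms for pointing loss, atmospheric extinction, and scintillation-induced fading margin, which is where the diversity techniques described above come in.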

1PhD, Physics, Massachusetts Institute of Technology



Dynamic Link Adaptation for Satellite Communications

Dr. Huan Yao1
MIT Lincoln Laboratory

Future protected military satellite communications will continue to use high transmission frequencies to capitalize on the large amounts of available bandwidth. However, the data flowing through these satellites will transition from the circuit-switched traffic of today's satellite systems to Internet-like packet traffic. One of the main differences in migrating to packet-switched communications is that the traffic will become bursty (i.e., the data rate from particular users will not be constant). The variation in data rate is only one of the potential system variations. At the frequencies of interest, rain and other weather phenomena can introduce significant path attenuation for relatively short time periods. Current protected satellite communications systems are designed with sufficient link margins to provide a desired availability under such degraded path conditions. These systems do not have provisions to use the excess link margins for additional capacity when weather conditions are good. The focus of this seminar is the design of a future satellite system that autonomously reacts to changes in link conditions and offered traffic. This automatic adaptation drastically improves the overall system capacity and the service that can be provided to ground terminals.
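A minimal sketch of the adaptation idea, assuming a hypothetical mode table: the terminal selects the highest-rate mode whose required SNR the current link supports, and falls back to the most robust mode during a rain fade.

```python
def select_data_rate(link_snr_db, rate_table):
    """Return the highest data rate whose required SNR the link meets;
    fall back to the most robust (lowest-rate) mode if none is met."""
    feasible = [rate for rate, req_snr in rate_table if link_snr_db >= req_snr]
    if feasible:
        return max(feasible)
    return min(rate for rate, _ in rate_table)

# Invented mode table: (data rate in Mb/s, required SNR in dB)
MODES = [(2, 0.0), (8, 6.0), (32, 12.0), (128, 18.0)]

clear_sky = select_data_rate(20.0, MODES)  # excess margin -> highest rate
rain_fade = select_data_rate(4.0, MODES)   # attenuated path -> robust rate
```

The system described in the seminar must additionally track bursty offered traffic and react autonomously as conditions change; this sketch shows only the per-link rate decision.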

1PhD, Electrical Engineering, Massachusetts Institute of Technology



Group-Centric Networking: A New Approach for Wireless Multi-hop Networking to Enable the Internet of Things

Dr. Gregory Kuperman1
MIT Lincoln Laboratory

This talk introduces a new networking architecture called Group-Centric Networking (GCN) that is designed to support the large number of devices expected with the emergence of the Internet of Things. Future networks will include hundreds of users needing to work collaboratively in a low-power and high-loss environment. Despite decades of research and development, wireless multi-hop networks have yet to offer the robust and scalable connectivity needed to support large numbers of users operating in this type of setting. MIT Lincoln Laboratory is designing GCN to (1) efficiently handle the various types of traffic that these future networks will carry, and (2) take advantage of the wireless medium to resiliently connect the users of these networks. This talk discusses how GCN utilizes up to an order of magnitude fewer network resources than traditional wireless routing schemes while providing superior connectivity and reliability.

1PhD, Communications and Networking, Massachusetts Institute of Technology



High-Rate Laser Communications to the Moon and Back

Dr. Farzana I. Khatri1
MIT Lincoln Laboratory

Radio waves have been the standard method for deep-space communications since the Apollo missions. Over the past decades, scientists at MIT Lincoln Laboratory have been working to develop free-space optical communications systems, and the recent success of the Lunar Laser Communication Demonstration (LLCD) program promises to revolutionize future deep-space communication systems. The LLCD demonstrated record-breaking optical uplinks and downlinks between Earth and the Lunar Lasercom Space Terminal (LLST) payload on NASA’s Lunar Atmosphere and Dust Environment Explorer (LADEE) satellite orbiting the Moon. The system included an innovative space terminal, a novel ground terminal, two major upgrades of existing ground terminals, and a capable and flexible ground operations infrastructure. This talk will give an overview of the technologies involved in the demonstration, the system architecture, the basic operations of both the link and the whole system, and some typical results.

1PhD, Electrical Engineering, Massachusetts Institute of Technology



How Effective Is Routing for Wireless Networking?

Dr. Gregory Kuperman1
MIT Lincoln Laboratory

This talk examines the question of how effective routing is for reliably and efficiently delivering data in a wireless network. With the emergence of the Internet of Things (IoT), there is a renewed focus on multi-hop wireless networking to connect the large number of smart devices that will need to communicate among one another. Many of the proposals to support this new networking paradigm continue to use the concept of routing: a path between users is formed via a series of point-to-point links. MIT Lincoln Laboratory believes that the characteristics of the wireless environment inherently make link-based routing unsuitable for wireless networking and that new approaches are needed for the IoT to succeed. This seminar demonstrates that link-based routing (1) experiences high packet loss resulting from the inherently unreliable nature of control information, (2) is unable to ensure reliable message delivery in a lossy environment, and (3) incurs a high cost for route maintenance and repair.

1PhD, Communications and Networking, Massachusetts Institute of Technology



Implementation Considerations for Wideband Wireless Communications

Dr. Nancy B. List1
MIT Lincoln Laboratory

Unexpected technical challenges often arise in the process of transferring technology from theory into practical applications. It is well known that modulator distortion causes problems for the transmission of communications signals. Less obvious, however, is the effect of modulator distortion on signals used for time tracking in wireless systems requiring strict timing control. Narrowband tracking signals are often used to synchronize systems transmitting wideband communications signals. While narrowband tracking signals may be less sensitive than communications signals to many types of distortion, they are particularly sensitive to group delay variation. As a result, relatively small levels of group delay variation across the frequency band can cause unexpected overall system degradation. This seminar will describe the real-world challenges of time-tracking in frequency-hopped satellite communications systems transmitting signals at high data rates, as well as practical methods to analyze and overcome these challenges.
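Group delay is the negative derivative of phase with respect to angular frequency; the sketch below estimates it by finite differences and checks that an ideal delay line yields a constant group delay. The numbers are arbitrary illustrations, not measurements from any system.

```python
import math

def group_delay(phase_rad, freqs_hz):
    """Estimate group delay tau = -d(phi)/d(omega) by finite differences.

    phase_rad: unwrapped phase-response samples
    freqs_hz:  corresponding frequencies
    Returns one delay estimate per adjacent frequency pair.
    """
    delays = []
    for i in range(len(freqs_hz) - 1):
        dphi = phase_rad[i + 1] - phase_rad[i]
        domega = 2.0 * math.pi * (freqs_hz[i + 1] - freqs_hz[i])
        delays.append(-dphi / domega)
    return delays

# An ideal delay line, phi = -omega * t0, has constant group delay t0:
t0 = 5e-9                                   # 5 ns, arbitrary
freqs = [1e9 + k * 1e6 for k in range(5)]
phase = [-2.0 * math.pi * f * t0 for f in freqs]
taus = group_delay(phase, freqs)
```

A distorting modulator makes these per-pair estimates vary across the band; it is exactly that variation, rather than the absolute delay, that degrades narrowband time tracking.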

1PhD, Electrical Engineering, Georgia Institute of Technology



Practical Capacity Benchmarking for Wireless Networks

Dr. Sun Jun1
MIT Lincoln Laboratory

Despite the prevalence of ad hoc networks in wireless mesh networks, wireless sensor networks, and the tactical communication environment, there is not yet a fundamental understanding of their achieved performance relative to the optimal limit. Many evaluations of emerging wireless ad hoc networking systems have left the impression that the systems exhibit poor performance, but these evaluations have provided neither a clear vision of what performance should be attained nor an indication of which details of network implementation are responsible for the systems’ disappointing results. The goal of the work discussed in this seminar is to develop a network benchmarking capability that can be used to evaluate the performance of emerging wireless ad hoc networks. MIT Lincoln Laboratory's focus is on the development of a network capacity benchmark.

This talk presents an upper bound on wireless network capacity that is computationally efficient to evaluate. The upper-bound calculations consider multiple different wireless interference models. The seminar shows that in a wireless network with n nodes and bounded degree, the gap between the network capacity and its upper bound is O(log n) under uniform traffic and several different interference models. For the more constrained case of a wireless N × M grid network, the upper bound is at most twice the value of the network capacity.

The talk shows that, in practice, Lincoln Laboratory's approach performs extremely well. A polynomial-time randomized algorithm generates the upper bound in general wireless networks. To ascertain the performance of the upper bound, the Laboratory's researchers use a cross-layer approach to develop a lower bound by considering only a subset of independent sets. The lower bound is shown to be within 95% of the upper bound on average for the primary interference model. For the 802.11, 802.16, and 2-hop interference models, the lower bound is within 70% of the upper bound. These bounds are then used to quantify and compare the individual effects of commonly used routing-layer and scheduling-layer algorithms on overall network throughput.

1PhD, Electrical Engineering and Computer Science, Massachusetts Institute of Technology



Providing Information Security with Quantum Physics—A Practical Engineering Perspective

Dr. P. Benjamin Dixon1
MIT Lincoln Laboratory

Quantum information technology enables promising advances in the area of communications security. A well-engineered application of quantum mechanics enables results that are unattainable with classical-only processing. The most well-known example is quantum key distribution (QKD), a family of protocols to distribute a secret key. With a carefully designed protocol, quantum mechanics bounds the information an eavesdropper could obtain without being detected.
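To make the QKD idea concrete, here is a toy sketch of the sifting step of a BB84-style protocol (ideal channel, no eavesdropper, no error reconciliation or privacy amplification; this is not the Laboratory's implementation):

```python
import random

def bb84_sift(n_bits, seed=7):
    """Toy BB84 sifting step (no eavesdropper, no channel noise).

    Alice encodes random bits in random bases; Bob measures in random
    bases.  After a public basis comparison, only positions where the
    bases matched are kept; those bits form the raw shared key.
    """
    rng = random.Random(seed)
    alice_bits = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    bob_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    # On an ideal channel, matched-basis measurements recover Alice's
    # bit exactly; mismatched positions are discarded.
    key_a = [bit for bit, ba, bb in
             zip(alice_bits, alice_bases, bob_bases) if ba == bb]
    key_b = list(key_a)  # ideal channel: Bob's sifted key is identical
    return key_a, key_b

key_a, key_b = bb84_sift(1000)
# Roughly half the positions survive sifting, and the keys agree.
```

In a real system, an eavesdropper's intercept-resend measurements would raise the error rate in the sifted key, which is precisely the detectable signature that the quantum-mechanical bound guarantees.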

A verifiable random number generator (vRNG) is another important quantum information technology. A secure source of random bits is an important input for nearly all cryptographic protocols. Classical RNGs, which are subject to silent failures, potentially create security vulnerabilities. A vRNG uses the observation of a quantum phenomenon—the presence of entanglement via a violation of Bell's inequalities—to certify the entropy of the vRNG output.

MIT Lincoln Laboratory has had great success developing practical communication system demonstrations. This seminar will explore the ongoing efforts at the Laboratory to engineer quantum communication systems, exploring both theoretical and experimental advances.

1PhD, Physics, University of Rochester



Real-Time Modeling of Wireless Networks Through Emulation

David P. Ward1
MIT Lincoln Laboratory

As wireless networks continue to evolve, with developments ranging from advanced physical layer technologies to new content-delivery approaches, understanding network behavior in varied conditions remains challenging. Interactions between layers in the protocol stack are often subtle but can have a significant effect on network operation and ultimately end-to-end application performance. Emulation is a method for demonstrating network behavior in real time by using software on commodity computing servers to model the wireless interfaces at every node and leveraging complete implementations of network protocols and applications. This seminar will explain how emulation compares to other performance prediction methods, how it can be used by researchers, and what considerations are necessary as the modeled network increases in scale or complexity.

1MS, Electrical and Computer Engineering, Georgia Institute of Technology



Research Challenges in Airborne Networks and Communications

Dr. Bow-Nan Cheng1
MIT Lincoln Laboratory

In recent years, with the emergence of ubiquitous communications and the increasing availability of unmanned aerial vehicles, there has been an increased interest in building and leveraging airborne networks to facilitate data dissemination and coordination. Although the commercial world has made significant progress in building and deploying ground wireless networks, several military needs are currently not being met by available and emerging commercial technologies, particularly for use in airborne applications.

This talk presents four major airborne networking domains, examines unique domain characteristics, and identifies interesting research challenges for improving performance and scalability in the context of military airborne systems. Topics covered include techniques for improving physical layer performance in the presence of strong interference, antenna capabilities and limitations resulting from the antenna's mounting location on the aircraft surface, and methods for integrating radio-link state information with the platform network routers for improved routing decisions. The talk also presents an overview of some work performed at MIT Lincoln Laboratory in the past few years to address the research challenges, and to identify additional open areas for research, in airborne networking.

1PhD, Computer Science, Rensselaer Polytechnic Institute



Robust Multi-user Wireless Communications

Dr. Thomas C. Royster IV1
MIT Lincoln Laboratory

Many of today's wireless communications systems continue to push toward higher data rates to support diverse user applications. Users that must communicate under adverse conditions, however, place a premium on robustness and security. While robust communication systems require system designs that fundamentally differ from those of current commercial systems, advances in academic research and technology are often applicable to both. This seminar discusses the opportunities and challenges of incorporating current and emerging wireless communications techniques and technologies, such as multi-user detection, beamforming, and high-speed signal processing algorithms, into the physical layers and medium-access control layers of government wireless communication systems. Trade-offs among data rate, robustness, delay, and complexity are described. Also discussed are ways in which continuing advances in digital processing enable fresh design approaches for robust systems.

1PhD, Electrical Engineering, Clemson University



Undersea Laser Communication—The Next Frontier

Dr. Andrew S. Fletcher1
MIT Lincoln Laboratory

Undersea communication is of critical importance to many research and military applications, including, for example, oceanographic data collection, pollution monitoring, offshore exploration, and tactical surveillance. Undersea communication has traditionally been accomplished with acoustic and low-frequency radio links. These technologies, however, limit the speed of the undersea platform, cap data rates at a few bits per second (low-frequency radio) or a few kilobits per second (acoustic), and radiate over a broad area. These factors lead to power inefficiencies, potential crosstalk, and multipath degradation. Undersea optical communication has the potential to mitigate these capability shortfalls, providing very-high-data-rate covert communication.

This seminar provides an overview of applications that might benefit from high-rate undersea communication and technologies, and it reports on new research into finding the ultimate performance limit of undersea optical communication. Using sophisticated physics models of laser and sunlight undersea propagation, Lincoln Laboratory researchers have found that gigabit-per-second links over appreciable ranges, in excess of 100 meters, are possible in much of the world's oceans. This capability promises to transform undersea communications. Finally, the seminar presents architectures based on currently realizable technology that can achieve the predicted ultimate performance limit.
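The flavor of such range estimates can be shown with a one-line Beer–Lambert model; the attenuation coefficients and receiver sensitivity below are assumed round numbers, and a real analysis must also account for geometric spreading, scattering, and ambient light.

```python
import math

def max_range_m(p_tx_w, sensitivity_w, attenuation_per_m):
    """Range at which received power decays to the receiver sensitivity
    under pure exponential (Beer-Lambert) attenuation:
    P_rx = P_tx * exp(-c * R)  =>  R = ln(P_tx / P_rx_min) / c
    """
    return math.log(p_tx_w / sensitivity_w) / attenuation_per_m

# Assumed round numbers: 1 W laser, 1 nW receiver sensitivity, and
# attenuation of 0.05 /m (clear ocean) versus 0.5 /m (turbid water):
r_clear = max_range_m(1.0, 1.0e-9, 0.05)
r_turbid = max_range_m(1.0, 1.0e-9, 0.5)
```

Even this cartoon shows why clear-water ranges well beyond 100 meters are plausible while turbid coastal water shortens the reach by an order of magnitude.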

1PhD, Electrical Engineering, Massachusetts Institute of Technology



Waveform Design for Airborne Networks

Dr. Frederick J. Block1
MIT Lincoln Laboratory

Airborne networks are an integral part of modern network-centric military operations. These networks operate in a unique environment that poses many research challenges. For example, nodes are much more mobile in airborne networks than in ground-based networks and are often separated by great distances that cause long delays and large propagation losses. Additionally, the altitude of an airborne receiver potentially places it within line of sight of many transmitters located over a wide area. Some of these transmitters may be part of the same network as the receiver yet still cause multiple-access interference because channel-access protocols are difficult to design for the airborne environment. Other transmitters outside the network may also cause interference. The waveform design must specifically account for this environment to achieve the required level of performance. Physical, link, and network layer techniques for providing high-rate, reliable airborne networks will be discussed.

1PhD, Electrical Engineering, Clemson University



Homeland Protection

Disease Modeling to Assess Outbreak Detection and Response

Dr. Diane C. Jamrog1 
MIT Lincoln Laboratory

Bioterrorism is a serious threat that has become widely recognized since the anthrax mailings of 2001. In response, one national research activity has been the development of biosensors and networks thereof. A driving factor behind biosensor development is the potential to provide early detection of a biological attack, thereby enabling timely treatment. This presentation introduces a disease progression and treatment model to quantify the potential benefit of early detection. To date, the model has been used to assess responses to inhalation anthrax and smallpox outbreaks.
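As a cartoon of how such a model quantifies the benefit of early detection, the sketch below compares fatalities for a hypothetical exposed cohort under early (biosensor) versus late (clinical) detection; every parameter is invented for illustration and does not reflect the model presented in the seminar.

```python
def deaths_after_detection(cohort, detect_day, incubation_days=4,
                           treat_efficacy=0.9, untreated_fatality=0.8):
    """Cartoon disease progression and treatment model (parameters invented).

    Everyone exposed becomes symptomatic after incubation_days.  In this
    toy model, treatment started before symptom onset cuts the fatality
    rate by treat_efficacy; treatment after onset does not help.
    """
    if detect_day < incubation_days:
        fatality = untreated_fatality * (1.0 - treat_efficacy)
    else:
        fatality = untreated_fatality
    return cohort * fatality

early = deaths_after_detection(10_000, detect_day=2)  # biosensor detection
late = deaths_after_detection(10_000, detect_day=6)   # clinical detection
```

A realistic model replaces the step function with distributions over incubation time and treatment effectiveness, which is what allows the expected benefit of a given detection delay to be quantified.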

1PhD, Computation and Applied Mathematics, Rice University



Optical Propagation and Technology

Mechanical Systems Engineering of Optical Sensors

Dr. Steven E. Forman1 
MIT Lincoln Laboratory

During the past 27 years, MIT Lincoln Laboratory has developed several different optical sensor experiments that have flown on airborne and space platforms. These sensors include the Space-Based Visible sensor, the Airborne Infrared Imager, and the Advanced Land Imager. Each represents a one-of-a-kind sensor fully engineered at Lincoln Laboratory. This talk summarizes several of the mechanical systems engineering areas and issues that arose during the design, analysis, fabrication, integration, and testing of these systems. Included are discussions of optical, optomechanical, structural, and thermal engineering; electronic packaging; mechanism design; focal plane packaging; control-system engineering; materials selection and testing; environmental testing; failure analysis; and computer-aided design and analysis tools.

1PhD, Mechanical Engineering, Harvard University



Radar and Signal Processing

Adaptive Array Detection

Dr. Christ D. Richmond1
MIT Lincoln Laboratory

Adaptive detection theory began with the development of radar in the early 1940s, mostly in classified circles. Surveillance radar systems strove for automatic detection of target echoes in additive clutter and noise of unknown or time-varying power. The goal was to optimize signal detectability while constraining the number of false alarms generated. Signal and noise integration (coherent and incoherent) resulting in a sufficient statistic to be compared to a specified threshold was the accepted approach to improving signal detectability. Noise increases and nonstationarity caused by jamming interference, together with variations in clutter power, however, prompted the use of cell-averaging constant false-alarm rate (CA-CFAR) processing, which essentially uses a local estimate of the noise power to normalize the detection statistic. As digital technology evolved and hardware design improved, the use of multisensor arrays that exploit the spatial dimension to coherently cancel jamming interference and clutter became an attractive option. From this multivariate framework emerged a class of adaptive array detection algorithms that represent multidimensional extensions of ideas closely related to classical CA-CFAR processing. This class of algorithms includes the adaptive matched filter (AMF), Kelly/Khatri's generalized likelihood ratio test (GLRT), Scharf's adaptive coherence estimator (ACE), and the 2D adaptive sidelobe blanker (ASB). This talk will review classic CA-CFAR processing as a backdrop to an extended discussion of the analysis, performance, and inherent properties of the more contemporary adaptive array detection approaches.
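The classical CA-CFAR idea reviewed in the talk can be sketched in a few lines: estimate the local noise power from training cells surrounding the cell under test (excluding guard cells) and declare a detection when the cell exceeds a scaled estimate. This is a minimal one-dimensional illustration with invented numbers, not a radar-grade implementation.

```python
def ca_cfar(power, guard=2, train=8, scale=4.0):
    """Cell-averaging CFAR sketch over a 1-D power profile.

    For each cell under test (CUT), average the `train` cells on each
    side beyond `guard` guard cells, then compare the CUT against
    scale * noise_estimate.  Returns indices of detected cells.
    """
    n = len(power)
    detections = []
    for i in range(n):
        cells = []
        for j in range(i - guard - train, i - guard):      # leading window
            if 0 <= j < n:
                cells.append(power[j])
        for j in range(i + guard + 1, i + guard + train + 1):  # trailing window
            if 0 <= j < n:
                cells.append(power[j])
        if not cells:
            continue
        noise_est = sum(cells) / len(cells)
        if power[i] > scale * noise_est:
            detections.append(i)
    return detections

# Flat unit-power noise with a strong target echo at cell 25:
profile = [1.0] * 50
profile[25] = 30.0
hits = ca_cfar(profile)
```

The adaptive array detectors discussed in the seminar generalize this normalization to the spatial dimension, replacing the scalar noise average with a sample covariance matrix estimated from training data.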

1PhD, Electrical Engineering, Massachusetts Institute of Technology



Adaptive Array Estimation

Dr. Christ D. Richmond1
MIT Lincoln Laboratory

Parameter estimation is a necessary step in most surveillance systems and typically follows detection processing. Estimation theory provides parameter bounds specifying the best achievable performance and suggests maximum-likelihood (ML) estimation as a viable strategy for algorithm development. Adaptive sensor arrays introduce the added complexity of bounding and assessing parameter estimation performance (i) in the presence of limiting interference whose statistics must be inferred from measured data and (ii) under uncertainty in the array manifold for the signal search space. This talk focuses on assessing the mean-squared-error (MSE) performance at low and high signal-to-noise ratio (SNR) of nonlinear ML estimation that (i) uses the sample covariance matrix as an estimate of the true noise covariance and (ii) has imperfect knowledge of the array manifold for the signal search space. The method of interval errors (MIE) is used to predict MSE performance and is shown to be remarkably accurate well below estimation threshold. SNR loss in estimation performance due to noise covariance estimation is quantified and is shown to be quite different from analogous losses obtained for detection. Lastly, a discussion of the asymptotic efficiency of ML estimation is also provided in the general context of misspecified models, the most general form of model mismatch.

1PhD, Electrical Engineering, Massachusetts Institute of Technology



A Wideband 6 GHz to 12 GHz Power Amplifier with Enhanced Efficiency

Dr. Nestor D. Lopez1
MIT Lincoln Laboratory

In the work discussed in this seminar, MIT Lincoln Laboratory uses optimal nonuniform transmission lines (ONUTLs) for the design of wideband single-transistor power amplifiers. Single-transistor, class-AB power amplifiers are the building blocks of most radio-frequency (RF) transmitters because these amplifiers exhibit acceptable linearity and efficiency; however, these amplifiers are typically narrowband.

RF transistors support ultrawideband performance, but bandwidth is limited by the matching networks used. The matching networks transform the system impedance (typically 50 Ω) to the impedances the transistor needs to see (target impedances) to deliver maximal performance, i.e., output power, gain, and efficiency. Matching networks are needed to individually match the source (input) and load (output) ports since these sets of target impedances are different. RF transistor target impedances can be small and complex, and they vary with frequency. Because these target impedances are also determined by the transistor's physical dimensions and its technology (GaN high-electron-mobility transistor [HEMT], Si laterally diffused MOSFET [LDMOS], or GaAs pseudomorphic HEMT [pHEMT]), each matching network is unique to a specific transistor.
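The impedance transformation a matching network performs can be sketched with standard ABCD-matrix cascades; this is textbook background, not the ONUTL synthesis algorithm itself:

```python
import numpy as np

def line_abcd(z0, theta):
    """ABCD matrix of a lossless transmission-line section with
    characteristic impedance z0 and electrical length theta (radians)."""
    return np.array([[np.cos(theta), 1j * z0 * np.sin(theta)],
                     [1j * np.sin(theta) / z0, np.cos(theta)]])

def input_impedance(sections, z_load):
    """Impedance seen looking into a cascade of (z0, theta) line
    sections terminated in z_load."""
    abcd = np.eye(2)
    for z0, theta in sections:
        abcd = abcd @ line_abcd(z0, theta)
    (a, b), (c, d) = abcd
    return (a * z_load + b) / (c * z_load + d)
```

A single quarter-wave section of characteristic impedance √(50 × 12.5) = 25 Ω transforms a 12.5 Ω load to 50 Ω at its design frequency; an ONUTL generalizes this idea by optimizing a continuous characteristic-impedance profile so the transformation holds over a wide band.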

These constraints make the design of RF transistor wideband matching networks a difficult task. This work uses an algorithm to modify the characteristic impedance profile of a nonuniform transmission line so that the error between the matching network's transformed impedances and a set of target impedances is reduced. The target impedances are obtained from the large-signal load-pull characterization of the RF transistor. The result of designing source and load ONUTL wideband matching networks is a class-AB power amplifier that exhibits wideband performance.

This work demonstrates a 6 GHz to 12 GHz power amplifier prototype implemented with a discrete 10 W GaN HEMT on SiC and wideband ONUTLs. The amplifier gain is 9 dB with a power-added efficiency of 40%.

1PhD, Electrical Engineering, University of Colorado at Boulder



Bioinspired Resource Management for Multiple-Sensor
Target Tracking Systems

Dr. Dana Sinno1 and Dr. Hendrick C. Lambert2
MIT Lincoln Laboratory

We present an algorithm, inspired by self-organization and stigmergy observed in biological swarms, for managing multiple sensors tracking large numbers of targets. We have devised a decentralized architecture wherein autonomous sensors manage their own data collection resources and task themselves. Sensors cannot communicate with each other directly; however, a global track file, which is continuously broadcast, allows the sensors to infer their contributions to the global estimation of target states. Sensors can transmit their data (either as raw measurements or some compressed format) only to a central processor where their data are combined to update the global track file. We outline information-theoretic rules for the general multiple-sensor Bayesian target tracking problem and provide specific formulas for problems dominated by additive white Gaussian noise. Using Cramér-Rao lower bounds as surrogates for error covariances and numerical scenarios involving ballistic targets, we illustrate that the bioinspired algorithm is highly scalable and performs very well for large numbers of targets.
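A scalar sketch of the information-based self-tasking idea follows (illustrative only; the formulation above uses full Bayesian track covariances with Cramér-Rao lower bounds as surrogates):

```python
import numpy as np

def information_gain(prior_var, noise_var):
    """Information gain (in nats) from fusing one scalar measurement of
    variance noise_var into a track of prior variance prior_var:
    0.5 * log(prior / posterior)."""
    posterior_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
    return 0.5 * np.log(prior_var / posterior_var)

def self_task(track_var, sensor_noise_vars):
    """Each sensor evaluates its own marginal gain against the broadcast
    global track file; the sensor with the largest gain tasks itself."""
    gains = [information_gain(track_var, r) for r in sensor_noise_vars]
    return int(np.argmax(gains))
```

Because each sensor needs only the broadcast track file and its own noise model, no sensor-to-sensor communication is required, which is what makes the architecture decentralized and scalable.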

1PhD, Electrical Engineering, Arizona State University
2PhD, Applied Physics, University of California, San Diego



Multilithic Phased Array Architectures for Next-Generation Radar

Dr. Sean M. Duffy1
MIT Lincoln Laboratory

Phased array antennas provide significant operational capabilities beyond those achievable with dish antennas. Civilian agencies, such as the Federal Aviation Administration and Department of Homeland Security, are investigating the feasibility of using phased array radars to satisfy their next-generation needs. In particular, the Multifunction Phased Array Radar (MPAR) effort aims to eliminate nine different dish-based radars and replace them with radars employing a single, low-cost MPAR architecture. Also, unmanned air system (UAS) operation within the National Airspace System (NAS) requires an airborne sense-and-avoid (ABSAA) capability ideally satisfied by a small low-cost phased array.

Two example phased array panels are discussed in this talk. The first is the MPAR panel—a scalable, low-cost, highly capable S-band panel. This panel provides the functionality to perform the missions of air surveillance and weather surveillance for the NAS. The second example is the ABSAA phased array, a low-cost Ku-band panel for UAS collision avoidance radar.

The approach used in these phased arrays eliminates the drivers that lead to expensive systems. For example, the high-power amplifier is fabricated in a high-volume foundry and mounted in a surface mount package, thereby allowing industry-standard low-cost assembly processes. Also, all our integrated circuits contain multiple functions to save on semiconductor space and board-level complexity. Finally, the systems' multilayered printed circuit board assemblies combine the antenna, microwave circuitry, and integrated circuits, eliminating the need for hundreds of connectors between the subsystems; this streamlined design enhances overall reliability and lowers manufacturing costs.

1PhD, Electrical Engineering, University of Massachusetts–Amherst



Parameter Bounds Under Misspecified Models

Dr. Christ D. Richmond1
MIT Lincoln Laboratory

Parameter bounds are traditionally derived assuming perfect knowledge of data distributions. When the assumed probability distribution for the measured data differs from the true distribution, the model is said to be misspecified; mismatch at some level is inevitable in practice. Thus, several authors have studied the impact of model misspecification on parameter estimation. Most notably, Peter Huber explored in detail the performance of maximum-likelihood (ML) estimation under a very general form of misspecification; he established consistency and asymptotic normality, and derived the asymptotic covariance of the ML estimate, often referred to as the celebrated "sandwich covariance."
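Written out in standard notation (a sketch; θ0 denotes the pseudo-true parameter minimizing the Kullback-Leibler divergence between the true density p and the assumed model f), Huber's result is

```latex
A(\theta_0) = \mathrm{E}_p\!\left[\nabla_\theta^{2}\ln f(x;\theta_0)\right],
\qquad
B(\theta_0) = \mathrm{E}_p\!\left[\nabla_\theta \ln f(x;\theta_0)\,
              \nabla_\theta^{T}\ln f(x;\theta_0)\right],

C(\theta_0) = A^{-1}(\theta_0)\,B(\theta_0)\,A^{-1}(\theta_0).
```

When the model is correctly specified, the information-matrix equality A = −B holds and the sandwich C collapses to the inverse Fisher information, i.e., the classical CRB.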

The goal of this talk is to consider the class of non-Bayesian parameter bounds emerging from the covariance inequality under the assumption of model misspecification. Casting the bound problem as one of constrained minimization is likewise considered. Primary attention is given to the Cramér-Rao bound (CRB). It is shown that Huber's sandwich covariance is the misspecified CRB and provides the greatest lower bound (tightest) under ML constraints. Consideration of the standard circular complex Gaussian distribution ubiquitous in signal processing yields a generalization of the Slepian-Bangs formula under misspecification. This formula, of course, reduces to the usual one when the assumed distribution is in fact the correct one. The framework is outlined for consideration of the Barankin/Hammersley-Chapman-Robbins, Bhattacharyya, and Bobrovsky-Mayer-Wolf-Zakai bounds under misspecification.

1PhD, Electrical Engineering, Massachusetts Institute of Technology



Polynomial Rooting Techniques for Adaptive Array Direction Finding

Dr. Gary F. Hatke1 
MIT Lincoln Laboratory

Array processing has many applications in modern communications, radar, and sonar systems. Array processing is used when a signal in space, be it electromagnetic or acoustic, has some spatial coherence properties that can be exploited (such as far-field plane wave properties). The array can be used to sense the orientation of the plane wave and thus deduce the angular direction to the source. Adaptive array processing is used when there exists an environment of many signals from unknown directions as well as noise with unknown spatial distribution. Under these circumstances, classical Fourier analysis of the spatial correlations from an array data snapshot (the data seen at one instant in time) is insufficient to localize the signal sources.

In estimating the signal directions, most adaptive algorithms require computing an optimization metric over all possible source directions and searching for a maximum. When the array is multidimensional (e.g., planar), this search can become computationally expensive, as the source direction parameters are now also multidimensional. In the special case of one-dimensional (line) arrays, this search procedure can be replaced by solving a polynomial equation, where the roots of the polynomial correspond to estimates of the signal directions. This technique had not been extended to multidimensional arrays because these arrays naturally generated a polynomial in multiple variables, which does not have discrete roots.
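The one-dimensional rooting idea can be sketched for a uniform line array in the style of root-MUSIC (an illustrative sketch; the array size, spacing, and source angles are assumptions, not values from the seminar):

```python
import numpy as np

def root_music_doa(snapshots, num_sources, d=0.5):
    """Estimate directions of arrival for a uniform line array by
    rooting the noise-subspace polynomial instead of searching over
    angle. d is the element spacing in wavelengths."""
    n = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    w, v = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = v[:, : n - num_sources]           # noise-subspace eigenvectors
    C = En @ En.conj().T
    # Polynomial coefficients are the diagonal sums of C, highest power first
    coeffs = np.array([np.trace(C, offset=k) for k in range(n - 1, -n, -1)])
    roots = np.roots(coeffs)
    # Keep the roots inside the unit circle that lie closest to it
    roots = roots[np.abs(roots) < 1.0]
    roots = roots[np.argsort(1.0 - np.abs(roots))[:num_sources]]
    # Root phase maps to electrical angle: phase = 2*pi*d*sin(theta)
    return np.degrees(np.arcsin(np.angle(roots) / (2 * np.pi * d)))
```

The signal directions appear as polynomial roots near the unit circle, replacing the angle search with a single rooting step; the seminar's contribution is extending this idea to multidimensional arrays via simultaneous solutions of multiple polynomials.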

This seminar introduces a method for generalizing the rooting technique to multidimensional arrays by generating multiple optimization polynomials corresponding to the source estimation problem and finding a set of simultaneous solutions to these equations, which contain source location information. It is shown that the variance of this new class of estimators is equal to that of the search techniques they supplant. In addition, for sources spaced more closely than a Rayleigh beamwidth, the resolution properties of the new polynomial algorithms are shown to be better than those of the search technique algorithms.

1PhD, Electrical Engineering, Princeton University



Radar Signal Distortion and Compensation with Transionospheric Propagation Paths

Dr. Scott D. Coutts1
MIT Lincoln Laboratory

Electromagnetic signals propagating through the atmosphere and ionosphere are distorted and refracted by these propagation media. The effects are particularly pronounced for lower-frequency signals propagating through the ionosphere. In the extreme case, frequencies in the high-frequency (HF) band can be severely refracted by the ionosphere to the point that they are reflected back toward the ground. This effect is exploited by over-the-horizon radars and HF communication systems to achieve very-long-range, over-the-horizon performance. For the general radar case, the measurements of range, Doppler shift, and elevation and azimuth angles are all corrupted from their free-space values with time-varying biases.
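The scale of the range bias can be seen from the standard first-order ionospheric group-delay approximation (a textbook formula, not the Laboratory's three-dimensional ray-tracing method; the TEC value below is an illustrative assumption):

```python
def ionospheric_range_error(freq_hz, tec):
    """First-order ionospheric group-delay range error in meters for a
    one-way path: delta_R ~ 40.3 * TEC / f**2, with the total electron
    content TEC in electrons per square meter."""
    return 40.3 * tec / freq_hz ** 2
```

For a moderate TEC of 1e17 electrons/m² (10 TEC units), the one-way range error is tens of meters at UHF but only centimeters at X-band, which is why the compensation matters most at the lower radar frequencies.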

To provide accurate radar parameter estimates in the presence of these errors, a three-dimensional method to compensate for the ionosphere refraction has been developed at Lincoln Laboratory. In this seminar, two examples of the method’s use are provided: (1) the radar returns from a known object are used to specify an unknown ionosphere electron density, and (2) an unknown satellite state vector is estimated with and without the ionosphere compensation so that the accuracy improvement can be quantified. Magneto-ionic ray tracing is used to generate the three-dimensional propagation model and propagation correction tables. A maximum-likelihood satellite-ephemeris estimator is designed and demonstrated using corrupted radar data. The technique is demonstrated using “real data” examples with very encouraging results and is applicable at radar frequencies ranging from high to ultrahigh.

1PhD, Electrical Engineering, Northeastern University



Synthetic Aperture Radar

Dr. Gerald R. Benitz1 
MIT Lincoln Laboratory

MIT Lincoln Laboratory is investigating the application of phased-array technology to improve the state of the art in radar surveillance. Synthetic aperture radar (SAR) imaging is one mode that can benefit from a multiple-phase-center antenna. The potential benefits are protection against interference, improved area rate and resolution, and multiple simultaneous modes of operation.

This seminar begins with an overview of SAR, giving the basics of resolution, collection modes, and image formation. Several imaging examples are provided. Results from the Lincoln Multimission ISR Testbed (LiMIT) X-band airborne radar are presented. LiMIT employs an eight-channel phased-array antenna and records 180 MHz bandwidth from each channel simultaneously. One result employs adaptive processing to reject wideband interference, demonstrating recovery of a corrupted SAR image. Another result employs multiple simultaneous beams to increase the area of the image beyond the conventional limitation that is due to the pulse repetition frequency. Areas that are Doppler ambiguous can be disambiguated by using the phased-array antenna.
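The resolution basics covered in the overview reduce to two familiar relations; as a sketch (the 180 MHz bandwidth is LiMIT's, from above; the wavelength and integration angle are illustrative assumptions):

```python
C = 299792458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Slant-range resolution of a pulse-compressed waveform: c / (2B)."""
    return C / (2.0 * bandwidth_hz)

def cross_range_resolution(wavelength_m, aperture_angle_rad):
    """SAR cross-range resolution: lambda / (2 * integration angle)."""
    return wavelength_m / (2.0 * aperture_angle_rad)
```

At 180 MHz bandwidth the range resolution is about 0.83 m, and an X-band system (λ ≈ 3 cm) integrating over 0.03 rad of aspect achieves a comparable 0.5 m cross-range resolution; the synthetic aperture supplies the angular extent a real antenna cannot.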

1PhD, Electrical Engineering, University of Wisconsin–Madison



Space Control Technology

New Techniques for High-Resolution Atmospheric Sounding

Dr. William J. Blackwell1
MIT Lincoln Laboratory

Modern spaceborne atmospheric sounders consist of passive spectrometers that measure spectral radiance intensity in microwave (approximately 1 cm to 1 mm wavelength), millimeter wave (approximately 1 mm to 300 µm), and thermal infrared (approximately 3 µm to 16 µm) bands. In the last decade, advanced microwave sounders (AMSU and ATMS) and hyperspectral infrared sounders (AIRS, IASI, and CrIS) have substantially improved forecast skill, provided new products relevant to a wide range of science application areas, and contributed to our ability to characterize the Earth's climate system.

This presentation will focus on two areas of current research: (1) new algorithmic approaches to geophysical parameter retrieval that fully exploit the spectral richness of the microwave and hyperspectral microwave observations and (2) next-generation sensor systems that build upon recent successes to provide improved spectral coverage (e.g., hyperspectral microwave systems) and improved spatial revisit (e.g., small satellite constellation architectures) for observations of dynamic meteorology and severe weather. Specific topics to be addressed include a neural network algorithm for temperature and moisture profile retrieval that is being used as part of the AIRS Science Team Version 6 algorithm, recent technology development funded by NASA to demonstrate a hyperspectral microwave receiver subsystem, and system performance analyses of nanosatellite constellation architectures, including the MiRaTA 3U atmospheric sounding CubeSat to be launched by NASA in 2016 to demonstrate core constellation elements.

AMSU – Advanced Microwave Sounding Unit
ATMS – Advanced Technology Microwave Sounder
AIRS – Atmospheric Infrared Sounder
IASI – Infrared Atmospheric Sounding Interferometer
CrIS – Cross-track Infrared Sounder
MiRaTA – Microwave Radiometer Technology Acceleration

1PhD, Electrical Engineering, Massachusetts Institute of Technology



Systems and Architectures

Choices, Choices, Choices (Decisions, Decisions, Decisions)

Dr. Robert T. Shin1
MIT Lincoln Laboratory

As you plan a career after graduating from a university or college, you are faced with many choices and decisions, not just now but throughout many of your most productive years. While there generally is not a right or wrong decision, it is helpful to think about and make these decisions in a more systematic way. This seminar looks at perspectives on how one might think about making a choice and making an impact, especially as an architect of future advanced systems. Also, my lessons learned along the way, in architectural thinking and in management, are presented in the hope that you might find them useful as you advance in your careers. Finally, a short overview of MIT Lincoln Laboratory, a federally funded research and development center, is presented as a case study on how one can leverage such an organization to make an impact.

1PhD, Electrical Engineering, Massachusetts Institute of Technology



Solid State Devices, Materials, and Processes

Dynamic Photoacoustic Spectroscopy for Trace Gas Detection

Dr. Charles M. Wynn1, Dr. Michelle L. Clark2, and Dr. Roderick R. Kunz3
MIT Lincoln Laboratory

Dynamic photoacoustic spectroscopy (DPAS) is a trace-gas sensing technique recently developed at MIT Lincoln Laboratory. It is a novel laser-based means of remotely sensing extremely low concentrations of gases.

The ability to remotely detect trace gases is of great interest for many reasons. It has the potential to enable many important capabilities, including efficient monitoring of environmental pollutants, safe detection of threats from chemical agents or explosives, and monitoring of illegal activities (e.g., drug manufacturing) via effluent detection. In many cases, the relevant vapor concentrations are quite low; thus, a highly sensitive technique is required. Until recently, no technique had demonstrated both high sensitivity and remote operation. DPAS has now [1, 2] demonstrated both the high sensitivity and standoff capability necessary to significantly impact several important missions.

DPAS is a variant of the well-known technique of photoacoustic spectroscopy (PAS). PAS is a laser-based technique that detects gases by generating acoustic signals via a laser tuned to different absorption features of the gas. What separates DPAS from PAS is that the DPAS laser beam is swept through a gas plume at the speed of sound. The resulting coherent addition of acoustic waves amplifies the acoustic signal: much as a supersonic jet generates a shock wave, the sweeping beam produces an acoustic wave with significantly enhanced amplitude compared to the very weak conventional photoacoustic signal. In contrast, PAS generally requires a closed resonant chamber for amplification (inherently not a standoff configuration). Using DPAS, we have generated and detected acoustic signals as high as 83 dB (easily audible to the unaided human ear) from trace gases.

[1] C.M. Wynn, S. Palmacci, M.L. Clark, and R.R. Kunz, “Dynamic Photoacoustic Spectroscopy for Trace Gas Detection,” Applied Physics Letters, vol. 101, 2012.
[2] C.M. Wynn, S. Palmacci, M.L. Clark, and R.R. Kunz, “High-Sensitivity Detection of Trace Gases Using Dynamic Photoacoustic Spectroscopy,” Optical Engineering, vol. 53, no. 2, 2014.

1PhD, Physics, Clark University
2PhD, Chemistry, Massachusetts Institute of Technology
3PhD, Analytical Chemistry, University of North Carolina at Chapel Hill



Geiger-Mode Avalanche Photodiode Arrays for Imaging and Sensing

Dr. Brian F. Aull1
MIT Lincoln Laboratory

This seminar discusses the development of arrays of silicon avalanche photodiodes integrated with digital complementary metal-oxide semiconductor (CMOS) circuits to make focal planes with single-photon sensitivity. The avalanche photodiodes are operated in Geiger mode: they are biased above the avalanche breakdown voltage so that the detection of a single photon leads to a discharge that can directly trigger a digital circuit. The CMOS circuits to which the photodiodes are connected can either time stamp or count the resulting detection events. Applications include three-dimensional imaging using laser radar, wavefront sensing for adaptive optics, and optical communications.

1PhD, Electrical Engineering, Massachusetts Institute of Technology



Hardware Phenomenological Effects on Co-channel Full-Duplex
MIMO Relay Performance

Dr. Timothy M. Hancock1
MIT Lincoln Laboratory

This presentation will discuss the performance of co-channel full-duplex multiple-input multiple-output (MIMO) nodes in the context of models for realistic hardware characteristics. Here, co-channel full-duplex relay indicates a node that transmits and receives simultaneously in the same frequency band. It is assumed that transmit and receive phase centers are physically distinct, enabling adaptive spatial transmit and receive processing to mitigate self-interference. The use of MIMO indicates a self-interference channel with spatially diverse inputs and outputs, although multiple modes are not explored in this analysis. Rather, the focus will be on rank-1 transmit covariance matrices. In practice, the limiting issue for co-channel full-duplex nodes is the ability to mitigate self-interference. While theoretically a system with infinite dynamic range and exact channel estimation can mitigate the self-interference perfectly, in practice, transmitter and receiver dynamic range, nonlinearities, and noise, as well as channel dynamics, limit the practical performance. This presentation will investigate the self-interference mitigation limitations in the context of eigenvalue spread of spatial transmit and receive covariance matrices caused by realistic hardware models.

1PhD, Electrical Engineering, University of Michigan



Microfluidics at MIT Lincoln Laboratory

Prof. Todd A. Thorsen1, Dr. Shaun R. Berry2, and Dr. Jakub Kedzierski3
MIT Lincoln Laboratory

At MIT Lincoln Laboratory, we are engineering general-purpose tools for microfluidic platforms. Cross-disciplinary teams, consisting of engineers, programmers, and biologists, are designing and developing microfluidic tools for a broad range of applications. Our vision is to apply microfabrication techniques, traditionally used in microelectromechanical system (MEMS) and integrated circuit manufacturing, to develop microfluidic components and systems that are reconfigurable, scalable, and programmable through a software interface. End users of these microfluidic tools will be able to orchestrate complex and adaptive procedures that are beyond the capabilities of today’s hardware. This talk will provide a broad overview of the diverse microfluidic programs at Lincoln Laboratory, highlighting recent work, including ultra-low-power electrowetting-based pumps, integrated "lab-on-a-chip" systems for biological exploration, actively configurable pixel-scale liquid microlenses and prisms, and microhydraulic actuators.

1PhD, Biochemistry and Molecular Biophysics, California Institute of Technology
2PhD, Mechanical Engineering, Tufts University
3PhD, Electrical Engineering, University of California–Berkeley



Optical Sampling for High-Speed, High-Resolution
Analog-to-Digital Conversion

Dr. Paul W. Juodawlkis1 and Dr. Jonathan C. Twichell2
MIT Lincoln Laboratory

The performance of digital receivers used in modern radar, communication, and surveillance systems is often limited by the performance of the analog-to-digital converter (ADC) used to digitize the received signal. Optically sampled ADCs, which combine optical sampling with electronic quantization, have been demonstrated to extend the performance of electronic ADCs. The primary advantages of using optics to perform the sampling function include (1) the timing jitter of modern mode-locked lasers is more than an order of magnitude smaller than that of electronic sampling circuitry, (2) the low dispersion of optical components allows picosecond sampling pulses to be used to attain wide analog bandwidth, and (3) demultiplexing to arrays of time-interleaved electronic converters can be performed in the optical domain rather than in the electrical domain with no signal bandwidth, nonlinearity, or memory effect constraints.
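The advantage of low-jitter optical sampling can be quantified with the standard aperture-jitter SNR bound (a textbook relation, not a result from this work; the jitter values below are illustrative assumptions):

```python
import math

def jitter_limited_snr_db(f_signal_hz, jitter_rms_s):
    """Aperture-jitter-limited SNR when sampling a full-scale sinusoid:
    SNR = -20 * log10(2 * pi * f * sigma_t)."""
    return -20.0 * math.log10(2.0 * math.pi * f_signal_hz * jitter_rms_s)

def effective_bits(snr_db):
    """Convert SNR in dB to effective number of bits: (SNR - 1.76) / 6.02."""
    return (snr_db - 1.76) / 6.02
```

Every factor-of-10 reduction in sampling jitter buys 20 dB of jitter-limited SNR, i.e., about 3.3 effective bits, which is why replacing picosecond-class electronic sampling with femtosecond-class mode-locked-laser sampling pays off directly at X-band carrier frequencies.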

MIT Lincoln Laboratory's work in this area has focused on the development of a linear sampling technique referred to as phase-encoded optical sampling. The technique uses a dual-output Mach-Zehnder electro-optic modulator as a sampling transducer to achieve both high linearity and 60 dB suppression of laser amplitude noise. Two-tone tests have been used to demonstrate an intermodulation-free dynamic range of 90 dB. The Laboratory also used optical sampling to directly downsample frequency-modulated chirp signals having 1 GHz bandwidth on an X-band (10 GHz) microwave carrier. The bandwidth of the technique is extended by optically distributing the post-sampling pulses to an array of time-interleaved electronic quantizers. Using high-extinction 1-to-8 LiNbO3 optical time-division demultiplexers to perform the optical distribution, Lincoln Laboratory has demonstrated a 500 MS/s ADC having 10 effective bits of resolution and a spur-free dynamic range in excess of 70 dB.

1PhD, Electrical Engineering, Georgia Institute of Technology
2PhD, Nuclear Engineering, University of Wisconsin–Madison



Quantum Information Science with Superconducting Artificial Atoms

Dr. William D. Oliver1
MIT Lincoln Laboratory
& the Research Laboratory of Electronics

Superconducting qubits are artificial atoms assembled from electrical circuit elements. When cooled to cryogenic temperatures, these circuits exhibit quantized energy levels. Transitions between levels are induced by applying pulsed microwave electromagnetic radiation to the circuit, revealing quantum coherent phenomena analogous to (and in certain cases beyond) those observed with coherent atomic systems.

This talk provides an overview of quantum information science and superconducting artificial atoms, including several demonstrations of quantum coherence using these circuits: Landau-Zener-Stückelberg oscillations [1], microwave-induced qubit cooling to temperatures less than 3 mK (colder than the refrigerator) [2], and a new broadband spectroscopy technique called amplitude spectroscopy [3]. We then discuss in detail a highly coherent aluminum qubit (T1 = 12 µs, T2Echo = 23 µs, fidelity = 99.75%) with which we demonstrated noise spectroscopy using nuclear magnetic resonance (NMR)-inspired control sequences comprising hundreds of pulses [4, 5].

These experiments exhibit a remarkable agreement with theory and are extensible to other solid-state qubit modalities. In addition to fundamental studies of quantum coherence in solid-state systems, we anticipate these devices and techniques will advance qubit control and state-preparation methods for quantum information science and technology applications.

[1] W.D. Oliver et al., "Mach-Zehnder Interferometry in a Strongly Driven Superconducting Qubit," Science, vol. 310, no. 5754, pp. 1653–1657, 2005.
[2] S.O. Valenzuela et al., "Microwave-Induced Cooling of a Superconducting Qubit," Science, vol. 314, no. 5805, pp. 1589–1592, 2006.
[3] D.M. Berns et al., "Amplitude Spectroscopy of a Solid-State Artificial Atom," Nature, vol. 455, pp. 51–57, 2008.
[4] J. Bylander et al., "Noise Spectroscopy Through Dynamical Decoupling with a Superconducting Flux Qubit," Nature Physics, vol. 7, pp. 565–570, 2011.
[5] F. Yan et al., "Rotating-Frame Relaxation as a Noise Spectrum Analyzer of a Superconducting Qubit Undergoing Driven Evolution," accepted for publication in Nature Communications, 2013.

1PhD, Electrical Engineering, Stanford University



Slab-Coupled Optical Waveguide Devices and Their Applications

Dr. Paul W. Juodawlkis1, Dr. Joseph P. Donnelly2,
Dr. Gary M. Smith3, and Dr. George W. Turner4
MIT Lincoln Laboratory

For the past decade, MIT Lincoln Laboratory has been developing new classes of high-power semiconductor optoelectronic emitters and detectors based on the slab-coupled optical waveguide (SCOW) concept. The key characteristics of the SCOW design include (1) the use of a planar slab waveguide to filter the higher-order transverse modes from a large rib waveguide, (2) low overlap between the optical mode and the active layers, and (3) low excess optical loss. These characteristics enable waveguide devices with a large (> 5 × 5 μm) symmetric fundamental mode and long length (~1 cm). These large dimensions, relative to conventional waveguide devices, allow efficient coupling to optical fibers and external optical cavities, and provide reduced electrical and thermal resistances for improved heat dissipation.

This seminar will review the SCOW operating principles and describe applications of the SCOW technology, including Watt-class semiconductor SCOW lasers (SCOWLs) and amplifiers (SCOWAs), monolithic and ring-cavity mode-locked lasers, single-frequency external cavity lasers, and high-current waveguide photodiodes. The SCOW concept has been demonstrated in a variety of material systems at wavelengths including 915, 960–980, 1040, 1300, 1550, and 2100 nm. In addition to single emitters, higher brightness has been obtained by combining arrays of SCOWLs and SCOWAs using wavelength beam-combining and coherent combining techniques. These beam-combined SCOW architectures offer the potential of kilowatt-class, high-efficiency, electrically pumped optical sources.

1PhD, Electrical Engineering, Georgia Institute of Technology
2PhD, Electrical Engineering, Carnegie Mellon University
3PhD, Electrical Engineering, University of Illinois at Urbana-Champaign
4PhD, Electrical Engineering, Johns Hopkins University



Subthreshold Design of FPGAs for Minimum Energy Operation

Dr. Peter J. Grossmann1
MIT Lincoln Laboratory

Embedded systems continue to become smaller, demand greater compute capability, and target deployment in more energy-starved environments. System power budgets of less than 1 mW are increasingly common, while standby power is brought as close to zero as possible. While field-programmable gate arrays (FPGAs) have historically been used as compute engines in low-power systems, they have not kept pace with application-specific integrated circuits (ASICs) and microprocessors in meeting the needs of these ultra-low-power systems. Research in both ASICs and microprocessors has extended voltage scaling into the subthreshold region of transistor operation, sacrificing performance in exchange for dramatic power savings. For some ultra-low-power systems such as wireless sensor networks and implantable biomedical devices, performing a computation with minimum energy consumption rather than within a certain time frame is the goal. It has been shown that to minimize energy for ASICs and microprocessors, subthreshold operation is typically required. For FPGAs, the question remains largely unexplored—the first subthreshold FPGA has only recently been fabricated, and minimum energy operation of FPGAs has not been thoroughly studied.
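The minimum-energy trade can be sketched with a toy per-cycle energy model (all device parameters below are illustrative assumptions, not measured FPGA data): dynamic energy shrinks as CV² while the clock period, and hence leakage energy per cycle, grows exponentially once the supply drops below threshold, producing an interior minimum.

```python
import math

def energy_per_cycle(vdd, c_eff=1e-12, i_leak=1e-5, vth=0.3, n_vt=1.5 * 0.026):
    """Toy per-cycle energy model: dynamic C*V^2 plus leakage power
    integrated over a clock period whose delay grows exponentially as
    Vdd drops below the threshold voltage vth."""
    e_dynamic = c_eff * vdd ** 2
    delay = 1e-9 * math.exp(max(vth - vdd, 0.0) / n_vt)  # subthreshold slowdown
    e_leakage = i_leak * vdd * delay
    return e_dynamic + e_leakage

# Sweep the supply: with these parameters the minimum-energy point is an
# interior voltage, below which leakage energy overtakes the CV^2 savings.
vdds = [0.15 + 0.01 * i for i in range(86)]  # 0.15 V ... 1.00 V
v_min = min(vdds, key=energy_per_cycle)
```

Where the minimum lands depends on activity factor, leakage, and circuit topology, which is exactly the sensitivity the simulation flow described below was built to study.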

This research presents multiple steps forward in the design and analysis of FPGAs targeting minimum energy operation. A fabricated FPGA test chip capable of single-supply subthreshold operation is presented, with measurement results demonstrating FPGA programming and operation as low as 260 mV. The capability to minimize energy per clock cycle at subthreshold supply voltages for a high-activity-factor test case is also shown, indicating that the flexible nature of FPGAs does not inherently prevent their energy minimum from occurring below threshold. A simulation flow for performing prefabrication chip-level minimum energy analysis for FPGAs has also been developed in this work. By combining industry-standard integrated circuit design verification software with academic FPGA software and custom scripts, the minimum energy point sensitivity of an FPGA to its programming was investigated. The FPGA was programmed with 21 different IEEE International Symposium on Circuits and Systems (ISCAS) '85 benchmarks, and a minimum energy supply voltage was estimated for each with a nominal input activity factor. The benchmarks had minimum energy points ranging from 0.42 V to 0.54 V, slightly above threshold. The minimum energy point was not a strong function of benchmark circuit size or input count, suggesting that the topology of the benchmark circuit influenced the FPGA minimum energy point.

1PhD, Computer Engineering, Northeastern University



Three-Dimensional Integration Technology for
Advanced Focal Planes and Integrated Circuits

Donna-Ruth Yost1 and Dr. Chenson Chen2
MIT Lincoln Laboratory

Over the last decade, MIT Lincoln Laboratory has developed a three-dimensional (3D) circuit integration technology that exploits the advantages of silicon-on-insulator technology to enable wafer-level stacking and micrometer-scale electrical interconnection of fully fabricated circuit wafers [1].

Advanced focal-plane arrays have been the first applications to exploit the benefits of this 3D integration technology because the massively parallel information flow present in two-dimensional imaging arrays maps very nicely into a 3D computational structure as information flows from circuit tier to circuit tier in the z-direction. To date, the Laboratory's 3D integration technology has been used to fabricate four different focal planes, including a two-tier 64 × 64 imager with fully parallel per-pixel analog-to-digital (A/D) conversion [2]; a three-tier 640 × 480 imager consisting of an imaging tier, an A/D conversion tier, and a digital signal processing tier; two-tier 1024 × 1024 pixel, four-side-abuttable imaging modules for tiling large mosaic focal planes [3, 4]; and a three-tier Geiger-mode avalanche photodiode (APD) 3D LIDAR array, using a 30-volt avalanche-photodiode tier, a 3.3-volt complementary metal-oxide semiconductor (CMOS) tier, and a 1.5-volt CMOS tier [5].

Recently, the 3D integration technology has been made available to the circuit-design research community through multiproject fabrication runs sponsored by the Defense Advanced Research Projects Agency. Three different multiproject runs have been completed and included over 100 different circuit designs from 40 different research groups. Three-dimensional circuit concepts explored in these runs included stacked memories, field-programmable gate arrays, and mixed-signal and RF circuits. We have developed an understanding of heterogeneous 3D integration issues by successfully demonstrating 3D integration of Si CMOS readout integrated circuits (ROICs) to InGaAs photodiode wafers [6], and an understanding of mixed-fabrication-facility issues by 3D integrating Si CMOS ROICs with externally fabricated technologies. This seminar will discuss the enabling technologies required for this approach to 3D integration, circuits demonstrated by this technology, and current 3D technology programs at Lincoln Laboratory.

[1] J.A. Burns, et al., "A Wafer-Scale 3-D Circuit Integration Technology," IEEE Transactions on Electron Devices, vol. 53, no. 10, pp. 2507–2516, October 2006.
[2] J.A. Burns, et al., "Three-dimensional Integrated Circuits for Low Power, High Bandwidth Systems on a Chip," 2001 ISSCC International Solid-State Circuits Conference, Digest of Technical Papers, vol. 44, pp. 268–269, February 2001.
[3] V. Suntharalingam, et al., "Megapixel CMOS Image Sensor Fabricated in Three-Dimensional Integrated Circuit Technology," 2005 ISSCC International Solid-State Circuits Conference, Digest of Technical Papers, vol. 48, pp. 356–357, February 2005.
[4] V. Suntharalingam, et al., "A Four-Side Tileable, Back Illuminated, 3D-Integrated Megapixel CMOS Image Sensor," IEEE 2009 ISSCC International Solid-State Circuits Conference, Digest of Technical Papers, pp. 38–39, February 2009.
[5] B. Aull, et al., "Laser Radar Imager Based on 3D Integration of Geiger-Mode Avalanche Photodiodes with Two SOI Timing Circuit Layers," 2006 ISSCC International Solid-State Circuits Conference, Digest of Technical Papers, vol. 49, pp. 304–305, February 2006.
[6] C.L. Chen, et al., "Wafer-Scale 3D Integration of InGaAs Image Sensors with Si Readout Circuits," IEEE International Conference on 3D System Integration, San Francisco, 28–30 Sept. 2009 (Best Paper Award).

1BS, Materials Science and Engineering, Cornell University
2PhD, Physics, University of California, Berkeley

top


Toward Large-Scale Trapped-Ion Quantum Processing

Dr. John Chiaverini1
MIT Lincoln Laboratory

Quantum computers have the potential to deliver a profound computational advantage when applied to many important problems because they manipulate information in a fundamentally different way from that of current (classical) hardware. Individual atomic ions, held and manipulated using electromagnetic fields, constitute a leading candidate system to realize such a processor because of their long coherence times and uniform physical properties. Accomplishments at the few-qubit level include high-fidelity demonstrations of basic quantum algorithms, but a clear path to a large-scale processor is not fully defined. This presentation will describe Lincoln Laboratory’s efforts to develop scalable ion techniques and technologies. These efforts include the loading and manipulation of ions in two-dimensional surface-electrode-trap arrays, the integration of technology for increased on-chip control of ion qubits, and the reduction of noise that can limit multi-qubit gate fidelity; all these techniques will likely be necessary for reaching the large scales required to realize the promise of quantum computing.

1PhD, Physics, Stanford University

top


Ultrasensitive Mass Spectrometry Development
at MIT Lincoln Laboratory

Dr. Jude Kelley1 and Dr. Roderick Kunz2
MIT Lincoln Laboratory

Mass spectrometry (MS) has long been regarded as one of the most reliable methods for chemical analysis because of its ability to identify molecules based on their molecular weight and fragmentation patterns combined with its sensitivity in the pico- to femtogram range. Traditionally, analytical systems that utilize mass spectrometry have coupled this method with other analytical techniques such as gas or liquid chromatography; however, the development of ambient ionization techniques and improvements in mass spectrometer design have demonstrated that this technique can function on its own as a multipurpose chemical detector.

MIT Lincoln Laboratory has been advancing these systems in an effort to develop the next generation of mass spectrometry–based sensing systems focused on detection missions relevant to national security. This work has centered on ways to improve the sensitivity of MS-based systems to explosives when they are encountered as vapors and as trace particulate residues. The vapor detection system that the Laboratory has developed has a real-time sensitivity to concentrations in the parts-per-quadrillion (ppqv) range. Real-time sensitivity at these levels rivals that of conventional vapor detectors (canines), providing opportunities to (1) better understand the origins, dynamics, concentrations, and attenuation levels of vapor signatures associated with concealed threats and (2) help improve canine training. The Laboratory's work on detecting explosive particulate residues has focused on applying thermal desorption and atmospheric pressure chemical ionization (TD-APCI) to swipe-based surface samples across a wide range of explosive classes. For explosive threats that are challenging to detect with TD-APCI, Lincoln Laboratory researchers have developed specialized chemical reagents that can be added directly to the ionization source to improve both the specificity and the sensitivity of the technique. The information revealed from these studies is used to assess current mass spectrometer–based explosive trace detection systems and guide future development efforts.


1PhD, Physical Chemistry, Yale University
2PhD, Analytical Chemistry, University of North Carolina at Chapel Hill

top


Cyber Security and Information Sciences

Addressing the Challenges of Big Data Through Innovative Technologies

Dr. Vijay Gadepally1 and Dr. Jeremy Kepner2
MIT Lincoln Laboratory

The ability to collect and analyze large amounts of data is increasingly important within the scientific community. The growing gap between the volume, velocity, and variety of data available and users’ ability to handle this deluge calls for innovative tools to address the challenges imposed by what has become known as big data. MIT Lincoln Laboratory is taking a leading role in developing a set of tools to help solve the problems inherent in big data.

Big data's volume stresses the storage, memory, and compute capacity of a computing system and requires access to a computing cloud. Choosing the right cloud is problem specific. Currently, four multibillion-dollar ecosystems dominate the cloud computing environment: enterprise clouds, big data clouds, SQL database clouds, and supercomputing clouds. Each cloud ecosystem has its own hardware, software, communities, and business markets. The broad nature of big data challenges makes it unlikely that one cloud ecosystem can satisfy all needs, and solutions are likely to require the tools and techniques from more than one cloud ecosystem. The MIT SuperCloud was developed to provide one such solution. To our knowledge, the MIT SuperCloud is the only deployed cloud system that allows all four ecosystems to co-exist without sacrificing performance or functionality.

The velocity of big data stresses the rate at which data can be absorbed and meaningful answers can be produced. Through an initiative led by the National Security Agency (NSA), a Common Big Data Architecture (CBDA) was developed for the U.S. government. The CBDA is based on the Google Big Table NoSQL approach and is now in wide use. Lincoln Laboratory was instrumental in the development of the CBDA and is a leader in adapting the CBDA to a variety of big data challenges. The centerpieces of the CBDA are the NSA-developed Apache Accumulo database (capable of millions of entries per second) and the Lincoln Laboratory–developed Dynamic Distributed Dimensional Data Model (D4M) schema.

Finally, big data variety may present both the largest challenge and the greatest set of opportunities for supercomputing. The promise of big data is the ability to correlate heterogeneous data to generate new insights. The combination of Apache Accumulo and D4M technologies allows vast quantities of highly diverse data (bioinformatics, cyber-relevant data, social media data, etc.) to be automatically ingested into a common schema that enables rapid query and correlation of elements.
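The "common schema" idea can be sketched with the D4M exploded-schema convention, in which every field/value pair of a record becomes a column key in a single sparse table, making any field value directly queryable. The records and field names below are invented for illustration, and plain Python dicts stand in for the sparse associative arrays backed by Accumulo.

```python
# Toy records of heterogeneous data to be ingested.
records = [
    {"id": "r1", "src_ip": "10.0.0.1", "user": "alice"},
    {"id": "r2", "src_ip": "10.0.0.2", "user": "bob"},
    {"id": "r3", "src_ip": "10.0.0.1", "user": "carol"},
]

# Build the exploded table: entry (row, "field|value") -> 1.
table = {}
for rec in records:
    row = rec["id"]
    for field, value in rec.items():
        if field != "id":
            table[(row, f"{field}|{value}")] = 1

# Query: which rows mention src_ip 10.0.0.1?
hits = sorted(r for (r, col) in table if col == "src_ip|10.0.0.1")
print(hits)  # ['r1', 'r3']
```

Because every value appears in the column key, correlation across very different data sources reduces to matching column keys, which is the kind of rapid query and correlation the schema is designed to enable.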

1PhD, Electrical and Computer Engineering, The Ohio State University
2PhD, Physics, Princeton University

top


Content-Centric Networking for Mobile Devices

Praveen K. Sharma1
MIT Lincoln Laboratory

Many applications used in situations such as disaster responses, combat missions, and emergency rescue operations require mobile devices to communicate in environments where the cellular infrastructure is damaged, overwhelmed, or absent. These environments manifest characteristics—such as delays and disruptions—that traditional approaches do not address.

To enable applications to communicate and disseminate information in these disruption-prone environments, Lincoln Laboratory designed an architecture that enables delay-tolerant, peer-to-peer, multi-hop communication and that does not depend upon prior knowledge of the identity or location of the host network or device. The architecture leverages mobile ad hoc networks (MANETs) to provide peer-to-peer, multi-hop communications, and an emerging technique, content-centric networking (CCN), to enable mobile devices to share information despite delays or disruptions and to request content by name when the IP addresses or phone numbers of recipients are not known.

As a proof of concept, researchers at MIT Lincoln Laboratory prototyped this architecture on an Android smartphone as a CCN overlay on MANETs. The prototype used WiFi as the illustrative communication protocol and Optimized Link State Routing and modified Haggle as the illustrative MANET and CCN protocols, respectively. The performance of the algorithm was evaluated experimentally. Preliminary results indicate that the Laboratory's approach increases the rate of message delivery over multiple hops while preserving message transparency, albeit at the cost of additional control-message overhead.

1MS, Computer Science, Iowa State University

top


Cryptographically Secure Computation

Dr. Emily Shen1 and Dr. Arkady Yerukhimovich2
MIT Lincoln Laboratory

In today's big data world, the ability to collect, share, and analyze data has led to unprecedented capabilities as well as unprecedented privacy concerns. Often, people or organizations would like to collaborate and obtain results from their collective data but without divulging their individual data. Current techniques to enable such sharing require that the parties either entrust each other with their sensitive data or find a mutually trusted third party who can perform the computation on their behalf; these approaches may be undesirable or impractical.

Secure multiparty computation (MPC) is a cryptographic technique that enables parties to perform joint computations without revealing their sensitive inputs and without using a trusted third party. MPC not only can provide privacy for existing applications but also can enable new applications that are currently not possible. For example, MPC can be used to enable collaboration and information exchange between partner organizations through selective data sharing and to guarantee security of computation performed in an untrusted environment such as a cloud. In this talk, we illustrate the cryptographic ideas behind MPC. We then describe the design and optimization of MPC for a collaborative anomaly detection algorithm.
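A minimal illustration of the MPC idea is additive secret sharing, which lets parties learn a sum without revealing their inputs. This is a toy sketch of one classic building block, not the protocols used in the collaborative anomaly detection system described above.

```python
import random

P = 2**31 - 1  # public modulus; all arithmetic is performed mod P

def share(secret, n):
    """Split a secret into n random additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

# Three parties each hold a private value.
private_values = [12, 7, 30]
n = len(private_values)

# Each party shares its value; party i receives one share of every value.
all_shares = [share(v, n) for v in private_values]

# Each party locally sums the shares it holds (one column each).
partial_sums = [sum(col) % P for col in zip(*all_shares)]

# Combining the partial sums reveals only the total, not any input.
total = sum(partial_sums) % P
print(total)  # 49
```

Any single share (or partial sum) is uniformly random on its own, so no party learns another's input; only the combination of all partial sums reveals the joint result.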

1PhD, Electrical Engineering and Computer Science, Massachusetts Institute of Technology
2PhD, Computer Science, University of Maryland

top


Cyber Security Metrics

Dr. James F. Riordan1
MIT Lincoln Laboratory

Recent cyber attacks on government, commercial, and institutional computer networks have highlighted the need for increased cyber security measures. In order to better quantify, understand, and, therefore, more effectively combat this ever-growing threat, the U.S. Government is shifting its security strategy for its computer networks and systems from one of yearly compliance checks to one of continuous monitoring and assessment. While continuous monitoring refers to the ability to maintain constant awareness of the configuration and status of the computer networks and systems, assessment refers to the ability to accurately appraise the security posture of the systems and to estimate the risk associated with them. Clearly, the effectiveness of this strategy hinges upon the ability to accurately assess cyber risk.

This seminar will present a methodology for producing useful cyber security metrics that are derived from realistic and well-defined mathematical attacker models. These metrics can be continuously evaluated from operational security data and, thus, support the government's new security strategy of continuous monitoring. The speaker will show how this methodology has been used to develop several important security metrics that are based upon the SANS Institute's list of 20 critical security controls. Live demonstrations will illustrate how these security metrics are used to assess risk in an operational context.

1PhD, Computational Mathematics, University of Minnesota

top


Developing and Evaluating Link-Prediction Algorithms for
Speaker Content Graphs

Kara Greenfield1 and Dr. William M. Campbell2
MIT Lincoln Laboratory

Graph theory can be a powerful tool for a variety of problems, but it is not always clear how the edges of the graph should be defined. Link prediction is the process of determining which pairs of nodes should be connected by an edge. This seminar describes the process of developing different link-prediction algorithms. We first discuss how to choose intermediary metrics that correspond well to the application of interest in order to obtain measures of performance before it is practical to obtain a measure of effectiveness for that application. We also describe how MIT Lincoln Laboratory’s VizLinc audio-visual tool can be used in visual analytics to gain better insight into the algorithms than can be provided by numeric metrics alone. Throughout the talk, we will use speaker recognition as the domain of interest, generating speaker content graphs that efficiently model the underlying manifold of the speaker space and employing those graphs to perform tasks such as query by example and speaker clustering.
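As a concrete example of a simple link-prediction score, the common-neighbors heuristic ranks unconnected node pairs by how many neighbors they share. This is an illustrative baseline on an invented toy graph; the seminar's algorithms for speaker content graphs are not specified here.

```python
# Toy undirected graph as an adjacency-set dictionary.
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "e"},
    "d": {"a"},
    "e": {"c"},
}

def common_neighbors(g, u, v):
    """Number of neighbors shared by nodes u and v."""
    return len(g[u] & g[v])

# Score every unconnected pair; higher scores are predicted links.
nodes = sorted(graph)
candidates = [
    (u, v, common_neighbors(graph, u, v))
    for i, u in enumerate(nodes)
    for v in nodes[i + 1:]
    if v not in graph[u]
]
candidates.sort(key=lambda t: -t[2])
print(candidates[0][:2])  # the top-ranked predicted edge
```

In practice the score itself is an intermediary metric: one can measure how well it recovers held-out edges long before an end-to-end measure of effectiveness (say, speaker-clustering quality) is available.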

1MS, Industrial Mathematics, Worcester Polytechnic Institute
2PhD, Applied Mathematics, Cornell University

top


Efficient, Privacy-Preserving Data Sharing

Dr. Benjamin W. Fuller1
MIT Lincoln Laboratory

Database management systems permit fast, expressive searches on large volumes of data. However, modern databases provide the server with a lot of information, such as the contents of the database and all of the clients' queries. In many scenarios, a client may wish to limit the information revealed to the server; examples include a cloud computing setting in which the server and client belong to two different organizations, and a high-value target setting in which the client is concerned about potential compromise and wants to limit the scope of damage in case of attack.

Modern cryptography provides several different types of tools that offer the promise of searching on encrypted data. Thus far, however, most of the tools have not gained widespread use because they either operate too slowly or lack the query expressivity of a desired language like SQL. Recently, several research groups have participated in an Intelligence Advanced Research Projects Activity (IARPA)–funded research program called "Security and Privacy Assurance Research," which aims to overcome both of these obstacles. The researchers have built secure database management systems whose query functionality covers a large subset of SQL and whose performance is within a 10× factor of MySQL at terabyte scales. In this talk, I will illustrate the cryptographic advances that make these technologies possible. Additionally, I will describe the rigorous software engineering and formal techniques employed to test that the researchers' software met all of the security, functionality, and performance requirements.
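One basic idea behind searching encrypted data can be sketched with a keyword-token index: the client derives per-keyword tokens under a secret key, so the server can match queries without learning the keywords. This toy sketch is far weaker than the systems built under the program described above (deterministic tokens leak query repetition, for example) and the record data below are invented.

```python
import hashlib
import hmac
import os

key = os.urandom(32)  # secret key held by the client only

def token(keyword):
    """Derive a keyed, unguessable search token for a keyword."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).hexdigest()

# Client-side indexing: map tokens to record ids before upload.
server_index = {}
data = {"rec1": ["alpha", "bravo"], "rec2": ["bravo", "charlie"]}
for rec_id, keywords in data.items():
    for kw in keywords:
        server_index.setdefault(token(kw), []).append(rec_id)

# Later, the client sends token("bravo"); the server returns the
# matching record ids without ever seeing the plaintext keyword.
print(server_index[token("bravo")])  # ['rec1', 'rec2']
```

Extending equality matches like this to a large subset of SQL, while controlling what the server can infer from access patterns, is precisely where the harder cryptographic and engineering work lies.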

1PhD, Mathematics, Boston University

top


EMBER: A Global Perspective on Extreme Malicious Behavior

Tamara H. Yu1, Dr. Richard P. Lippmann2, and Dr. James F. Riordan3
MIT Lincoln Laboratory

Geographical displays are commonly used for visualizing widespread malicious behavior of Internet hosts. Placing dots on a world map or coloring regions by the magnitude of activity often results in cluttered maps that invariably emphasize population-dense metropolitan areas in developed countries where Internet connectivity is highest. To uncover atypical regions, it is necessary to normalize activity by the local computer population. This seminar presents EMBER (Extreme Malicious Behavior viewER), an analysis and display of malicious activity at the city level. EMBER uses a metric called the standardized incidence rate (SIR): the number of hosts exhibiting malicious behavior per 100,000 available hosts. This metric relies on available data that (1) map IP addresses to geographic locations, (2) provide current city populations, and (3) provide computer usage penetration rates. An analysis of several months of suspicious source IPs from DShield identified cities with extremely high and low malicious activity rates on a day-by-day basis. In general, cities in a few Eastern European countries have the highest SIRs, whereas cities in Japan and South Korea have the lowest. Many of these results are consistent with news reports describing local cyber security policies. The distribution of SIRs for cities with comparable population levels has a long tail similar to a power law. This distribution suggests that malware preferentially spreads to regions with currently high levels of malicious activity, as suggested by past analyses of many malware executables.
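The SIR normalization itself is simple arithmetic. A sketch with invented city data (the function follows the definition above; the counts, populations, and penetration rates are hypothetical):

```python
def sir(malicious_hosts, population, penetration_rate):
    """Hosts exhibiting malicious behavior per 100,000 available hosts."""
    available_hosts = population * penetration_rate
    return malicious_hosts / available_hosts * 100_000

# (city, malicious host count, population, computer penetration rate)
cities = [
    ("City A", 40, 2_000_000, 0.60),
    ("City B", 35, 250_000, 0.40),
]

for name, bad, pop, pen in cities:
    print(f"{name}: SIR = {sir(bad, pop, pen):.1f}")
```

Note how the normalization inverts the raw picture: the large city has more malicious hosts in absolute terms, yet the small city's rate per available host is an order of magnitude higher, which is exactly the kind of atypical region a raw dot map would hide.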

1MEng, Computer Science, Massachusetts Institute of Technology
2PhD, Electrical Engineering, Massachusetts Institute of Technology
3PhD, Mathematics, University of Minnesota

top


Evaluating Cyber Moving Target Techniques

Dr. Hamed Okhravi1
MIT Lincoln Laboratory

The concept of cyber moving target (MT) defense has been identified as one of the game-changing themes to rebalance the landscape of cyber security. MT techniques make cyber systems “a moving target,” that is, less static, less homogeneous, and less deterministic in order to create uncertainty for attackers.

Although many MT techniques have been proposed in the literature, little work has been done to evaluate their effectiveness, benefits, and weaknesses. This seminar discusses the evaluation of a wide range of MT techniques. First, a qualitative assessment studies the potential benefits, gaps, and weaknesses of each category of MT strategy. This step identifies major gaps in order to guide future research and prototyping efforts. The findings of a case study on code-reuse defenses are presented. For the MT techniques identified as potentially more beneficial in the qualitative assessment, a deeper quantitative assessment is performed by examining real exploits. Next, the seminar discusses an operational assessment of an MT technique inside a larger system and illustrates how important parameters of such techniques can be monitored for improved effectiveness. Finally, possible directions for future work in this domain are outlined.
 
1PhD, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign

top


Experiences in Cyber Security Education:
The MIT Lincoln Laboratory Capture-the-Flag Exercise

Joseph M. Werther1, Michael A. Zhivich2, and Timothy R. Leek3
MIT Lincoln Laboratory

Dr. Nickolai Zeldovich4
MIT Computer Science and Artificial Intelligence Laboratory

Many popular and well-established cyber security capture-the-flag (CTF) exercises are held each year at a variety of settings, including universities and semiprofessional security conferences. The CTF format also varies greatly, ranging from linear puzzle-like challenges to team-based offensive and defensive free-for-all hacking competitions. While these events are exciting and important as contests of skill, they offer limited educational opportunities. In particular, since participation requires considerable a priori domain knowledge and practical computer security expertise, the majority of typical computer science students are excluded from taking part in these events. The goal in designing and running the MIT Lincoln Laboratory CTF was to make the experience accessible to a wider community by providing an environment that would not only test and challenge the computer security skills of the participants but also educate and prepare those without extensive prior expertise. This seminar presents the Laboratory's self-consciously educational and open CTF, including discussions of our teaching methods, game design, scoring measures, logged data, and lessons learned.

1MEng, Computer Systems Engineering, Rensselaer Polytechnic Institute
2MEng, Electrical Engineering and Computer Science, Massachusetts Institute of Technology
3MS, Computer Science, University of California–San Diego
4PhD, Computer Science, Stanford University

top


Multicore Programming in pMatlab® Using Distributed Arrays

Dr. Jeremy Kepner1
MIT Lincoln Laboratory

MATLAB is one of the most commonly used languages for scientific computing, with approximately one million users worldwide. Many of the programs written in MATLAB can benefit from the increased performance offered by multicore processors and parallel computing clusters. The MIT Lincoln Laboratory pMatlab library (http://www.ll.mit.edu/pMatlab) allows high-performance parallel programs to be written quickly by using the distributed arrays programming paradigm. This talk provides an introduction to distributed arrays programming and describes best programming practices for using distributed arrays to produce programs that perform well on multicore processors and parallel computing clusters. These practices include understanding the concepts of parallel concurrency versus parallel data locality, using Amdahl's Law, and employing a well-defined design-code-debug-test process for parallel codes.
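Amdahl's Law, mentioned above, bounds the achievable speedup when some fraction of a program must run serially. A quick illustration (in Python rather than MATLAB, with an assumed 5% serial fraction):

```python
def amdahl_speedup(serial_fraction, processors):
    """Overall speedup of a program with a fixed serial fraction of work."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Even 5% serial work caps the speedup near 20x regardless of core count.
for p in (4, 16, 64, 1024):
    print(p, round(amdahl_speedup(0.05, p), 2))
```

This is why identifying and shrinking the serial portion of a parallel MATLAB program often matters more than adding processors.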

1PhD, Physics, Princeton University

top


Natural Language Learning Research and Development

Jennifer A. Williams1, Wade Shen2, Gordon Vidaver3, Jennifer T. Melot4,
Elizabeth E. Salesky5, and Dr. Douglas A. Jones6
MIT Lincoln Laboratory

Let's face it, learning another language is a grand undertaking. Two of the most important components of language learning are the opportunity to get feedback and access to appropriate resources. MIT Lincoln Laboratory is developing state-of-the-art computer-assisted language-learning (CALL) tools and resources. With these resources, students practice pronunciation and vocabulary using automatic speech recognition that scores how well they do compared to native speakers. We are also helping students and teachers find level-appropriate reading materials by automatically assigning difficulty levels using the Interagency Language Roundtable (ILR) scoring metric. Some of the questions that drive our research include

  1. How often should students study and practice their vocabulary to maximize retention in long-term memory?
  2. How do we help native English speakers master sound systems that are very different from their own?
  3. How do we determine language proficiency level automatically and let students know where they need to improve?
  4. What kinds of language learning games really help students learn?

We will present Lincoln Laboratory's language-learning research and development, including the CALL and reading-level tools, and address the above questions.

1MS, Computational Linguistics, Georgetown University
2MS, Computer Science, University of Maryland College Park
3BA, Computer Science, Harvard University; MCert. Japanese, Keio University Tokyo
4SB, Computer Science and Linguistics, Massachusetts Institute of Technology
5BA, Mathematics and Linguistics, Dartmouth College
6PhD, Linguistics, Massachusetts Institute of Technology

top


New Approaches to Automatic Speaker Recognition and
Forensic Considerations

Dr. Joseph P. Campbell1 and Dr. Pedro A. Torres-Carrasquillo2
MIT Lincoln Laboratory

Recent gains in the performance of automatic speaker recognition systems have been obtained by new methods in subspace modeling. This talk presents the development of speaker recognition systems ranging from traditional approaches, such as Gaussian mixture modeling (GMM), to novel state-of-the-art systems employing subspace techniques, such as factor analysis and i-vector methods. This seminar also covers research on the means to exploit high-level information. For example, idiosyncratic word usage and speaker-dependent pronunciation are high-level features for recognizing speakers. These high-level features can be combined with conventional features for increased accuracy. The seminar presents new methods to increase robustness and improve calibration of speaker recognition systems by addressing common factors in the forensic domain that degrade recognition performance. We describe MIT Lincoln Laboratory's VOCALINC system and its application to automated voice comparison of speech samples for law enforcement investigation and forensic applications. The talk concludes with appropriate uses of this technology, especially cautions regarding forensic-style applications, and a look at this technology's future directions.

1PhD, Electrical Engineering, Oklahoma State University
2PhD, Electrical Engineering, Michigan State University

top


Securing Data at Rest with Optical Physically Unclonable Functions

Dr. Merrielle Spain1
MIT Lincoln Laboratory

In many situations, computer systems must be able to protect data from a determined adversary with physical access to the machine—either by making the secrets difficult to obtain or by destroying the secrets before the adversary can get to where they are stored. One example of such a system is the IBM 4758 cryptographic coprocessor, often used for applications such as automated teller machines. If an ATM were stolen and the cryptographic keys within the IBM 4758 compromised, the adversary could conduct signed transactions with the bank as the ATM. To thwart these attacks, the IBM 4758 monitors its perimeter with powered sensor suites to detect intrusion and destroy the key. Although effective, approaches like the 4758's require constant power to detect adversary actions and react before the adversary can succeed, even when the ATM is powered off. Lincoln Laboratory is developing a solution that can protect data at rest (when the device is powered off) without requiring constant power. Laboratory researchers are leveraging expertise from machine learning, cryptography, optics, and polymer encapsulants to develop and implement a coating-based, physically unclonable function (PUF) capable of protecting a small circuit board. The PUF embodies a cryptographic key used to encrypt the data. Physically disturbing the coating irreversibly destroys the key and consequently denies an adversary access to the data. This approach could be used to retrofit existing equipment, producing smaller, lighter, highly compatible, unpowered systems capable of protecting the data they contain.

1PhD, Computational and Neural Systems, California Institute of Technology

top


Signal Processing for the Measurement of Characteristic Voice Quality

Dr. Nicolas Malyska1 and Dr. Thomas F. Quatieri2
MIT Lincoln Laboratory

The quality of a speaker's voice communicates to a listener information about many characteristics, including the speaker's identity, language, dialect, emotional state, and physical condition. These characteristic elements of a voice arise because of variations in the anatomical configuration of a speaker's lungs, voice box, throat, tongue, mouth, and nasal airways, as well as the ways in which the speaker moves these structures. The voice box, or larynx, is of particular interest in voice quality, as it is responsible for generating variations in the excitation source signal for speech.

In this seminar, we will discuss mechanisms by which voice-source variations are generated, appear in the acoustic signal, and are perceived by humans. Our focus will be on using signal processing to capture acoustic phenomena resulting from the voice source. The presentation will explore several applications that build upon these measurement techniques, including (1) turbulence-noise component estimation during aperiodic phonation, (2) automatic labeling of regions of irregular phonation, and (3) the analysis of pitch dynamics.

1PhD, Health Sciences and Technology, Massachusetts Institute of Technology
2ScD, Electrical Engineering, Massachusetts Institute of Technology

top


The Probabilistic Provenance Graph

Jeffrey C. Gottschalk1
MIT Lincoln Laboratory

In making decisions that could affect the lives of millions or even billions of people around the world, government leaders and their supporting analysts require that the information they use be trustworthy. A given datum can be no more trustworthy than the data on which it relies; thus, it is crucial to identify dependencies between and among data. This dependency identification process is referred to as data provenance discovery. This process outputs a provenance model in the form of a directed acyclic graph referred to as a provenance graph. In provenance graphs, nodes correspond to entities, and edges correspond to provenance relationships between entities.

Previous provenance models have assumed that there is complete certainty in the provenance relationships. In a world fraught with so much uncertainty, how can such an assumption hold? How can decision-makers be certain they are making the right choice when they are unaware of the uncertainties involved in the data they rely on? Ultimately, what is needed is a provenance system that can reason about uncertainty. However, to achieve this aim, researchers must reformulate the traditional provenance model found in all modern provenance systems. This seminar outlines the Laboratory's proposal for an alternative model—the probabilistic provenance graph (PPG). The proposal includes the PPG's motivation, specification, and real-world manifestation.
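A toy illustration of the idea: a DAG whose edges carry the probability that each provenance relationship is correct, so confidence in a derived datum can be computed from its sources. The propagation rule below (multiplying probabilities along dependency chains) and the example graph are illustrative assumptions, not the model specified in the seminar.

```python
# edges[child] = list of (parent, probability the dependency is correct).
# Nodes with no parents are trusted sources.
edges = {
    "report": [("sensor_a", 0.9), ("sensor_b", 0.7)],
    "sensor_a": [],
    "sensor_b": [("calibration", 0.8)],
    "calibration": [],
}

def trust(node):
    """Confidence that a node's full provenance chain is correct
    (illustrative rule: product over parents of edge and parent trust)."""
    result = 1.0
    for parent, prob in edges[node]:
        result *= prob * trust(parent)
    return result

print(round(trust("report"), 3))  # 0.504
```

Even this crude rule shows the payoff of making uncertainty explicit: a decision-maker sees that the report's provenance is only about 50% certain, rather than the implicit 100% assumed by traditional provenance graphs.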

1MS, Astronomy, University of Massachusetts–Amherst

 

top