2014–2015 Technical Seminar Series

Members of the technical staff at MIT Lincoln Laboratory are pleased to present these technical seminars to interested college and university groups. Costs related to the staff members' visits for these seminars will be assumed by the Laboratory.

To arrange a technical seminar, please contact

College Recruiting Program Administrator
Human Resources Department
MIT Lincoln Laboratory
244 Wood Street
Lexington, Massachusetts 02420-9108
781-981-2465
email: collegerecr@ll.mit.edu

and provide the following information:

  • A rank-ordered list of requested seminars
  • Preferred date/time options
  • A description of the target audience


Index of 2014–2015 Seminars

Air Traffic Control

Communication Systems

Cyber Security and Information Sciences

Systems and Architectures

Homeland Protection

Radar and Signal Processing

Solid State Devices, Materials, and Processes

Space Control Technology

Optical Propagation and Technology


SEMINAR ABSTRACTS

Air Traffic Control

Experiences from Modeling and Exploiting Data in Air Traffic Control

Dr. James K. Kuchar1
MIT Lincoln Laboratory

Machine-learning techniques are enabling significant advances in the performance of air transportation decision support systems. This talk will review three vignettes from data-driven prototype system development: exploiting radar data and modeling airspace traffic encounters to build a more effective collision avoidance system; extracting information from surface surveillance data to improve airport operations; and learning from operational experience to facilitate departure management in the vicinity of convective weather. In each case, examples and challenges of data collection, processing, and translation into models and ultimately operational prototype systems will be discussed.

1PhD, Aeronautics and Astronautics, Massachusetts Institute of Technology


Integrating Unmanned Aircraft Systems Safely into the National Airspace System

Dr. Rodney E. Cole1
MIT Lincoln Laboratory

Unmanned aircraft systems (UAS) such as the Air Force's Global Hawk and Predator are increasingly employed by the military and Department of Homeland Security in roles that require sharing airspace with civilian aircraft. Missions include pilot training, border patrol, highway and agricultural observation, and disaster management. Because of the pressure for widespread UAS access to the national airspace and the risk of collision with passenger aircraft, UAS operators must find a way to integrate with manned aircraft with a very high degree of safety. The key to safe integration of UAS into the national airspace is the development and assessment of "sense and avoid" (SAA) technologies to replace the manned aircraft pilot's ability to "see and avoid" other aircraft.

MIT Lincoln Laboratory is conducting research to address safe and flexible UAS integration with commercial and general aviation aircraft. Research areas include development of sophisticated computer models that simulate millions of encounters between UAS and civilian aircraft to characterize airspace hazards and collision rates. These models can be applied to assess the performance of SAA algorithms designed to maintain "well clear" separation between UAS and civilian aircraft while observing and adhering to established right-of-way rules. The Laboratory is also conducting groundbreaking research in the area of collision avoidance logic and is pursuing a probabilistic approach to collision avoidance that considers the uncertainty in pilot response to alerts and uncertainty in future states of the threat aircraft. This approach offers the potential to provide increased safety with decreased false alarms over conventional techniques and is a candidate for future Traffic Alert Collision Avoidance Systems (TCAS) and UAS SAA applications.
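The encounter-modeling approach above can be illustrated with a toy Monte Carlo sketch. Everything here is hypothetical: the thresholds and the uniform miss-distance distributions are illustrative placeholders, not the Laboratory's actual encounter models or any operational separation standard.

```python
import random

# Illustrative near-mid-air-collision (NMAC) thresholds; not operational values.
HORIZ_NMAC_FT = 500.0
VERT_NMAC_FT = 100.0

def simulate_encounter(rng):
    """Sample the closest point of approach for one random encounter."""
    horiz_miss_ft = rng.uniform(0.0, 5000.0)
    vert_miss_ft = rng.uniform(0.0, 1000.0)
    return horiz_miss_ft, vert_miss_ft

def estimate_nmac_rate(n_encounters, seed=0):
    """Monte Carlo estimate of the probability that a random encounter
    violates both NMAC separation thresholds."""
    rng = random.Random(seed)
    nmacs = 0
    for _ in range(n_encounters):
        h, v = simulate_encounter(rng)
        if h < HORIZ_NMAC_FT and v < VERT_NMAC_FT:
            nmacs += 1
    return nmacs / n_encounters

rate = estimate_nmac_rate(100_000)  # expectation is (500/5000) * (100/1000) = 0.01
```

In the real models, simulated trajectories come from statistical airspace models fit to radar data, and the same framework scores candidate SAA algorithms by comparing collision rates with and without avoidance maneuvers.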

The Laboratory is also working with the Department of Defense (DoD) and Department of Homeland Security to develop ground-based sense and avoid (GBSAA) and airborne sense and avoid (ABSAA) surveillance architectures to satisfy the Federal Aviation Administration’s (FAA) requirement for replacing the onboard pilot's "see and avoid" function. Under DoD sponsorship, Lincoln Laboratory has deployed a service-oriented architecture GBSAA test bed that will be utilized in operational and simulation-over-live environments to collect data and operator feedback that can then be used to support future certification with the FAA. This seminar will provide a broad overview of the Laboratory's efforts in UAS airspace integration and next-generation aircraft collision avoidance algorithms and will provide an overview of the GBSAA test bed that is under development for the DoD.

1PhD, Mathematics, University of Colorado–Boulder


Radar Detection of Aviation Weather Hazards

Dr. John Y. N. Cho1
MIT Lincoln Laboratory

Bad weather is a factor in many aviation accidents and incidents. Microbursts, hail, icing, lightning, fog, turbulence—these are atmospheric phenomena that can interfere with aircraft performance and a pilot’s ability to fly safely. Thus, for safe and efficient operation of the air traffic system, it is crucial to continuously observe meteorological conditions and accurately characterize phenomena hazardous to aircraft. Radar is the most important weather-sensing instrument for aviation. This seminar will discuss technical advances that led to today’s operational terminal wind-shear detection radars. An overview of recent and ongoing research to improve radar capability to accurately observe weather hazards to aviation will also be presented.

1PhD, Electrical Engineering, Cornell University


System Design in an Uncertain World: Decision Support
for Mitigating Thunderstorm Impacts on Air Traffic

Richard A. DeLaura1
MIT Lincoln Laboratory

Weather accounts for 70% of the cost of air traffic delays—about $28 billion annually—within the United States National Airspace System (NAS). Most weather-related delays occur during the summer months, when thunderstorms affect air traffic, particularly in the crowded Northeast. The task of air traffic management, complicated even in the best of circumstances, can become overwhelmingly complex as air traffic managers struggle to route traffic reliably through rapidly evolving thunderstorms. A new generation of air traffic management decision support tools promises to reduce air traffic delays by accounting for the potential effects of convective weather, such as thunderstorms, on air traffic flow. Underpinning these tools are models that translate high-resolution convective weather forecasts into estimates of impact on aviation operations.

This seminar will present the results of new research to develop models of pilot decision making and air traffic capacity in the presence of thunderstorms. The models will be described, initial validation will be presented, and sources of error and uncertainty will be discussed. Finally, some applications of these models and directions for future research will be briefly described.

1AB, Physics, Harvard University


Communication Systems

Building a High-Capability Internet Protocol Airborne Backbone with Disparate Radio Technologies

Dr. Bow-Nan Cheng1
MIT Lincoln Laboratory

Current long-range, high-capacity military radios are stovepiped systems that lack interoperability—each radio provides a subset of disparate link information in nonstandard interfaces—and have built-in homegrown or industry-based routers running nonstandard, proprietary routing protocols. The issue is further complicated in that airborne link characteristics change rapidly, often requiring direct link-layer feedback from the radio to make routing decisions. This seminar will present an overview of current and emerging radio-to-router interface technology, as well as the practical design and implementation of these technologies in an airborne network with various radio technologies. Results and experience from emulations and field tests with the goal of designing, developing, and prototyping a high-capacity airborne Internet protocol (IP) backbone are presented.

1PhD, Computer Science, Rensselaer Polytechnic Institute


Cooperative Communication in Heterogeneous Wireless Networks

Dr. Scott Pudlewski1
MIT Lincoln Laboratory

Cooperative communication techniques, in which users relay for other users in order to improve performance, are typically explored in wireless networks consisting of homogeneous nodes, such as sensor networks or vehicular ad hoc networks. Future wireless networks will consist of a blend of different types of nodes, including satellite, airborne, and terrestrial nodes, with widely varying characteristics in terms of achievable data rate, loss rate, and propagation time. This seminar will include a tutorial and focus on recent research in cooperative capabilities and techniques, including the use of network coding, for heterogeneous wireless networks.

1PhD, Electrical Engineering, State University of New York


Diversity in Air-to-Ground Lasercom: The FOCAL Demonstration

Dr. Frederick G. Walther1
MIT Lincoln Laboratory

Laser communications (lasercom) provides significant advantages over radio-frequency (RF) communications, including a large, unregulated bandwidth and high beam directionality for free-space links. These advantages provide capabilities for high (multi-Gb/s) data transfer rates; reduced terminal size, weight, and power; and a degree of physical link security against out-of-beam interferers or detectors. This seminar addresses the key components of lasercom system design, including modeling and simulation of atmospheric effects, link budget development, employment of spatial and temporal diversity techniques to mitigate signal fading due to scintillation, and requirements for acquisition and tracking system performance.
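The link-budget element mentioned above is essentially decibel bookkeeping, which a short sketch can capture. The function names are illustrative, and the numeric values one would plug in (gains, losses, sensitivity) are system specific, not taken from the demonstration.

```python
import math

def fspl_db(distance_m, wavelength_m):
    """Free-space path loss in dB: 20*log10(4*pi*d / wavelength)."""
    return 20 * math.log10(4 * math.pi * distance_m / wavelength_m)

def link_margin_db(tx_power_dbm, tx_gain_db, rx_gain_db, distance_m,
                   wavelength_m, atm_loss_db, rx_sensitivity_dbm):
    """Link margin = received power minus receiver sensitivity (dB).
    atm_loss_db lumps atmospheric absorption and a fading allowance."""
    rx_power_dbm = (tx_power_dbm + tx_gain_db + rx_gain_db
                    - fspl_db(distance_m, wavelength_m) - atm_loss_db)
    return rx_power_dbm - rx_sensitivity_dbm
```

In this simple accounting, spatial and temporal diversity show up as a reduction in the scintillation-fading allowance folded into atm_loss_db, which is why they matter for closing long air-to-ground links.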

1PhD, Physics, Massachusetts Institute of Technology


Dynamic Link Adaptation for Satellite Communications

Dr. Huan Yao1
MIT Lincoln Laboratory

Future protected military satellite communications will continue to use high transmission frequencies to capitalize on the large amounts of available bandwidth. However, the data flowing through these satellites will transition from the circuit-switched traffic of today's satellite systems to Internet-like packet traffic. One of the main differences in migrating to packet-switched communications is that the traffic will become bursty (i.e., the data rate from particular users will not be constant). The variation in data rate is only one of the potential system variations. At the frequencies of interest, rain and other weather phenomena can introduce significant path attenuation for relatively short time periods. Current protected satellite communications systems are designed with sufficient link margins to provide a desired availability under such degraded path conditions. These systems do not have provisions to use the excess link margins for additional capacity when weather conditions are good. The focus of this seminar is the design of a future satellite system that autonomously reacts to changes in link conditions and offered traffic. This automatic adaptation drastically improves the overall system capacity and the service that can be provided to ground terminals.
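A minimal sketch of the adaptation idea described above: pick the highest data rate whose SNR requirement is met with margin under current link conditions, so that excess margin in clear weather becomes extra capacity. The rate table and margin values are invented for illustration, not drawn from any actual system.

```python
# Hypothetical (rate_Mbps, required_snr_db) pairs, highest rate first.
RATE_TABLE = [(100.0, 15.0), (50.0, 10.0), (25.0, 6.0), (10.0, 3.0)]

def select_rate(measured_snr_db, margin_db=1.0):
    """Return the highest rate whose SNR requirement is met with margin;
    return 0.0 (no data service) in a deep fade."""
    for rate_mbps, required_snr_db in RATE_TABLE:
        if measured_snr_db >= required_snr_db + margin_db:
            return rate_mbps
    return 0.0
```

With this table, an SNR of 16.5 dB selects the 100 Mb/s rate, while a rain fade that drops the SNR to 7.2 dB falls back to 25 Mb/s rather than losing the link; a real system would run this decision continuously as conditions and offered traffic change.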

1PhD, Electrical Engineering, Massachusetts Institute of Technology


Future Directions in Communication Systems

Navid Yazdani1
MIT Lincoln Laboratory

Communication systems have advanced tremendously over the past few years. High-data-rate satellite service is affordable and available to consumers, and high-data-rate cellular service to smartphones is widespread. This talk will discuss many of the developments that have made these technologies possible and outline current research that is enabling the next generation of commercial and government systems. Particular emphasis will be placed on next-generation satellite communication systems and on research in radio-frequency (RF) and optical free-space communication technologies. The talk is intended to provide motivation for students looking to pursue a career in an exciting field at an exciting time. A short overview of MIT Lincoln Laboratory, a federally funded research and development center, is also presented to show how Lincoln Laboratory is making an impact in these areas and changing the future of communication systems.

1MS, Electrical Engineering, Stanford University


High-Rate Laser Communications to the Moon and Back

Dr. Farzana I. Khatri1
MIT Lincoln Laboratory

Radio waves have been the standard method for deep-space communications since the Apollo missions. Over the past decades, scientists at MIT Lincoln Laboratory have been working to develop free-space optical communications systems, and the recent success of the Lunar Laser Communication Demonstration (LLCD) program will clearly revolutionize future deep-space communication systems. The LLCD demonstrated record-breaking optical uplinks and downlinks between Earth and the Lunar Lasercom Space Terminal (LLST) payload on NASA’s Lunar Atmosphere and Dust Environment Explorer (LADEE) satellite orbiting the Moon. The system included an innovative space terminal, a novel ground terminal, two major upgrades of existing ground terminals, and a capable and flexible ground operations infrastructure. This talk will give an overview of the technologies involved in the demonstration, the system architecture, the basic operations of both the link and the whole system, and some typical results.

1PhD, Electrical Engineering, Massachusetts Institute of Technology


Implementation Considerations for Wideband Wireless Communications

Dr. Nancy B. List1
MIT Lincoln Laboratory

Unexpected technical challenges often arise in the process of transferring technology from theory into practical applications. It is well known that modulator distortion causes problems for the transmission of communications signals. Less obvious, however, is the effect of modulator distortion on signals used for time tracking in wireless systems requiring strict timing control. Narrowband tracking signals are often used to synchronize systems transmitting wideband communications signals. While narrowband tracking signals may be less sensitive than communications signals to many types of distortion, they are particularly sensitive to group delay variation. As a result, relatively small levels of group delay variation across the frequency band can cause unexpected overall system degradation. This seminar will describe the real-world challenges of time-tracking in frequency-hopped satellite communications systems transmitting signals at high data rates, as well as practical methods to analyze and overcome these challenges.

1PhD, Electrical Engineering, Georgia Institute of Technology


Providing Information Security with Quantum Physics—A Practical Engineering Perspective

Dr. Andrew S. Fletcher1
MIT Lincoln Laboratory

Quantum information technology enables promising advances in communications security. A well-engineered application of quantum mechanics enables results that are unattainable with classical-only processing. The best-known example is quantum key distribution (QKD), a family of protocols for distributing a secret key. With a carefully designed protocol, quantum mechanics bounds the information an eavesdropper could obtain without being detected. A verifiable random number generator (vRNG) is another important quantum information technology. A secure source of random bits is an important input for nearly all cryptographic protocols; classical RNGs are subject to silent failures, which potentially create security vulnerabilities. A vRNG uses the observation of a quantum phenomenon—the presence of entanglement via a violation of Bell's inequalities—to certify the entropy of the vRNG output. MIT Lincoln Laboratory has had great success developing practical optical communications system demonstrations. This seminar will explore the Laboratory's ongoing efforts to engineer quantum communication systems, covering both theoretical and experimental advances.

1PhD, Electrical Engineering, Massachusetts Institute of Technology


Quality of Service and Cross-Layer Optimization for Satellite Communications Networks

Dr. Jeffrey S. Wysocarski1
MIT Lincoln Laboratory

To efficiently utilize limited radio-frequency (RF) resources, future packet-switched satellite networks will dynamically allocate resources on the uplink and downlink. Designing the resource-allocation algorithms to maximize link-layer efficiency is insufficient. The resource-allocation algorithms must work cooperatively with the network layer and transport layer to optimize network layer performance and provide quality of service (QoS) to applications and users. Several mechanisms for facilitating this required cooperation between the layers are presented. The individual roles and actions of the layers as well as their interaction are defined. Router QoS schedulers that continue to provide service differentiation in the presence of link variations are illustrated, and downlink scheduling architectures that provide terminal QoS guarantees are demonstrated. Finally, the interaction between the transmission control protocol (TCP) and the dynamic resource-allocation algorithms is investigated, leading to suggested modifications of either the resource-allocation algorithms, the TCP protocol, or both.
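The service-differentiation idea behind the router QoS schedulers mentioned above can be sketched with a simple weighted round-robin over per-class queues. This is a generic textbook illustration, not the scheduler design presented in the seminar; the class names and weights are invented.

```python
from collections import deque

def weighted_round_robin(queues, weights, capacity):
    """Serve packets from per-class queues in proportion to class weights,
    up to the packets-per-round capacity currently offered by the link."""
    served = {name: 0 for name in queues}
    budget = capacity
    while budget > 0 and any(queues.values()):
        progressed = False
        for name, q in queues.items():
            # Each class gets up to `weights[name]` packets per round.
            for _ in range(weights[name]):
                if budget == 0 or not q:
                    break
                q.popleft()
                served[name] += 1
                budget -= 1
                progressed = True
        if not progressed:
            break
    return served
```

When a link fade reduces the per-round capacity, only the budget changes; the 3:1 service ratio between classes is preserved, which is the sense in which differentiation can survive link variation.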

1PhD, Electrical Engineering, Clemson University


Real-Time Modeling of Wireless Networks Through Emulation

David P. Ward1
MIT Lincoln Laboratory

End-to-end application performance over wireless networks can be difficult to predict, given the number of interactions that may occur between layers in the protocol stack. Emulation is an approach that utilizes a cluster of off-the-shelf computing servers to model protocol behavior in real time in order to exhibit and understand these interactions. This seminar will explain how emulation compares to other performance prediction methods, how it can be used by researchers, and what challenges need to be addressed as the emulated network increases in scale or complexity.

1MS, Electrical and Computer Engineering, Georgia Institute of Technology


Robust Multi-user Wireless Communications

Dr. Thomas C. Royster IV1
MIT Lincoln Laboratory

Many of today's wireless communications systems continue to push toward higher data rates to support diverse user applications. Users that must communicate under adverse conditions, however, place a premium on robustness and security. This seminar discusses physical layer and medium-access control layer techniques for designing resiliency into wireless communication systems. Trade-offs among data rate, robustness, delay, and complexity are described. Also discussed are ways in which continuing advances in digital processing enable fresh design approaches for robust systems.

1PhD, Electrical Engineering, Clemson University


Waveform Design for Airborne Networks

Dr. Frederick J. Block1
MIT Lincoln Laboratory

Airborne networks are expected to play a significant role in future military communications. Successful deployment of these ad hoc networks requires overcoming many unique challenges. For example, nodes are often separated by great distances and can be highly mobile. In addition to multiple-access interference from other radios in the network, interference from jammers located over a wide geographic region may be able to reach a receiver because of its high altitude. The seminar gives an overview of channel models for airborne networks and examines the trade-offs that must be made when choosing the modulation, coding, channel access, and routing techniques.

1PhD, Electrical Engineering, Clemson University


Worth a Thousand Bits: Visualization of Communication Network Data

Andrea L. Brennen1
MIT Lincoln Laboratory

Currently, our capacity to generate, collect, and store "Big Data" far exceeds our ability to extract meaningful information from those data. This problem is heightened for multidimensional and time-sensitive datasets, particularly when there is a need to cross-correlate different types of data, such as numerical, categorical, geolocational, and temporal.

Visualizations—visual representations of information—can greatly facilitate the analysis and interpretation of multidimensional "Big Data." However, the majority of current research on data visualization is approached either from the perspective of computer science (i.e., algorithm development) or visual design (i.e., aesthetics, legibility), but not both. In order to develop effective visualizations, these perspectives must be integrated into a multidisciplinary approach; however, collaboration (and even communication) across disciplinary boundaries can be complicated, as researchers from technical and design disciplines often have conflicting working methods, evaluation criteria, and underlying philosophies.

This talk presents the development of "NetSight," a tool for visualizing multidimensional communication network data, as a case study in collaboration between visual design, software engineering, and network science. The talk will explain how visual design can benefit technical research in emerging communication networks, showcase a range of capabilities developed to address Department of Defense–specific needs, and discuss some of the challenges of conducting multidisciplinary research in visualization.  

1MArch, Architectural Design, Massachusetts Institute of Technology


Cyber Security and Information Sciences

Addressing the Challenges of Big Data Through Innovative Technologies

Dr. Vijay Gadepally1 and Dr. Jeremy Kepner2
MIT Lincoln Laboratory

The ability to collect and analyze large amounts of data is increasingly important within the scientific community. The growing gap between the volume, velocity, and variety of data available and users’ ability to handle this deluge calls for innovative tools to address the challenges imposed by what has become known as big data. MIT Lincoln Laboratory is taking a leading role in developing a set of tools to help solve the problems inherent in big data.

Big data’s volume stresses the storage, memory, and compute capacity of a computing system and requires access to a computing cloud. Choosing the right cloud is problem specific. Currently, four multibillion-dollar ecosystems dominate the cloud computing environment: enterprise clouds, big data clouds, SQL database clouds, and supercomputing clouds. Each cloud ecosystem has its own hardware, software, communities, and business markets. The broad nature of big data challenges makes it unlikely that one cloud ecosystem can satisfy all needs, and solutions are likely to require the tools and techniques from more than one cloud ecosystem. The MIT SuperCloud was developed to provide one such solution. To our knowledge, the MIT SuperCloud is the only deployed cloud system that allows all four ecosystems to co-exist without sacrificing performance or functionality.

The velocity of big data stresses the rate at which data can be absorbed and meaningful answers can be produced. Through an initiative led by the National Security Agency (NSA), a Common Big Data Architecture (CBDA) was developed for the U.S. government. The CBDA is based on the Google Big Table NoSQL approach and is now in wide use. Lincoln Laboratory was instrumental in the development of the CBDA and is a leader in adapting the CBDA to a variety of big data challenges. The centerpieces of the CBDA are the NSA-developed Apache Accumulo database (capable of millions of entries per second) and the Lincoln Laboratory–developed Dynamic Distributed Dimensional Data Model (D4M) schema.

Finally, big data variety may present both the largest challenge and the greatest set of opportunities for supercomputing. The promise of big data is the ability to correlate heterogeneous data to generate new insights. The combination of Apache Accumulo and D4M technologies allows vast quantities of highly diverse data (bioinformatics, cyber-relevant data, social media data, etc.) to be automatically ingested into a common schema that enables rapid query and correlation of elements.
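The "common schema" idea above can be sketched in a few lines: each heterogeneous record is exploded into sparse (row, field|value) entries, the style of schema that D4M popularized on top of Accumulo. The plain dictionary below stands in for the database, and the records are invented examples.

```python
from collections import defaultdict

def ingest(records):
    """Explode heterogeneous records into a sparse
    (row_id, 'field|value') -> 1 associative array, in the spirit
    of a D4M-style exploded schema."""
    table = defaultdict(int)
    for row_id, record in records:
        for field, value in record.items():
            table[(row_id, f"{field}|{value}")] = 1
    return table

def query(table, column):
    """Return all row ids that contain the given 'field|value' column."""
    return sorted(r for (r, c) in table if c == column)

records = [
    ("tweet-001", {"lang": "en", "topic": "flu"}),
    ("dns-017",   {"domain": "example.com"}),
    ("tweet-002", {"lang": "es", "topic": "flu"}),
]
tbl = ingest(records)
```

Because every data type lands in the same sparse structure, a single query interface covers tweets, network records, and anything else ingested, which is what enables the rapid cross-correlation the paragraph describes.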

1PhD, Electrical and Computer Engineering, The Ohio State University
2PhD, Physics, Princeton University


Content-Centric Networking for Mobile Devices

Praveen K. Sharma1
MIT Lincoln Laboratory

Many applications used in situations such as disaster responses, combat missions, and emergency rescue operations require mobile devices to communicate in environments where the cellular infrastructure is damaged, overwhelmed, or absent. These environments manifest characteristics—such as delays and disruptions—that traditional networking approaches do not address.

To enable applications to communicate and disseminate information in these disruption-prone environments, Lincoln Laboratory designed an architecture that enables delay-tolerant, peer-to-peer, multi-hop communication and that does not depend on prior knowledge of the identity or location of the host network or device. The architecture leverages mobile ad hoc networks (MANETs) to provide peer-to-peer and multi-hop communications. It also leverages an emerging technique, content-centric networking (CCN), to enable mobile devices to share information despite delays or disruptions and to use named content when the IP addresses or phone numbers of recipients are not known.

As a proof of concept, researchers at MIT Lincoln Laboratory prototyped an architecture, on an Android smartphone, that comprised a CCN overlay on MANETs. The prototype used WiFi as the illustrative communication protocol and Optimized Link State Routing and modified Haggle as the illustrative MANET and CCN protocols, respectively. The performance of the algorithm was evaluated experimentally. Preliminary results indicate that the Laboratory's approach increases the rate of message delivery over multiple hops while preserving message transparency, albeit at a cost of additional overhead control messages.

1MS, Computer Science, Iowa State University


Cross-Language Illness Tracking via Tweets

Sharon Tam1
MIT Lincoln Laboratory

Social media has become increasingly popular for various types of content sharing, including users' thoughts, ideas, and details of their lives. With more than 200 million active users around the world generating more than 400 million tweets daily, Twitter has a wealth of information that can provide insights into different population characteristics. One such characteristic is public health. It has been shown that Twitter messages mentioning flu-related keywords correlate with influenza rates in the United States [1]; however, work reported in the literature has focused on the analysis of English tweets, ignoring those in foreign languages. Lincoln Laboratory is using cross-language information retrieval to investigate whether non-English tweets can also be used to track the spread of illnesses on a global scale. This seminar presents the methods we have used so far to further this work. We created a statistical model via query seeding to learn flu symptom-related terms and applied our model to detect flu-related tweets. We analyzed 14 million tweets collected from January to April 2012 and compared our results to the Centers for Disease Control and Prevention's (CDC) data for the same period. We found that our analysis correlated with flu prevalence and that our results are a leading indicator of the CDC's influenza surveillance reports.
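A leading-indicator relationship of the kind described above can be checked with a simple lagged-correlation computation: correlate tweet counts against surveillance rates shifted later in time and report the lead with the highest Pearson correlation. The sketch below uses made-up weekly counts, not the study's data.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def best_lead(tweet_counts, surveillance_rates, max_lead=3):
    """Return the lead (in reporting periods) at which tweet counts
    correlate best with later official surveillance rates."""
    scores = {}
    for lead in range(max_lead + 1):
        a = tweet_counts[:len(tweet_counts) - lead] if lead else tweet_counts
        b = surveillance_rates[lead:]
        scores[lead] = pearson(a, b)
    return max(scores, key=scores.get)
```

A best lead greater than zero is the signature of a leading indicator: the social-media signal peaks before the official surveillance series does.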

[1] Culotta, A., “Detecting Influenza Epidemics by Analyzing Twitter Messages,” arXiv:1007.4748v1 [cs.IR], 2010.

1MEng, Electrical Engineering and Computer Science, Massachusetts Institute of Technology


Cyber Security Metrics

Dr. James F. Riordan1
MIT Lincoln Laboratory

Recent cyber attacks on government, commercial, and institutional computer networks have highlighted the need for increased cyber security measures. In order to better quantify, understand, and, therefore, more effectively combat this ever-growing threat, the U.S. Government is shifting its security strategy for its computer networks and systems from one of yearly compliance checks to one of continuous monitoring and assessment. While continuous monitoring refers to the ability to maintain constant awareness of the configuration and status of the computer networks and systems, assessment refers to the ability to accurately appraise the security posture of the systems and to estimate the risk associated with them. Clearly, the effectiveness of this strategy hinges upon the ability to accurately assess cyber risk.

This seminar will present a methodology for producing useful cyber security metrics that are derived from realistic and well-defined mathematical attacker models. These metrics can be continuously evaluated from operational security data and, thus, support the government's new security strategy of continuous monitoring. The speaker will show how this methodology has been used to develop several important security metrics that are based upon the SANS Institute's list of 20 critical security controls. Live demonstrations will illustrate how these security metrics are used to assess risk in an operational context.

1PhD, Computational Mathematics, University of Minnesota


Developing and Evaluating Link-Prediction Algorithms for
Speaker Content Graphs

Kara Greenfield1 and Dr. William M. Campbell2
MIT Lincoln Laboratory

Graph theory can be a very powerful tool for a variety of different problems, but it is not always clear how the edges of the graph should be defined. Link prediction is the process of determining which pairs of nodes should be connected by an edge. This seminar describes the process of developing different link-prediction algorithms. We first discuss how to choose intermediary metrics that correspond well to the application of interest in order to obtain measures of performance before it is practical to obtain a measure of effectiveness for that application. We also describe how MIT Lincoln Laboratory’s VizLinc audio-visual tool can be used in visual analytics to gain better insight into the algorithms than can be provided by numeric metrics alone. Throughout the talk, we will use speaker recognition as the domain of interest, generating speaker content graphs that efficiently model the underlying manifold of the speaker space and employing those graphs to perform tasks such as query by example and speaker clustering.
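As a concrete (and deliberately tiny) example of link prediction, the classic common-neighbors score ranks unlinked node pairs by how many neighbors they share; high-scoring pairs are the predicted edges. The speaker graph below is fabricated for illustration and is not the seminar's data or its specific algorithm.

```python
def common_neighbors_scores(adj):
    """Score each unlinked node pair by its number of common neighbors.
    `adj` maps each node to the set of its neighbors."""
    nodes = sorted(adj)
    scores = {}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v not in adj[u]:  # only score pairs not already linked
                scores[(u, v)] = len(adj[u] & adj[v])
    return scores

adj = {
    "spk_a": {"spk_b", "spk_c"},
    "spk_b": {"spk_a", "spk_c", "spk_d"},
    "spk_c": {"spk_a", "spk_b", "spk_d"},
    "spk_d": {"spk_b", "spk_c"},
}
scores = common_neighbors_scores(adj)
```

In the seminar's setting, the nodes would be speakers or speech cuts and the existing edges would come from speaker-similarity scores; common neighbors is just one of many link-prediction scores whose intermediary metrics one might evaluate.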

1MS, Industrial Mathematics, Worcester Polytechnic Institute
2PhD, Applied Mathematics, Cornell University


Efficient, Privacy-Preserving Data Sharing

Dr. Mayank H. Varia1
MIT Lincoln Laboratory

Database management systems permit fast, expressive searches on large volumes of data. However, modern databases expose a great deal of information to the server, such as the contents of the database and all of the clients' queries. In many scenarios, a client may wish to limit the information revealed to the server; examples include a cloud computing setting in which the server and client belong to two different organizations, and a high-value target setting in which the client is concerned about potential compromise and wants to limit the scope of damage in case of attack.

Modern cryptography provides several different types of tools that offer the promise of searching on encrypted data. Thus far, however, most of the tools have not gained widespread use because they either operate too slowly or lack the query expressivity of a desired language like SQL. Recently, several research groups have participated in an Intelligence Advanced Research Projects Activity (IARPA)–funded research program called "Security and Privacy Assurance Research," which aims to overcome both of these obstacles. The researchers have built secure database management systems whose query functionality covers a large subset of SQL and whose performance is within a 10× factor of MySQL at terabyte scales. In this talk, I will illustrate the cryptographic advances that make these technologies possible. Additionally, I will describe the rigorous software engineering and formal techniques employed to verify that the researchers' software met all of the security, functionality, and performance requirements.

1PhD, Mathematics, Massachusetts Institute of Technology



EMBER: A Global Perspective on Extreme Malicious Behavior

Tamara H. Yu1, Dr. Richard P. Lippmann2, and Dr. James F. Riordan3
MIT Lincoln Laboratory

Geographical displays are commonly used for visualizing widespread malicious behavior of Internet hosts. Placing dots on a world map or coloring regions by the magnitude of activity often results in cluttered maps that invariably emphasize population-dense metropolitan areas in developed countries where Internet connectivity is highest. To uncover atypical regions, it is necessary to normalize activity by the local computer population. This seminar presents EMBER (Extreme Malicious Behavior viewER), an analysis and display of malicious activity at the city level. EMBER uses a metric called the standardized incidence rate (SIR): the number of hosts exhibiting malicious behavior per 100,000 available hosts. This metric relies on available data that (1) map IP addresses to geographic locations, (2) provide current city populations, and (3) provide computer usage penetration rates. An analysis of several months of suspicious source IPs from DShield identified cities with extremely high and low malicious activity rates on a day-by-day basis. In general, cities in a few Eastern European countries have the highest SIRs, whereas cities in Japan and South Korea have the lowest. Many of these results are consistent with news reports describing local cyber security policies. The distribution of SIRs for cities with comparable population levels has a long tail similar to a power law. This distribution suggests that malware preferentially spreads to regions that already exhibit high levels of malicious activity, a pattern consistent with past analyses of many malware executables.
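The SIR metric itself is simple once the supporting data are assembled. A sketch, assuming (as the data sources above suggest) that the available-host count for a city is estimated as population times computer penetration rate:

```python
def standardized_incidence_rate(malicious_hosts, population, penetration):
    """SIR: malicious hosts per 100,000 available hosts.

    The available-host count is approximated here as city population
    times the computer usage penetration rate (a fraction in [0, 1]).
    """
    available_hosts = population * penetration
    return 100_000 * malicious_hosts / available_hosts
```

For example, 50 suspicious source IPs observed in a city of one million people with 50% computer penetration yields an SIR of 10.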

1MEng, Computer Science, Massachusetts Institute of Technology
2PhD, Electrical Engineering, Massachusetts Institute of Technology
3PhD, Mathematics, University of Minnesota



Evaluating Cyber Moving Target Techniques

Dr. Hamed Okhravi1
MIT Lincoln Laboratory

The concept of cyber moving target (MT) defense has been identified as one of the game-changing themes to rebalance the landscape of cyber security. MT techniques make cyber systems “a moving target,” that is, less static, less homogeneous, and less deterministic in order to create uncertainty for attackers.

Although many MT techniques have been proposed in the literature, little work has been done to evaluate their effectiveness, benefits, and weaknesses. This seminar discusses the evaluation of a wide range of MT techniques. First, a qualitative assessment studies the potential benefits, gaps, and weaknesses of each category of MT strategy. This step identifies major gaps in order to guide future research and prototyping efforts. The findings of a case study on code-reuse defenses are presented. For the MT techniques identified as potentially more beneficial in the qualitative assessment, a deeper quantitative assessment is performed by examining real exploits. Next, the seminar discusses an operational assessment of an MT technique inside a larger system and illustrates how important parameters of such techniques can be monitored for improved effectiveness. Finally, possible directions for future work in this domain are outlined.
 
1PhD, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign



Experiences in Cyber Security Education:
The MIT Lincoln Laboratory Capture-the-Flag Exercise

Joseph M. Werther1, Michael A. Zhivich2, and Timothy R. Leek3
MIT Lincoln Laboratory

Dr. Nickolai Zeldovich4
MIT Computer Science and Artificial Intelligence Laboratory

Many popular and well-established cyber security capture-the-flag (CTF) exercises are held each year in a variety of settings, including universities and semiprofessional security conferences. The CTF format also varies greatly, ranging from linear puzzle-like challenges to team-based offensive and defensive free-for-all hacking competitions. While these events are exciting and important as contests of skill, they offer limited educational opportunities. In particular, since participation requires considerable a priori domain knowledge and practical computer security expertise, the majority of typical computer science students are excluded from taking part in these events. The goal in designing and running the MIT Lincoln Laboratory CTF was to make the experience accessible to a wider community by providing an environment that would not only test and challenge the computer security skills of the participants but also educate and prepare those without extensive prior expertise. This seminar presents the Laboratory's self-consciously educational and open CTF, including discussions of our teaching methods, game design, scoring measures, logged data, and lessons learned.

1MEng, Computer Systems Engineering, Rensselaer Polytechnic Institute
2MEng, Electrical Engineering and Computer Science, Massachusetts Institute of Technology
3MS, Computer Science, University of California–San Diego
4PhD, Computer Science, Stanford University



Multicore Programming in pMatlab® Using Distributed Arrays

Dr. Jeremy Kepner1
MIT Lincoln Laboratory

MATLAB is one of the most commonly used languages for scientific computing, with approximately one million users worldwide. Many of the programs written in MATLAB can benefit from the increased performance offered by multicore processors and parallel computing clusters. The MIT Lincoln Laboratory pMatlab library (http://www.ll.mit.edu/pMatlab) allows high-performance parallel programs to be written quickly by using the distributed arrays programming paradigm. This talk provides an introduction to distributed arrays programming and will describe the best programming practices for using distributed arrays to produce programs that perform well on multicore processors and parallel computing clusters. These practices include understanding the concepts of parallel concurrency versus parallel data locality, using Amdahl's Law, and employing a well-defined design-code-debug-test process for parallel codes.
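Amdahl's Law, one of the practices mentioned above, bounds the speedup attainable when only a fraction of a program parallelizes. An illustrative one-liner (generic, not part of the pMatlab API):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's Law: overall speedup when a fraction of the work
    parallelizes perfectly and the remainder stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)
```

Even a program that is 90% parallel is capped at a 10x speedup no matter how many processors are added, which is why the practices above stress minimizing the serial portion of a parallel code.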

1PhD, Physics, Princeton University



Natural Language Learning Research and Development

Jennifer A. Williams1, Wade Shen2, Gordon Vidaver3, Jennifer T. Melot4,
Elizabeth E. Salesky5, and Dr. Douglas A. Jones6
MIT Lincoln Laboratory

Let's face it, learning a second language is a grand undertaking. One of the most important aspects of learning a new language is the opportunity to try it out. MIT Lincoln Laboratory is developing state-of-the-art computer-assisted language-learning (CALL) tools and resources. With these resources, students practice pronunciation and vocabulary using automatic speech recognition that scores how well they do compared to native speakers. We are also helping students and teachers find level-appropriate reading materials by automatically assigning reading levels to text using the Interagency Language Roundtable (ILR) scoring metric. Some of the research questions that we are investigating include

  1. How often should students study and practice their vocabulary to maximize retention in long-term memory?
  2. How do we help native English speakers master sound systems that are very different from their own?
  3. How do we determine language proficiency level automatically and let students know where they need to improve?
  4. What kinds of language learning games help students learn?

We will present Lincoln Laboratory's language-learning research and development, as well as CALL and reading-level tools, and will address the above questions.

1MS, Computational Linguistics, Georgetown University
2MS, Computer Science, University of Maryland College Park
3BA, Computer Science, Harvard University; MCert. Japanese, Keio University Tokyo
4SB, Computer Science and Linguistics, Massachusetts Institute of Technology
5BA, Mathematics and Linguistics, Dartmouth College
6PhD, Linguistics, Massachusetts Institute of Technology



New Approaches to Automatic Speaker Recognition and
Forensic Considerations

Dr. Joseph P. Campbell1 and Dr. Pedro A. Torres-Carrasquillo2
MIT Lincoln Laboratory

Recent gains in the performance of automatic speaker recognition systems have been obtained by new methods in subspace modeling. This talk presents the development of speaker recognition systems ranging from traditional approaches, such as Gaussian mixture modeling (GMM), to novel state-of-the-art systems employing subspace techniques, such as factor analysis and iVector methods. This seminar also covers research on the means to exploit high-level information. For example, idiosyncratic word usage and speaker-dependent pronunciation are high-level features for recognizing speakers. These high-level features can be combined with conventional features for increased accuracy. The seminar presents new methods to increase robustness and improve calibration of speaker recognition systems by addressing common factors in the forensic domain that degrade recognition performance. We describe MIT Lincoln Laboratory's VOCALINC system and its application to automated voice comparison of speech samples for law enforcement investigation and forensic applications. The talk concludes with appropriate uses of this technology, especially cautions regarding forensic-style applications, and a look at this technology's future directions.
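One standard backend for comparing iVectors in the literature (used here for illustration; not necessarily the scoring used in VOCALINC) is simple cosine scoring, where two utterances' fixed-length embeddings are compared by the angle between them:

```python
import math

def cosine_score(ivec_a, ivec_b):
    """Cosine similarity between two iVectors (speaker embeddings).

    Scores near 1 suggest the same speaker; scores near 0 suggest
    different speakers. A decision threshold is calibrated on held-out data.
    """
    dot = sum(a * b for a, b in zip(ivec_a, ivec_b))
    norm_a = math.sqrt(sum(a * a for a in ivec_a))
    norm_b = math.sqrt(sum(b * b for b in ivec_b))
    return dot / (norm_a * norm_b)
```

In practice, the embeddings are first length-normalized and channel-compensated (e.g., with within-class covariance normalization) before scoring.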

1PhD, Electrical Engineering, Oklahoma State University
2PhD, Electrical Engineering, Michigan State University



Securing Data at Rest with Optical Physically Unclonable Functions

Dr. Merrielle Spain1
MIT Lincoln Laboratory

In many situations, computer systems must be able to protect data from a determined adversary with physical access to the machine—either by making the secrets difficult to obtain or by destroying the secrets before the adversary can get to where they are stored. One example of such a system is the IBM 4758 cryptographic coprocessor, often used for applications such as automated teller machines. Were an ATM stolen and the cryptographic keys within its IBM 4758 compromised, the adversary could conduct signed transactions with the bank as if it were the ATM. To thwart these attacks, the IBM 4758 monitors its perimeter with powered sensor suites to detect intrusion and destroy the key. Although effective, approaches like the 4758's require constant power to detect adversary actions and react before the adversary can succeed, even when the ATM is powered off. Lincoln Laboratory is developing a solution that can protect data at rest (when the device is powered off) without requiring constant power. Laboratory researchers are leveraging expertise from machine learning, cryptography, optics, and polymer encapsulants to develop and implement a coating-based, physically unclonable function (PUF) capable of protecting a small circuit board. The PUF embodies a cryptographic key used to encrypt the data. Physically disturbing the coating irreversibly destroys the key and consequently denies an adversary access to the data. This approach could be used to retrofit existing equipment, producing smaller, lighter, highly compatible, unpowered systems capable of protecting the data they contain.

1PhD, Computational and Neural Systems, California Institute of Technology



Signal Processing for the Measurement of Characteristic Voice Quality

Dr. Nicolas Malyska1 and Dr. Thomas F. Quatieri2
MIT Lincoln Laboratory

The quality of a speaker's voice communicates to a listener information about many characteristics, including the speaker's identity, language, dialect, emotional state, and physical condition. These characteristic elements of a voice arise because of variations in the anatomical configuration of a speaker's lungs, voice box, throat, tongue, mouth, and nasal airways, as well as the ways in which the speaker moves these structures. The voice box, or larynx, is of particular interest in voice quality, as it is responsible for generating variations in the excitation source signal for speech.

In this seminar, we will discuss mechanisms by which voice-source variations are generated, appear in the acoustic signal, and are perceived by humans. Our focus will be on using signal processing to capture acoustic phenomena resulting from the voice source. The presentation will explore several applications that build upon these measurement techniques, including (1) turbulence-noise component estimation during aperiodic phonation, (2) automatic labeling of regions of irregular phonation, and (3) the analysis of pitch dynamics.

1PhD, Health Sciences and Technology, Massachusetts Institute of Technology
2ScD, Electrical Engineering, Massachusetts Institute of Technology



The Probabilistic Provenance Graph

Jeffrey C. Gottschalk1
MIT Lincoln Laboratory

In making decisions that could affect the lives of millions or even billions of people around the world, government leaders and their supporting analysts require that the information they use be trustworthy. A given datum can be no more trustworthy than the data on which it relies; thus, it is crucial to identify dependencies between and among data. This dependency identification process is referred to as data provenance discovery. This process outputs a provenance model in the form of a directed acyclic graph referred to as a provenance graph. In provenance graphs, nodes correspond to entities, and edges correspond to provenance relationships between entities.

Previous provenance models have assumed that there is complete certainty in the provenance relationships. In a world fraught with so much uncertainty, how can such an assumption hold? How can decision-makers be certain they are making the right choice when they are unaware of the uncertainties involved in the data they rely on? Ultimately, what is needed is a provenance system that can reason about uncertainty. However, to achieve this aim, researchers must reformulate the traditional provenance model found in all modern provenance systems. This seminar outlines the Laboratory's proposal for an alternative model—the probabilistic provenance graph (PPG). The proposal includes the PPG's motivation, specification, and real-world manifestation.
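To make the idea of reasoning about uncertain provenance concrete, here is a hypothetical toy model (not the PPG specification, whose details are given in the seminar): each provenance edge carries an independent probability that the dependency is genuine, and a node's confidence is the product of its own base confidence and the confidence flowing in from its sources.

```python
def node_confidence(graph, base, node, memo=None):
    """Confidence that a node's data is trustworthy in a toy
    probabilistic provenance DAG.

    graph maps a node to a list of (parent, edge_probability) pairs;
    base maps a node to its intrinsic confidence (default 1.0).
    Assumes independence of edges — a simplifying toy assumption.
    """
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    conf = base.get(node, 1.0)
    for parent, p_edge in graph.get(node, []):
        conf *= p_edge * node_confidence(graph, base, parent, memo)
    memo[node] = conf
    return conf
```

Even this toy model shows why a deterministic provenance graph overstates trust: a report built on a 50%-reliable sensor through a 90%-certain dependency is far less trustworthy than its face value suggests.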

1MS, Astronomy, University of Massachusetts–Amherst



Systems and Architectures

Choices, Choices, Choices (Decisions, Decisions, Decisions)

Dr. Robert T. Shin1
MIT Lincoln Laboratory

As you plan a career after graduating from a university or college, you are faced with many choices and decisions, not just now but throughout many of your most productive years. While there generally is not a right or wrong decision, it is helpful to think about and make these decisions in a more systematic way. This seminar looks at perspectives on how one might think about making a choice and making an impact, especially as an architect of future advanced systems. Also presented are my lessons learned along the way, in architectural thinking and in management, in the hope that you might find them useful as you advance in your careers. Finally, a short overview of MIT Lincoln Laboratory, a federally funded research and development center, is presented as a case study on how one can leverage such an organization to make an impact.

1PhD, Electrical Engineering, Massachusetts Institute of Technology



Homeland Protection

Disease Modeling to Assess Outbreak Detection and Response

Dr. Diane C. Jamrog1 
MIT Lincoln Laboratory

Bioterrorism is a serious threat that has become widely recognized since the anthrax mailings of 2001. In response, one national research activity has been the development of biosensors and networks thereof. A driving factor behind biosensor development is the potential to provide early detection of a biological attack, thereby enabling timely treatment. This presentation introduces a disease progression and treatment model to quantify the potential benefit of early detection. To date, the model has been used to assess responses to inhalation anthrax and smallpox outbreaks.

1PhD, Computation and Applied Mathematics, Rice University



Radar and Signal Processing

Adaptive Array Detection

Dr. Christ D. Richmond1
MIT Lincoln Laboratory

Adaptive detection theory began with the development of radar in the early 1940s, mostly in classified circles. Surveillance radar systems strove for automatic detection of target echoes in additive clutter and noise of unknown or time-varying power. The goal was to optimize signal detectability while constraining the number of false alarms generated. Signal and noise integration (coherent and incoherent) resulting in a sufficient statistic to be compared to a specified threshold was the accepted approach to improving signal detectability. Noise increases and nonstationarity caused by jamming interference, and variations in clutter power, however, prompted the use of cell-averaging constant false-alarm rate (CA-CFAR) processing, which essentially uses a local estimate of the noise power to normalize the detection statistic. As digital technology evolved and hardware design improved, the use of multisensor arrays that exploit the spatial dimension to coherently cancel jamming interference and clutter became an attractive option. A class of adaptive array detection algorithms emerged from this multivariate framework, representing multidimensional extensions of ideas closely related to classical CA-CFAR processing. This class of algorithms includes the adaptive matched filter (AMF), Kelly/Khatri's generalized likelihood ratio test (GLRT), Scharf's adaptive coherence estimator (ACE), and the 2D adaptive sidelobe blanker (ASB). This talk will review classic CA-CFAR processing as a backdrop to an extended discussion of the analysis, performance, and inherent properties of the more contemporary adaptive array detection approaches.
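The classic CA-CFAR detector is compact enough to sketch. The textbook version below (a simplified illustration, not any fielded implementation) assumes a square-law detector with exponentially distributed noise power, for which the threshold multiplier follows in closed form from the desired false-alarm probability:

```python
import numpy as np

def ca_cfar(x, n_train=16, n_guard=2, pfa=1e-3):
    """Cell-averaging CFAR over a vector of power samples x.

    Each cell is compared to alpha times the mean of n_train
    surrounding training cells (n_guard guard cells on each side are
    excluded so target energy does not leak into the noise estimate).
    For exponential noise, Pfa = (1 + alpha/N)**(-N) gives alpha.
    """
    n = len(x)
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    half = n_train // 2
    detections = np.zeros(n, dtype=bool)
    for i in range(half + n_guard, n - half - n_guard):
        lead = x[i - n_guard - half : i - n_guard]
        lag = x[i + n_guard + 1 : i + n_guard + 1 + half]
        noise = np.mean(np.concatenate([lead, lag]))
        detections[i] = x[i] > alpha * noise
    return detections
```

Because the threshold scales with the locally estimated noise level, the false-alarm rate stays fixed even as clutter power varies, which is exactly the property the adaptive array detectors above generalize to the spatial domain.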

1PhD, Electrical Engineering, Massachusetts Institute of Technology



Adaptive Array Estimation

Dr. Christ D. Richmond1
MIT Lincoln Laboratory

Parameter estimation is a necessary step in most surveillance systems and typically follows detection processing. Estimation theory provides parameter bounds specifying the best achievable performance and suggests maximum-likelihood (ML) estimation as a viable strategy for algorithm development. Adaptive sensor arrays introduce the added complexity of bounding and assessing parameter estimation performance (i) in the presence of limiting interference whose statistics must be inferred from measured data and (ii) under uncertainty in the array manifold for the signal search space. This talk focuses on assessing the mean-squared-error (MSE) performance at low and high signal-to-noise ratio (SNR) of nonlinear ML estimation that (i) uses the sample covariance matrix as an estimate of the true noise covariance and (ii) has imperfect knowledge of the array manifold for the signal search space. The method of interval errors (MIE) is used to predict MSE performance and is shown to be remarkably accurate well below estimation threshold. SNR loss in estimation performance due to noise covariance estimation is quantified and is shown to be quite different from analogous losses obtained for detection. Lastly, a discussion of the asymptotic efficiency of ML estimation is also provided in the general context of misspecified models, the most general form of model mismatch.

1PhD, Electrical Engineering, Massachusetts Institute of Technology



Bioinspired Resource Management for Multiple-Sensor
Target Tracking Systems

Dr. Dana Sinno1 and Dr. Hendrick C. Lambert2
MIT Lincoln Laboratory

We present an algorithm, inspired by self-organization and stigmergy observed in biological swarms, for managing multiple sensors tracking large numbers of targets. We have devised a decentralized architecture wherein autonomous sensors manage their own data collection resources and task themselves. Sensors cannot communicate with each other directly; however, a global track file, which is continuously broadcast, allows the sensors to infer their contributions to the global estimation of target states. Sensors can transmit their data (either as raw measurements or some compressed format) only to a central processor where their data are combined to update the global track file. We outline information-theoretic rules for the general multiple-sensor Bayesian target tracking problem and provide specific formulas for problems dominated by additive white Gaussian noise. Using Cramér-Rao lower bounds as surrogates for error covariances and numerical scenarios involving ballistic targets, we illustrate that the bioinspired algorithm is highly scalable and performs very well for large numbers of targets.

1PhD, Electrical Engineering, Arizona State University
2PhD, Applied Physics, University of California, San Diego



Parameter Bounds Under Misspecified Models

Dr. Christ D. Richmond1
MIT Lincoln Laboratory

Parameter bounds are traditionally derived assuming perfect knowledge of data distributions. When the assumed probability distribution for the measured data differs from the true distribution, the model is said to be misspecified; mismatch at some level is inevitable in practice. Thus, several authors have studied the impact of model misspecification on parameter estimation. Most notably, Peter Huber explored in detail the performance of maximum-likelihood (ML) estimation under a very general form of misspecification; he showed consistency and asymptotic normality, and derived the ML estimate's asymptotic covariance, often referred to as the celebrated "sandwich covariance."

The goal of this talk is to consider the class of non-Bayesian parameter bounds emerging from the covariance inequality under the assumption of model misspecification. Casting the bound problem as one of constrained minimization is likewise considered. Primary attention is given to the Cramér-Rao bound (CRB). It is shown that Huber's sandwich covariance is the misspecified CRB and provides the greatest (tightest) lower bound under ML constraints. Consideration of the standard circular complex Gaussian distribution ubiquitous in signal processing yields a generalization of the Slepian-Bangs formula under misspecification. This formula, of course, reduces to the usual one when the assumed distribution is in fact the correct one. The framework is outlined for consideration of the Barankin/Hammersley-Chapman-Robbins, Bhattacharyya, and Bobrovsky–Mayer-Wolf–Zakai bounds under misspecification.
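For reference, the sandwich covariance takes the following standard form, where $p$ is the true density, $f_\theta$ the assumed model, and $\theta_\ast$ the pseudo-true parameter (the limit of the ML estimate under misspecification); the notation here is generic, not necessarily that used in the talk:

```latex
C(\theta_\ast) = A(\theta_\ast)^{-1}\, B(\theta_\ast)\, A(\theta_\ast)^{-1},
\qquad
A(\theta) = \mathrm{E}_p\!\left[\nabla_\theta^{2} \ln f_\theta(x)\right],
\qquad
B(\theta) = \mathrm{E}_p\!\left[\nabla_\theta \ln f_\theta(x)\,
             \nabla_\theta \ln f_\theta(x)^{T}\right]
```

When the model is correctly specified, $-A = B$ equals the Fisher information and the sandwich collapses to the ordinary inverse-Fisher CRB.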

1PhD, Electrical Engineering, Massachusetts Institute of Technology



Polynomial Rooting Techniques for Adaptive Array Direction Finding

Dr. Gary F. Hatke1 
MIT Lincoln Laboratory

Array processing has many applications in modern communications, radar, and sonar systems. Array processing is used when a signal in space, be it electromagnetic or acoustic, has some spatial coherence properties that can be exploited (such as far-field plane wave properties). The array can be used to sense the orientation of the plane wave and thus deduce the angular direction to the source. Adaptive array processing is used when there exists an environment of many signals from unknown directions as well as noise with unknown spatial distribution. Under these circumstances, classical Fourier analysis of the spatial correlations from an array data snapshot (the data seen at one instance in time) is insufficient to localize the signal sources.

In estimating the signal directions, most adaptive algorithms require computing an optimization metric over all possible source directions and searching for a maximum. When the array is multidimensional (e.g., planar), this search can become computationally expensive, as the source direction parameters are now also multidimensional. In the special case of one-dimensional (line) arrays, this search procedure can be replaced by solving a polynomial equation, where the roots of the polynomial correspond to estimates of the signal directions. This technique had not been extended to multidimensional arrays because these arrays naturally generated a polynomial in multiple variables, which does not have discrete roots.

This seminar introduces a method for generalizing the rooting technique to multidimensional arrays by generating multiple optimization polynomials corresponding to the source estimation problem and finding a set of simultaneous solutions to these equations, which contain source location information. It is shown that the variance of this new class of estimators is equal to that of the search techniques they supplant. In addition, for sources spaced more closely than a Rayleigh beamwidth, the resolution properties of the new polynomial algorithms are shown to be better than those of the search technique algorithms.
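The one-dimensional rooting idea can be sketched with the published root-MUSIC algorithm (used here for illustration; the seminar's multidimensional generalization goes beyond it). For a uniform line array, the angular search collapses to finding the roots, nearest the unit circle, of a polynomial built from the noise subspace:

```python
import numpy as np

def root_music(R, n_sources, d=0.5):
    """Root-MUSIC direction finding for a uniform line array.

    R: n x n array covariance matrix; d: element spacing in wavelengths.
    Returns estimated directions of arrival in degrees.
    """
    n = R.shape[0]
    _, V = np.linalg.eigh(R)             # eigenvalues in ascending order
    En = V[:, : n - n_sources]           # noise-subspace eigenvectors
    C = En @ En.conj().T
    # D(z) = sum_k trace(C, k) z^k for k = -(n-1)..(n-1);
    # np.roots wants coefficients from the highest power down.
    coeffs = [np.trace(C, offset=k) for k in range(n - 1, -n, -1)]
    roots = np.roots(coeffs)
    # Signal roots lie on the unit circle; walk outward from the circle
    # and keep one root per distinct angle (roots come in
    # conjugate-reciprocal pairs sharing the same angle).
    roots = roots[np.argsort(np.abs(np.abs(roots) - 1.0))]
    sines = []
    for z in roots:
        s = np.angle(z) / (2 * np.pi * d)
        if all(abs(s - prev) > 1e-3 for prev in sines):
            sines.append(s)
        if len(sines) == n_sources:
            break
    return np.degrees(np.arcsin(np.array(sines)))
```

The polynomial's degree grows only linearly with the number of elements, which is why rooting is so much cheaper than a fine angular grid search.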

1PhD, Electrical Engineering, Princeton University



Radar Signal Distortion and Compensation with Transionospheric Propagation Paths

Dr. Scott D. Coutts1
MIT Lincoln Laboratory

Electromagnetic signals propagating through the atmosphere and ionosphere are distorted and refracted by these propagation media. The effects are particularly pronounced for lower-frequency signals propagating through the ionosphere. In the extreme case, frequencies in the high-frequency (HF) band can be refracted by the ionosphere so severely that they are reflected back toward the ground. This effect is exploited by over-the-horizon radars and HF communication systems to achieve very-long-range, over-the-horizon performance. For the general radar case, the measurements of range, Doppler shift, and elevation and azimuth angles are all corrupted from their free-space values with time-varying biases.

To provide accurate radar parameter estimates in the presence of these errors, a three-dimensional method to compensate for the ionospheric refraction has been developed at Lincoln Laboratory. In this seminar, two examples of the method’s use are provided: (1) the radar returns from a known object are used to specify an unknown ionospheric electron density, and (2) an unknown satellite state vector is estimated with and without the ionospheric compensation so that the accuracy improvement can be quantified. Magneto-ionic ray tracing is used to generate the three-dimensional propagation model and propagation correction tables. A maximum-likelihood satellite-ephemeris estimator is designed and demonstrated using corrupted radar data. The technique is demonstrated using “real data” examples with very encouraging results and is applicable at radar frequencies from the high-frequency (HF) through ultrahigh-frequency (UHF) bands.

1PhD, Electrical Engineering, Northeastern University



Synthetic Aperture Radar

Dr. Gerald R. Benitz1 
MIT Lincoln Laboratory

MIT Lincoln Laboratory is investigating the application of phased-array technology to improve the state of the art in radar surveillance. Synthetic aperture radar (SAR) imaging is one mode that can benefit from a multiple-phase-center antenna. The potential benefits are protection against interference, improved area rate and resolution, and multiple simultaneous modes of operation.

This seminar begins with an overview of SAR, giving the basics of resolution, collection modes, and image formation. Several imaging examples are provided. Results from the Lincoln Multimission ISR Testbed (LiMIT) X-band airborne radar are presented. LiMIT employs an eight-channel phased-array antenna and records 180 MHz of bandwidth from each channel simultaneously. One result employs adaptive processing to reject wideband interference, demonstrating recovery of a corrupted SAR image. Another result employs multiple simultaneous beams to increase the area of the image beyond the conventional limitation due to the pulse repetition frequency. Areas that are Doppler ambiguous can be disambiguated by using the phased-array antenna.

1PhD, Electrical Engineering, University of Wisconsin–Madison



Solid State Devices, Materials, and Processes

Chemical Aerosol Characterization by Single-Particle
Infrared Elastic Scattering

Dr. William D. Herzog1 and Dr. Brian G. Saar2
MIT Lincoln Laboratory

Detection and identification of aerosol particles based on their chemical composition is a longstanding problem in atmospheric science. MIT Lincoln Laboratory is developing a real-time sensor that provides infrared chemical fingerprints for individual aerosol particles sampled from the ambient environment. This sensor will be used to characterize air quality and warn of particulate respiration hazards. The sensor makes use of a novel beam-combined array of quantum-cascade lasers to illuminate individual particles with high power across the long-wave infrared, and collects the scattered light and analyzes its spectrum to deduce the chemical composition of the particle. This talk will provide an overview of the elastic-scattering detection scheme and will describe the sensor hardware development and the signal processing needed to identify chemical composition. The evolution of the program, from initial theory to laboratory measurements to engineering and packaging of fieldable hardware, demonstrates Lincoln Laboratory’s unique project development process, from whiteboard brainstorming all the way to fielding integrated sensor systems.

1PhD, Electrical Engineering, Boston University
2PhD, Chemistry, Harvard University



Dynamic Photoacoustic Spectroscopy for Trace Gas Detection

Dr. Charles M. Wynn1, Dr. Michelle L. Clark2, and Dr. Roderick R. Kunz3
MIT Lincoln Laboratory

Dynamic photoacoustic spectroscopy (DPAS) is a trace-gas sensing technique recently developed at MIT Lincoln Laboratory. It is a novel laser-based means of remotely sensing extremely low concentrations of gases.

The ability to remotely detect trace gases is of great interest for many reasons. It has the potential to enable many important capabilities, including efficient monitoring of environmental pollutants, safe detection of threats from chemical agents or explosives, and monitoring of illegal activities (e.g., drug manufacturing) via effluent detection. In many cases, the relevant vapor concentrations are quite low; thus, a highly sensitive technique is required. Until recently, no technique had demonstrated both high sensitivity and remote operation. DPAS has now demonstrated [1, 2] both the high sensitivity and the standoff capability necessary to significantly impact several important missions.

DPAS is a variant of the well-known technique of photoacoustic spectroscopy (PAS). PAS is a laser-based technique that detects gases by generating acoustic signals via a laser tuned to different absorption features of the gas. What separates DPAS from PAS is that the DPAS laser beam is swept through a gas plume at the speed of sound. The resulting coherent addition of acoustic waves amplifies the acoustic signal. In a manner similar to the shock wave generated by a supersonic aircraft, a wave is produced with significantly enhanced amplitude compared to the very weak conventional photoacoustic signal. In contrast, PAS generally requires a closed resonant chamber for amplification (inherently not a standoff configuration). Using DPAS, we have generated and detected acoustic signals as high as 83 dB (easily audible to the unaided human ear) from trace gases.
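To put the 83 dB figure in physical units, sound pressure level converts back to pressure via the standard 20 μPa reference (a generic acoustics conversion, not specific to the DPAS hardware):

```python
def spl_to_pressure(spl_db, p_ref=20e-6):
    """Convert sound pressure level in dB (re 20 uPa) to RMS pressure in Pa."""
    return p_ref * 10 ** (spl_db / 20)
```

An 83 dB signal corresponds to roughly 0.28 Pa of acoustic pressure, comparable to loud conversational speech at close range.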

[1] C.M. Wynn, S. Palmacci, M.L. Clark, and R.R. Kunz, “Dynamic Photoacoustic Spectroscopy for Trace Gas Detection,” Applied Physics Letters, vol. 101, 2012.
[2] C.M. Wynn, S. Palmacci, M.L. Clark, and R.R. Kunz, “High-Sensitivity Detection of Trace Gases Using Dynamic Photoacoustic Spectroscopy,” Optical Engineering, vol. 53, no. 2, 2014.

1PhD, Physics, Clark University
2PhD, Chemistry, Massachusetts Institute of Technology
3PhD, Analytical Chemistry, University of North Carolina at Chapel Hill



Fully Depleted Silicon-on-Insulator Process Technology for
Subthreshold-Operation Ultra-Low-Power Electronics

Dr. Steven A. Vitale1
MIT Lincoln Laboratory

Ultra-low-power transistors are an enabling technology for many proposed applications, including ubiquitous sensor networks, RFID tags, implanted medical devices, portable biosensors, handheld devices, 3D and parallel processing, and space-based applications [1, 2]. Other applications include energy-harvesting devices that recharge batteries by scavenging power from motion or solar cells. With an operating voltage of 0.3 V and an on-current of less than 1 mA/μm, subthreshold transistors use orders of magnitude less power than transistors operated in strong inversion.

MIT Lincoln Laboratory has designed a subthreshold-optimized fabrication process, from the substrate material through the interconnect metal. Conventional transistors, designed for high performance in above-threshold operation, have comparatively high off-state leakage and overlap capacitance, as well as a poorer subthreshold slope and potentially lower channel mobility. With transistors specifically engineered for subthreshold operation, it is possible to realize a device with minimized switching energy and off-state current without significant impact to the energy-delay product. Fully depleted silicon-on-insulator (FDSOI) ultra-low-power transistors have been fabricated using the Laboratory's subthreshold-optimized process. A near-ideal subthreshold slope of 64 mV/decade has been demonstrated with longer-gate-length (500 nm) transistors, with 4 pA/μm of leakage current and a 71% reduction in overlap capacitance. Multiple circuits have been tested below the 0.3 V program goal, with a 97-stage ring oscillator providing baseline characterization. Compared to a commercially available bulk silicon ring oscillator of similar gate length operating at 0.3 V, the subthreshold-optimized FDSOI device decreases the switching energy from 0.241 fJ/μm to 0.099 fJ/μm and the stage delay from 153 ns to 13 ns [3, 4].
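The figures quoted above can be sanity-checked with a quick calculation (ours, not the talk's): the ideal room-temperature subthreshold slope is (kT/q)·ln(10), so 64 mV/decade sits within about 10% of the thermal limit, and the ring-oscillator numbers imply the improvement factors below.

```python
import math

# Ideal room-temperature subthreshold slope: (kT/q)*ln(10) in mV/decade.
k = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C
T = 300.0             # temperature, K
ss_ideal = (k * T / q) * math.log(10) * 1000.0  # ~59.5 mV/decade

# Improvement factors implied by the quoted ring-oscillator measurements.
energy_gain = 0.241 / 0.099   # fJ/um -> ~2.4x lower switching energy
delay_gain = 153.0 / 13.0     # ns -> ~11.8x faster stage delay

print(round(ss_ideal, 1), round(energy_gain, 1), round(delay_gain, 1))
```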

[1] D. Bol, R. Ambroise, D. Flandre, J.-D. Legat, "Sub-45 nm Fully-Depleted SOI CMOS Subthreshold Logic for Ultra-Low-Power Applications," 2008 IEEE International SOI Conference Proceedings, pp. 57–58, October 2008.
[2] A. Uchiyama, S. Baba, Y. Nagatomo, J. Ida, "Fully Depleted SOI Technology for Ultra-Low-Power Digital and RF Applications," 2006 IEEE International SOI Conference Proceedings, pp. 15–16, October 2006.
[3] M.J. Deen, S. Naseh, O. Marinov, M.H. Kazemeini, "Very Low-Voltage Operation Capability of Complementary Metal-Oxide-Semiconductor Ring Oscillators and Logic Gates," Journal of Vacuum Science and Technology A, vol. 24, pp. 763–769, 2006.
[4] S.A. Vitale, J. Kedzierski, P.W. Wyatt, M. Renzi, and C.L. Keast, "FDSOI Metal Gate Transistors for Ultra-Low-Power Subthreshold Operation," IEEE International SOI Conference, 11–14 October 2010, IEEE, 2010.

1PhD, Chemical Engineering, Massachusetts Institute of Technology



Geiger-Mode Avalanche Photodiode Arrays for Imaging and Sensing

Dr. Brian F. Aull1
MIT Lincoln Laboratory

This seminar discusses the development of arrays of silicon avalanche photodiodes integrated with digital complementary metal-oxide semiconductor (CMOS) circuits to make focal planes with single-photon sensitivity. The avalanche photodiodes are operated in Geiger mode: they are biased above the avalanche breakdown voltage so that the detection of a single photon leads to a discharge that can directly trigger a digital circuit. The CMOS circuits to which the photodiodes are connected can either time stamp or count the resulting detection events. Applications include three-dimensional imaging using laser radar, wavefront sensing for adaptive optics, and optical communications.
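The time-stamping mode maps directly to range in a laser radar via the round-trip relation r = c·t/2. The sketch below (hypothetical numbers, not Laboratory specifications) shows how a photon's timestamp and the timing-bin width translate to range and range resolution.

```python
# Range from a single-photon timestamp in a laser radar: r = c*t/2,
# and the range-resolution step set by the timestamp bin width.
# Example numbers are illustrative, not Laboratory specifications.
C = 299_792_458.0  # speed of light, m/s

def range_from_timestamp(t_round_trip_s):
    """Target range from a photon's round-trip time."""
    return C * t_round_trip_s / 2.0

def range_resolution(timing_bin_s):
    """Smallest resolvable range step for a given timestamp bin width."""
    return C * timing_bin_s / 2.0

# A photon returning 6.67 us after the pulse left corresponds to a target
# ~1 km away; a 500 ps timing bin quantizes range in ~7.5 cm steps.
print(range_from_timestamp(6.67e-6), range_resolution(500e-12))
```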

1PhD, Electrical Engineering, Massachusetts Institute of Technology



Hardware Phenomenological Effects on Co-channel Full-Duplex
MIMO Relay Performance

Dr. Timothy M. Hancock1
MIT Lincoln Laboratory

This presentation will discuss the performance of co-channel full-duplex multiple-input multiple-output (MIMO) nodes in the context of models for realistic hardware characteristics. Here, co-channel full-duplex relay indicates a node that transmits and receives simultaneously in the same frequency band. It is assumed that transmit and receive phase centers are physically distinct, enabling adaptive spatial transmit and receive processing to mitigate self-interference. The use of MIMO indicates a self-interference channel with spatially diverse inputs and outputs, although multiple modes are not explored in this analysis. Rather, the focus will be on rank-1 transmit covariance matrices. In practice, the limiting issue for co-channel full-duplex nodes is the ability to mitigate self-interference. While theoretically a system with infinite dynamic range and exact channel estimation can mitigate the self-interference perfectly, in practice, transmitter and receiver dynamic range, nonlinearities, and noise, as well as channel dynamics, limit the practical performance. This presentation will investigate the self-interference mitigation limitations in the context of eigenvalue spread of spatial transmit and receive covariance matrices caused by realistic hardware models.
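The idealized case of spatial self-interference mitigation can be sketched numerically (a generic null-steering illustration of ours, not the talk's model): with distinct transmit and receive phase centers, a rank-1 transmit beamformer steered into the null space of the self-interference channel cancels the node's own transmission at its receive array, up to the channel-estimation and dynamic-range limits the talk examines.

```python
import numpy as np

# Idealized spatial null-steering against self-interference: choose a
# rank-1 transmit vector in the null space of the (rank-deficient)
# self-interference channel H_low. Dimensions and channel are illustrative.
rng = np.random.default_rng(0)
n_tx, n_rx = 4, 4
H = rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))

U, s, Vh = np.linalg.svd(H)
H_low = (U[:, :3] * s[:3]) @ Vh[:3, :]  # pretend the channel is rank-3
w_null = Vh[3, :].conj()                # direction H_low cannot see
w_strong = Vh[0, :].conj()              # strongest-coupling direction

leak_before = np.linalg.norm(H_low @ w_strong)  # large self-interference
leak_after = np.linalg.norm(H_low @ w_null)     # ~zero (numerical noise)
print(leak_before, leak_after)
```

In practice the residual after such nulling is set by the hardware effects the presentation models: finite dynamic range, nonlinearity, noise, and channel dynamics.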

1PhD, Electrical Engineering, University of Michigan



Integrated Optics in Silicon

Dr. Steven J. Spector1 and Dr. Michael W. Geis2
MIT Lincoln Laboratory

Recent innovations in the field of integrated photonics are driving a revolutionary miniaturization of optical components. Silicon photonics technology uses complementary metal-oxide semiconductor (CMOS) materials and fabrication infrastructure, and has the potential for producing inexpensive, compact, highly integrated, and high-yielding photonic microchips. What previously required a bench full of optical components can now be made in a single photonic microchip. 

MIT Lincoln Laboratory has been active in the field of silicon photonics for more than 10 years and has developed a toolbox of photonic components. This toolbox—which includes waveguides, couplers, modulators, filters, and photodetectors—enables the fabrication of photonic microchips for a variety of applications. This seminar provides an introduction to silicon photonics and describes the major components in detail. These components have been combined on a single chip to demonstrate the front end of an optically assisted analog-to-digital converter. 

1PhD, Physics, State University of New York–Stony Brook
2PhD, Physics, Rice University



Metamaterials and Plasmonics Research at MIT Lincoln Laboratory

Dr. Vladimir Liberman1, Dr. Kenneth Diest2, and Dr. Mordechai Rothschild3
MIT Lincoln Laboratory

The field of plasmonics and optical metamaterials has grown tremendously within the last decade. Metamaterials refer to artificial materials engineered at the nanoscale to have useful properties not normally found in nature. Plasmonics refers to the subset of metamaterials that rely on surface plasmons, which are light-induced coherent electron oscillations at metal/dielectric interfaces. In this seminar, we will describe our activities in the field of optical metamaterials as related to large-scale sensor fabrication, enhancement of nonlinear phenomena, and development of novel metrology methods.

Our work encompasses the simulation of metamaterials-based nanophotonic devices, development of nanofabrication processes, and materials and device characterization. While traditional plasmonics relies on devices made with gold and silver, we have been exploring aluminum as an alternative plasmonic material for use in the ultraviolet and blue parts of the wavelength spectrum. Surface plasmon propagation in a variety of aluminum films has been investigated using total internal reflection ellipsometry and correlated to the film nanostructure. Additionally, we have been investigating the enhancement of nonlinear properties of plasmonic materials and nanocomposites, such as nonlinear absorption, scattering, and bleaching. Applications of interest include optical limiting and actively tunable optical components. 

1PhD, Physics, Columbia University
2PhD, Materials Science, California Institute of Technology
3PhD, Optics, University of Rochester



Microfluidics at MIT Lincoln Laboratory

Prof. Todd A. Thorsen1, Dr. Shaun R. Berry2, and Dr. Jakub Kedzierski3
MIT Lincoln Laboratory

At MIT Lincoln Laboratory, we are engineering general-purpose tools for microfluidic platforms. Cross-disciplinary teams, consisting of engineers, programmers, and biologists, are designing and developing microfluidic tools for a broad range of applications. Our vision is to apply microfabrication techniques, traditionally used in microelectromechanical system (MEMS) and integrated circuit manufacturing, to develop microfluidic components and systems that are reconfigurable, scalable, and programmable through a software interface. End users of these microfluidic tools will be able to orchestrate complex and adaptive procedures that are beyond the capabilities of today’s hardware. This talk will provide a broad overview on the diverse microfluidic programs at Lincoln Laboratory, highlighting recent work, including ultra-low-power electrowetting-based pumps, integrated "lab-on-a-chip" systems for biological exploration, actively configurable pixel-scale liquid microlenses and prisms, and microhydraulic actuators.

1PhD, Biochemistry and Molecular Biophysics, California Institute of Technology
2PhD, Mechanical Engineering, Tufts University
3PhD, Electrical Engineering, University of California–Berkeley



Optical Sampling for High-Speed, High-Resolution
Analog-to-Digital Conversion

Dr. Paul W. Juodawlkis1 and Dr. Jonathan C. Twichell2
MIT Lincoln Laboratory

The performance of digital receivers used in modern radar, communication, and surveillance systems is often limited by the performance of the analog-to-digital converter (ADC) used to digitize the received signal. Optically sampled ADCs, which combine optical sampling with electronic quantization, have been demonstrated to extend the performance of electronic ADCs. The primary advantages of using optics to perform the sampling function include (1) the timing jitter of modern mode-locked lasers is more than an order of magnitude smaller than that of electronic sampling circuitry, (2) the low dispersion of optical components allows picosecond sampling pulses to be used to attain wide analog bandwidth, and (3) demultiplexing to arrays of time-interleaved electronic converters can be performed in the optical domain rather than in the electrical domain with no signal bandwidth, nonlinearity, or memory effect constraints.

MIT Lincoln Laboratory's work in this area has focused on the development of a linear sampling technique referred to as phase-encoded optical sampling. The technique uses a dual-output Mach-Zehnder electro-optic modulator as a sampling transducer to achieve both high linearity and 60 dB suppression of laser amplitude noise. Two-tone tests have been used to demonstrate an intermodulation-free dynamic range of 90 dB. The Laboratory also used optical sampling to directly downsample frequency-modulated chirp signals having 1 GHz bandwidth on an X-band (10 GHz) microwave carrier. The bandwidth of the technique is extended by optically distributing the post-sampling pulses to an array of time-interleaved electronic quantizers. Using high-extinction 1-to-8 LiNbO3 optical time-division demultiplexers to perform the optical distribution, Lincoln Laboratory has demonstrated a 500 MS/s ADC having 10 effective bits of resolution and a spur-free dynamic range in excess of 70 dB.
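The advantage of low-jitter optical sampling can be quantified with the standard aperture-jitter limit (a textbook formula, not a result from the talk; the jitter values below are illustrative): SNR_dB = −20·log10(2π·f_in·σ_t), converted to effective bits via ENOB = (SNR_dB − 1.76)/6.02.

```python
import math

# Aperture-jitter-limited SNR and effective number of bits (ENOB) for a
# full-scale sinusoid at frequency f_in sampled with RMS jitter sigma_t.
def jitter_snr_db(f_in_hz, sigma_t_s):
    return -20.0 * math.log10(2.0 * math.pi * f_in_hz * sigma_t_s)

def enob(snr_db):
    return (snr_db - 1.76) / 6.02

f_in = 10e9  # X-band carrier, as in the abstract
snr_optical = jitter_snr_db(f_in, 15e-15)    # fs-class mode-locked laser
snr_electronic = jitter_snr_db(f_in, 1e-12)  # ps-class electronic clock
print(enob(snr_optical), enob(snr_electronic))
```

At a 10 GHz input, femtosecond-class laser jitter supports nearly 10 effective bits where picosecond-class electronic jitter supports fewer than 4.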

1PhD, Electrical Engineering, Georgia Institute of Technology
2PhD, Nuclear Engineering, University of Wisconsin–Madison



Pan-STARRS: Gigapixel Astronomy with Atmospheric Distortion Correction

Dr. Vyshnavi Suntharalingam1 and Dr. Bernard B. Kosicki2
MIT Lincoln Laboratory

The Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) is an innovative wide-field imaging facility developed at the University of Hawaii's Institute for Astronomy. The combination of four relatively small mirrors (1.8 m) with very large digital cameras (1.4 Gpixels each) results in an economical system that can observe the entire available sky several times each month. The redundancy offered by using multiple mirrors to view the same area of the sky yields the same light collection as a 3.6 m mirror and also allows for economical use of not-quite-perfect imager chips.

This presentation describes the technology behind the gigapixel Pan-STARRS charge-coupled-device (CCD) focal plane developed and constructed at Lincoln Laboratory. This focal plane is the largest ever constructed for astronomy. A second unique feature of this very large focal plane is the use of the orthogonal transfer CCD (OTCCD) as the basic imaging cell. Pan-STARRS is also the first large-scale use of OTCCD technology, which allows compensation of the translational-movement component of atmospheric distortion. The focal plane design enables atmospheric compensation to be individually implemented for each 10 × 10-arc-minute portion of the total 3-degree-wide image, and accounts for the exceptional ability of the system to do very accurate astrometry.

The primary purpose of Pan-STARRS is to detect potentially hazardous objects in the Solar System, but its ability to map very large areas of sky to great sensitivity and its ability to find faint moving or variable objects make the system uniquely valuable for a large number of other scientific purposes. The prototype single-mirror telescope PS1 is now operational on Mount Haleakala.

1PhD, Engineering Science & Mechanics, Pennsylvania State University
2PhD, Physics, Harvard University



Quantum Information Science with Superconducting Artificial Atoms

Dr. William D. Oliver1
MIT Lincoln Laboratory
& the Research Laboratory of Electronics

Superconducting qubits are artificial atoms assembled from electrical circuit elements. When cooled to cryogenic temperatures, these circuits exhibit quantized energy levels. Transitions between levels are induced by applying pulsed microwave electromagnetic radiation to the circuit, revealing quantum coherent phenomena analogous to (and in certain cases beyond) those observed with coherent atomic systems.

This talk provides an overview of quantum information science and superconducting artificial atoms, including several demonstrations of quantum coherence using these circuits: Landau-Zener-Stückelberg oscillations [1], microwave-induced qubit cooling to temperatures less than 3 mK (colder than the refrigerator) [2], and a new broadband spectroscopy technique called amplitude spectroscopy [3]. We then discuss in detail a highly coherent aluminum qubit (T1 = 12 µs, T2Echo = 23 µs, fidelity = 99.75%) with which we demonstrated noise spectroscopy using nuclear magnetic resonance (NMR)-inspired control sequences comprising hundreds of pulses [4, 5].
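As a rough sense of scale for the quoted coherence times (a simple exponential-envelope estimate of ours; the cited experiments characterize the actual noise spectrum, which shapes the true decay):

```python
import math

# Crude exp(-t/T2E) estimate of phase coherence surviving a control
# sequence of a given duration, using the coherence times quoted above.
T1 = 12e-6   # energy relaxation time, s
T2E = 23e-6  # Hahn-echo coherence time, s

def coherence_remaining(t_seq_s):
    """Simple exponential envelope after a sequence of length t_seq_s."""
    return math.exp(-t_seq_s / T2E)

# A pulse train occupying 2 us of drive retains ~92% of its coherence.
print(coherence_remaining(2e-6))
```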

These experiments exhibit a remarkable agreement with theory and are extensible to other solid-state qubit modalities. In addition to fundamental studies of quantum coherence in solid-state systems, we anticipate these devices and techniques will advance qubit control and state-preparation methods for quantum information science and technology applications.

[1] W.D. Oliver, et al., "Mach-Zehnder Interferometry in a Strongly Driven Superconducting Qubit," Science, vol. 310, no. 5754, pp. 1653–1657, 2005.
[2] S.O. Valenzuela, et al., "Microwave-Induced Cooling of a Superconducting Qubit," Science, vol. 314, no. 5805, pp. 1589–1592, 2006.
[3] D.M. Berns, et al., "Amplitude Spectroscopy of a Solid-State Artificial Atom," Nature, vol. 455, pp. 51–57, 2008.
[4] J. Bylander, et al., "Noise Spectroscopy Through Dynamical Decoupling with a Superconducting Flux Qubit," Nature Physics, vol. 7, pp. 565–570, 2011.
[5] F. Yan, et al., "Rotating-Frame Relaxation as a Noise Spectrum Analyzer of a Superconducting Qubit Undergoing Driven Evolution," accepted for publication in Nature Communications, 2013.

1PhD, Electrical Engineering, Stanford University



Slab-Coupled Optical Waveguide Devices and Their Applications

Dr. Paul W. Juodawlkis1, Dr. Joseph P. Donnelly2,
Dr. Gary M. Smith3, and Dr. George W. Turner4
MIT Lincoln Laboratory

For the past decade, MIT Lincoln Laboratory has been developing new classes of high-power semiconductor optoelectronic emitters and detectors based on the slab-coupled optical waveguide (SCOW) concept. The key characteristics of the SCOW design include (1) the use of a planar slab waveguide to filter the higher-order transverse modes from a large rib waveguide, (2) low overlap between the optical mode and the active layers, and (3) low excess optical loss. These characteristics enable waveguide devices having large (> 5 × 5 μm) symmetric fundamental-mode operation and long length (~1 cm). These large dimensions, relative to conventional waveguide devices, allow efficient coupling to optical fibers and external optical cavities, and provide reduced electrical and thermal resistances for improved heat dissipation.

This seminar will review the SCOW operating principles and describe applications of the SCOW technology, including Watt-class semiconductor SCOW lasers (SCOWLs) and amplifiers (SCOWAs), monolithic and ring-cavity mode-locked lasers, single-frequency external cavity lasers, and high-current waveguide photodiodes. The SCOW concept has been demonstrated in a variety of material systems at wavelengths including 915, 960–980, 1040, 1300, 1550, and 2100 nm. In addition to single emitters, higher brightness has been obtained by combining arrays of SCOWLs and SCOWAs using wavelength beam-combining and coherent combining techniques. These beam-combined SCOW architectures offer the potential of kilowatt-class, high-efficiency, electrically pumped optical sources.

1PhD, Electrical Engineering, Georgia Institute of Technology
2PhD, Electrical Engineering, Carnegie Mellon University
3PhD, Electrical Engineering, University of Illinois at Urbana-Champaign
4PhD, Electrical Engineering, Johns Hopkins University



Submicrosecond to Subnanosecond Snapshot Imaging Technology

Dr. Dennis D. Rathman1 and Dr. Robert Reich2
MIT Lincoln Laboratory

Research laboratories for both the Department of Defense and the Department of Energy have imaging applications that require very fast snapshot imagers to analyze a wide range of rapidly evolving phenomena. MIT Lincoln Laboratory has a long history of developing high-frame-rate imagers to meet those application needs. This seminar will discuss the most recent technology development of three imagers: a charge-coupled device (CCD)-based 4-exposure device and a CCD-based 50-exposure device that both have burst rates greater than one million frames per second, and a complementary metal-oxide semiconductor (CMOS)–based X-ray imager capable of taking 100 ps snapshot exposures.

1PhD, Physics, Lehigh University
2PhD, Electrical Engineering, Colorado State University



Subthreshold Design of FPGAs for Minimum Energy Operation

Dr. Peter J. Grossmann1
MIT Lincoln Laboratory

Embedded systems continue to become smaller, demand greater compute capability, and target deployment in more energy-starved environments. System power budgets of less than 1 mW are increasingly common, while standby power is brought as close to zero as possible. While field-programmable gate arrays (FPGAs) have historically been used as compute engines in low-power systems, they have not kept pace with application-specific integrated circuits (ASICs) and microprocessors in meeting the needs of these ultra-low-power systems. Research in both ASICs and microprocessors has extended voltage scaling into the subthreshold region of transistor operation, sacrificing performance in exchange for dramatic power savings. For some ultra-low-power systems such as wireless sensor networks and implantable biomedical devices, performing a computation with minimum energy consumption rather than within a certain time frame is the goal. It has been shown that to minimize energy for ASICs and microprocessors, subthreshold operation is typically required. For FPGAs, the answer remains largely unexplored—the first subthreshold FPGA has only recently been fabricated, and minimum energy operation of FPGAs has not been thoroughly studied.

This research presents multiple steps forward in the design and analysis of FPGAs targeting minimum energy operation. A fabricated FPGA test chip capable of single-supply subthreshold operation is presented, with measurement results demonstrating FPGA programming and operation at supply voltages as low as 260 mV. The capability to minimize energy per clock cycle at subthreshold supply voltages for a high-activity-factor test case is also shown, indicating that the flexible nature of FPGAs does not inherently prevent their energy minimum from occurring below threshold. A simulation flow for performing prefabrication chip-level minimum energy analysis of FPGAs has also been developed in this work. By combining industry-standard integrated circuit design verification software with academic FPGA software and custom scripts, the sensitivity of an FPGA's minimum energy point to its programming was investigated. The FPGA was programmed with 21 different IEEE International Symposium on Circuits and Systems (ISCAS) '85 benchmarks, and a minimum energy supply voltage was estimated for each with a nominal input activity factor. The benchmarks had minimum energy points ranging from 0.42 to 0.54 V, slightly above threshold. The minimum energy point was not a strong function of benchmark circuit size or input count, suggesting that the topology of the benchmark circuit influenced the FPGA minimum energy point.
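Why a minimum energy point exists at all can be shown with a toy energy model (illustrative parameters of ours, not the paper's data): dynamic energy falls as Vdd², while leakage energy per cycle grows as the supply drops because subthreshold delay rises exponentially, so their sum has an interior minimum.

```python
import math

# Toy per-cycle energy model with an interior minimum-energy supply voltage.
# All parameters are illustrative, not measured values from the research.
C_EFF = 1e-12        # switched capacitance per cycle, F
I_LEAK = 1e-7        # leakage current, A
VT = 0.4             # threshold voltage, V
N_VTH = 1.5 * 0.026  # subthreshold swing factor * thermal voltage, V
T0 = 1e-7            # delay scale at/above threshold, s

def energy_per_cycle(vdd):
    # Delay grows exponentially once Vdd drops below threshold.
    delay = T0 * math.exp(max(VT - vdd, 0.0) / N_VTH)
    return C_EFF * vdd**2 + I_LEAK * vdd * delay

vs = [0.15 + 0.005 * i for i in range(140)]  # sweep 0.15 V .. 0.845 V
v_min = min(vs, key=energy_per_cycle)        # interior minimum, near VT
print(round(v_min, 3))
```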

1PhD, Computer Engineering, Northeastern University



Three-Dimensional Integration Technology for
Advanced Focal Planes and Integrated Circuits

Donna-Ruth Yost1 and Dr. Chenson Chen2
MIT Lincoln Laboratory

Over the last decade, MIT Lincoln Laboratory has developed a three-dimensional (3D) circuit integration technology that exploits the advantages of silicon-on-insulator technology to enable wafer-level stacking and micrometer-scale electrical interconnection of fully fabricated circuit wafers [1].

Advanced focal-plane arrays have been the first applications to exploit the benefits of this 3D integration technology because the massively parallel information flow present in two-dimensional imaging arrays maps very nicely into a 3D computational structure as information flows from circuit tier to circuit tier in the z-direction. To date, the Laboratory's 3D integration technology has been used to fabricate four different focal planes, including a two-tier 64 × 64 imager with fully parallel per-pixel analog-to-digital (A/D) conversion [2]; a three-tier 640 × 480 imager consisting of an imaging tier, an A/D conversion tier, and a digital signal processing tier; two-tier 1024 × 1024 pixel, four-side-abuttable imaging modules for tiling large mosaic focal planes [3, 4]; and a three-tier Geiger-mode avalanche photodiode (APD) 3D LIDAR array, using a 30-volt avalanche-photodiode tier, a 3.3-volt complementary metal-oxide semiconductor (CMOS) tier, and a 1.5-volt CMOS tier [5].

Recently, the 3D integration technology has been made available to the circuit-design research community through multiproject fabrication runs sponsored by the Defense Advanced Research Projects Agency. Three multiproject runs have been completed, together including over 100 different circuit designs from 40 different research groups. Three-dimensional circuit concepts explored in these runs included stacked memories, field-programmable gate arrays, and mixed-signal and RF circuits. We have developed an understanding of heterogeneous 3D integration issues by successfully demonstrating 3D integration of Si CMOS readout integrated circuits (ROICs) with InGaAs photodiode wafers [6], and an understanding of mixed-fabrication-facility issues by 3D integrating Si CMOS ROICs with externally fabricated technologies. This seminar will discuss the enabling technologies required for this approach to 3D integration, circuits demonstrated in this technology, and current 3D technology programs at Lincoln Laboratory.

[1] J.A. Burns, et al., "A Wafer-Scale 3-D Circuit Integration Technology," IEEE Transactions on Electron Devices, vol. 53, no. 10, pp. 2507–2516, October 2006.
[2] J.A. Burns, et al., "Three-dimensional Integrated Circuits for Low Power, High Bandwidth Systems on a Chip," 2001 ISSCC International Solid-State Circuits Conference, Digest of Technical Papers, vol. 44, pp. 268–269, February 2001.
[3] V. Suntharalingam, et al., "Megapixel CMOS Image Sensor Fabricated in Three-Dimensional Integrated Circuit Technology," 2005 ISSCC International Solid-State Circuits Conference, Digest of Technical Papers, vol. 48, pp. 356–357, February 2005.
[4] V. Suntharalingam, et al., "A Four-Side Tileable, Back Illuminated, 3D-Integrated Megapixel CMOS Image Sensor," IEEE 2009 ISSCC International Solid-State Circuits Conference, Digest of Technical Papers, pp. 38–39, February 2009.
[5] B. Aull, et al., "Laser Radar Imager Based on 3D Integration of Geiger-Mode Avalanche Photodiodes with Two SOI Timing Circuit Layers," 2006 ISSCC International Solid-State Circuits Conference, Digest of Technical Papers, vol. 49, pp. 304–305, February 2006.
[6] C.L. Chen, et al., "Wafer-Scale 3D Integration of InGaAs Image Sensors with Si Readout Circuits," IEEE International Conference on 3D System Integration, San Francisco, 28–30 Sept. 2009 (Best Paper Award).

1BS, Materials Science and Engineering, Cornell University
2PhD, Physics, University of California–Berkeley



Toward Large-Scale Trapped-Ion Quantum Processing

Dr. John Chiaverini1
MIT Lincoln Laboratory

Atomic ions held in electromagnetic traps and manipulated with optical, radio-frequency, and microwave fields are among the most promising implementations for useful quantum information processing. These well-isolated quantum two-level systems (qubits) have been shown to maintain coherence for many seconds while also being controllable on the microsecond timescale. Accomplishments at the few-qubit level include high-fidelity demonstrations of basic quantum algorithms, but a clear path to a large-scale processor is not fully defined. Current work involves devising and demonstrating scalable architectures and low-error quantum operations. This presentation will describe efforts to develop scalable ion techniques, in particular rapid ion-qubit loading, integration of technology for increased on-chip control of ion qubits, and the reduction of multi-qubit gate errors. By using a novel ion-loading method employing cold neutral atoms, we plan to increase array loading rates by orders of magnitude, approaching the required rates for realistic computations. Through the use of a surface-electrode trap geometry, efficient measurement and ion routing devices may be integrated directly with trapping electrodes for simpler scaling. Additionally, utilizing novel electrode preparation and materials that include superconducting technologies, we investigate trap electrode surface properties to address anomalous ion heating, which may soon limit standard two-qubit gate operations.

1PhD, Physics, Stanford University



Ultrasensitive Mass Spectrometry Development
at MIT Lincoln Laboratory

Dr. Matthew Aernecke1, Dr. Jude Kelley2, and Dr. Roderick Kunz3
MIT Lincoln Laboratory

Mass spectrometry (MS) has long been regarded as one of the most reliable methods for chemical analysis because of its ability to identify molecules based on their molecular weight and fragmentation patterns combined with its sensitivity in the pico- to femtogram range. Traditionally, analytical systems that utilize mass spectrometry have coupled this method with other analytical techniques such as gas or liquid chromatography; however, the development of ambient ionization techniques and improvements in mass spectrometer design have demonstrated that this technique can function on its own as a multipurpose chemical detector.

MIT Lincoln Laboratory has been advancing these systems in an effort to develop the next generation of mass spectrometry–based sensing systems focused on detection missions relevant to national security. This work has centered on improving the sensitivity of MS-based systems to explosives encountered as vapors and as trace particulate residues. The vapor detection system that the Laboratory has developed has real-time sensitivity to concentrations in the parts-per-quadrillion (ppqv) range. Real-time sensitivity at these levels rivals that of conventional vapor detectors (canines), providing opportunities to (1) better understand the origins, dynamics, concentrations, and attenuation levels of vapor signatures associated with concealed threats and (2) help improve canine training. The Laboratory's work on detecting explosive particulate residues has focused on applying thermal desorption and atmospheric pressure chemical ionization (TD-APCI) to swipe-based surface samples across a wide range of explosive classes. For explosive threats that are challenging to detect with TD-APCI, Lincoln Laboratory researchers have developed specialized chemical reagents that can be added directly to the ionization source to improve both the specificity and the sensitivity of the technique. The information revealed from these studies is used to assess current mass spectrometer–based explosive trace detection systems and guide future development efforts.
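To give a sense of scale for parts-per-quadrillion sensitivity (our back-of-the-envelope arithmetic, not a Laboratory figure): air near sea level contains roughly 2.5 × 10^19 molecules per cubic centimeter, so even a 1 ppqv vapor still places tens of thousands of analyte molecules in every cubic centimeter.

```python
# Molecules of analyte per cubic centimeter at a 1 ppqv mixing ratio.
# N_AIR is the approximate number density of air at ~1 atm and 20 C.
N_AIR = 2.5e19   # molecules of air per cm^3
PPQV = 1e-15     # parts-per-quadrillion by volume, as a fraction

molecules_per_cc = N_AIR * PPQV  # ~25,000 molecules per cm^3
print(molecules_per_cc)
```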

1PhD, Chemistry, Tufts University
2PhD, Physical Chemistry, Yale University
3PhD, Analytical Chemistry, University of North Carolina at Chapel Hill



Space Control Technology

New Techniques for High-Resolution Atmospheric Sounding

Dr. William J. Blackwell1
MIT Lincoln Laboratory

Modern spaceborne atmospheric sounders consist of passive spectrometers that measure spectral radiance intensity in microwave (approximately 1 cm to 1 mm wavelength), millimeter wave (approximately 1 mm to 300 µm), and thermal infrared (approximately 3 µm to 16 µm) bands. In the last decade, advanced microwave sounders (AMSU and ATMS) and hyperspectral infrared sounders (AIRS, IASI, and CrIS) have substantially improved forecast skill, provided new products relevant to a wide range of science application areas, and contributed to our ability to characterize the Earth's climate system.

This presentation will focus on two areas of current research: (1) new algorithmic approaches to geophysical parameter retrieval that fully exploit the spectral richness of the microwave and hyperspectral microwave observations and (2) next-generation sensor systems that build upon recent successes to provide improved spectral coverage (e.g., hyperspectral microwave systems) and improved spatial revisit (e.g., small satellite constellation architectures) for observations of dynamic meteorology and severe weather. Specific topics to be addressed include a neural network algorithm for temperature and moisture profile retrieval that is being used as part of the AIRS Science Team Version 6 algorithm, recent technology development funded by NASA to demonstrate a hyperspectral microwave receiver subsystem, and system performance analyses of nanosatellite constellation architectures, including the MicroMAS and MiRaTA 3U atmospheric sounding CubeSats to be launched by NASA in 2014 and 2015 to demonstrate core constellation elements.

AMSU – Advanced Microwave Sounding Unit
ATMS – Advanced Technology Microwave Sounder
AIRS – Atmospheric Infrared Sounder
IASI – Infrared Atmospheric Sounding Interferometer
CrIS – Cross-track Infrared Sounder
MicroMAS – Microsized Microwave Atmospheric Satellite
MiRaTA – Microwave Radiometer Technology Acceleration

1PhD, Electrical Engineering, Massachusetts Institute of Technology



Predicting and Avoiding Close Approaches and Potential Collisions in Geosynchronous Orbits

Dr. Richard I. Abbot1
MIT Lincoln Laboratory

The geosynchronous orbit regime is increasingly crowded: nearly 500 large active satellites share it with more than 600 inactive resident space objects that pose a physical collision threat to the active satellites. The on-orbit failure of one satellite, Telstar 401, initiated a research and development effort at MIT Lincoln Laboratory to address this threat. This work is done in collaboration with commercial satellite operators under Cooperative Research and Development Agreements. Initial work to detect and warn of close approaches with failed satellites has led to extensive research on the collision threat itself, on monitoring that threat over the entire geosynchronous belt, and on avoidance strategies to prevent collisions. It has been found that

  1. There is a significant probability of collision, with objects routinely passing close to one another;
  2. The continuing failure of geosynchronous satellites and the injection of rocket bodies into or near geosynchronous orbit will increase the threat;
  3. Collision-avoidance strategies can be developed that require no additional expenditure of valuable station-keeping fuel; and
  4. Non-geosynchronous objects that regularly cross geosynchronous orbits pose another significant problem that must be addressed.

This seminar surveys what has been achieved so far in predicting the threat and protecting satellites. An assessment of the probability of collision is presented, as well as a description of the close conjunction monitoring and warning systems that have been developed. Areas of research such as maneuver detection, orbital uncertainty quantification, solar-radiation pressure modeling, and low-level maneuver modeling are reviewed.
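For a sense of how such a probability-of-collision assessment works, the standard short-encounter approximation integrates the relative-position uncertainty over the combined hard-body area in the encounter plane. The sketch below assumes uncorrelated Gaussian position errors and uses illustrative numbers; operational systems use full covariances from orbit determination rather than the simple diagonal uncertainties assumed here.

```python
import numpy as np

def collision_probability(miss_x, miss_y, sigma_x, sigma_y, hbr, n=400):
    """Approximate probability of collision for a single conjunction.

    miss_x, miss_y : predicted miss-distance components in the
                     encounter plane (m)
    sigma_x, sigma_y : combined position uncertainties (m),
                       assumed uncorrelated (a simplification)
    hbr : combined hard-body radius of the two objects (m)

    Integrates the bivariate Gaussian relative-position density
    over the hard-body disk on a simple grid.
    """
    xs = np.linspace(-hbr, hbr, n)
    ys = np.linspace(-hbr, hbr, n)
    X, Y = np.meshgrid(xs, ys)
    inside = X**2 + Y**2 <= hbr**2          # cells within the disk
    pdf = (np.exp(-0.5 * (((X - miss_x) / sigma_x) ** 2
                          + ((Y - miss_y) / sigma_y) ** 2))
           / (2.0 * np.pi * sigma_x * sigma_y))
    dx = xs[1] - xs[0]
    dy = ys[1] - ys[0]
    return float(np.sum(pdf[inside]) * dx * dy)

# A 200 m predicted miss with 1 km uncertainties and a 20 m hard body:
p = collision_probability(200.0, 0.0, 1000.0, 1000.0, 20.0)
print(f"{p:.2e}")
```

The key operational insight this captures is that the probability depends on the miss distance *relative to its uncertainty*: shrinking the orbit-determination uncertainty (better tracking) can move a conjunction from "actionable" to "ignorable" without any maneuver at all.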

1PhD, Astronomy, University of Texas at Austin



Optical Propagation and Technology

Mechanical Systems Engineering of Optical Sensors

Dr. Steven E. Forman1 
MIT Lincoln Laboratory

During the past 26 years, MIT Lincoln Laboratory has developed several optical sensor experiments that have flown on airborne and space platforms. These sensors include the Space-Based Visible sensor, the Airborne Infrared Imager, and the Advanced Land Imager. Each is a one-of-a-kind sensor fully engineered at Lincoln Laboratory. This talk summarizes several of the mechanical systems engineering areas and issues that arose during the design, analysis, fabrication, integration, and testing of these systems. Included are discussions of optical, optomechanical, structural, and thermal engineering; electronic packaging; mechanism design; focal-plane packaging; control-system engineering; materials selection and testing; environmental testing; failure analysis; and computer-aided design and analysis tools.

1PhD, Mechanical Engineering, Harvard University
