2017–2018 Technical Seminar Series

Members of the technical staff at MIT Lincoln Laboratory are pleased to present these technical seminars to interested college and university groups. Costs related to the staff members' visits for these seminars will be assumed by the Laboratory.

To arrange a technical seminar, please contact:

College Recruiting Program Administrator
Human Resources Department
MIT Lincoln Laboratory
244 Wood Street
Lexington, Massachusetts 02420-9108

and provide the following information:

  • A rank-ordered list of requested seminars
  • Preferred date/time options
  • A description of the target audience

Seminar Abstracts

Air Traffic Control

Human-System Integration in Complex Systems

Dr. Hayley J. Davison Reynolds1
MIT Lincoln Laboratory

MIT Lincoln Laboratory has had a successful history of integrating decision support systems into the air traffic control and emergency management domains, even though the introduction of new technologies often meets user resistance. Because of the complex nature of the work, air traffic controllers and emergency managers rely heavily on certain technologies and decision processes to maintain a safe and effective operating environment. This reliance on familiar technology and processes makes the introduction of a new tool into the environment a difficult task, even though the tool may ultimately improve decisions and raise the safety and/or efficiency of the operation. Poorly integrated systems disappoint users by providing information they have no clear way to use or, more likely, simply go unused despite their place in the array of available information systems; neither outcome yields the benefits for which the tool was designed.

This seminar will present a practical methodology for designing and fielding decision support systems that maximizes the potential for effective integration of a system into the users' operational context. Several air traffic control and emergency management decision support prototypes designed by Lincoln Laboratory, including the Route Availability Planning Tool (RAPT), the Tower Flight Data Manager (TFDM), and the Hurricane Evacuation (HVX) decision support tool, will be described to demonstrate this process. The presentation will cover areas in which the designers ran into roadblocks in making the systems effective, as well as the combination of qualitative and quantitative techniques used to properly integrate the systems in the field, yielding measurable operational benefits.

1PhD, Aeronautical Systems and Applied Psychology, Massachusetts Institute of Technology


Integrating Unmanned Aircraft Systems Safely into the National Airspace System

Dr. Rodney E. Cole1
MIT Lincoln Laboratory

Unmanned aircraft systems (UAS) such as the Air Force's Global Hawk and Predator are increasingly employed by the military and Department of Homeland Security in roles that require sharing airspace with civilian aircraft. Missions include pilot training, border patrol, highway and agricultural observation, and disaster management. Because of the pressure for widespread UAS access to the national airspace and the risk of collision with passenger aircraft, UAS operators must find a way to integrate with manned aircraft with a very high degree of safety. The key to safe integration of UAS into the national airspace is the development and assessment of "sense and avoid" (SAA) technologies to replace the manned aircraft pilot's ability to "see and avoid" other aircraft.

MIT Lincoln Laboratory is conducting research to address safe and flexible UAS integration with commercial and general aviation aircraft. Research areas include the development of sophisticated computer models that simulate millions of encounters between UAS and civilian aircraft to characterize airspace hazards and collision rates. These models can be applied to assess the performance of SAA algorithms designed to maintain "well clear" separation between UAS and civilian aircraft while observing and adhering to established right-of-way rules. The Laboratory is also conducting groundbreaking research in the area of collision avoidance logic and is pursuing a probabilistic approach to collision avoidance that considers the uncertainty in pilot response to alerts and uncertainty in future states of the threat aircraft. This approach offers the potential to provide increased safety with fewer false alarms than conventional techniques and is a candidate for future Traffic Alert and Collision Avoidance System (TCAS) and UAS SAA applications.

The Laboratory is also working with the Department of Defense (DoD) and Department of Homeland Security to develop ground-based sense and avoid (GBSAA) and airborne sense and avoid (ABSAA) surveillance architectures to satisfy the Federal Aviation Administration’s (FAA) requirement for replacing the onboard pilot's "see and avoid" function. Under DoD sponsorship, Lincoln Laboratory has deployed a service-oriented architecture GBSAA test bed that will be utilized in operational and simulation-over-live environments to collect data and operator feedback that can then be used to support future certification with the FAA. This seminar will provide a broad overview of the Laboratory's efforts in UAS airspace integration and next-generation aircraft collision avoidance algorithms and will provide an overview of the GBSAA test bed that is under development for the DoD.

1PhD, Mathematics, University of Colorado–Boulder


Machine Learning Applications in Aviation Weather and Traffic Management

Dr. Mark S. Veillette1
MIT Lincoln Laboratory

Adverse weather accounts for the majority of air traffic delays in the United States. When weather is expected to impact operations, air traffic managers (ATMs) are often faced with an overload of weather information required to make decisions. These data include multiple numerical weather model forecasts, satellite observations, Doppler radar, lightning detections, wind information, and other forms of meteorological data. Many of these data contain a great deal of uncertainty. Absorbing and utilizing these data in an optimal way is challenging even for experienced ATMs.

This talk will provide two examples of using machine learning to assist ATMs. In our first example, data from weather satellites are combined with global lightning detections and numerical weather models to create Doppler radar-like displays of precipitation in regions outside the range of weather radar. In the second example, multiple weather forecasts are used to refine the prediction of airspace capacity within selected regions of airspace. Examples and challenges of data collection, translation, modeling, and operational prototype evaluations will be discussed.

1PhD, Mathematics, Boston University


Radar Detection of Aviation Weather Hazards

Dr. John Y. N. Cho1
MIT Lincoln Laboratory

Bad weather is a factor in many aviation accidents and incidents. Microburst, hail, icing, lightning, fog, turbulence—these are atmospheric phenomena that can interfere with aircraft performance and a pilot's ability to fly safely. Thus, for safe and efficient operation of the air traffic system, it is crucial to continuously observe meteorological conditions and accurately characterize phenomena hazardous to aircraft. Radar is the most important weather-sensing instrument for aviation. This seminar will discuss technical advances that led to today's operational terminal wind-shear detection radars. An overview of recent and ongoing research to improve radar capability to accurately observe weather hazards to aviation will also be presented.

1PhD, Electrical Engineering, Cornell University


System Design in an Uncertain World: Decision Support
for Mitigating Thunderstorm Impacts on Air Traffic

Richard A. DeLaura1
MIT Lincoln Laboratory

Weather accounts for 70% of the cost of air traffic delays—about $28 billion annually—within the United States National Airspace System (NAS). Most weather-related delays occur during the summer months, when thunderstorms affect air traffic, particularly in the crowded Northeast. The task of air traffic management, complicated even in the best of circumstances, can become overwhelmingly complex as air traffic managers struggle to route traffic reliably through rapidly evolving thunderstorms. A new generation of air traffic management decision support tools promises to reduce air traffic delays by accounting for the potential effects of convective weather, such as thunderstorms, on air traffic flow. Underpinning these tools are models that translate high-resolution convective weather forecasts into estimates of impact on aviation operations.

This seminar will present the results of new research to develop models of pilot decision making and air traffic capacity in the presence of thunderstorms. The models will be described, initial validation will be presented, and sources of error and uncertainty will be discussed. Finally, some applications of these models and directions for future research will be briefly described.

1AB, Physics, Harvard University


Communication Systems

Balloon Communications Relay Systems and Technology

Dr. Ameya Agaskar1
MIT Lincoln Laboratory

Today's smart devices, such as phones and watches, have allowed us to connect all aspects of our lives. However, these devices need enough radio spectrum to enable communications. As more smart devices are adopted, the same frequency spectrum will likely need to be utilized by a growing number of physically close transmitters to communicate to a base station, causing congestion and interference. A robust solution to this spectrum congestion and general interference problem is urgently needed.

This seminar provides an overview of a sparse, large-aperture array of high-altitude balloon relays, in which a collection of low-cost balloon relays continuously covers a communication area of interest. Each balloon would relay received signals down to a base station, where the signals are processed using advanced adaptive beamforming techniques. The individual user signals are extracted and decoded while the interference is suppressed. The large aperture of the balloon array allows for the spatial separation of very close transmitters on the ground. Among the difficulties overcome were that the balloons drift with time and that their locations may not be precisely known by the base station at any given time. The seminar covers these conceptual difficulties and their solutions, design choices and tradeoffs, and actual beamforming results achieved for interference suppression, as well as videos of the balloon concept during a test flight.
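As a rough illustration of the adaptive beamforming step described above, the sketch below forms minimum-variance distortionless response (MVDR) weights for a hypothetical sparse one-dimensional array. The geometry, emitter angles, and interference power are invented for the example; this is not the system's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical geometry: N relay positions (in wavelengths), two close emitters.
N = 8
pos = np.sort(rng.uniform(0, 50, N))          # sparse 1-D aperture
theta_sig, theta_int = 0.0, 0.05              # closely spaced angles (radians)

def steering(theta):
    # Narrowband plane-wave phase progression across the array
    return np.exp(2j * np.pi * pos * np.sin(theta)) / np.sqrt(N)

a_s, a_i = steering(theta_sig), steering(theta_int)

# Sample covariance estimated from interference-plus-noise snapshots
snapshots = 500
noise = (rng.standard_normal((N, snapshots))
         + 1j * rng.standard_normal((N, snapshots))) / np.sqrt(2)
interf = a_i[:, None] * (10 * (rng.standard_normal(snapshots)
                               + 1j * rng.standard_normal(snapshots)))
X = noise + interf
R = X @ X.conj().T / snapshots

# MVDR: minimize output power subject to unit gain on the desired direction
w = np.linalg.solve(R, a_s)
w /= w.conj() @ a_s

gain_sig = abs(w.conj() @ a_s)   # unity by construction
gain_int = abs(w.conj() @ a_i)   # interferer deeply suppressed
print(gain_sig, gain_int)
```

The large aperture (50 wavelengths here) is what makes two emitters only 0.05 radian apart separable, mirroring the seminar's point about spatially resolving very close ground transmitters.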

1PhD, Engineering Sciences, Harvard University


Diversity in Air-to-Ground Lasercom: The Focal Demonstration

Dr. Frederick G. Walther1
MIT Lincoln Laboratory

Laser communications (lasercom) provides significant advantages, compared to radio-frequency (RF) communications, which include a large, unregulated bandwidth and high beam directionality for free-space links. These advantages provide capabilities for high (multi-Gb/s) data transfer rates; reduced terminal size, weight, and power; and a degree of physical link security against out-of-beam interferers or detectors. This seminar addresses the key components of lasercom system design, including modeling and simulation of atmospheric effects, link budget development, employment of spatial and temporal diversity techniques to mitigate signal fading due to scintillation, and requirements for acquisition and tracking system performance. Results from recent flight demonstrations show stable tracking, rapid reacquisition after cloud blockages, and high data throughput for a multi-Gb/s communications link out to 80 km. Potential technologies for further development include a compact optical gimbal that has low size, weight, and power, and more efficient modem and coding techniques to extend range and/or data rate.

1PhD, Physics, Massachusetts Institute of Technology


Dynamic Link Adaptation for Satellite Communications

Dr. Huan Yao1
MIT Lincoln Laboratory

Future protected military satellite communications will continue to use high transmission frequencies to capitalize on large amounts of available bandwidth. However, the data flowing through these satellites will transition from the circuit-switched traffic of today's satellite systems to Internet-like packet traffic. One of the main differences in migrating to packet-switched communications is that the traffic will become bursty (i.e., the data rate from particular users will not be constant). The variation in data rate is only one of the potential system variations. At the frequencies of interest, rain and other weather phenomena can introduce significant path attenuation for relatively short time periods. Current protected satellite communications systems are designed with sufficient link margins to provide a desired availability under such degraded path conditions. These systems do not have provisions to use the excess link margins for additional capacity when weather conditions are good. The focus of this seminar is the design of a future satellite system that autonomously reacts to changes in link conditions and offered traffic. This automatic adaptation drastically improves the overall system capacity and the service that can be provided to ground terminals.
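The core of link adaptation is simple to state: pick the highest-rate modulation-and-coding point whose SNR requirement is still met after a safety margin. The sketch below illustrates that selection rule with an invented modcod table; the labels, rates, and thresholds are hypothetical, not those of any fielded system.

```python
# Hypothetical modcod table: (label, data rate in Mb/s, required SNR in dB)
MODCODS = [
    ("QPSK 1/2",   10, 2.0),
    ("QPSK 3/4",   15, 4.5),
    ("8PSK 2/3",   20, 7.0),
    ("16APSK 3/4", 30, 10.5),
]

def select_modcod(link_snr_db, margin_db=1.0):
    """Return the highest-rate modcod whose SNR requirement is met
    with the given safety margin; fall back to the most robust one."""
    best = MODCODS[0]
    for mc in MODCODS:
        if link_snr_db - margin_db >= mc[2] and mc[1] > best[1]:
            best = mc
    return best

print(select_modcod(12.0))   # clear-sky link: highest-rate point
print(select_modcod(5.0))    # rain fade: steps down to a robust point
```

Rerunning this selection continuously as the measured SNR changes is what converts excess clear-sky link margin into extra capacity instead of leaving it unused.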

1PhD, Electrical Engineering, Massachusetts Institute of Technology


High-Rate Laser Communications to the Moon and Back

Dr. Farzana I. Khatri1
MIT Lincoln Laboratory

Radio waves have been the standard method for deep-space communications since the Apollo missions. Over the past decades, scientists at MIT Lincoln Laboratory have been working to develop free-space optical communications systems, and the recent success of the Lunar Laser Communication Demonstration (LLCD) program will clearly revolutionize future deep-space communication systems. The LLCD demonstrated record-breaking optical uplinks and downlinks between Earth and the Lunar Lasercom Space Terminal (LLST) payload on NASA's Lunar Atmosphere and Dust Environment Explorer (LADEE) satellite orbiting the Moon. The system included an innovative space terminal, a novel ground terminal, two major upgrades of existing ground terminals, and a capable and flexible ground operations infrastructure. This talk will give an overview of the technologies involved in the demonstration, the system architecture, the basic operations of both the link and the whole system, and some typical results.

1PhD, Electrical Engineering, Massachusetts Institute of Technology


Machine Learning for Radio-Frequency Signal Classification and Blind Demodulation

Dr. Ameya Agaskar1
MIT Lincoln Laboratory

Platforms such as unmanned air systems and unattended ground sensors are constrained by size, weight, and power consumption requirements; as a result, their embedded signal processing capability is limited. Exfiltrating raw sampled data for offline processing is often not an alternative because doing so demands significant bandwidth.

To accommodate at-sensor processing—including detection, demodulation, and decoding—of wideband data, MIT Lincoln Laboratory is proposing an intermediate level of radio-frequency (RF) data representation to significantly reduce the data rates to levels largely equivalent to the information-handling rate in the processing environment. This approach leverages sparsity in the RF environment and machine learning algorithms to discover signal structure without prior knowledge and leads to blind demodulation techniques.

This seminar presents a novel highly scalable approach, based on nonparametric Bayesian models, for learning unknown signal structure and a new blind synchronization technique. The Laboratory has characterized performance on the basis of signal-to-noise ratio (SNR) and the number of observations. The research team has observed that above a critical SNR learning threshold, the algorithms offer near-optimal levels of performance in terms of raw bit error rates.
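The "near-optimal raw bit error rate" claim is measured against the performance of an ideal coherent receiver. For intuition about that baseline, the sketch below Monte Carlo-simulates coherent BPSK over an AWGN channel and compares it to the textbook optimum, Q(sqrt(2·SNR)). This is a generic reference curve, not the Laboratory's blind-demodulation algorithm.

```python
import math
import random

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber_sim(snr_db, n_bits=200_000, seed=1):
    """Monte Carlo BER of coherent BPSK in AWGN at the given Eb/N0 (dB)."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    sigma = math.sqrt(1 / (2 * snr))   # noise std for unit symbol energy
    errors = 0
    for _ in range(n_bits):
        bit = rng.randrange(2)
        symbol = 1.0 if bit else -1.0
        received = symbol + rng.gauss(0, sigma)
        errors += (received > 0) != bool(bit)
    return errors / n_bits

for snr_db in (0, 4, 8):
    theory = q_function(math.sqrt(2 * 10 ** (snr_db / 10)))
    print(snr_db, bpsk_ber_sim(snr_db), theory)
```

A learning-based demodulator that sits near this curve above some SNR threshold, without being told the signal structure in advance, is what the abstract's "near-optimal" result refers to.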

1PhD, Engineering Sciences, Harvard University


National Aeronautics and Space Administration Lunar Laser Communication Demonstration

Dr. Dennis A. Burianek1
MIT Lincoln Laboratory

The National Aeronautics and Space Administration (NASA) Lunar Laser Communication Demonstration (LLCD) was launched in September 2013. During the six months following the LLCD launch, the program showed the capability and promise of high-rate optical communication systems for space exploration. This talk discusses the engineering challenges and approach for developing prototype space hardware using examples and lessons learned from the LLCD program. Discussion topics include systems engineering and requirements development, risk and problems that inevitably arise on prototype programs, and the technology that made LLCD successful.

1PhD, Aeronautics/Astronautics, Massachusetts Institute of Technology


Practical Capacity Benchmarking for Wireless Networks

Dr. Jun Sun1
MIT Lincoln Laboratory

Despite the prevalence of ad hoc networks in wireless mesh networks, wireless sensor networks, and the tactical communication environment, there is not yet a fundamental understanding of their achieved performance relative to the optimal limit. Many evaluations of emerging wireless ad hoc networking systems have left the impression that the systems exhibit poor performance, but these evaluations have provided neither a clear vision of what performance should be attained nor an indication of which details of network implementation are responsible for the systems' disappointing results. The goal of the work discussed in this seminar is to develop a network benchmarking capability that can be used to evaluate the performance of emerging wireless ad hoc networks. MIT Lincoln Laboratory's focus is on the development of a network capacity benchmark.

This talk provides an upper bound on the wireless network capacity that is computationally efficient to implement. The upper-bound calculations consider multiple different wireless interference models. The seminar shows that in a wireless network with n nodes and bounded degree, the gap between the network capacity and its upper bound is O(log n) under uniform traffic and several different interference models. For the more constrained case of a wireless N × M grid network, the upper bound is at most twice the value of the network capacity.

The talk shows that, in practice, Lincoln Laboratory's approach performs extremely well. A polynomial-time randomized algorithm generates the upper bound in general wireless networks. To ascertain the quality of the upper bound, the Laboratory's researchers use a cross-layer approach to develop a lower bound by considering only a subset of independent sets. The lower bound is shown to be within 95% of the upper bound on average for the primary interference model. For the 802.11, 802.16, and 2-hop interference models, the lower bound is within 70% of the upper bound. These bounds are used to quantify and compare the individual effects of commonly used routing- and scheduling-layer algorithms on overall network throughput.
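To make the independent-set machinery concrete, here is a toy instance (an invented five-link ring conflict graph, not one of the talk's networks): links that interfere cannot transmit in the same slot, so any schedule time-shares independent sets of the conflict graph. Enumerating those sets yields both an upper bound on the uniform per-link rate and, in this small case, a schedule that achieves it.

```python
from itertools import combinations

# Hypothetical conflict graph: 5 links in a ring; adjacent links interfere.
n = 5
edges = {(i, (i + 1) % n) for i in range(n)}

def independent(s):
    """True if no two links in s interfere with each other."""
    return all((a, b) not in edges and (b, a) not in edges
               for a, b in combinations(s, 2))

ind_sets = [s for r in range(n + 1)
            for s in combinations(range(n), r) if independent(s)]
alpha = max(len(s) for s in ind_sets)   # independence number

# Upper bound on the uniform per-link rate: at most alpha links can be
# active in any slot, so n * rate <= alpha.
upper = alpha / n

# Lower bound by scheduling: time-share the maximum independent sets
# equally; each link appears in the same fraction of them.
max_sets = [s for s in ind_sets if len(s) == alpha]
per_link = [sum(1 for s in max_sets if v in s) / len(max_sets)
            for v in range(n)]
print(alpha, upper, per_link)   # here the bounds coincide at 2/5
```

In general networks, exhaustive enumeration is infeasible, which is why the talk's contribution is an efficiently computable upper bound and a lower bound built from only a subset of independent sets.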

1PhD, Electrical Engineering and Computer Science, Massachusetts Institute of Technology


Providing Information Security with Quantum Physics—A Practical Engineering Perspective

Dr. P. Benjamin Dixon1
MIT Lincoln Laboratory

Quantum information technology enables promising advances in the area of communications security. A well-engineered application of quantum mechanics enables results that are unattainable with classical-only processing. The most well-known example is quantum key distribution (QKD), a family of protocols to distribute a secret key. With a carefully designed protocol, quantum mechanics bounds the information an eavesdropper could obtain without being detected.
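For readers new to QKD, the sketch below simulates only the basis-sifting step of a BB84-style protocol, in an idealized setting: no channel noise, no eavesdropper, and none of the error-correction or privacy-amplification steps a real protocol requires.

```python
import random

rng = random.Random(42)
n = 2000

# Alice prepares random bits in random bases (0 = rectilinear, 1 = diagonal).
alice_bits  = [rng.randrange(2) for _ in range(n)]
alice_bases = [rng.randrange(2) for _ in range(n)]

# Bob measures in his own random bases; a matching basis recovers the bit
# exactly (ideal channel), a mismatched basis yields a random outcome.
bob_bases = [rng.randrange(2) for _ in range(n)]
bob_bits = [ab if a == b else rng.randrange(2)
            for ab, a, b in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: publicly compare bases and keep only the matching positions
# (about half of them); those positions form the shared raw key.
sifted = [(x, y) for x, y, a, b in
          zip(alice_bits, bob_bits, alice_bases, bob_bases) if a == b]
key_alice = [x for x, _ in sifted]
key_bob   = [y for _, y in sifted]
print(len(sifted), key_alice == key_bob)
```

The security argument enters where this sketch stops: an eavesdropper who measures in randomly guessed bases disturbs roughly a quarter of the sifted bits, so Alice and Bob can bound the leaked information by sacrificing and comparing a sample of the key.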

A verifiable random number generator (vRNG) is another important quantum information technology. A secure source of random bits is an important input for nearly all cryptographic protocols. Classical RNGs, which are subject to silent failures, potentially create security vulnerabilities. A vRNG uses the observation of a quantum phenomenon—the presence of entanglement via a violation of Bell's inequalities—to certify the entropy of the vRNG output.

MIT Lincoln Laboratory has had great success developing practical communication system demonstrations. This seminar will explore the ongoing efforts at the Laboratory to engineer quantum communication systems, exploring both theoretical and experimental advances.

1PhD, Physics, University of Rochester


Research Challenges in Airborne Networks and Communications

Dr. Bow-Nan Cheng1
MIT Lincoln Laboratory

In recent years, with the emergence of ubiquitous communications and the increasing availability of unmanned aerial vehicles, there has been an increased interest in building and leveraging airborne networks to facilitate data dissemination and coordination. Although the commercial world has made significant progress in building and deploying ground wireless networks, several military needs are currently not being met by available and emerging commercial technologies, particularly for use in airborne applications.

This talk presents four major airborne networking domains, examines unique domain characteristics, and identifies interesting research challenges for improving performance and scalability in the context of military airborne systems. Topics covered include techniques for improving physical layer performance in the presence of strong interference, antenna capabilities and limitations resulting from the antenna's mounting location on the aircraft surface, and methods for integrating radio-link state information with the platform network routers for improved routing decisions. The talk also presents an overview of some work performed at MIT Lincoln Laboratory in the past few years to address the research challenges, and to identify additional open areas for research, in airborne networking.

1PhD, Computer Science, Rensselaer Polytechnic Institute


Undersea Laser Communication—The Next Frontier

Dr. Nicholas Hardy1
MIT Lincoln Laboratory

Undersea communication is of critical importance to many research and military applications, including oceanographic data collection, pollution monitoring, offshore exploration, and tactical surveillance. Undersea communication functions have traditionally been accomplished with acoustic and low-frequency radio communication. These technologies, however, limit the speed of the undersea platform, cap communications data rates at a few bits per second (low-frequency radio) or multiple kilobits per second (acoustic), and radiate over a broad area. These factors lead to power inefficiencies, potential crosstalk, and multipath degradation. Undersea optical communication has the potential to mitigate these capability shortfalls, providing very-high-data-rate covert communication.

This seminar provides an overview of applications that might benefit from high-rate undersea communication and technologies, and it reports on new research into finding the ultimate performance limit of undersea optical communication. Using sophisticated physics models of laser and sunlight undersea propagation, Lincoln Laboratory researchers have found that gigabit-per-second links over appreciable ranges, in excess of 100 meters, are possible in much of the world's oceans. This capability promises to transform undersea communications. Finally, the seminar presents architectures based on currently realizable technology that can achieve the predicted ultimate performance limit.

1PhD, Electrical Engineering, Massachusetts Institute of Technology


Waveform Design for Airborne Networks

Dr. Frederick J. Block1
MIT Lincoln Laboratory

Airborne networks are an integral part of modern network-centric military operations. These networks operate in a unique environment that poses many research challenges. For example, nodes are much more mobile in airborne networks than in ground-based networks and are often separated by great distances that cause long delays and large propagation losses. Additionally, the altitude of an airborne receiver potentially places it within line of sight of many transmitters located over a wide area. Some of these transmitters may belong to the same network as the receiver yet still cause multiple-access interference, a consequence of the complications in designing channel access protocols for airborne networks. Other transmitters outside the network may also cause interference. The waveform design must specifically account for this environment to achieve the required level of performance. Physical, link, and network layer techniques for providing high-rate, reliable airborne networks will be discussed.
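To give a sense of the propagation losses at stake, the sketch below applies the standard free-space path loss formula, FSPL(dB) = 20·log10(4πdf/c). The frequency and ranges are illustrative choices, not parameters of any particular military waveform.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# An airborne link can span hundreds of kilometers; compare with a
# short ground link at the same (illustrative) 2.4 GHz carrier.
print(round(fspl_db(300_000, 2.4e9), 1))   # 300 km air-to-air link
print(round(fspl_db(1_000, 2.4e9), 1))     #   1 km ground link
```

Every factor-of-10 increase in range adds 20 dB of loss, so the 300 km link pays roughly 50 dB more than the 1 km link; on top of that, a 300 km path alone contributes about a millisecond of one-way propagation delay.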

1PhD, Electrical Engineering, Clemson University


Cyber Security and Information Sciences

Addressing the Challenges of Big Data Through Innovative Technologies

Dr. Vijay Gadepally1 and Dr. Jeremy Kepner2
MIT Lincoln Laboratory

The ability to collect and analyze large amounts of data is increasingly important within the scientific community. The growing gap between the volume, velocity, and variety of data available and users’ ability to handle this deluge calls for innovative tools to address the challenges imposed by what has become known as big data. MIT Lincoln Laboratory is taking a leading role in developing a set of tools to help solve the problems inherent in big data.

Big data's volume stresses the storage, memory, and compute capacity of a computing system and requires access to a computing cloud. Choosing the right cloud is problem specific. Currently, four multibillion-dollar ecosystems dominate the cloud computing environment: enterprise clouds, big data clouds, SQL database clouds, and supercomputing clouds. Each cloud ecosystem has its own hardware, software, communities, and business markets. The broad nature of big data challenges makes it unlikely that one cloud ecosystem can satisfy all needs, and solutions are likely to require the tools and techniques from more than one cloud ecosystem. The MIT SuperCloud was developed to provide one such solution. To our knowledge, the MIT SuperCloud is the only deployed cloud system that allows all four ecosystems to co-exist without sacrificing performance or functionality.

The velocity of big data stresses the rate at which data can be absorbed and meaningful answers can be produced. Through an initiative led by the National Security Agency (NSA), a Common Big Data Architecture (CBDA) was developed for the U.S. government. The CBDA is based on the Google Big Table NoSQL approach and is now in wide use. Lincoln Laboratory was instrumental in the development of the CBDA and is a leader in adapting the CBDA to a variety of big data challenges. The centerpieces of the CBDA are the NSA-developed Apache Accumulo database (capable of millions of entries per second) and the Lincoln Laboratory–developed Dynamic Distributed Dimensional Data Model (D4M) schema.

Finally, big data variety may present both the largest challenge and the greatest set of opportunities for supercomputing. The promise of big data is the ability to correlate heterogeneous data to generate new insights. The combination of Apache Accumulo and D4M technologies allows vast quantities of highly diverse data (bioinformatics, cyber-relevant data, social media data, etc.) to be automatically ingested into a common schema that enables rapid query and correlation of elements.
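To make the "common schema" idea concrete, here is a minimal associative-array sketch in the spirit of D4M's exploded schema. The class, method names, and records are invented for illustration; this is not the D4M API or the Accumulo client.

```python
from collections import defaultdict

class AssocArray:
    """Toy associative array: every record becomes sparse (row, column)
    entries, with values "exploded" into the column name so that highly
    diverse data sources share a single sparse table."""

    def __init__(self):
        self.rows = defaultdict(dict)
        self.cols = defaultdict(dict)   # transpose, for fast column queries

    def ingest(self, row_key, record):
        for field, value in record.items():
            col = f"{field}|{value}"    # exploded column name
            self.rows[row_key][col] = 1
            self.cols[col][row_key] = 1

    def query_col(self, col):
        """All rows (records) sharing a given attribute value."""
        return set(self.cols.get(col, {}))

A = AssocArray()
A.ingest("log-001", {"src_ip": "10.0.0.5", "user": "alice"})
A.ingest("log-002", {"src_ip": "10.0.0.5", "user": "bob"})
A.ingest("tweet-9", {"user": "alice", "lang": "en"})

# Correlate across heterogeneous sources: which records share a value?
print(sorted(A.query_col("src_ip|10.0.0.5")))   # ['log-001', 'log-002']
print(sorted(A.query_col("user|alice")))        # ['log-001', 'tweet-9']
```

Because every source is reduced to the same sparse triple form, queries and correlations like these need no per-source schema design, which is the property the paragraph above highlights.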

1PhD, Electrical and Computer Engineering, The Ohio State University
2PhD, Physics, Princeton University


Artificial Intelligence for Cyber Security

Dr. Joseph R. Zipkin1
MIT Lincoln Laboratory

Despite cyber defenders' best efforts to secure computer systems over the past several decades, cyber adversaries continue to outpace and outsmart them by hacking systems and stealing data. Recent advances in artificial intelligence (AI) can be leveraged to improve this situation by helping defenders keep pace with the flood of cyber data from networks and systems and enabling them to make progress in securing their systems. For example, AI can help security analysts focus on relevant signals in large amounts of situational awareness data, and it can be leveraged to automatically generate cyber courses of action in response to cyber threats. This seminar will present an overview of the challenges of cyber security that are amenable to AI solutions and provide a summary of recent work in analyzing online open-source data in support of attack prediction.

1PhD, Mathematics, University of California Los Angeles


CASCADE: Cyber Adversarial Scenario Modeling and Automated Decision Engine

Jaime Pena1
MIT Lincoln Laboratory

Cyber systems are complex and include numerous heterogeneous and interdependent entities and actors—such as mission operators, network users, attackers, and defenders—that interact within a network environment. Decisions must be made to secure cyber systems, mitigate threats, and minimize mission impact. Administrators must consider how to optimally partition a network using firewalls, filters, and other mechanisms to restrict the movement of an attacker who gains a foothold on the network and to mitigate the damage that the attacker can cause.

Current cyber decision support focuses on gathering network data, processing these data into knowledge, and visualizing the resultant knowledge for cyber situational awareness. However, as the data become more readily obtained, network administrators face the same problem that business analysts have faced for some time: More data or knowledge does not lead to better decisions. There is a need for the development of cyber decision support that can automatically recommend timely and sound data-driven courses of action. Currently, decisions about partitioning a network are made by analysts who use subjective judgment that is sometimes informed by best practices. We propose a decision system that automatically recommends an optimal network-partitioning architecture to maximize security posture and minimize mission impact.
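One classical building block for the partitioning decision is max-flow/min-cut: by duality, the maximum flow from an attacker's foothold to a critical asset equals the smallest total "capacity" of links that must be severed to disconnect them. The sketch below runs Edmonds-Karp on an invented four-node network; the topology and unit capacities are illustrative only, not CASCADE's actual optimization.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow; returns the flow value (== min-cut value)."""
    flow = 0
    res = {u: dict(vs) for u, vs in cap.items()}   # residual capacities
    for u in cap:                                   # add zero reverse edges
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)       # bottleneck capacity
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

# Toy network: attacker foothold -> internal hosts -> critical server.
# Unit capacities model "one blocking rule per link".
cap = {
    "foothold": {"hostA": 1, "hostB": 1},
    "hostA": {"server": 1},
    "hostB": {"hostA": 1, "server": 1},
    "server": {},
}
print(max_flow(cap, "foothold", "server"))   # 2: two links must be severed
```

A real decision engine must of course weigh far more than connectivity, e.g., mission impact of each blocked link, which is why the abstract frames this as an optimization over both security posture and mission impact.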

1BS, Mathematics and Computer Science, University of Texas at El Paso


Cryptographically Secure Computation

Dr. Emily Shen1 and Dr. Arkady Yerukhimovich2
MIT Lincoln Laboratory

In today's big data world, the ability to collect, share, and analyze data has led to unprecedented capabilities as well as unprecedented privacy concerns. Often, people or organizations would like to collaborate and obtain results from their collective data but without divulging their individual data. Current techniques to enable such sharing require that the parties either entrust each other with their sensitive data or find a mutually trusted third party who can perform the computation on their behalf; these approaches may be undesirable or impractical.

Secure multiparty computation (MPC) is a cryptographic technique that enables parties to perform joint computations without revealing their sensitive inputs and without using a trusted third party. MPC not only can provide privacy for existing applications but also can enable new applications that are currently not possible. For example, MPC can be used to enable collaboration and information exchange between partner organizations through selective data sharing and to guarantee security of computation performed in an untrusted environment such as a cloud. In this talk, we illustrate the cryptographic ideas behind MPC. We then describe the design and optimization of MPC for a collaborative anomaly detection algorithm.

1PhD, Electrical Engineering and Computer Science, Massachusetts Institute of Technology
2PhD, Computer Science, University of Maryland


Cyber Security Metrics

Dr. Robert Lychev1
MIT Lincoln Laboratory

Recent cyber attacks on government, commercial, and institutional computer networks have highlighted the need for increased cyber security measures. In order to better quantify, understand, and, therefore, more effectively combat this ever-growing threat, the U.S. government is shifting its security strategy for its computer networks and systems from one of yearly compliance checks to one of continuous monitoring and assessment. While continuous monitoring refers to the ability to maintain constant awareness of the configuration and status of the computer networks and systems, assessment refers to the ability to accurately appraise the security posture of the systems and to estimate the risk associated with them. Clearly, the effectiveness of this strategy hinges upon the ability to accurately assess cyber risk.

This seminar will present a methodology for producing useful cyber security metrics that are derived from realistic and well-defined mathematical attacker models. These metrics can be continuously evaluated from operational security data and, thus, support the government's new security strategy of continuous monitoring. The speaker will show how this methodology has been used to develop several important security metrics that are based upon the SANS Institute's list of 20 critical security controls. Live demonstrations will illustrate how these security metrics are used to assess risk in an operational context.

1PhD, Computer Science, Georgia Institute of Technology


Developing and Evaluating Link-Prediction Algorithms for Speaker Content Graphs

Kara Greenfield1 and Dr. William M. Campbell2
MIT Lincoln Laboratory

Graph theory can be a very powerful tool for a variety of problems, but it is not always clear how the edges of the graph should be defined. Link prediction is the process of determining which pairs of nodes should be connected by an edge. This seminar describes the process of developing different link-prediction algorithms. We first discuss how to choose intermediary metrics that correspond well to the application of interest in order to obtain measures of performance before it is practical to obtain a measure of effectiveness for that application. We also describe how MIT Lincoln Laboratory’s VizLinc audio-visual tool can be used in visual analytics to gain better insight into the algorithms than can be provided by numeric metrics alone. Throughout the talk, we will use speaker recognition as the domain of interest, generating speaker content graphs that efficiently model the underlying manifold of the speaker space and employing those graphs to perform tasks such as query by example and speaker clustering.
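As a concrete picture of what link prediction means (a standard textbook baseline, not the algorithms developed in this work), the sketch below scores each unlinked node pair by its number of common neighbors and ranks the candidates:

```python
from itertools import combinations

# Common-neighbors link prediction on a tiny toy graph (invented data).
edges = {("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "d")}
nodes = {n for e in edges for n in e}
# Neighbor sets: for each node, the other endpoint of every incident edge.
nbrs = {n: {v for e in edges for v in e if n in e and v != n} for n in nodes}

def cn_score(u, v):
    return len(nbrs[u] & nbrs[v])

# Rank the node pairs that are not yet edges; top score = predicted link.
missing = [p for p in combinations(sorted(nodes), 2)
           if p not in edges and (p[1], p[0]) not in edges]
ranked = sorted(missing, key=lambda p: cn_score(*p), reverse=True)
```

In a speaker content graph, the analogous score would be a similarity between speaker models rather than a neighbor count, which is where the intermediary metrics discussed in the talk come in.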

1MS, Industrial Mathematics, Worcester Polytechnic Institute
2PhD, Applied Mathematics, Cornell University


Efficient, Privacy-Preserving Data Sharing

J. Darby Mitchell1 and Sophia Yakoubov2
MIT Lincoln Laboratory

Database management systems permit fast, expressive searches on large volumes of data. However, modern databases provide the server with a lot of information, such as the contents of the database and all of the clients' queries. In many scenarios, a client may wish to limit the information revealed to the server; examples include a cloud computing setting in which the server and client belong to two different organizations, and a high-value target setting in which the client is concerned about potential compromise and wants to limit the scope of damage in case of attack.

Modern cryptography provides several different types of tools that offer the promise of searching on encrypted data. Thus far, however, most of the tools have not gained widespread use because they either operate too slowly or lack the query expressivity of a desired language like SQL. Recently, several research groups have participated in an Intelligence Advanced Research Projects Activity (IARPA)–funded research program called "Security and Privacy Assurance Research," which aims to overcome both of these obstacles. The researchers have built secure database management systems whose query functionality comprises a large subset of SQL and whose performance is within a 10× factor of MySQL at terabyte scales. In this talk, we will illustrate the cryptographic advances that make these technologies possible. Additionally, we will describe the rigorous software engineering and formal techniques employed to test that the researchers' software met all of the security, functionality, and performance requirements.

1MSE, Software Engineering, Carnegie Mellon University
2MS, Computer Science, Boston University


Evaluating Cyber Moving Target Techniques

Dr. Hamed Okhravi1
MIT Lincoln Laboratory

The concept of cyber moving target (MT) defense has been identified as one of the game-changing themes to rebalance the landscape of cyber security. MT techniques make cyber systems “a moving target,” that is, less static, less homogeneous, and less deterministic in order to create uncertainty for attackers.

Although many MT techniques have been proposed in the literature, little work has been done to evaluate their effectiveness, benefits, and weaknesses. This seminar discusses the evaluation of a wide range of MT techniques. First, a qualitative assessment studies the potential benefits, gaps, and weaknesses of each category of MT strategy. This step identifies major gaps in order to guide future research and prototyping efforts. The findings of a case study on code-reuse defenses are presented. For the MT techniques that have been identified as potentially more beneficial in the qualitative assessment, a deeper quantitative assessment is performed by examining real exploits. Next, the seminar discusses an operational assessment of an MT technique inside a larger system and illustrates how important parameters of such techniques can be monitored for improved effectiveness. Finally, possible directions for future work in this domain are outlined.

1PhD, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign


Experiences in Cyber Security Education:
The MIT Lincoln Laboratory Capture-the-Flag Exercise

Andrew Fasano1, Timothy R. Leek2, and Andrew Davis3
MIT Lincoln Laboratory

Dr. Nickolai Zeldovich4
MIT Computer Science and Artificial Intelligence Laboratory

Many popular and well-established cyber security capture-the-flag (CTF) exercises are held each year in a variety of settings, including universities and semiprofessional security conferences. The CTF format also varies greatly, ranging from linear puzzle-like challenges to team-based offensive and defensive free-for-all hacking competitions. While these events are exciting and important as contests of skill, they offer limited educational opportunities. In particular, since participation requires considerable a priori domain knowledge and practical computer security expertise, the majority of typical computer science students are excluded from taking part in these events. The goal in designing and running the MIT Lincoln Laboratory CTF was to make the experience accessible to a wider community by providing an environment that would not only test and challenge the computer security skills of the participants but also educate and prepare those without extensive prior expertise. This seminar presents the Laboratory's self-consciously educational and open CTF, including discussions of our teaching methods, game design, scoring measures, logged data, and lessons learned.

1BS, Computer Science and Communication, Rensselaer Polytechnic Institute
2MS, Computer Science, University of California–San Diego
3BS, Computer Science, Georgia Institute of Technology
4PhD, Computer Science, Stanford University


Modeling to Improve the Science of Insider Threat Detection

Seth E. Webster1
MIT Lincoln Laboratory

Recent high-profile security breaches have highlighted the importance of insider threat detection systems for cyber security. Most of these detection systems rely on the manual or semi-automated analysis of audit logs to identify malicious activity. However, issues such as high false-positive rates and concerns over data privacy make it difficult to predict performance of these systems within an enterprise environment. These and other issues limit an organization’s ability to effectively apply these tools.

This seminar will present an approach that leverages enterprise-level modeling to predict the performance of insider threat detection systems. We provide a proof of concept of our modeling approach by applying it to a synthetic dataset and comparing its predictions to the ground truth. The performance predictions from these models can enable enterprises to compare tools and ultimately make better-informed decisions about which insider threat detection systems to deploy.
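The kind of ground-truth comparison described above reduces, at its simplest, to scoring a detector's alerts against known labels. The labels and alerts below are invented for illustration:

```python
# Score a detector's alerts against synthetic ground-truth labels to
# estimate true- and false-positive rates (toy data, not the study's).
truth  = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # 1 = actual insider activity
alerts = [1, 0, 1, 1, 0, 0, 0, 0, 0, 1]   # 1 = detector raised an alert

tp = sum(t and a for t, a in zip(truth, alerts))        # correct alerts
fp = sum((not t) and a for t, a in zip(truth, alerts))  # spurious alerts
tpr = tp / sum(truth)                      # detection rate
fpr = fp / (len(truth) - sum(truth))       # false-positive rate
```

The value of enterprise-level modeling is that it predicts such rates at realistic event volumes, where even a small false-positive rate can overwhelm analysts.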

1MS, Computer Science, Massachusetts Institute of Technology


Multicore Programming in pMatlab® Using Distributed Arrays

Dr. Jeremy Kepner1
MIT Lincoln Laboratory

MATLAB is one of the most commonly used languages for scientific computing, with approximately one million users worldwide. Many of the programs written in MATLAB can benefit from the increased performance offered by multicore processors and parallel computing clusters. The MIT Lincoln Laboratory pMatlab library allows high-performance parallel programs to be written quickly by using the distributed arrays programming paradigm. This talk provides an introduction to distributed arrays programming and describes the best programming practices for using distributed arrays to produce programs that perform well on multicore processors and parallel computing clusters. These practices include understanding the concepts of parallel concurrency versus parallel data locality, using Amdahl's Law, and employing a well-defined design-code-debug-test process for parallel codes.
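Amdahl's Law, one of the practices named above, bounds parallel speedup by a program's serial fraction. A quick sketch (in Python for self-containment) shows why scaling flattens out:

```python
# Amdahl's Law: overall speedup on n processors when a fraction
# serial_frac of the work cannot be parallelized.
def amdahl_speedup(serial_frac, n_procs):
    return 1.0 / (serial_frac + (1.0 - serial_frac) / n_procs)

# With even 5% serial code, 64 cores give nowhere near 64x:
s64 = amdahl_speedup(0.05, 64)   # ~15.4x
limit = 1 / 0.05                 # 20x ceiling as n_procs grows without bound
```

This is why the talk pairs Amdahl's Law with parallel data locality: minimizing the serial (and communication) fraction matters more than adding cores.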

1PhD, Physics, Princeton University


Secure and Resilient Cloud Computing for the Department of Defense

Dr. Nabil A. Schear1
MIT Lincoln Laboratory

The Department of Defense (DoD) and the intelligence community (IC) are rapidly moving to adopt cloud computing to achieve substantial user benefits, such as the ability to store and access massive amounts of data, on-demand delivery of computing services, the capability to widely share information, and the scalability of resource usage. However, cloud computing both exacerbates existing cyber security challenges and introduces new challenges.  

Today, most cloud data remain unprotected and therefore vulnerable to theft, deception, or destruction. Furthermore, the homogeneity of cloud software and hardware increases the risk of a single cyber attack on the cloud, causing widespread impact. To address these challenges, Lincoln Laboratory is developing technology that will strengthen the security and resilience of cloud computing so that the DoD and IC can confidently deploy cloud services for their critical missions. This seminar will detail the cyber security threats to cloud computing, the Laboratory's approach to cloud security and resilience, and results from several novel technologies demonstrating enhanced cyber capabilities with low overhead.

1PhD, Computer Science, University of Illinois at Urbana-Champaign


Engineering, Other

Advanced Structures and Materials Using Additive Manufacturing

Dr. Keith B. Doyle1 and James F. Ingraham2
MIT Lincoln Laboratory

MIT Lincoln Laboratory is utilizing additive manufacturing processes to create novel engineering solutions and to develop prototypes for both research and national security applications. Additive manufacturing offers the ability to build unique structures with complex external and internal geometries, producing strong, lightweight, multifunctional structures that are not possible with traditional manufacturing techniques. At Lincoln Laboratory, high-performance plastics are employed in a broad variety of engineering applications, including unmanned aerial vehicles and airborne sensors. The utilization of metals, including aluminum and titanium for high-performance, lightweight cooling systems, is increasing, and research teams are developing techniques to improve mechanical properties so that they approach those of wrought counterparts. The development of new materials offering increased mechanical performance to meet demanding size, weight, performance, and environmental requirements for next-generation prototype systems is underway.

1PhD, Engineering Mechanics, University of Arizona
2BS, Chemical Engineering, Worcester Polytechnic Institute


Improved Resilience for the Electric Grid

Erik R. Limpaecher1 and Scott B. Van Broekhoven2
MIT Lincoln Laboratory

The U.S. electric power grid has been described as the largest interconnected machine on Earth. Its continuous and stable operation has become essential to the day-to-day functioning of our economy and our households. However, the grid is undergoing a period of rapid change while facing an increasing number of threats. The national grid is composed of aging and outdated infrastructure, and with the addition of distributed intermittent renewables and natural gas generation, it is evolving in ways that often decrease grid resilience. As the climate changes and the potential for physical or cyber terrorist attacks grows, the likelihood of catastrophic outages due to extreme weather events and cyber incidents increases. MIT Lincoln Laboratory has been working with federal, state, and city governments as well as utilities and grid operators to improve the resiliency of critical facilities and to accelerate the adoption of distributed energy resources. This seminar will describe our recent progress both on new resilience modeling tools and on an enabling hardware-in-the-loop test bed prototype.

1BS, Electrical Engineering, Princeton University
2MS, Engineering Systems, Massachusetts Institute of Technology


Optomechanical Systems Engineering for Space Payloads

Dr. Keith B. Doyle1
MIT Lincoln Laboratory

MIT Lincoln Laboratory has developed a broad array of optical payloads for imaging and communication systems in support of both science and national defense applications. Space provides a particularly harsh environment for high-performing optical instruments that must survive launch and then meet demanding imaging and pointing requirements in dynamic thermal and vibration operational settings. This seminar presents some of the mechanical challenges in building optical payloads for space and draws on a history of successful optical sensors built at MIT Lincoln Laboratory along with recent systems in the news for both imaging and communication applications. Details include an overview of recent optical payloads and their applications, optomechanical design considerations, the impact and mitigation of environmental effects, and environmental testing.

1PhD, Engineering Mechanics, University of Arizona


Self-Driving Vehicles Using Localizing Ground-Penetrating Radar

Byron M. Stanley1
MIT Lincoln Laboratory

Self-driving vehicles have the potential to prevent most of the over 35,000 vehicle fatalities in the U.S. each year. However, current implementations are not mature enough for widespread adoption. Optical sensors provide remarkably successful lane-keeping capabilities, yet they struggle in a range of conditions and scenarios. This seminar will describe in detail MIT Lincoln Laboratory's Localizing Ground-Penetrating Radar (LGPR) system, a sensor that provides real-time estimates of a vehicle's position even in challenging weather and road conditions. As an independent method of localization, the LGPR system offers significant potential safety and robustness benefits when fused with traditional approaches. Video demonstrations include closed-loop autonomous steering systems deployed in Afghanistan and Boston, and real-time localization at highway speeds at night in a snowstorm.

1SM, Mechanical Engineering, Massachusetts Institute of Technology


Three-Dimensional Integrated Technology for
Advanced Focal Planes and Integrated Circuits

Donna-Ruth Yost1 and Dr. Chenson Chen2
MIT Lincoln Laboratory

Over the last decade, MIT Lincoln Laboratory has developed a three-dimensional (3D) circuit integration technology that exploits the advantages of silicon-on-insulator technology to enable wafer-level stacking and micrometer-scale electrical interconnection of fully fabricated circuit wafers [1].

Advanced focal-plane arrays have been the first applications to exploit the benefits of this 3D integration technology because the massively parallel information flow present in two-dimensional imaging arrays maps very nicely into a 3D computational structure as information flows from circuit tier to circuit tier in the z-direction. To date, the Laboratory's 3D integration technology has been used to fabricate four different focal planes, including a two-tier 64 × 64 imager with fully parallel per-pixel analog-to-digital (A/D) conversion [2]; a three-tier 640 × 480 imager consisting of an imaging tier, an A/D conversion tier, and a digital signal processing tier; two-tier 1024 × 1024 pixel, four-side-abuttable imaging modules for tiling large mosaic focal planes [3, 4]; and a three-tier Geiger-mode avalanche photodiode (APD) 3D LIDAR array, using a 30-volt avalanche-photodiode tier, a 3.3-volt complementary metal-oxide semiconductor (CMOS) tier, and a 1.5-volt CMOS tier [5].

Recently, the 3D integration technology has been made available to the circuit-design research community through multiproject fabrication runs sponsored by the Defense Advanced Research Projects Agency. Three different multiproject runs have been completed and included over 100 different circuit designs from 40 different research groups. Three-dimensional circuit concepts explored in these runs included stacked memories, field-programmable gate arrays, and mixed-signal and RF circuits. We have developed an understanding of heterogeneous 3D integration issues by successfully demonstrating 3D integration of Si CMOS readout integrated circuits to InGaAs photodiode wafers [6], and an understanding of mixed-fabrication-facility issues by 3D integrating Si CMOS readout integrated circuits (ROICs) with externally fabricated technologies. This seminar will discuss the enabling technologies required for this approach to 3D integration, circuits demonstrated by this technology, and current 3D technology programs at Lincoln Laboratory.

[1] J.A. Burns, et al., "A Wafer-Scale 3-D Circuit Integration Technology," IEEE Transactions on Electron Devices, vol. 53, no. 10, pp. 2507–2516, October 2006.
[2] J.A. Burns, et al., "Three-dimensional Integrated Circuits for Low Power, High Bandwidth Systems on a Chip," 2001 ISSCC International Solid-State Circuits Conference, Digest of Technical Papers, vol. 44, pp. 268–269, February 2001.
[3] V. Suntharalingam, et al., "Megapixel CMOS Image Sensor Fabricated in Three-Dimensional Integrated Circuit Technology," 2005 ISSCC International Solid-State Circuits Conference, Digest of Technical Papers, vol. 48, pp. 356–357, February 2005.
[4] V. Suntharalingam, et al., "A Four-Side Tileable, Back Illuminated, 3D-Integrated Megapixel CMOS Image Sensor," 2009 ISSCC International Solid-State Circuits Conference, Digest of Technical Papers, pp. 38–39, February 2009.
[5] B. Aull, et al., "Laser Radar Imager Based on 3D Integration of Geiger-Mode Avalanche Photodiodes with Two SOI Timing Circuit Layers," 2006 ISSCC International Solid-State Circuits Conference, Digest of Technical Papers, vol. 49, pp. 304–305, February 2006.
[6] C.L. Chen, et al., "Wafer-Scale 3D Integration of InGaAs Image Sensors with Si Readout Circuits," IEEE International Conference on 3D System Integration, San Francisco, 28–30 Sept. 2009 (Best Paper Award).

1BS, Materials Science and Engineering, Cornell University
2PhD, Physics, University of California, Berkeley


Homeland Protection

Disease Modeling to Assess Outbreak Detection and Response

Dr. Diane C. Jamrog1 
MIT Lincoln Laboratory

Bioterrorism is a serious threat that has become widely recognized since the anthrax mailings of 2001. In response, one national research activity has been the development of biosensors and networks thereof. A driving factor behind biosensor development is the potential to provide early detection of a biological attack, thereby enabling timely treatment. This presentation introduces a disease progression and treatment model to quantify the potential benefit of early detection. To date, the model has been used to assess responses to inhalation anthrax and smallpox outbreaks.
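To give a flavor of why early detection pays off, the sketch below runs a generic textbook-style SIR epidemic model (not the Laboratory's disease progression and treatment model; all rates are invented) in which detection triggers an intervention that cuts the transmission rate:

```python
# Generic SIR-style sketch: earlier detection (and thus earlier
# intervention, modeled as cutting transmission) leaves more of the
# population uninfected. Parameters are illustrative only.
def epidemic(detect_day, days=160, beta=0.3, gamma=0.1, cut=0.25):
    s, i, r = 0.999, 0.001, 0.0        # susceptible, infected, recovered
    for day in range(days):
        b = beta * cut if day >= detect_day else beta
        new_inf = b * s * i            # new infections this day
        new_rec = gamma * i            # recoveries this day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r + i                       # fraction of population ever infected

early, late = epidemic(detect_day=10), epidemic(detect_day=40)
```

Detecting on day 10 rather than day 40 sharply reduces the total attack size, which is the benefit the presentation's model quantifies for anthrax and smallpox scenarios.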

1PhD, Computation and Applied Mathematics, Rice University


Human Language Technology

New Approaches to Automatic Speaker Recognition and
Forensic Considerations

Dr. Joseph P. Campbell1 and Dr. Pedro A. Torres-Carrasquillo2
MIT Lincoln Laboratory

Recent gains in the performance of automatic speaker recognition systems have been obtained by new methods in subspace modeling. This talk presents the development of speaker recognition systems ranging from traditional approaches, such as Gaussian mixture modeling (GMM), to novel state-of-the-art systems employing subspace techniques, such as factor analysis and iVector methods. This seminar also covers research on the means to exploit high-level information. For example, idiosyncratic word usage and speaker-dependent pronunciation are high-level features for recognizing speakers. These high-level features can be combined with conventional features for increased accuracy. The seminar presents new methods to increase robustness and improve calibration of speaker recognition systems by addressing common factors in the forensic domain that degrade recognition performance. We describe MIT Lincoln Laboratory's VOCALINC system and its application to automated voice comparison of speech samples for law enforcement investigation and forensic applications. The talk concludes with appropriate uses of this technology, especially cautions regarding forensic-style applications, and a look at this technology's future directions.

1PhD, Electrical Engineering, Oklahoma State University
2PhD, Electrical Engineering, Michigan State University


Signal Processing for the Measurement of Characteristic Voice Quality

Dr. Nicolas Malyska1 and Dr. Thomas F. Quatieri2
MIT Lincoln Laboratory

The quality of a speaker's voice communicates to a listener information about many characteristics, including the speaker's identity, language, dialect, emotional state, and physical condition. These characteristic elements of a voice arise because of variations in the anatomical configuration of a speaker's lungs, voice box, throat, tongue, mouth, and nasal airways, as well as the ways in which the speaker moves these structures. The voice box, or larynx, is of particular interest in voice quality, as it is responsible for generating variations in the excitation source signal for speech.

In this seminar, we will discuss mechanisms by which voice-source variations are generated, appear in the acoustic signal, and are perceived by humans. Our focus will be on using signal processing to capture acoustic phenomena resulting from the voice source. The presentation will explore several applications that build upon these measurement techniques, including (1) turbulence-noise component estimation during aperiodic phonation, (2) automatic labeling of regions of irregular phonation, and (3) the analysis of pitch dynamics.
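As a small taste of the signal processing involved (a classic textbook technique, not the authors' methods), the fundamental frequency of a voiced frame can be estimated from the peak of its autocorrelation; tracking that estimate frame-to-frame yields the pitch dynamics mentioned above:

```python
import math

# Autocorrelation pitch estimation on a synthetic 100 Hz frame.
fs = 8000                                   # sample rate (Hz)
f0 = 100                                    # true pitch of the synthetic frame
x = [math.sin(2 * math.pi * f0 * t / fs) for t in range(800)]

def autocorr(x, lag):
    return sum(a * b for a, b in zip(x, x[lag:]))

# Search plausible pitch lags (50-400 Hz) for the strongest self-similarity.
lags = range(fs // 400, fs // 50 + 1)
best = max(lags, key=lambda l: autocorr(x, l))
f0_est = fs / best                          # -> 100.0 Hz
```

Real speech requires care with aperiodic and irregular phonation, which is precisely where the seminar's turbulence-noise and irregular-phonation measurements come in.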

1PhD, Health Sciences and Technology, Massachusetts Institute of Technology
2ScD, Electrical Engineering, Massachusetts Institute of Technology


Radar and Signal Processing

Adaptive Array Estimation

Mr. Keith Forsythe1
MIT Lincoln Laboratory

Parameter estimation is a necessary step in most surveillance systems and typically follows detection processing. Estimation theory provides parameter bounds specifying the best achievable performance and suggests maximum-likelihood (ML) estimation as a viable strategy for algorithm development. Adaptive sensor arrays introduce the added complexity of bounding and assessing parameter estimation performance (i) in the presence of limiting interference whose statistics must be inferred from measured data and (ii) under uncertainty in the array manifold for the signal search space. This talk focuses on assessing the mean-squared-error (MSE) performance at low and high signal-to-noise ratio (SNR) of nonlinear ML estimation that (i) uses the sample covariance matrix as an estimate of the true noise covariance and (ii) has imperfect knowledge of the array manifold for the signal search space. The method of interval errors (MIE) is used to predict MSE performance and is shown to be remarkably accurate well below estimation threshold. SNR loss in estimation performance due to noise covariance estimation is quantified and is shown to be quite different from analogous losses obtained for detection. Lastly, a discussion of the asymptotic efficiency of ML estimation is also provided in the general context of misspecified models, the most general form of model mismatch.
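In one common form from the literature (a simplification of the treatment in the talk), the MIE prediction combines the probabilities of gross "interval" errors with the local CRB:

```latex
% Method of interval errors: below threshold, gross errors at the
% sidelobe/ambiguity points \theta_i dominate the MSE; above threshold,
% the Cramer-Rao bound does.
\mathrm{MSE}\bigl(\hat{\theta}\bigr) \;\approx\;
  \sum_{i} q_i \,(\theta_i - \theta_0)^2
  \;+\; \Bigl(1 - \sum_{i} q_i\Bigr)\,\mathrm{CRB}(\theta_0)
```

where theta_0 is the true parameter and q_i is the pairwise probability that the likelihood evaluated at ambiguity point theta_i exceeds its value at theta_0; accurate evaluation of the q_i is what makes the prediction reliable well below the estimation threshold.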

1SM, Mathematics, Massachusetts Institute of Technology


Advanced Embedded Computing

Dr. Paul Monticciolo1, Dr. William S. Song2, and Matthew A. Alexander3
MIT Lincoln Laboratory

Embedded computing continues to influence and change the way we live, as evidenced by its ubiquitous usage in consumer and industrial electronics and its ability to enable new growth areas, such as autonomous vehicles and virtual reality systems. Over the past several decades, the Department of Defense (DoD) has been leveraging advances in embedded computing to meet national needs in intelligence, surveillance, and reconnaissance; electronic warfare; and command and control applications that require tremendous compute capabilities in highly constrained size, weight, and power environments. Furthermore, defense platforms such as aircraft and ships can have lifetimes upwards of 50 years; hence, hardware and software architectures must be designed to provide both the performance and flexibility to support new applications and regular technology upgrades over such a long time frame.

MIT Lincoln Laboratory’s Embedded and Open Systems Group has been a long-time leader in the design, development, and implementation of defense electronic components and systems. In this seminar, we will discuss advanced real-time hardware and software technologies along with codesigned hardware and software system solutions. First, we will provide an overview of DoD-oriented embedded computing challenges, technologies, and trends. We will then explore extremely power efficient custom application-specific integrated circuit and emerging system-on-chip solutions that parallel approaches used in smart phones. Heterogeneous system solutions that leverage field-programmable gate arrays, graphics processing units, and multi-core microprocessor components will then be addressed. Open standards-based software architectures that can meet real-time performance requirements, enable code portability, and enhance programmer productivity will be discussed. Representative implementation examples will be provided in each section. Our presentation will conclude with some prognostication on the future of embedded computing. 

1PhD, Electrical Engineering, Northeastern University
2DSc, Electrical Engineering, Massachusetts Institute of Technology
3BS, Computer Engineering, Northeastern University


A Wideband 6 GHz to 12 GHz Power Amplifier with Enhanced Efficiency

Dr. Nestor D. Lopez1
MIT Lincoln Laboratory

In the work discussed in this seminar, MIT Lincoln Laboratory uses optimal nonuniform transmission lines (ONUTLs) for the design of wideband single-transistor power amplifiers. Single-transistor, class-AB power amplifiers are the building blocks of most radio-frequency (RF) transmitters because these amplifiers exhibit acceptable linearity and efficiency; however, these amplifiers are typically narrowband.

RF transistors support ultrawideband performance, but bandwidth is limited by the matching networks used. The matching networks transform the system impedance to the impedances the transistor needs to see (target impedances) to deliver maximal performance, i.e., output power, gain, and efficiency. Matching networks are needed to individually match source (input) and load (output) ports since these sets of target impedances are different. RF transistor target impedances can be small, are complex, and vary with frequency. Because these target impedances are also inherent to the transistor's physical dimensions and its technology (GaN high-electron-mobility transistor [HEMT], Si laterally diffused MOSFET [LDMOS], or GaAs pseudomorphic HEMT [pHEMT]), each matching network is unique to a specific transistor.

These constraints make the design of RF transistor wideband matching networks a difficult task. This work uses an algorithm to modify the characteristic impedance profile of a nonuniform transmission line so that the error between the matching network transformed impedances and a set of target impedances is reduced. The target impedances are obtained from the large signal loadpull characterization of the RF transistor. The result of designing source and load ONUTL wideband matching networks is a class-AB power amplifier that exhibits wideband performance.
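The ONUTL algorithm itself is beyond the scope of a short sketch, but the impedance-transformation objective it optimizes can be illustrated with the simplest special case: a uniform quarter-wave line whose characteristic impedance is chosen by grid search to transform a low transistor-side impedance to the 50-ohm system impedance (this is a standard textbook match, not the ONUTL design procedure, and the impedances are invented):

```python
import math

# Single-frequency quarter-wave matching by grid search (illustrative).
ZL, ZSYS = 10.0, 50.0                      # load and system impedances (ohms)
beta_l = math.pi / 2                       # electrical length: quarter wave

def z_in(z0):
    """Input impedance of a lossless line of impedance z0 terminated in ZL."""
    t = complex(0.0, math.tan(beta_l))
    return z0 * (ZL + z0 * t) / (z0 + ZL * t)

candidates = [z0 / 10 for z0 in range(100, 401)]   # 10.0 ... 40.0 ohms
best = min(candidates, key=lambda z0: abs(z_in(z0) - ZSYS))
# Theory predicts Z0 = sqrt(ZL * ZSYS) = sqrt(500) ~ 22.36 ohms
```

The ONUTL approach generalizes this idea: instead of one uniform section at one frequency, the whole nonuniform impedance profile is adjusted to minimize the error against complex, frequency-dependent target impedances across the band.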

This work demonstrates a 6 GHz to 12 GHz power amplifier prototype implemented with a discrete 10 W GaN HEMT on SiC and wideband ONUTLs. The amplifier gain is 9 dB with a power-added efficiency of 40%.
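A back-of-envelope check of these figures uses the standard definition of power-added efficiency, PAE = (P_out - P_in) / P_dc. The 10 W output level below is an assumption for illustration (the device is rated at 10 W; the abstract does not state the measured output power):

```python
# Relating the quoted 9 dB gain and 40% PAE to drive and DC power.
p_out = 10.0                          # watts of RF output (assumed)
gain_db = 9.0
p_in = p_out / 10 ** (gain_db / 10)   # ~1.26 W of RF drive power
pae = 0.40
p_dc = (p_out - p_in) / pae           # ~21.9 W of DC power consumed
```

At these levels, roughly half the DC power is dissipated as heat, which is why efficiency enhancement over a wide band is the central challenge of the work.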

1PhD, Electrical Engineering, University of Colorado at Boulder


Bioinspired Resource Management for Multiple-Sensor
Target Tracking Systems

Dr. Dana Sinno1 and Dr. Hendrick C. Lambert2
MIT Lincoln Laboratory

We present an algorithm, inspired by self-organization and stigmergy observed in biological swarms, for managing multiple sensors tracking large numbers of targets. We have devised a decentralized architecture wherein autonomous sensors manage their own data collection resources and task themselves. Sensors cannot communicate with each other directly; however, a global track file, which is continuously broadcast, allows the sensors to infer their contributions to the global estimation of target states. Sensors can transmit their data (either as raw measurements or some compressed format) only to a central processor where their data are combined to update the global track file. We outline information-theoretic rules for the general multiple-sensor Bayesian target tracking problem and provide specific formulas for problems dominated by additive white Gaussian noise. Using Cramér-Rao lower bounds as surrogates for error covariances and numerical scenarios involving ballistic targets, we illustrate that the bioinspired algorithm is highly scalable and performs very well for large numbers of targets.
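The self-tasking rule can be pictured with a toy sketch (an illustration only, not the paper's algorithm; all numbers are invented): each sensor reads the broadcast track file and picks the target whose estimate it would improve most, scored here by the log-det information gain for scalar tracks, exploiting the additivity of Fisher information across independent measurements.

```python
import math

# Toy decentralized tasking: per-target Fisher information from the
# broadcast track file, and the information each sensor's next
# measurement would add (all values invented).
track_info = {"t1": 0.5, "t2": 4.0, "t3": 1.0}
sensor_info = {"s1": 2.0, "s2": 0.8}

def gain(sensor, target):
    # log-det information gain for a scalar state; Fisher information
    # is additive across independent measurements.
    return (math.log(track_info[target] + sensor_info[sensor])
            - math.log(track_info[target]))

# Each sensor tasks itself, with no sensor-to-sensor communication.
tasking = {s: max(track_info, key=lambda t: gain(s, t)) for s in sensor_info}
```

Note that both sensors converge on the most poorly tracked target, t1; in the full algorithm, stigmergy through the continuously updated track file breaks such pile-ups over time as the shared estimate improves.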

1PhD, Electrical Engineering, Arizona State University
2PhD, Applied Physics, University of California, San Diego


Multilithic Phased Array Architectures for Next-Generation Radar

Dr. Sean M. Duffy1
MIT Lincoln Laboratory

Phased array antennas provide significant operational capabilities beyond those achievable with dish antennas. Civilian agencies, such as the Federal Aviation Administration and Department of Homeland Security, are investigating the feasibility of using phased array radars to satisfy their next-generation needs. In particular, the Multifunction Phased Array Radar (MPAR) effort aims to eliminate nine different dish-based radars and replace them with radars employing a single, low-cost MPAR architecture. Also, unmanned air system (UAS) operation within the National Airspace System (NAS) requires an airborne sense-and-avoid (ABSAA) capability ideally satisfied by a small low-cost phased array.

Two example phased array panels are discussed in this talk. The first is the MPAR panel—a scalable, low-cost, highly capable S-band panel. This panel provides the functionality to perform the missions of air surveillance and weather surveillance for the NAS. The second example is the ABSAA phased array, a low-cost Ku-band panel for UAS collision avoidance radar.

The approach used in these phased arrays eliminates the drivers that lead to expensive systems. For example, the high-power amplifier is fabricated in a high-volume foundry and mounted in a surface mount package, thereby allowing industry-standard low-cost assembly processes. Also, all our integrated circuits contain multiple functions to save on semiconductor space and board-level complexity. Finally, the systems' multilayered printed circuit board assemblies combine the antenna, microwave circuitry, and integrated circuits, eliminating the need for hundreds of connectors between the subsystems; this streamlined design enhances overall reliability and lowers manufacturing costs.

1PhD, Electrical Engineering, University of Massachusetts–Amherst


Parameter Bounds Under Misspecified Models

Mr. Keith Forsythe1
MIT Lincoln Laboratory

Parameter bounds are traditionally derived assuming perfect knowledge of data distributions. When the assumed probability distribution for the measured data differs from the true distribution, the model is said to be misspecified; mismatch at some level is inevitable in practice. Thus, several authors have studied the impact of model misspecification on parameter estimation. Most notably, Peter Huber explored in detail the performance of maximum-likelihood (ML) estimation under a very general form of misspecification; he established consistency and asymptotic normality, and derived the ML estimate's asymptotic covariance, often referred to as the celebrated "sandwich covariance."

The goal of this talk is to consider the class of non-Bayesian parameter bounds emerging from the covariance inequality under the assumption of model misspecification. Casting the bound problem as one of constrained minimization is likewise considered. Primary attention is given to the Cramér-Rao bound (CRB). It is shown that Huber's sandwich covariance is the misspecified CRB and provides the greatest (i.e., tightest) lower bound under ML constraints. Consideration of the standard circular complex Gaussian distribution ubiquitous in signal processing yields a generalization of the Slepian-Bangs formula under misspecification. This formula, of course, reduces to the usual one when the assumed distribution is in fact the correct one. The framework is outlined for consideration of the Barankin/Hammersley-Chapman-Robbins, Bhattacharyya, and Bobrovsky-Mayer-Wolf-Zakai bounds under misspecification.
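For readers unfamiliar with the sandwich form, a standard statement of Huber's result (background, not taken from the talk) is as follows. Let f(x; θ) be the assumed model and p(x) the true density, with the pseudo-true parameter θ₀ defined as the KL projection of p onto the model:

```latex
\theta_0 = \arg\min_{\theta} \, \mathrm{KL}\bigl(p(x) \,\Vert\, f(x;\theta)\bigr),
\qquad
A(\theta_0) = \mathbb{E}_p\!\left[\nabla_\theta^2 \ln f(x;\theta_0)\right],
\qquad
B(\theta_0) = \mathbb{E}_p\!\left[\nabla_\theta \ln f(x;\theta_0)\,
              \nabla_\theta \ln f(x;\theta_0)^{\mathsf{T}}\right],
```

where both expectations are taken under the true density. The asymptotic ML covariance (the misspecified CRB) is the sandwich

```latex
\mathrm{MCRB}(\theta_0) = A(\theta_0)^{-1}\, B(\theta_0)\, A(\theta_0)^{-1}.
```

When the model is correctly specified, the information-matrix equality gives A(θ₀) = −B(θ₀), and the sandwich collapses to the usual inverse Fisher information.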

1SM, Mathematics, Massachusetts Institute of Technology


Polynomial Rooting Techniques for Adaptive Array Direction Finding

Dr. Gary F. Hatke1 
MIT Lincoln Laboratory

Array processing has many applications in modern communications, radar, and sonar systems. Array processing is used when a signal in space, be it electromagnetic or acoustic, has some spatial coherence properties that can be exploited (such as far-field plane wave properties). The array can be used to sense the orientation of the plane wave and thus deduce the angular direction to the source. Adaptive array processing is used when there exists an environment of many signals from unknown directions as well as noise with unknown spatial distribution. Under these circumstances, classical Fourier analysis of the spatial correlations from an array data snapshot (the data seen at one instant in time) is insufficient to localize the signal sources.

In estimating the signal directions, most adaptive algorithms require computing an optimization metric over all possible source directions and searching for a maximum. When the array is multidimensional (e.g., planar), this search can become computationally expensive, as the source direction parameters are now also multidimensional. In the special case of one-dimensional (line) arrays, this search procedure can be replaced by solving a polynomial equation, where the roots of the polynomial correspond to estimates of the signal directions. This technique had not been extended to multidimensional arrays because these arrays naturally generate a polynomial in multiple variables, which does not have discrete roots.

This seminar introduces a method for generalizing the rooting technique to multidimensional arrays by generating multiple optimization polynomials corresponding to the source estimation problem and finding a set of simultaneous solutions to these equations, which contain source location information. It is shown that the variance of this new class of estimators is equal to that of the search techniques they supplant. In addition, for sources spaced more closely than a Rayleigh beamwidth, the resolution properties of the new polynomial algorithms are shown to be better than those of the search technique algorithms.
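The multidimensional generalization is the subject of the talk; the classical one-dimensional case it builds on is root-MUSIC for a uniform line array, sketched below under the usual half-wavelength-spacing assumption (the function names and simulation parameters are mine, not from the seminar):

```python
import numpy as np

def root_music(R, n_sources):
    """Classical root-MUSIC for a half-wavelength-spaced uniform line array:
    the spectral search is replaced by rooting a polynomial built from the
    noise subspace of the array covariance R."""
    N = R.shape[0]
    _, eigvecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = eigvecs[:, : N - n_sources]          # noise subspace
    C = En @ En.conj().T
    # Coefficients of z^(2N-2) ... z^0 are the diagonal sums of C.
    coeffs = [np.trace(C, offset=k) for k in range(N - 1, -N, -1)]
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1.0]        # one root of each reciprocal pair
    closest = np.argsort(1.0 - np.abs(roots))[:n_sources]
    return np.degrees(np.arcsin(np.angle(roots[closest]) / np.pi))

# Two plane waves at -20 and +30 degrees on an 8-element array.
rng = np.random.default_rng(0)
N, T, angles = 8, 400, np.array([-20.0, 30.0])
A = np.exp(1j * np.pi * np.outer(np.arange(N), np.sin(np.radians(angles))))
S = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
X = A @ S + 0.1 * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T)))
R = X @ X.conj().T / T
est = np.sort(root_music(R, 2))
```

Each root near the unit circle corresponds to a direction through z = exp(jπ sin θ); the multidimensional extension described in the seminar instead finds simultaneous solutions of several multivariate polynomials.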

1PhD, Electrical Engineering, Princeton University


Radar Signal Distortion and Compensation with Transionospheric Propagation Paths

Dr. Scott D. Coutts1
MIT Lincoln Laboratory

Electromagnetic signals propagating through the atmosphere and ionosphere are distorted and refracted by these media. The effects are particularly pronounced for lower-frequency signals propagating through the ionosphere. In the extreme case, frequencies in the high-frequency (HF) band can be severely refracted by the ionosphere to the point that they are reflected back toward the ground. This effect is exploited by over-the-horizon radars and HF communication systems to achieve very-long-range, over-the-horizon performance. For the general radar case, the measurements of range, Doppler shift, and elevation and azimuth angles are all corrupted from their free-space values with time-varying biases.

To provide accurate radar parameter estimates in the presence of these errors, a three-dimensional method to compensate for ionospheric refraction has been developed at Lincoln Laboratory. In this seminar, two examples of the method's use are provided: (1) the radar returns from a known object are used to specify an unknown ionospheric electron density, and (2) an unknown satellite state vector is estimated with and without ionospheric compensation so that the accuracy improvement can be quantified. Magneto-ionic ray tracing is used to generate the three-dimensional propagation model and propagation correction tables. A maximum-likelihood satellite-ephemeris estimator is designed and demonstrated using corrupted radar data. The technique is demonstrated using real-data examples with very encouraging results and is applicable at radar frequencies ranging from HF to ultrahigh frequency (UHF).
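The abstract gives no formulas; as background, a collisionless, magnetic-field-free approximation of the ionospheric refractive index is n = √(1 − (f_p/f)²), with plasma frequency f_p ≈ 8.98·√N_e Hz for electron density N_e in m⁻³ (the function names below are mine). A wave with f ≤ f_p is reflected, which is the HF behavior the abstract describes:

```python
import math

def plasma_frequency_hz(n_e_per_m3):
    """Approximate electron plasma frequency: f_p ~ 8.98 * sqrt(N_e) Hz."""
    return 8.98 * math.sqrt(n_e_per_m3)

def refractive_index(f_hz, n_e_per_m3):
    """Collisionless, field-free ionospheric refractive index.
    Returns None where the wave is evanescent (reflected), i.e., f <= f_p."""
    fp = plasma_frequency_hz(n_e_per_m3)
    if f_hz <= fp:
        return None
    return math.sqrt(1.0 - (fp / f_hz) ** 2)

# A representative daytime F-layer density of 1e12 m^-3 gives f_p near 9 MHz:
# a 5 MHz HF signal is reflected, while a 430 MHz UHF radar signal passes
# through with only slight refraction (n just below 1).
fp = plasma_frequency_hz(1e12)
hf = refractive_index(5e6, 1e12)
uhf = refractive_index(430e6, 1e12)
```

The residual departure of n from unity at UHF is what produces the time-varying range and angle biases that the three-dimensional compensation method corrects.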

1PhD, Electrical Engineering, Northeastern University


Synthetic Aperture Radar

Dr. Gerald R. Benitz1 
MIT Lincoln Laboratory

MIT Lincoln Laboratory is investigating the application of phased-array technology to improve the state of the art in radar surveillance. Synthetic aperture radar (SAR) imaging is one mode that can benefit from a multiple-phase-center antenna. The potential benefits are protection against interference, improved area rate and resolution, and multiple simultaneous modes of operation.

This seminar begins with an overview of SAR, giving the basics of resolution, collection modes, and image formation. Several imaging examples are provided. Results from the Lincoln Multimission ISR Testbed (LiMIT) X-band airborne radar are presented. LiMIT employs an eight-channel phased-array antenna and records 180 MHz of bandwidth from each channel simultaneously. One result employs adaptive processing to reject wideband interference, demonstrating recovery of a corrupted SAR image. Another result employs multiple simultaneous beams to increase the area of the image beyond the conventional limit imposed by the pulse repetition frequency. Areas that are Doppler ambiguous can be disambiguated by using the phased-array antenna.
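As a back-of-the-envelope illustration (standard radar theory, not a figure from the talk), the slant-range resolution of a pulse-compression radar is c/(2B), so a 180 MHz bandwidth like LiMIT's corresponds to a range resolution of roughly 0.8 m:

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    """Slant-range resolution of a pulse-compression radar: c / (2B)."""
    return C / (2.0 * bandwidth_hz)

res = range_resolution_m(180e6)  # the 180 MHz per-channel bandwidth above
```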

1PhD, Electrical Engineering, University of Wisconsin–Madison


Solid State Devices, Materials, and Processes

Advanced Materials at MIT Lincoln Laboratory

Dr. Jeffrey B. Chou1, Dr. Vladimir Liberman2, Dr. Alexander Stolyarov3, and Dr. Daniel Nezich4
MIT Lincoln Laboratory

At MIT Lincoln Laboratory, we are engineering materials with new optical and electrical properties and capabilities beyond those found in nature. Our vision is to use these fundamentally new materials to enable faster, lower-power electronics; advanced imaging applications; and high-tech wearable fabrics. Our cross-disciplinary team of engineers, programmers, physicists, and materials scientists is working to turn these new materials into usable systems that will impact everyday life. Our research spans first-principles calculations via density functional theory to fully fabricated and integrated systems. Many of our research programs also leverage new and exciting discoveries from MIT campus faculty and graduate students, who often collaborate with us on projects. The combined material deposition and characterization capabilities of Lincoln Laboratory and the MIT campus enable the development of new and exciting materials. This seminar will provide a broad overview of the diverse advanced materials programs at Lincoln Laboratory and highlight recent work, including phase-change metamaterials, dielectric resonators, multifunctional fabrics, and two-dimensional materials for valleytronics.

1PhD, Electrical Engineering and Computer Sciences, University of California–Berkeley
2PhD, Applied Physics, Columbia University
3PhD, Physics, Harvard University
4PhD, Physics, MIT


Electromagnetic Vector Antenna and Constrained Maximum-Likelihood Imaging for Radio Astronomy

Dr. Frank C. Robey1, Dr. Alan J. Fenn2, and Dr. Mark J. Silver3
MIT Lincoln Laboratory

Mary E. Knapp4, Dr. Frank D. Lind5, and Dr. Ryan A. Volz6

Radio astronomy at frequencies below 50 MHz provides a window into nonthermal processes in objects ranging from planets to galaxies. These frequencies also provide insight into the formation of the universe. Ground-based arrays cannot adequately observe astronomical sources below about 20 MHz due to ionospheric perturbation and shielding; therefore, the sky has not been mapped with high angular resolution below that frequency. Radio astronomy at these frequencies must be accomplished in space. With space-based sensing, the cost of each satellite is high, and consequently we desire to maximize the information obtained from each one. This presentation discusses designs for mapping the sky from space using electromagnetic vector sensors. These sensors measure the full electric- and magnetic-field vectors of incoming radiation and enable measurement with reasonable angular resolution from a compact sensor with a single phase center. A model for radio astronomy imaging is introduced, and the constraints imposed by Maxwell's equations and vector sensing of an electromagnetic field are explored. This presentation shows that the covariance matrix inherent in the stochastic process must lie in a highly constrained subset of allowable positive definite covariance matrices. Results are shown that use an expectation-maximization algorithm to form images consistent with a covariance matrix that satisfies the constraints. A conceptual design for a spacecraft to map the sky at frequencies below the ionospheric cutoff is discussed, along with concept development progress.

1PhD, DSc, Electrical Engineering, Washington University
2PhD, Electrical Engineering, The Ohio State University
3PhD, Aerospace Engineering, University of Colorado, Boulder
4Graduate Student, Earth, Atmosphere, and Planetary Sciences Department, Massachusetts Institute of Technology
5PhD, Geophysics, University of Washington
6PhD, Aeronautics/Astronautics, Stanford University


Geiger-Mode Avalanche Photodiode Arrays for Imaging and Sensing

Dr. Brian F. Aull1
MIT Lincoln Laboratory

This seminar discusses the development of arrays of silicon avalanche photodiodes integrated with digital complementary metal-oxide semiconductor (CMOS) circuits to make focal planes with single-photon sensitivity. The avalanche photodiodes are operated in Geiger mode: they are biased above the avalanche breakdown voltage so that the detection of a single photon leads to a discharge that can directly trigger a digital circuit. The CMOS circuits to which the photodiodes are connected can either time stamp or count the resulting detection events. Applications include three-dimensional imaging using laser radar, wavefront sensing for adaptive optics, and optical communications.

1PhD, Electrical Engineering, Massachusetts Institute of Technology


High-Power Laser Technology at MIT Lincoln Laboratory

Dr. T. Y. Fan1 and Dr. Darren A. Rand2
MIT Lincoln Laboratory

This seminar includes an overview of high-power laser technology development at MIT Lincoln Laboratory. Two topics will be emphasized: laser beam combining and cryogenically cooled solid-state lasers. Beam combining, taking the outputs from arrays of lasers and combining them into a single beam with near-ideal propagation characteristics, has been pursued for many decades, but its promise has started to become a reality only within the last decade. Beam combining of both fiber and semiconductor arrays using both wavelength and coherent beam combining approaches will be discussed. Cryogenically cooled solid-state lasers enable high-efficiency and high-average-power lasers while maintaining excellent beam quality. This approach is particularly applicable for developing laser sources with both high peak and average power.

1PhD, Electrical Engineering, Stanford University
2PhD, Electrical Engineering, Princeton University


Integrated Photonics for Sensing, Communications, and Signal Processing

Dr. Suraj Bramhavar1, Dr. Paul Juodawlkis2, Dr. Boris (Dave) Kharas3, Dr. Cheryl Sorace-Agaskar4, Dr. Reuel Swint5, and Dr. Siva Yegnanarayanan6
MIT Lincoln Laboratory

Integrated photonics involves the aggregation of multiple optical or photonic components onto a common substrate using either monolithic or hybrid integration techniques. Common components include lasers, optical modulators, detectors, filters, splitters and combiners, couplers, and optical isolators. These components are typically connected using optical waveguides built into the substrate platform to create a photonic integrated circuit (PIC). Relative to optical circuits constructed from discrete components connected using optical fibers, PICs have a number of advantages, including reduced size and weight, reduced fiber interfaces and associated fiber-pigtailing cost, and improved environmental stability for coherent optical signal processing. These advantages are strengthened by combining PICs with electronic integrated circuits via several electronic-photonic integration techniques (wire bonding, flip chip, wafer bonding, or monolithic). Depending on the desired functions, PICs can be fabricated from a variety of materials including silicon, silicon-nitride, and compound semiconductors (e.g., indium phosphide, gallium arsenide, gallium nitride, and their associated ternary and quaternary compounds).

In this seminar, we will provide an introduction to integrated photonics technology, including design, fabrication, packaging, and characterization. We will describe resources and capabilities to develop silicon, silicon-nitride, and compound-semiconductor PICs at MIT Lincoln Laboratory using in-house fabrication resources. We will also describe several government applications of integrated photonics (e.g., remote sensing, free-space communications, and signal processing) and how these are similar to and different from commercial applications.

1PhD, Electrical Engineering, Boston University
2PhD, Electrical Engineering, Georgia Institute of Technology
3PhD, Materials Science, State University of New York at Stony Brook
4PhD, Electrical Engineering, Massachusetts Institute of Technology
5PhD, Electrical Engineering, University of Illinois
6PhD, Electrical Engineering, University of California Los Angeles


Microfluidics at MIT Lincoln Laboratory

Prof. Todd A. Thorsen1, Dr. Shaun R. Berry2, and Dr. Jakub Kedzierski3
MIT Lincoln Laboratory

At MIT Lincoln Laboratory, we are engineering general-purpose tools for microfluidic platforms. Cross-disciplinary teams, consisting of engineers, programmers, and biologists, are designing and developing microfluidic tools for a broad range of applications. Our vision is to apply microfabrication techniques, traditionally used in microelectromechanical system (MEMS) and integrated circuit manufacturing, to develop microfluidic components and systems that are reconfigurable, scalable, and programmable through a software interface. End users of these microfluidic tools will be able to orchestrate complex and adaptive procedures that are beyond the capabilities of today’s hardware. This talk will provide a broad overview of the diverse microfluidic programs at Lincoln Laboratory, highlighting recent work, including ultra-low-power electrowetting-based pumps, integrated "lab-on-a-chip" systems for biological exploration, actively configurable pixel-scale liquid microlenses and prisms, and microhydraulic actuators.

1PhD, Biochemistry and Molecular Biophysics, California Institute of Technology
2PhD, Mechanical Engineering, Tufts University
3PhD, Electrical Engineering, University of California–Berkeley


New Fabrication Platforms and Processes

Dr. Theodore Fedynyshyn1, Dr. Lalitha Parameswaran2, Dr. Livia Racz3, and Dr. Melissa Smith4
MIT Lincoln Laboratory

Additive manufacturing techniques, such as three-dimensional (3D) printing and 3D assembly, have transformed production by breaking long-established rules of economies of scale and manufacturing complexity. MIT Lincoln Laboratory has a suite of ongoing efforts to develop novel materials and processes for the construction of nonplanar 3D devices, integrated assemblies, and systems, using our expertise in material science, semiconductor fabrication, chemistry, and mechanical and electrical engineering. Our goal is to develop processes that enable "one-stop" fabrication of complete systems with both mechanical and electrical functionality. Ongoing programs include: design of materials and processes for concurrent 3D printing of dissimilar materials; development of high-resolution, radio-frequency 3D-printed structures for low size, weight, and power applications extending to the THz range; design of novel microplasma-based sputtering of conductive materials for direct write of interconnect on nonplanar substrates; and development of reconstructed wafer processes to overcome die-size limitations and enable scaling of digital integrated circuits to large formats.

1PhD, Organic Chemistry, Brown University
2PhD, Electrical Engineering, MIT
3PhD, Materials Science and Engineering, MIT
4PhD, Materials Science and Engineering, MIT


Quantum Information Science with Superconducting Artificial Atoms

Dr. William D. Oliver1
MIT Lincoln Laboratory
& the Research Laboratory of Electronics

Superconducting qubits are artificial atoms assembled from electrical circuit elements. When cooled to cryogenic temperatures, these circuits exhibit quantized energy levels. Transitions between levels are induced by applying pulsed microwave electromagnetic radiation to the circuit, revealing quantum coherent phenomena analogous to (and in certain cases beyond) those observed with coherent atomic systems.

This talk provides an overview of quantum information science and superconducting artificial atoms, including several demonstrations of quantum coherence using these circuits: Landau-Zener-Stückelberg oscillations [1], microwave-induced qubit cooling to temperatures less than 3 mK (colder than the refrigerator) [2], and a new broadband spectroscopy technique called amplitude spectroscopy [3]. We then discuss in detail a highly coherent aluminum qubit (T1 = 12 µs, T2Echo = 23 µs, fidelity = 99.75%) with which we demonstrated noise spectroscopy using nuclear magnetic resonance (NMR)-inspired control sequences comprising hundreds of pulses [4, 5].

These experiments exhibit a remarkable agreement with theory and are extensible to other solid-state qubit modalities. In addition to fundamental studies of quantum coherence in solid-state systems, we anticipate these devices and techniques will advance qubit control and state-preparation methods for quantum information science and technology applications.

[1] W.D. Oliver, et al., "Mach-Zehnder Interferometry in a Strongly Driven Superconducting Qubit," Science, vol. 310, no. 5754, pp. 1653–1657, 2005.
[2] S.O. Valenzuela, et al., "Microwave-Induced Cooling of a Superconducting Qubit," Science, vol. 314, no. 5805, pp. 1589–1592, 2006.
[3] D.M. Berns, et al., "Amplitude Spectroscopy of a Solid-State Artificial Atom," Nature, vol. 455, pp. 51–57, 2008.
[4] J. Bylander, et al., "Noise Spectroscopy Through Dynamical Decoupling with a Superconducting Flux Qubit," Nature Physics, vol. 7, pp. 565–570, 2011.
[5] F. Yan, et al., "Rotating-Frame Relaxation as a Noise Spectrum Analyzer of a Superconducting Qubit Undergoing Driven Evolution," accepted for publication in Nature Communications, 2013.

1PhD, Electrical Engineering, Stanford University


Slab-Coupled Optical Waveguide Devices and Their Applications

Dr. Paul W. Juodawlkis1, Dr. Joseph P. Donnelly2,
Dr. Gary M. Smith3, and Dr. George W. Turner4
MIT Lincoln Laboratory

For the past decade, MIT Lincoln Laboratory has been developing new classes of high-power semiconductor optoelectronic emitters and detectors based on the slab-coupled optical waveguide (SCOW) concept. The key characteristics of the SCOW design include (1) the use of a planar slab waveguide to filter the higher-order transverse modes from a large rib waveguide, (2) low overlap between the optical mode and the active layers, and (3) low excess optical loss. These characteristics enable waveguide devices with large (> 5 × 5 μm) symmetric fundamental modes and long lengths (~1 cm). These large dimensions, relative to conventional waveguide devices, allow efficient coupling to optical fibers and external optical cavities, and provide reduced electrical and thermal resistances for improved heat dissipation.

This seminar will review the SCOW operating principles and describe applications of the SCOW technology, including Watt-class semiconductor SCOW lasers (SCOWLs) and amplifiers (SCOWAs), monolithic and ring-cavity mode-locked lasers, single-frequency external cavity lasers, and high-current waveguide photodiodes. The SCOW concept has been demonstrated in a variety of material systems at wavelengths including 915, 960–980, 1040, 1300, 1550, and 2100 nm. In addition to single emitters, higher brightness has been obtained by combining arrays of SCOWLs and SCOWAs using wavelength beam-combining and coherent combining techniques. These beam-combined SCOW architectures offer the potential of kilowatt-class, high-efficiency, electrically pumped optical sources.

1PhD, Electrical Engineering, Georgia Institute of Technology

2PhD, Electrical Engineering, Carnegie Mellon University
3PhD, Electrical Engineering, University of Illinois at Urbana-Champaign
4PhD, Electrical Engineering, Johns Hopkins University


Toward Large-Scale Trapped-Ion Quantum Processing

Dr. John Chiaverini1
MIT Lincoln Laboratory

Quantum computers have the potential to deliver a profound computational advantage when applied to many important problems because they manipulate information in a fundamentally different way from that of current (classical) hardware. Individual atomic ions, held and manipulated using electromagnetic fields, constitute a leading candidate system to realize such a processor because of their long coherence times and uniform physical properties. Accomplishments at the few-qubit level include high-fidelity demonstrations of basic quantum algorithms, but a clear path to a large-scale processor is not fully defined. This presentation will describe Lincoln Laboratory’s efforts to develop scalable ion techniques and technologies. These efforts include the loading and manipulation of ions in two-dimensional surface-electrode-trap arrays, the integration of technology for increased on-chip control of ion qubits, and the reduction of noise that can limit multi-qubit gate fidelity; all these techniques will likely be necessary for reaching the large scales required to realize the promise of quantum computing.

1PhD, Physics, Stanford University


Ultrasensitive Mass Spectrometry Development
at MIT Lincoln Laboratory

Dr. Ta-Hsuan Ong1, Dr. Jude Kelley2, and Dr. Roderick Kunz3
MIT Lincoln Laboratory

Mass spectrometry (MS) has long been regarded as one of the most reliable methods for chemical analysis because of its ability to identify molecules based on their molecular weight and fragmentation patterns, combined with its sensitivity in the pico- to femtogram range. Traditionally, analytical systems that utilize mass spectrometry have coupled this method with other analytical techniques such as gas or liquid chromatography; however, the development of ambient ionization techniques and improvements in mass spectrometer design have demonstrated that this technique can function on its own as a multipurpose chemical detector.

MIT Lincoln Laboratory has been advancing these systems in an effort to develop the next generation of MS-based sensing systems focused on detection missions relevant to national security. This work has centered on ways to improve the sensitivity of MS-based systems to explosives when they are encountered as vapors and as trace particulate residues. The vapor detection system that the Laboratory has developed has a real-time sensitivity to concentrations in the parts-per-quadrillion (ppqv) range. Real-time sensitivity at these levels rivals that of conventional vapor detectors (canines), providing opportunities to (1) better understand the origins, dynamics, concentrations, and attenuation levels of vapor signatures associated with concealed threats and (2) help improve canine training. The Laboratory's work on detecting explosive particulate residues has focused on applying thermal desorption and atmospheric pressure chemical ionization (TD-APCI) to swipe-based surface samples across a wide range of explosive classes. For explosive threats that are challenging to detect with TD-APCI, Lincoln Laboratory researchers have developed specialized chemical reagents that can be added directly to the ionization source to improve both the specificity and the sensitivity of the technique. The information revealed from these studies is used to assess current mass spectrometer-based explosive trace detection systems and guide future development efforts.

1PhD, Analytical Chemistry, University of Illinois at Urbana-Champaign
2PhD, Physical Chemistry, Yale University
3PhD, Analytical Chemistry, University of North Carolina at Chapel Hill


Space Control Technology

Haystack Ultrawideband Satellite Imaging Radar Antenna

Dr. Joseph M. Usoff1
MIT Lincoln Laboratory

The Haystack facility in Westford, Massachusetts, has been in operation since 1964 and has conducted a wide range of communications, radar, and radio astronomy missions. MIT Lincoln Laboratory, under sponsorship of the United States Air Force, recently upgraded the facility, including the replacement of the 120-foot diameter Cassegrain antenna with a new antenna and the addition of a wideband W-band radar to the existing wideband X-band radar. The upgraded antenna is of the same diameter and has the same optical parameters as the old antenna, but the surface tolerance has been significantly improved to be better than 100 µm rms. The improved antenna surface tolerance permits efficient operation at higher frequencies, enabling W-band radar operations and radio astronomy experiments up to the 230 GHz band. This presentation will provide an overview of the new antenna, describe the fabrication and construction challenges, and highlight the surface alignment process.
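The link between surface accuracy and usable frequency can be illustrated with the classical Ruze equation (standard antenna theory, not a figure from the talk): aperture efficiency falls off as exp(−(4πε/λ)²) for an rms surface error ε, which is why tightening the surface to 100 µm rms opens up W-band and beyond. The example frequencies below are illustrative choices, not values from the abstract.

```python
import math

def ruze_efficiency(rms_error_m, freq_hz):
    """Ruze equation: surface-error aperture efficiency exp(-(4*pi*eps/lambda)^2)."""
    wavelength = 299_792_458.0 / freq_hz
    return math.exp(-((4.0 * math.pi * rms_error_m / wavelength) ** 2))

# With a 100 um rms surface, a W-band radar near 96 GHz retains most of its
# aperture efficiency, while by 230 GHz the surface-error loss is substantial.
eff_w = ruze_efficiency(100e-6, 96e9)
eff_230 = ruze_efficiency(100e-6, 230e9)
```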

1PhD, Electrical Engineering, The Ohio State University


Miniature Ultrawideband Antennas for Low-SWaP Platforms

Dr. Raoul O. Ouedraogo1
MIT Lincoln Laboratory

The demand for miniaturized antennas that can be integrated on size, weight, and power (SWaP)-constrained platforms has significantly increased over the past decade. For various military and commercial applications, there is also increasing interest in making these miniaturized antennas multifunctional and operable over ultrawide frequency bands. While rapid advances in semiconductor, gallium nitride, and integrated circuit technologies continue to enable the design of smaller agile payloads, antennas are still largely designed as static and bulky devices. As a result, users must carry either multiple antennas to service different applications or physically large antennas that provide the wide bandwidth needed to cover multiple applications. Neither option can be implemented on very low-SWaP platforms such as micro-unmanned aerial vehicles or wearable devices.

To overcome these difficulties, we present efforts carried out at MIT Lincoln Laboratory to develop novel antenna systems based on the integration of a pixel-based multi-level topology optimization approach with microfluidics to create highly efficient subwavelength antenna systems with up to 20:1 instantaneous bandwidths. The pixel-based optimization approach is an in-silico design methodology through which the metallization or dielectric of an antenna is parameterized into pixels (or voxels), and the topology of each pixel is altered to create new antenna geometries that exhibit the desired electrical characteristics. This seminar describes the design methodology and outlines the analytical and measured performance of prototype directional and omnidirectional miniaturized ultrawideband and reconfigurable antennas developed in these efforts.

1PhD, Electrical Engineering, Michigan State University


New Techniques for High-Resolution Atmospheric Sounding

Dr. William J. Blackwell1
MIT Lincoln Laboratory

Modern spaceborne atmospheric sounders consist of passive spectrometers that measure spectral radiance intensity in microwave (approximately 1 cm to 1 mm wavelength), millimeter wave (approximately 1 mm to 300 µm), and thermal infrared (approximately 3 µm to 16 µm) bands. In the last decade, advanced microwave sounders (AMSU and ATMS) and hyperspectral infrared sounders (AIRS, IASI, and CrIS) have substantially improved forecast skill, provided new products relevant to a wide range of science application areas, and contributed to our ability to characterize the Earth's climate system.

This presentation will focus on two areas of current research: (1) new algorithmic approaches to geophysical parameter retrieval that fully exploit the spectral richness of the microwave and hyperspectral microwave observations and (2) next-generation sensor systems that build upon recent successes to provide improved spectral coverage (e.g., hyperspectral microwave systems) and improved spatial revisit (e.g., small satellite constellation architectures) for observations of dynamic meteorology and severe weather. Specific topics to be addressed include a neural network algorithm for temperature and moisture profile retrieval that is being used as part of the AIRS Science Team Version 6 algorithm, recent technology development funded by NASA to demonstrate a hyperspectral microwave receiver subsystem, and system performance analyses of nanosatellite constellation architectures, including the MiRaTA 3U atmospheric sounding CubeSat to be launched by NASA in 2017 to demonstrate core constellation elements.

AMSU – Advanced Microwave Sounding Unit
ATMS – Advanced Technology Microwave Sounder
AIRS – Atmospheric Infrared Sounder
IASI – Infrared Atmospheric Sounding Interferometer
CrIS – Cross-track Infrared Sounder
MiRaTA – Microwave Radiometer Technology Acceleration

1PhD, Electrical Engineering, Massachusetts Institute of Technology


Synoptic Astronomy with the Space Surveillance Telescope

Dr. Deborah F. Woods1
MIT Lincoln Laboratory

The Space Surveillance Telescope (SST) is a 3.5-meter telescope with a 3° × 2° field of view developed by MIT Lincoln Laboratory for the Defense Advanced Research Projects Agency for tracking satellites and Earth-orbiting space debris. The telescope has a three-mirror Mersenne-Schmidt design that achieves a fast focal ratio of f/1.0, enabling deep, wide-area searches with rapid revisit rates. These attributes also have utility for detections in time-domain astronomy, in which objects are observed to vary in position or in brightness.

Since initiating a dedicated asteroid survey mode in 2014, the SST has been the main source of asteroid observations in support of the Lincoln Near-Earth Asteroid Research program. In 2015, the SST published over 7.2 million observations, the largest number of asteroid observations ever submitted to the Minor Planet Center by a single site in one year. Meanwhile, the image data obtained for the asteroid surveys also has excellent utility for synoptic astronomy applications.

The program is seeking to develop a science pipeline for the processing of archived and possibly future datasets. The early science goals are to identify specific classes of variable stars and to use the stars' periods of variability, together with color information from other astronomical catalogs, to estimate distances to the objects. In addition, software tools may be developed to detect transient phenomena, such as supernovae, or even optical counterparts to gravitational-wave events detected by the Laser Interferometer Gravitational-Wave Observatory (LIGO). While the SST's primary mission is to support space situational awareness, the image data collected during the course of operations has been valuable for astronomical applications, especially detections of near-Earth asteroids.
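
The period-to-distance step mentioned above is, at its core, a period-luminosity relation combined with the distance modulus. The sketch below illustrates the arithmetic for a classical Cepheid; the calibration coefficients are one published approximation, and a real pipeline would adopt a calibration matched to the filter and stellar class (e.g., separate relations for RR Lyrae stars).

```python
import math

def cepheid_distance_pc(period_days, apparent_mag, extinction_mag=0.0):
    """Estimate the distance to a classical Cepheid from its pulsation
    period, using an illustrative period-luminosity (Leavitt law)
    calibration: M_V ~ -2.43 (log10 P - 1) - 4.05, with P in days."""
    abs_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    # Distance modulus m - M = 5 log10(d / 10 pc), with an optional
    # correction for interstellar extinction along the line of sight.
    mu = apparent_mag - extinction_mag - abs_mag
    return 10.0 ** (mu / 5.0 + 1.0)

# A hypothetical 10-day Cepheid observed at apparent magnitude 14
# comes out at roughly 41 kiloparsecs under this calibration.
print(f"{cepheid_distance_pc(10.0, 14.0):.0f} pc")
```

Color information from external catalogs enters in practice through the extinction correction and through classifying which period-luminosity relation applies to a given star.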

1PhD, Astronomy, Harvard University


Systems and Architectures

Choices, Choices, Choices (Decisions, Decisions, Decisions)

Dr. Robert T. Shin1
MIT Lincoln Laboratory

As you plan a career after graduating from a university or college, you are faced with many choices and decisions, not just now but throughout many of your most productive years. While there is generally no right or wrong decision, it is helpful to think about and make these decisions in a more systematic way. This seminar offers perspectives on how one might think about making a choice and making an impact, especially as an architect of future advanced systems. Lessons I have learned along the way, in architectural thinking and in management, are also presented in the hope that you might find them useful as you advance in your career. Finally, a short overview of MIT Lincoln Laboratory, a federally funded research and development center, is presented as a case study of how one can leverage such an organization to make an impact.

1PhD, Electrical Engineering, Massachusetts Institute of Technology