Publications


COVID-19 exposure notification in simulated real-world environments

Summary

Privacy-preserving contact tracing mobile applications, such as those that use the Google-Apple Exposure Notification (GAEN) service, have the potential to limit the spread of COVID-19 in communities, but the privacy-preserving aspects of the protocol make it difficult to assess the performance of the apps in real-world populations. To address this gap, we exercised the CovidWatch app on both Android and iOS phones in a variety of scripted real-world scenarios relevant to the lives of university students and employees. We collected exposure data from the app and from the lower-level Android service, and compared it to the phones' actual distances and durations of exposure, to assess the sensitivity and specificity of the GAEN service configuration as of February 2021. Based on the app's reported ExposureWindows and alerting thresholds for Low and High alerts, our assessment is that the chosen configuration is highly sensitive under a range of realistic scenarios and conditions. With this configuration, the app is likely to capture many long-duration encounters, even at distances greater than six feet, which may be desirable under conditions with increased risk of airborne transmission.
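As a minimal sketch of how ExposureWindow data can be scored against Low and High alert thresholds, the code below computes a weighted exposure duration from per-attenuation-bucket dwell times. The bucket weights and thresholds here are invented for illustration; they are not the February 2021 configuration the study assessed.

```python
# Hedged sketch of a GAEN-style weighted-duration score. The four buckets
# correspond to attenuation ranges (immediate, near, medium, other); the
# weights and alert thresholds below are illustrative assumptions only.

def weighted_duration_minutes(bucket_minutes, weights):
    """bucket_minutes: minutes of exposure in each attenuation bucket;
    weights: per-bucket multipliers. Returns the weighted duration."""
    return sum(m * w for m, w in zip(bucket_minutes, weights))

def alert_level(score, low_threshold=15.0, high_threshold=30.0):
    """Map a weighted-duration score to an alert level."""
    if score >= high_threshold:
        return "High"
    if score >= low_threshold:
        return "Low"
    return "None"

# Example: 10 min in the "near" bucket plus 20 min in the "medium" bucket.
score = weighted_duration_minutes([0, 10, 20, 0], [1.5, 1.0, 0.5, 0.0])
```

A sensitive configuration in this scheme is simply one whose weights and thresholds let long medium-attenuation encounters (greater distances) still cross the Low threshold.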

The Simulation of Automated Exposure Notification (SimAEN) Model

Summary

Automated Exposure Notification (AEN) was implemented in 2020 to supplement traditional contact tracing for COVID-19 by estimating "too close for too long" proximities of people using the service. AEN uses Bluetooth messages to privately label and recall proximity events, so that persons who were likely exposed to SARS-CoV-2 can take the appropriate steps recommended by their health care authority. This paper describes an agent-based model that estimates the effects of AEN deployment on COVID-19 caseloads and public health workloads in the context of other critical public health measures available during the COVID-19 pandemic. We selected simulation variables pertinent to AEN deployment options, varied them in accord with the system dynamics available in 2020-2021, and calculated the outcomes of key metrics across repeated runs of the stochastic multi-week simulation. SimAEN's parameters were set to ranges of observed values in consultation with public health professionals and the rapidly accumulating literature on COVID-19 transmission; the model was validated against available population-level disease metrics. Estimates from SimAEN can help public health officials determine what AEN deployment decisions (e.g., configuration, workflow integration, and targeted adoption levels) can be most effective in their jurisdiction, in combination with other COVID-19 interventions (e.g., mask use, vaccination, quarantine and isolation periods).
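The flavor of such an agent-based simulation can be sketched in a few lines: agents mix daily, infection spreads with some probability, and AEN users exposed to an infectious contact may be notified and quarantine. This is a toy illustration, not SimAEN itself; every parameter (adoption rate, transmission probability, notification/compliance rate) is a made-up placeholder.

```python
import random

# Toy agent-based sketch in the spirit of SimAEN. All parameters are
# illustrative assumptions, not calibrated SimAEN values.

def simulate(n_agents=1000, aen_adoption=0.4, p_transmit=0.05,
             contacts_per_day=8, days=28, seed=1):
    """Run one stochastic multi-week simulation; return total case count."""
    rng = random.Random(seed)
    infectious = set(rng.sample(range(n_agents), 5))   # seed infections
    quarantined = set()
    uses_aen = {a for a in range(n_agents) if rng.random() < aen_adoption}
    total_cases = len(infectious)
    for _ in range(days):
        new_infections = set()
        for a in infectious - quarantined:             # quarantine stops spread
            for _ in range(contacts_per_day):
                b = rng.randrange(n_agents)
                if b in quarantined or b in infectious:
                    continue
                if rng.random() < p_transmit:
                    new_infections.add(b)
                elif a in uses_aen and b in uses_aen and rng.random() < 0.6:
                    # exposed (but uninfected) AEN user is notified and complies
                    quarantined.add(b)
        infectious |= new_infections
        total_cases += len(new_infections)
    return total_cases
```

Sweeping parameters such as `aen_adoption` across repeated seeded runs, as the paper does at much greater fidelity, is what yields caseload and workload estimates per deployment decision.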

Bluetooth Low Energy (BLE) Data Collection for COVID-19 Exposure Notification

Summary

Privacy-preserving contact tracing mobile applications, such as those that use the Google-Apple Exposure Notification (GAEN) service, have the potential to limit the spread of COVID-19 in communities; however, the privacy-preserving aspects of the protocol make it difficult to assess the performance of the Bluetooth proximity detector in real-world populations. The GAEN service configuration of weights and thresholds enables hundreds of thousands of potential configurations, and it is not well known how the detector performance of candidate GAEN configurations maps to the actual "too close for too long" standard used by public health contact tracing staff. To address this gap, we exercised a GAEN app on Android phones at a range of distances, orientations, and placement configurations (e.g., shirt pocket, bag, in hand), using RF-analogous robotic substitutes for human participants. We recorded exposure data from the app and from the lower-level Android service, along with the phones' actual distances and durations of exposure.

Radar-optimized wind turbine siting

Published in:
IEEE Trans. Sustain. Energy, Vol. 13, No. 1, January 2022, pp. 403-13.

Summary

A method for analyzing wind turbine-radar interference is presented. A model is used to derive layouts for siting wind turbines that reduce their impact on radar systems, potentially allowing for increased wind turbine development near radar sites. By choosing a specific wind turbine grid stagger based on a wind farm's orientation relative to a radar site, the impacts on that radar can be minimized. The proposed changes to wind farm siting are relatively minor and do not have a significant effect on wind turbine density. With proper optimization of radar clutter mitigation, radar tracking performance above such wind farms can be significantly increased. Both present-day and potential future or upgraded radar systems are analyzed. The reduction in radar performance due to wind turbine clutter is approximately halved using this method. The developed method is robust with respect to controlled variations in wind turbine placement caused by potential obstructions.
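A staggered grid oriented relative to a radar site can be generated mechanically; the sketch below offsets alternate rows by a fraction of the column spacing and rotates the layout by the farm's bearing to the radar. The spacings, stagger fraction, and rotation convention are assumptions for illustration, not the paper's optimized geometry.

```python
import math

# Illustrative staggered-grid generator (not the paper's method).
# Alternate rows are offset by stagger_frac * dx, then the whole
# layout is rotated by the farm's bearing relative to the radar.

def staggered_layout(rows, cols, dx, dy, stagger_frac, bearing_deg):
    """Return (x, y) turbine positions, rotated by bearing_deg."""
    theta = math.radians(bearing_deg)
    pts = []
    for r in range(rows):
        offset = (r % 2) * stagger_frac * dx   # shift every other row
        for c in range(cols):
            x, y = c * dx + offset, r * dy
            pts.append((x * math.cos(theta) - y * math.sin(theta),
                        x * math.sin(theta) + y * math.cos(theta)))
    return pts
```

The point of such a parameterization is that the stagger barely changes turbine density while changing how turbine returns align along the radar's range-azimuth cells.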

Multimodal representation learning via maximization of local mutual information [e-print]

Published in:
Intl. Conf. on Medical Image Computing and Computer Assisted Intervention, MICCAI, 27 September-1 October 2021.

Summary

We propose and demonstrate a representation learning approach by maximizing the mutual information between local features of images and text. The goal of this approach is to learn useful image representations by taking advantage of the rich information contained in the free text that describes the findings in the image. Our method learns image and text encoders by encouraging the resulting representations to exhibit high local mutual information. We make use of recent advances in mutual information estimation with neural network discriminators. We argue that, typically, the sum of local mutual information is a lower bound on the global mutual information. Our experimental results on downstream image classification tasks demonstrate the advantages of using local features for image-text representation learning.
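One common neural estimator of such a mutual-information lower bound is InfoNCE, where a critic scores paired local features against in-batch negatives. The sketch below uses a plain dot-product critic on toy arrays; it illustrates the family of estimators the abstract refers to, not the paper's actual encoders or critic.

```python
import numpy as np

# Illustrative InfoNCE-style lower bound on the MI between local image
# and local text features, with a dot-product critic (an assumption;
# the paper's discriminator may differ).

def info_nce_local(img_feats, txt_feats):
    """img_feats: (n, d) local image embeddings; txt_feats: (n, d)
    aligned local text embeddings. Higher value = better alignment."""
    scores = img_feats @ txt_feats.T              # (n, n) critic scores
    scores = scores - scores.max(axis=1, keepdims=True)   # stability
    log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    # positive (aligned) pairs sit on the diagonal
    return np.log(len(img_feats)) + np.mean(np.diag(log_softmax))
```

Maximizing this quantity over encoder parameters pushes each local image feature to be most predictive of its own text feature rather than the in-batch negatives.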

Learning emergent discrete message communication for cooperative reinforcement learning

Published in:
37th Conf. on Uncertainty in Artificial Intelligence, UAI 2021, early access, 26-30 July 2021.

Summary

Communication is an important factor that enables agents to work cooperatively in multi-agent reinforcement learning (MARL). Most previous work uses continuous message communication, whose high representational capacity comes at the expense of interpretability. Allowing agents to learn their own discrete message communication protocol, emerging from a variety of domains, can increase interpretability for human designers and other agents. This paper proposes a method to generate discrete messages analogous to human languages and to achieve communication through a broadcast-and-listen mechanism based on self-attention. We show that discrete message communication has performance comparable to continuous message communication but with a much smaller vocabulary size. Furthermore, we propose an approach that allows humans to interactively send discrete messages to agents.
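The broadcast-and-listen idea can be sketched concretely: a speaking agent maps its state to a discrete token from a small vocabulary, and listeners attend over the broadcast tokens' embeddings. The vocabulary size, dimensions, and argmax token selection below are illustrative assumptions (training would typically use a differentiable relaxation rather than a raw argmax).

```python
import numpy as np

# Illustrative broadcast-and-listen sketch (not the paper's model).
VOCAB, DIM = 8, 4   # assumed vocabulary size and embedding dimension

def speak(state, W_msg):
    """Pick a discrete message token from agent state (argmax over logits)."""
    return int(np.argmax(state @ W_msg))          # token id in [0, VOCAB)

def listen(query, tokens, embed):
    """Dot-product attention over broadcast tokens' embeddings."""
    keys = embed[tokens]                          # (n_msgs, DIM)
    scores = keys @ query
    w = np.exp(scores - scores.max())             # stable softmax
    w /= w.sum()
    return w @ keys                               # attention-weighted summary
```

Because messages are token ids rather than raw vectors, a human can both read the emergent protocol and, as the paper proposes, inject tokens interactively.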

Towards a distributed framework for multi-agent reinforcement learning research

Summary

Some of the most important publications in deep reinforcement learning over the last few years have been fueled by access to massive amounts of computation through large-scale distributed systems. The success of these approaches in achieving human-expert level performance on several complex video-game environments has motivated further exploration into the limits of these approaches as computation increases. In this paper, we present a distributed RL training framework designed for supercomputing infrastructures such as the MIT SuperCloud. We review a collection of challenging learning environments, such as Google Research Football, StarCraft II, and Multi-Agent MuJoCo, which are at the frontier of reinforcement learning research. We provide results on these environments that illustrate the current state of the field on these problems. Finally, we quantify and discuss the computational requirements needed for performing RL research by enumerating all experiments performed on these environments.

Augmented Annotation Phase 3

Published in:
MIT Lincoln Laboratory Report TR-1248

Summary

Automated visual object detection is an important capability for reducing the burden on human operators in many DoD applications. To train modern deep learning algorithms to recognize desired objects, the algorithms must be "fed" more than 1000 labeled images of each particular object (for 55%–85% accuracy, according to Project Maven, Oct 2017 O6 Working Group, slide 27). The task of labeling training data for use in machine learning algorithms is human-intensive, requires special software, and takes a great deal of time. Estimates from ImageNet, a widely used and publicly available visual object detection dataset, indicate that humans generated four annotations per minute in the overall production of ImageNet annotations. The DoD's need is to reduce direct object-by-object human labeling, particularly in the video domain, where data quantities can be significant. The Augmented Annotations System addresses this need by leveraging a small amount of human annotation effort to propagate human-initiated annotations through video to build an initial labeled dataset for training an object detector, and by utilizing an automated object detector in an iterative loop to assist humans in pre-annotating new datasets.

Feature forwarding for efficient single image dehazing

Published in:
IEEE/CVF Conf. on Computer Vision and Pattern Recognition Workshops, CVPRW, 16-17 June 2019.

Summary

Haze degrades content and obscures information of images, which can negatively impact vision-based decision-making in real-time systems. In this paper, we propose an efficient fully convolutional neural network (CNN) image dehazing method designed to run on edge graphical processing units (GPUs). We utilize three variants of our architecture to explore the dependency of dehazed image quality on parameter count and model design. The first two variants presented, a small and big version, make use of a single efficient encoder–decoder convolutional feature extractor. The final variant utilizes a pair of encoder-decoders for atmospheric light and transmission map estimation. Each variant ends with an image refinement pyramid pooling network to form the final dehazed image. For the big variant of the single-encoder network, we demonstrate state-of-the-art performance on the NYU Depth dataset. For the small variant, we maintain competitive performance on the superresolution O/I-HAZE datasets without the need for image cropping. Finally, we examine some challenges presented by the Dense-Haze dataset when leveraging CNN architectures for dehazing of dense haze imagery and examine the impact of loss function selection on image quality. Benchmarks are included to show the feasibility of introducing this approach into real-time systems.
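The two-branch variant, which estimates atmospheric light and a transmission map, builds on the standard atmospheric scattering model: a hazy image is I = J·t + A·(1 − t), where J is scene radiance, t the transmission map, and A the atmospheric light. Given estimates of t and A (here supplied directly rather than by the paper's networks), the dehazed image follows by inverting the model:

```python
import numpy as np

# Standard atmospheric scattering model inversion (the physical model
# behind transmission-map dehazing, not the paper's CNN itself).

def dehaze(I, t, A, t_min=0.1):
    """I: (H, W, 3) hazy image in [0, 1]; t: (H, W) transmission map;
    A: (3,) atmospheric light. t is clamped to avoid division blow-up."""
    t = np.clip(t, t_min, 1.0)[..., None]
    return np.clip((I - A * (1 - t)) / t, 0.0, 1.0)
```

A CNN-based approach learns t (and A, or the refined image directly) from data, which is what the encoder-decoder variants in the paper provide.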

Geospatial analysis based on GIS integrated with LADAR

Summary

In this work, we describe multi-layered analyses of a high-resolution broad-area LADAR data set in support of expeditionary activities. High-level features are extracted from the LADAR data, such as the presence and location of buildings and cars, and then these features are used to populate a GIS (geographic information system) tool. We also apply line-of-sight (LOS) analysis to develop a path-planning module. Finally, visualization is addressed and enhanced with a gesture-based control system that allows the user to navigate through the enhanced data set in a virtual immersive experience. This work has operational applications including military, security, disaster relief, and task-based robotic path planning.
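A grid-based line-of-sight test over a LADAR-derived height map is the core primitive behind such an LOS module; the sketch below samples the sight line and reports it blocked wherever intervening terrain rises above it. The eye height and sampling density are assumptions, and this is a simplified illustration rather than the report's implementation.

```python
import numpy as np

# Illustrative grid line-of-sight check over an elevation raster
# (e.g., derived from LADAR). Eye height and sample count are assumed.

def line_of_sight(heights, start, end, eye=2.0, samples=64):
    """heights: (H, W) elevation grid; start/end: (row, col) cells.
    Returns True if end is visible from start at the given eye height."""
    (r0, c0), (r1, c1) = start, end
    h0 = heights[r0, c0] + eye
    h1 = heights[r1, c1] + eye
    for s in np.linspace(0.0, 1.0, samples)[1:-1]:   # skip the endpoints
        r = int(round(r0 + s * (r1 - r0)))
        c = int(round(c0 + s * (c1 - c0)))
        if heights[r, c] > h0 + s * (h1 - h0):       # terrain above sight line
            return False
    return True
```

Running such a test between candidate waypoints is one way an LOS layer feeds a path-planning module: edges that are exposed (or blocked, depending on the objective) can be penalized in the planner's cost function.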
