Publications

Visibility estimation through image analytics

Published in:
MIT Lincoln Laboratory Report ATC-453

Summary

MIT Lincoln Laboratory (MIT LL) has developed an algorithm, known as the Visibility Estimation through Image Analytics (VEIA) algorithm, that ingests camera imagery collected by the FAA Weather Cameras Program Office (WeatherCams) and estimates the meteorological visibility in statute miles. The algorithm uses the presence and strength of edges in the imagery to estimate the meteorological visibility within the scene. It also combines the estimates from multiple camera images into a single estimate for a site or location, using information about the agreement between camera estimates and the position of the Sun relative to each camera's view. The final output for a site is a prevailing visibility estimate in statute miles that can be easily compared to existing Automated Surface Observing System (ASOS) and/or human-observed visibility. This report includes a thorough discussion of the VEIA background, development methodology, and transition process to the WeatherCams office operational platform (Sections 2–4). A detailed software description with flow diagrams is provided in Section 5. Section 6 gives a brief overview of future research and development related to the VEIA algorithm.
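
The core relationship VEIA exploits, that fewer and weaker edges in a scene correspond to lower visibility, can be illustrated with a minimal sketch. The edge operator, the clear-day reference image, and the linear mapping to statute miles below are illustrative assumptions, not the operational WeatherCams algorithm.

```python
import numpy as np
from scipy import ndimage

def edge_strength(gray: np.ndarray) -> float:
    """Mean Sobel gradient magnitude of a grayscale image (toy edge metric)."""
    gx = ndimage.sobel(gray.astype(float), axis=1)
    gy = ndimage.sobel(gray.astype(float), axis=0)
    return float(np.hypot(gx, gy).mean())

def estimate_visibility_miles(current: np.ndarray, clear_reference: np.ndarray,
                              max_visibility: float = 10.0) -> float:
    """Map the ratio of current to clear-day edge strength onto statute miles.

    The linear mapping and the 10-mile cap are placeholder assumptions; the
    operational algorithm applies its own edge weighting and calibration, and
    combines cameras using inter-camera agreement and Sun position.
    """
    ratio = edge_strength(current) / max(edge_strength(clear_reference), 1e-9)
    return max_visibility * min(max(ratio, 0.0), 1.0)
```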

Priority scheduling for multi-function apertures with hard- and soft-time constraints

Published in:
2021 IEEE Aerospace Conf., 6-13 March 2021.

Summary

A multi-function aperture (MFA) is an antenna array that supports multiple RF signals for a diverse set of activities. An MFA may support multiple activities simultaneously if they are compatible, and platforms may utilize multiple MFAs to meet field-of-regard and frequency range requirements. Efficient MFA utilization requires a Resource Manager (RM) that routes signals to the correct MFA based on field-of-view and other requirements, and schedules MFA access to resolve conflicts based on request priority. An efficient RM scheduler time-interleaves requests from different activities as needed. Requested access events may be hard-time (the event must be scheduled at a specified time or not at all) or soft-time (the event may be scheduled anytime within a specified window). Hard-time events include communications channels with assigned time slots; soft-time events include asynchronous communications channels. This paper describes and evaluates an optimal algorithm to jointly schedule sequences of hard-time requests, maximizing the number of scheduled events while meeting priority requirements. An extension of this algorithm provides near-optimal schedules for sequences of soft-time or mixed hard- and soft-time events. The algorithms are evaluated by simulation using two conflict models. The first is based on fixed signal paths that conflict if two paths share a common resource. The second model assumes the RM dynamically assigns resources. As implemented, these algorithms are too slow for real-time operation, and further work is required. They nonetheless provide insight into the MFA management problem and a useful metric for evaluating resource sharing and scheduling approaches, and they may suggest efficient sub-optimal algorithms.
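
As a point of reference for the scheduling problem, the sketch below admits hard-time requests greedily in priority order and rejects any request that overlaps one already admitted. It is a simple baseline under assumed data structures, not the optimal joint-scheduling algorithm the paper evaluates.

```python
from dataclasses import dataclass

@dataclass
class Request:
    name: str
    priority: int      # lower value = higher priority (an assumption)
    start: float       # requested start time (hard-time: fixed)
    duration: float

def schedule_hard_time(requests: list[Request]) -> list[Request]:
    """Greedy baseline: admit hard-time events in priority order, rejecting
    any event that overlaps an already-admitted one on the same aperture."""
    admitted: list[Request] = []
    for req in sorted(requests, key=lambda r: r.priority):
        end = req.start + req.duration
        if all(end <= a.start or req.start >= a.start + a.duration
               for a in admitted):
            admitted.append(req)
    return admitted

# Example: a comms slot and a radar dwell competing for the same aperture time.
reqs = [Request("comms slot", 1, 0.0, 2.0), Request("radar dwell", 2, 1.0, 1.5)]
print([r.name for r in schedule_hard_time(reqs)])  # ['comms slot']
```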

Graph matching via multi-scale heat diffusion

Published in:
IEEE Intl. Conf. on Big Data, 9-12 December 2019.

Summary

We propose a novel graph matching algorithm that uses ideas from graph signal processing to match vertices of graphs using alternative graph representations. Specifically, we consider a multi-scale heat diffusion on the graphs to create multiple weighted graph representations that incorporate both direct adjacencies as well as local structures induced from the heat diffusion. Then a multi-objective optimization method is used to match vertices across all pairs of graph representations simultaneously. We show that our proposed algorithm performs significantly better than the algorithm that only uses the adjacency matrices, especially when the number of known latent alignments between vertices (seeds) is small. We test the algorithm on a set of graphs and show that at the low seed level, the proposed algorithm performs at least 15–35% better than the traditional graph matching algorithm.
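
A multi-scale heat diffusion of this kind can be written down compactly: for a graph with Laplacian L, the heat kernel H(t) = exp(-tL) at several scales t yields the weighted graph representations the matching then operates on. The sketch below computes such representations with assumed scales; the multi-objective matching step across all pairs of representations is beyond this snippet.

```python
import numpy as np
from scipy.linalg import expm

def heat_diffusion_representations(adj: np.ndarray,
                                   scales=(0.5, 1.0, 2.0)) -> list[np.ndarray]:
    """Multi-scale heat-kernel representations H(t) = exp(-t L) of a graph.

    L is the combinatorial Laplacian; the scale values here are illustrative.
    Each H(t) is a dense weighted graph mixing direct adjacencies with local
    structure reachable by diffusion.
    """
    degrees = np.diag(adj.sum(axis=1))
    laplacian = degrees - adj
    return [expm(-t * laplacian) for t in scales]

# Example: a 4-vertex path graph at three diffusion scales.
path = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
reps = heat_diffusion_representations(path)
print([r.shape for r in reps])
```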

This looks like that: deep learning for interpretable image recognition

Published in:
Neural Info. Process., NIPS, 8-14 December 2019.

Summary

When we are faced with challenging image classification tasks, we often explain our reasoning by dissecting the image and pointing out prototypical aspects of one class or another. The mounting evidence for each of the classes helps us make our final decision. In this work, we introduce a deep network architecture that reasons in a similar way: the network dissects the image by finding prototypical parts and combines evidence from the prototypes to make a final classification. The algorithm thus reasons in a way that is qualitatively similar to the way ornithologists, physicians, geologists, architects, and others would explain to people how to solve challenging image classification tasks. The network uses only image-level labels for training, meaning that there are no labels for parts of images. We demonstrate the method on the CIFAR-10 dataset and 10 classes from the CUB-200-2011 dataset.
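
A rough sketch of the central computation, scoring how strongly each learned prototype appears somewhere in an image's convolutional feature map, is shown below in NumPy. The 1x1 spatial prototypes and the particular distance-to-similarity transform are simplifying assumptions for illustration, not a faithful reproduction of the paper's architecture.

```python
import numpy as np

def prototype_scores(feature_map: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """For each prototype, find its best-matching patch in the feature map.

    feature_map: (H, W, D) convolutional features of one image.
    prototypes:  (P, D) learned prototype vectors (1x1 spatial prototypes are
                 an assumption made to keep the sketch short).
    Returns a length-P vector of similarity scores (higher = closer match).
    """
    H, W, D = feature_map.shape
    patches = feature_map.reshape(H * W, D)                             # every spatial location
    d2 = ((patches[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # (H*W, P) squared distances
    min_d2 = d2.min(axis=0)                                             # best patch per prototype
    return np.log((min_d2 + 1.0) / (min_d2 + 1e-4))                     # distance -> similarity

# A final linear layer over these scores ("this part looks like that prototype")
# would produce the class logits.
```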

Uncovering human trafficking networks through text analysis

Published in:
2019 Grace Hopper Celebration, 1-4 October 2019.

Summary

Human trafficking is a form of modern-day slavery affecting an estimated 40 million victims worldwide, primarily through the commercial sexual exploitation of women and children. In the last decade, the advertising of victims has moved from the streets to websites on the Internet, providing greater efficiency and anonymity for sex traffickers. This shift has allowed traffickers to list their victims in multiple geographic areas simultaneously while also improving operational security by using multiple methods of electronic communication with buyers, complicating the ability of law enforcement to disrupt these illicit organizations. In this presentation, we describe a novel unsupervised template matching algorithm for analyzing and detecting complex organizations operating on adult service websites. We apply this method to a large corpus of adult service advertisements retrieved from backpage.com and show that the networks identified through the algorithm match well with surrogate truth data derived from phone number networks in the same corpus. Further exploration of the results shows that the proposed method provides deeper insights into the complex structures of sex trafficking organizations, not possible through networks derived from phone numbers alone. This method provides a powerful new capability for law enforcement to more completely identify and gather evidence about trafficking operations.
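
To make the general idea of linking ads that reuse the same underlying wording concrete, the toy sketch below groups ads by character-shingle similarity. It is only an illustration of near-duplicate text grouping; it is not the unsupervised template-matching algorithm described in the presentation, and the threshold and shingle length are arbitrary choices.

```python
def shingles(text: str, k: int = 5) -> set:
    """Character k-shingles of an ad's text (k=5 is an arbitrary choice)."""
    text = " ".join(text.lower().split())
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

def group_near_duplicates(ads: list[str], threshold: float = 0.6) -> list[list[int]]:
    """Greedy single-pass grouping of ads whose text is highly similar,
    using the first ad of each group as its representative."""
    groups: list[tuple[set, list[int]]] = []
    for idx, ad in enumerate(ads):
        sh = shingles(ad)
        for rep, members in groups:
            if jaccard(sh, rep) >= threshold:
                members.append(idx)
                break
        else:
            groups.append((sh, [idx]))
    return [members for _, members in groups]
```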

On large-scale graph generation with validation of diverse triangle statistics at edges and vertices

Published in:
2018 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW, 21 May 2018.

Summary

Researchers developing implementations of distributed graph analytic algorithms require graph generators that yield graphs sharing the challenging characteristics of real-world graphs (small-world, scale-free, heavy-tailed degree distribution) with efficiently calculable ground-truth solutions to the desired output. Reproducibility for current generators used in benchmarking is somewhat lacking in this respect due to their randomness: the output of a desired graph analytic can only be compared to expected values and not exact ground truth. Nonstochastic Kronecker product graphs meet these design criteria for several graph analytics. Here we show that many flavors of triangle participation can be cheaply calculated while generating a Kronecker product graph.
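
As a small concrete example of the setting, the sketch below forms a nonstochastic Kronecker power graph from a seed adjacency matrix and checks the total triangle count with the standard trace(A^3)/6 identity. The paper's contribution is to derive per-vertex and per-edge triangle statistics analytically during generation; the brute-force count here is only a verification aid at toy sizes.

```python
import numpy as np

def kronecker_graph(seed: np.ndarray, iterations: int) -> np.ndarray:
    """Nonstochastic Kronecker product graph: repeated Kronecker power of a
    seed adjacency matrix (a small example; benchmark graphs are far larger)."""
    adj = seed.copy()
    for _ in range(iterations - 1):
        adj = np.kron(adj, seed)
    np.fill_diagonal(adj, 0)          # drop self-loops before triangle counting
    return adj

def total_triangles(adj: np.ndarray) -> int:
    """Count triangles via trace(A^3) / 6 for a simple undirected graph."""
    return int(round(np.trace(adj @ adj @ adj) / 6))

seed = np.array([[0, 1, 1],
                 [1, 0, 1],
                 [1, 1, 0]])
g = kronecker_graph(seed, 3)          # 27-vertex Kronecker power graph
print(g.shape, total_triangles(g))
```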

Peregrine: 3-D network localization and navigation

Published in:
IEEE 9th Latin-American Conf. on Communications, LATINCOM, 8-10 November 2017.

Summary

Location-aware devices will create new services and applications in emerging fields such as autonomous driving, smart cities, and the Internet of Things. Many existing localization systems rely on anchors, such as satellites at known positions, which broadcast radio signals. However, such signals may be blocked by obstacles, corrupted by multipath propagation, or provide insufficient localization accuracy. Therefore, ubiquitous localization remains an extremely challenging problem. This paper introduces Peregrine, a 3-D cooperative network localization and navigation (NLN) system. Peregrine nodes are low-cost, business-card-sized devices consisting of a microprocessor, a commercially available ultra-wideband (UWB) radio module, and a small battery. Recently developed distributed algorithms are used in Peregrine to solve the highly interrelated problems of node inference and node activation in real time, enabling resource efficiency, scalability, and accuracy for NLN. Node inference, based on the recently introduced sigma point belief propagation (SPBP) algorithm, enables spatiotemporal cooperation in real time and estimates the nodes' positions accurately from UWB distance measurements. A distributed node activation algorithm controls channel access to improve the efficiency and reduce the localization error of the network. Contributions of each algorithmic component to overall system performance are validated through indoor localization experiments. Our results show that Peregrine achieves decimeter-level 3-D position accuracy in a challenging propagation environment.
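
The underlying measurement model, estimating a 3-D position from UWB distance measurements to nodes at known positions, can be made concrete with a plain nonlinear least-squares sketch. This centralized solver is only illustrative; Peregrine's distributed SPBP-based inference and node activation are what provide the cooperation and scalability described above.

```python
import numpy as np
from scipy.optimize import least_squares

def localize_3d(anchors: np.ndarray, ranges: np.ndarray,
                initial_guess=None) -> np.ndarray:
    """Estimate one node's 3-D position from range measurements to anchors.

    Plain nonlinear least squares on the range residuals, shown only to make
    the measurement model concrete.
    """
    if initial_guess is None:
        initial_guess = anchors.mean(axis=0)
    residuals = lambda p: np.linalg.norm(anchors - p, axis=1) - ranges
    return least_squares(residuals, initial_guess).x

rng = np.random.default_rng(1)
anchors = np.array([[0, 0, 0], [5, 0, 0], [0, 5, 0], [0, 0, 3]], dtype=float)
true_pos = np.array([2.0, 3.0, 1.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.05, size=4)
print(localize_3d(anchors, ranges))   # close to [2, 3, 1] under small range noise
```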

Command and control for multifunction phased array radar

Published in:
IEEE Trans. Geosci. Remote Sens., Vol. 55, No. 10, October 2017, pp. 5899-5912.

Summary

We discuss the challenge of managing the Multifunction Phased Array Radar (MPAR) timeline to satisfy the requirements of its multiple missions, with a particular focus on weather surveillance. This command and control (C2) function partitions the available scan time among these missions, exploits opportunities to service multiple missions simultaneously, and utilizes techniques for increasing scan rate where feasible. After reviewing the candidate MPAR architectures and relevant previous research, we describe a specific C2 framework that is consistent with a demonstrated active array architecture using overlapped subarrays to realize multiple, concurrent receive beams. Analysis of recently articulated requirements for near-airport and national-scale aircraft surveillance indicates that with this architecture, 40–60% of the MPAR scan timeline would be available for the high-fidelity weather observations currently provided by the Weather Surveillance Radar-1988 Doppler (WSR-88D) network. We show that an appropriate use of subarray-generated concurrent receive beams, in concert with previously documented, complementary techniques to increase the weather scan rate, could enable MPAR to perform full weather volume scans at a rate of one per minute. Published observing system simulation experiments, human-in-the-loop studies, and radar-data assimilation experiments indicate that high-quality weather radar observations at this rate may significantly improve the lead time and reliability of severe weather warnings relative to current observation capabilities.
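
The timeline arithmetic behind these figures is simple to write down: with a one-minute weather volume update and the 40–60% timeline share quoted above, a full volume must fit in a few tens of seconds of dedicated scanning, which is why the concurrent receive beams and faster-scanning techniques matter. The sketch below is illustrative bookkeeping, not the paper's scan-time model.

```python
def dedicated_scan_budget(update_period_s: float, timeline_fraction: float) -> float:
    """Seconds of dedicated scanning a full weather volume may consume while
    still refreshing every `update_period_s` seconds on a shared timeline."""
    return update_period_s * timeline_fraction

for frac in (0.4, 0.6):
    print(f"{frac:.0%} weather timeline -> {dedicated_scan_budget(60.0, frac):.0f} s "
          "of dedicated scanning per one-minute volume")
```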

Streaming graph challenge: stochastic block partition

Summary

An important objective for analyzing real-world graphs is to achieve scalable performance on large, streaming graphs. A challenging and relevant example is the graph partition problem. As a combinatorial problem, graph partition is NP-hard, but existing relaxation methods provide reasonable approximate solutions that can be scaled for large graphs. Competitive benchmarks and challenges have proven to be an effective means to advance state-of-the-art performance and foster community collaboration. This paper describes a graph partition challenge with a baseline partition algorithm of sub-quadratic complexity. The algorithm employs rigorous Bayesian inferential methods based on a statistical model that captures characteristics of real-world graphs. This strong foundation enables the algorithm to address limitations of well-known graph partition approaches such as modularity maximization. This paper describes various aspects of the challenge, including: (1) the data sets and streaming graph generator, (2) the baseline partition algorithm with pseudocode, (3) an argument for the correctness of parallelizing the Bayesian inference, (4) different parallel computation strategies such as node-based parallelism and matrix-based parallelism, (5) evaluation metrics for partition correctness and computational requirements, (6) preliminary timing of a Python-based demonstration code and the open-source C++ code, and (7) considerations for partitioning the graph in streaming fashion. Data sets and source code for the algorithm, as well as metrics and detailed documentation, are available at GraphChallenge.org.
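
The sketch below generates a small stochastic block model graph and scores a recovered partition against the planted blocks with one common correctness metric. The stand-in partitioner is greedy modularity maximization, exactly the kind of approach whose limitations the challenge's Bayesian baseline is designed to address, so it serves only to show the evaluation loop; the sizes, probabilities, and metric choice are assumptions.

```python
import networkx as nx
import numpy as np
from sklearn.metrics import adjusted_rand_score

# A small stochastic-block-model graph: 3 blocks, denser within blocks than
# between them. The challenge provides its own generator and far larger,
# streaming data sets.
sizes = [40, 40, 40]
p_in, p_out = 0.3, 0.02
probs = [[p_in if i == j else p_out for j in range(3)] for i in range(3)]
g = nx.stochastic_block_model(sizes, probs, seed=0)
truth = [g.nodes[v]["block"] for v in g]

# Stand-in partitioner for illustration; the challenge baseline is a Bayesian
# degree-corrected SBM inference of sub-quadratic complexity.
communities = nx.algorithms.community.greedy_modularity_communities(g)
found = np.empty(len(truth), dtype=int)
for label, comm in enumerate(communities):
    for v in comm:
        found[v] = label

# One of several correctness metrics one might report.
print("adjusted Rand index vs. ground truth:", adjusted_rand_score(truth, found))
```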

A linear algebra approach to fast DNA mixture analysis using GPUs

Summary

Analysis of DNA samples is an important step in forensics, and the speed of analysis can impact investigations. Comparison of DNA sequences is based on the analysis of short tandem repeats (STRs), which are short DNA sequences of 2-5 base pairs. Current forensics approaches use 20 STR loci for analysis. The use of single nucleotide polymorphisms (SNPs) has utility for analysis of complex DNA mixtures. The use of tens of thousands of SNP loci for analysis poses significant computational challenges because the forensic analysis scales by the product of the locus count and the number of DNA samples to be analyzed. In this paper, we discuss the implementation of a DNA sequence comparison algorithm by re-casting the algorithm in terms of linear algebra primitives. By developing an overloaded matrix multiplication approach to DNA comparisons, we can leverage advances in GPU hardware and algorithms for double-precision general matrix multiply (DGEMM) to speed up DNA sample comparisons. We show that it is possible to compare 2048 unknown DNA samples with 20 million known samples in under 6 seconds using an NVIDIA K80 GPU.
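
The linear-algebra reformulation can be sketched in a few lines: encode each SNP call as +1/-1 so that a single matrix multiply scores every unknown sample against every reference at once. The encoding, the plain match counting, and the array sizes here are illustrative assumptions; the paper overloads the multiply and add operations for mixture-aware scoring and runs the GEMM on a GPU at far larger scale.

```python
import numpy as np

rng = np.random.default_rng(0)
n_loci, n_unknown, n_known = 2_000, 64, 20_000   # illustrative sizes only

# Encode each biallelic SNP call as +1/-1. A dot product over loci then equals
# (matches - mismatches), so matches = (dot + n_loci) / 2.
unknown = rng.choice([-1.0, 1.0], size=(n_unknown, n_loci)).astype(np.float32)
known = rng.choice([-1.0, 1.0], size=(n_known, n_loci)).astype(np.float32)

scores = unknown @ known.T                 # one GEMM performs all pairwise comparisons
matches = (scores + n_loci) / 2.0          # matching-locus counts, shape (64, 20000)
best = matches.argmax(axis=1)              # best-matching reference per unknown sample

# Swapping NumPy arrays for CuPy arrays would run the same GEMM on a GPU.
print(matches.shape, best[:5])
```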