Publications

Self-supervised contrastive pre-training for time series via time-frequency consistency

Published in:
arXiv, June 16, 2022.

Summary

Pre-training on time series poses a unique challenge due to the potential mismatch between pre-training and target domains, such as shifts in temporal dynamics, fast-evolving trends, and long-range and short-cyclic effects, all of which can lead to poor downstream performance. While domain adaptation methods can mitigate these shifts, most require examples directly from the target domain, making them ill-suited for pre-training. To address this challenge, methods must accommodate target domains with different temporal dynamics without seeing any target examples during pre-training. Unlike other modalities, time series admit a natural pair of views: we expect time-based and frequency-based representations of the same example to lie close together in the time-frequency space. To this end, we posit that time-frequency consistency (TF-C), embedding a time-based neighborhood of a particular example close to its frequency-based neighborhood and vice versa, is desirable for pre-training. Motivated by TF-C, we define a decomposable pre-training model in which the self-supervised signal is provided by the distance between time and frequency components, each individually trained by contrastive estimation. We evaluate the new method on eight datasets, spanning electrodiagnostic testing, human activity recognition, mechanical fault detection, and physical status monitoring. Experiments against eight state-of-the-art methods show that TF-C outperforms baselines by 15.4% (F1 score) on average in one-to-one settings (e.g., fine-tuning an EEG-pretrained model on EMG data) and by up to 8.4% (F1 score) in challenging one-to-many settings (e.g., fine-tuning an EEG-pretrained model for either hand-gesture recognition or mechanical fault prediction), reflecting the breadth of scenarios that arise in real-world applications. The source code and datasets are available at https://anonymous.4open.science/r/TFC-pretraining-6B07.
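The recipe the abstract describes (a contrastive loss within the time view, another within the frequency view, and a term keeping the two embeddings of the same example close) can be sketched in a few lines of PyTorch. This is an illustrative reading of the abstract, not the authors' implementation: the toy linear encoders, the FFT-magnitude frequency view, the NT-Xent formulation, the jitter augmentation, and the simple distance-based consistency term with weight `lam` are all assumptions chosen for brevity.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.2):
    """NT-Xent contrastive loss; row i of z1 and row i of z2 form a positive pair."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                          # (B, B) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

def tfc_loss(x, x_aug, time_enc, freq_enc, lam=0.5):
    """Contrastive losses in each view plus a time-frequency consistency penalty."""
    # Frequency-based views: FFT magnitude is one simple choice of transform.
    xf, xf_aug = torch.fft.rfft(x).abs(), torch.fft.rfft(x_aug).abs()
    ht, ht_aug = time_enc(x), time_enc(x_aug)           # time-based embeddings
    hf, hf_aug = freq_enc(xf), freq_enc(xf_aug)         # frequency-based embeddings
    l_cons = F.mse_loss(F.normalize(ht, dim=1),         # pull the two views of the
                        F.normalize(hf, dim=1))         # same example together
    return nt_xent(ht, ht_aug) + nt_xent(hf, hf_aug) + lam * l_cons

# Toy usage: linear "encoders" stand in for the paper's networks.
B, T, D = 32, 128, 64
time_enc = torch.nn.Linear(T, D)
freq_enc = torch.nn.Linear(T // 2 + 1, D)               # rfft yields T//2 + 1 bins
x = torch.randn(B, T)
x_aug = x + 0.1 * torch.randn_like(x)                   # jitter augmentation
loss = tfc_loss(x, x_aug, time_enc, freq_enc)
```

Because the self-supervised signal never references target labels or examples, a model pre-trained this way can be fine-tuned on domains with different temporal dynamics, which is the scenario the one-to-one and one-to-many evaluations above test.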

Graph-guided network for irregularly sampled multivariate time series

Published in:
International Conference on Learning Representations, ICLR 2022.

Summary

In many domains, including healthcare, biology, and climate science, time series are irregularly sampled, with varying time intervals between successive readouts and different subsets of variables (sensors) observed at different time points. Here we introduce RAINDROP, a graph neural network that embeds irregularly sampled, multivariate time series while also learning the dynamics of sensors purely from observational data. RAINDROP represents every sample as a separate sensor graph and models time-varying dependencies between sensors with a novel message passing operator. It estimates the latent sensor graph structure and leverages that structure, together with nearby observations, to predict misaligned readouts. The model can thus be viewed as sending messages over graphs that are optimized to capture time-varying dependencies among sensors. We use RAINDROP to classify time series and interpret temporal dynamics on three healthcare and human activity datasets. RAINDROP outperforms state-of-the-art methods by up to 11.4% (absolute F1-score points), including techniques that handle irregular sampling with fixed discretization and set functions, and it retains this advantage in diverse setups, including challenging leave-sensor-out settings.
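To make the central idea concrete, here is a deliberately simplified message-passing layer in the same spirit: each sensor is a node, the edge weights between sensors are learned parameters, and readouts observed at a time step are propagated along those edges to stand in for sensors that were not observed. It is a toy sketch under those assumptions, not RAINDROP's actual operator, which also models attention over observations and temporal structure.

```python
import torch
import torch.nn as nn

class SensorGraphLayer(nn.Module):
    """Toy message passing over a learned sensor graph.

    Observed readouts are propagated along learned edge weights so that
    sensors missing at a given time step receive imputed embeddings.
    """
    def __init__(self, n_sensors, dim):
        super().__init__()
        self.adj = nn.Parameter(0.1 * torch.randn(n_sensors, n_sensors))
        self.embed = nn.Linear(1, dim)          # lift scalar readout to an embedding
        self.update = nn.Linear(dim, dim)

    def forward(self, values, mask):
        # values: (B, S) raw readouts; mask: (B, S), 1 = observed, 0 = missing
        h = self.embed(values.unsqueeze(-1))            # (B, S, D) node embeddings
        h = h * mask.unsqueeze(-1)                      # zero out unobserved sensors
        w = torch.softmax(self.adj, dim=-1)             # learned sensor dependencies
        msgs = torch.einsum('st,btd->bsd', w, h)        # aggregate neighbor messages
        m = mask.unsqueeze(-1)
        out = m * h + (1 - m) * msgs                    # missing nodes use messages
        return torch.relu(self.update(out))

# Toy usage with irregular observations: roughly 30% of readouts missing.
layer = SensorGraphLayer(n_sensors=6, dim=16)
values = torch.randn(4, 6)
mask = (torch.rand(4, 6) > 0.3).float()
h = layer(values, mask)                                 # (4, 6, 16)
```

Because the missing-sensor pathway is exercised on every forward pass, a layer of this shape degrades gracefully when a sensor is absent at test time, which is what the leave-sensor-out evaluation above probes.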

A near-quantum-limited Josephson traveling-wave parametric amplifier

Published in:
Science, Vol. 350, No. 6258, 16 October 2015, pp. 307-310.

Summary

Detecting signals at the single-photon level, carriers of both classical and quantum information, is particularly challenging for low-energy microwave-frequency excitations. Here we introduce a superconducting amplifier based on a Josephson junction transmission line. Unlike current standing-wave parametric amplifiers, this traveling-wave architecture robustly achieves high gain over a bandwidth of several gigahertz with sufficient dynamic range to read out 20 superconducting qubits. To achieve this performance, we introduce a sub-wavelength resonant phase-matching technique that enables the creation of nonlinear microwave devices with unique dispersion relations. We benchmark the amplifier with weak measurements, obtaining a high quantum efficiency of 75% (70% including the noise of the following amplifier). With a flexible design based on compact lumped elements, this Josephson amplifier has broad applicability to microwave metrology and quantum optics.
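For background on why phase matching matters here (standard nonlinear-wave-mixing material, not taken from the paper): in the degenerate four-wave mixing process behind such an amplifier, two pump photons convert into a signal photon and an idler photon, and exponential gain along the line requires both energy conservation and a vanishing phase mismatch. The resonant phase-matching technique engineers the dispersion relation so that the second condition can be met despite pump-induced nonlinear phase shifts.

```latex
% Textbook degenerate four-wave-mixing conditions (background, not from the paper):
\begin{align*}
  2\omega_p &= \omega_s + \omega_i && \text{(energy conservation)}\\
  \Delta k  &= k_s + k_i - 2k_p \approx 0 && \text{(phase matching)}
\end{align*}
```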
