Publications


Affective ratings of nonverbal vocalizations produced by minimally-speaking individuals: What do native listeners perceive?

Published in:
10th Int. Conf. on Affective Computing and Intelligent Interaction (ACII), 18-21 October 2022.

Summary

Individuals who produce few spoken words ("minimally-speaking" individuals) often convey rich affective and communicative information through nonverbal vocalizations, such as grunts, yells, babbles, and monosyllabic expressions. Yet, little data exists on the affective content of the vocal expressions of this population. Here, we present 78,624 arousal and valence ratings of nonverbal vocalizations from the online ReCANVo (Real-World Communicative and Affective Nonverbal Vocalizations) database. This dataset contains over 7,000 vocalizations that have been labeled with their expressive functions (delight, frustration, etc.) from eight minimally-speaking individuals. Our results suggest that raters who have no knowledge of the context or meaning of a nonverbal vocalization are still able to detect arousal and valence differences between different types of vocalizations based on Likert-scale ratings. Moreover, these ratings are consistent with hypothesized arousal and valence rankings for the different vocalization types. Raters are also able to detect arousal and valence differences between different vocalization types within individual speakers. To our knowledge, this is the first large-scale analysis of affective content within nonverbal vocalizations from minimally verbal individuals. These results complement affective computing research of nonverbal vocalizations that occur within typical verbal speech (e.g., grunts, sighs) and serve as a foundation for further understanding of how humans perceive emotions in sounds.
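
To give a concrete sense of the analysis, here is a minimal sketch of how per-type arousal and valence differences could be tested from Likert-scale ratings; the file name and column names (vocalization_type, arousal, valence) are hypothetical, not the paper's.

    # Sketch: compare Likert arousal/valence ratings across vocalization types.
    # File path and column names are hypothetical, not from the paper.
    import pandas as pd
    from scipy.stats import kruskal

    ratings = pd.read_csv("recanvo_ratings.csv")  # one row per individual rating

    # Mean arousal and valence per labeled vocalization type
    summary = ratings.groupby("vocalization_type")[["arousal", "valence"]].mean()
    print(summary.sort_values("arousal"))

    # Nonparametric test for arousal differences between vocalization types
    groups = [g["arousal"].values for _, g in ratings.groupby("vocalization_type")]
    stat, p = kruskal(*groups)
    print(f"Kruskal-Wallis H={stat:.2f}, p={p:.3g}")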

Modeling real-world affective and communicative nonverbal vocalizations from minimally speaking individuals

Published in:
IEEE Trans. on Affect. Comput., Vol. 13, No. 4, October 2022, pp. 2238-53.

Summary

Nonverbal vocalizations from non- and minimally speaking individuals (mv*) convey important communicative and affective information. While nonverbal vocalizations that occur amidst typical speech and infant vocalizations have been studied extensively in the literature, there is limited prior work on vocalizations by mv* individuals. Our work is among the first studies of the communicative and affective information expressed in nonverbal vocalizations by mv* children and adults. We collected labeled vocalizations in real-world settings with eight mv* communicators, with communicative and affective labels provided in-the-moment by a close family member. Using evaluation strategies suitable for messy, real-world data, we show that nonverbal vocalizations can be classified by function (with 4- and 5-way classifications) with F1 scores above chance for all participants. We analyze labeling and data collection practices for each participating family, and discuss the classification results in the context of our novel real-world data collection protocol. The presented work includes results from the largest classification experiments with nonverbal vocalizations from mv* communicators to date.
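
As a rough illustration of the evaluation setup, the sketch below scores a k-way function classifier per participant with macro-F1, as the paper does; the random-forest model, the feature matrix X, and the cross-validation recipe are assumptions, not the authors' pipeline.

    # Sketch: per-participant k-way classification of vocalization function,
    # scored with macro-F1. Model choice and features are illustrative
    # assumptions, not the authors' method.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_predict, StratifiedKFold
    from sklearn.metrics import f1_score

    def evaluate_participant(X, y, n_splits=5):
        """X: (n_clips, n_features) acoustic features; y: function labels.
        Chance-level macro-F1 for k balanced classes is roughly 1/k."""
        cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        y_pred = cross_val_predict(clf, X, y, cv=cv)
        return f1_score(y, y_pred, average="macro")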

Contrast-enhanced ultrasound to detect active bleeding

Published in:
J. Acoust. Soc. Am. 152, A280 (2022)

Summary

Non-compressible internal hemorrhage (NCIH) is the most common cause of death in acute non-penetrating trauma. NCIH management requires accurate hematoma localization and evaluation for ongoing bleeding for risk stratification. The current standard point-of-care diagnostic tool, the focused assessment with sonography for trauma (FAST), detects free fluid in body cavities with conventional B-mode imaging. The FAST does not assess whether bleeding is ongoing, at which location(s), or to what extent. Here, we propose contrast-enhanced ultrasound (CEUS) techniques to better identify, localize, and quantify hemorrhage. We designed and fabricated a custom hemorrhage-mimicking phantom, comprising a perforated vessel and cavity to simulate active bleeding. Lumason ultrasound contrast agents (UCAs) were introduced at clinically relevant concentrations (3.5×10⁸ bubbles/ml). Conventional and contrast pulse sequence images were captured and post-processed with an SVD clutter filter and microbubble localization. The results showed that contrast pulse sequences enabled a 2.2-fold increase in the number of microbubbles detected compared with conventional CEUS imaging, over a range of flow rates, concentrations, and localization processing parameters. Additionally, particle velocimetry enabled mapping of dynamic flow within the simulated bleeding site. Our findings indicate that CEUS combined with advanced image processing may enhance visualization of hemodynamics and improve non-invasive, real-time detection of active bleeding.
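
The SVD clutter filter mentioned above is a standard technique in contrast imaging; a minimal sketch on a frame stack, using the usual Casorati-matrix formulation, follows. The cutoff rank is an assumption and would be tuned per acquisition.

    # Sketch of an SVD clutter filter for a CEUS frame stack, following the
    # standard Casorati-matrix formulation; the cutoff rank is an assumption.
    import numpy as np

    def svd_clutter_filter(frames, cutoff=10):
        """frames: (n_frames, h, w) array. Suppresses slowly varying tissue
        clutter by zeroing the largest `cutoff` singular components."""
        n, h, w = frames.shape
        casorati = frames.reshape(n, h * w).T           # pixels x time
        U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
        s[:cutoff] = 0.0                                # remove tissue subspace
        filtered = (U * s) @ Vt                         # reconstruct bubble signal
        return filtered.T.reshape(n, h, w)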

Multimodal physiological monitoring during virtual reality piloting tasks

Summary

This dataset includes multimodal physiologic, flight performance, and user interaction data streams, collected as participants performed virtual flight tasks of varying difficulty. In virtual reality, individuals flew an "Instrument Landing System" (ILS) protocol, in which they had to land an aircraft relying mostly on the cockpit instrument readings. Participants were presented with four levels of difficulty, generated by varying wind speed, turbulence, and visibility. Each participant performed 12 runs in a single experimental session, split into three blocks of four consecutive runs, with one run at each difficulty per block. The sequence of difficulty levels was counterbalanced across blocks. Flight performance was quantified as a function of horizontal and vertical deviation from an ideal path toward the runway, as well as deviation from the prescribed ideal speed of 115 knots. Multimodal physiological signals were aggregated and synchronized using Lab Streaming Layer. Descriptions of data quality are provided to assess each data stream. The starter code provides examples of loading and plotting the time-synchronized data streams, extracting sample features from the eye-tracking data, and building models to predict pilot performance from the physiology data streams.
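
A minimal sketch of the performance metric described above, assuming RMS aggregation of the path and speed deviations (the dataset's starter code defines the actual computation):

    # Sketch: score a run by deviation from the ideal ILS path and the
    # prescribed 115-knot approach speed. RMS aggregation is an assumption;
    # see the dataset's starter code for the actual metric.
    import numpy as np

    def flight_performance(lateral_dev, vertical_dev, speed_knots, target=115.0):
        """Each argument is a 1-D time series sampled over the approach."""
        rms = lambda x: float(np.sqrt(np.mean(np.square(np.asarray(x)))))
        return {
            "lateral_rms": rms(lateral_dev),
            "vertical_rms": rms(vertical_dev),
            "speed_rms": rms(np.asarray(speed_knots) - target),
        }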

Feature importance analysis for compensatory reserve to predict hemorrhagic shock

Published in:
44th Annual Int. Conf. of IEEE Engineering in Medicine & Biology Society (EMBC), DOI: 10.1109/EMBC48229.2022.9871661.

Summary

Hemorrhage is the leading cause of preventable death from trauma. Traditionally, vital signs have been used to detect blood loss and possible hemorrhagic shock. However, vital signs are not sensitive for early detection because of physiological mechanisms that compensate for blood loss. As an alternative, machine learning algorithms that operate on an arterial blood pressure (ABP) waveform acquired via photoplethysmography have been shown to provide an effective early indicator. However, these machine learning approaches lack physiological interpretability. In this paper, we evaluate the importance of nine ABP-derived features that provide physiological insight, using a database of 40 human subjects from a lower-body negative pressure model of progressive central hypovolemia. One feature was found to be considerably more important than any other. That feature, the half-rise to dicrotic notch (HRDN), measures an approximate time delay between the ABP ejected and reflected wave components. This delay is an indication of compensatory mechanisms such as reduced arterial compliance and vasoconstriction. On a scale of 0% to 100%, with 100% representing normovolemia and 0% representing decompensation, linear regression of the HRDN feature yields a root-mean-squared error of 16.9%, an R² of 0.72, and an area under the receiver operating characteristic curve for detecting decompensation of 0.88. These results are comparable to previously reported results from more complex black-box machine learning models. Clinical relevance: A single physiologically interpretable feature measured from an arterial blood pressure waveform is shown to be effective in monitoring for blood loss and impending hemorrhagic shock, based on data from a human lower-body negative pressure model of progressive central hypovolemia.
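
To make the HRDN feature concrete, here is a rough sketch of extracting it from a single ABP beat; the dicrotic-notch heuristic (first local minimum after the systolic peak) is a simplification, not necessarily the authors' detector.

    # Sketch: half-rise to dicrotic notch (HRDN) from one ABP beat. The notch
    # heuristic (first local minimum after the systolic peak) is a simplification.
    import numpy as np

    def hrdn(beat, fs):
        """beat: 1-D ABP samples covering one cardiac cycle; fs: sampling rate (Hz)."""
        beat = np.asarray(beat, dtype=float)
        peak = int(np.argmax(beat))                       # systolic peak
        foot = int(np.argmin(beat[:peak + 1]))            # diastolic foot before peak
        half_amp = beat[foot] + 0.5 * (beat[peak] - beat[foot])
        half_rise = foot + int(np.argmax(beat[foot:peak + 1] >= half_amp))
        d = np.diff(beat[peak:])                          # search after the peak
        idx = np.where((d[:-1] < 0) & (d[1:] > 0))[0]     # local minima = candidates
        notch = peak + int(idx[0]) + 1 if idx.size else len(beat) - 1
        return (notch - half_rise) / fs                   # HRDN in seconds

    # A linear model then maps HRDN to compensatory reserve (0-100%);
    # its coefficients must be fit to labeled LBNP data.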

Transfer learning for automated COVID-19 B-line classification in lung ultrasound

Published in:
44th Annual Int. Conf. of IEEE Engineering in Medicine & Biology Society (EMBC), DOI: 10.1109/EMBC48229.2022.9871894.

Summary

Lung ultrasound (LUS) as a diagnostic tool is gaining support for its role in the diagnosis and management of COVID-19 and a number of other lung pathologies. B-lines are a predominant feature in COVID-19; however, LUS requires a skilled clinician to interpret findings. To facilitate interpretation, our main objective was to develop automated methods to classify B-lines as pathologic vs. normal. We developed transfer learning models based on ResNet networks to classify B-lines as pathologic (at least 3 B-lines per lung field) vs. normal using COVID-19 LUS data. Assessment of B-line severity on a 0-4 multi-class scale was also explored. For binary B-line classification at the frame level, all ResNet models pretrained with ImageNet yielded higher performance than the baseline non-pretrained ResNet-18; the pretrained ResNet-18 had the best equal error rate (EER) of 9.1% vs. the baseline's 11.9%. At the clip level, all pretrained network models resulted in better Cohen's kappa agreement (linear-weighted) and clip score accuracy, with the pretrained ResNet-18 having the best Cohen's kappa of 0.815 [95% CI: 0.804-0.826], and ResNet-101 the best clip-scoring accuracy of 93.6%. Similar results were shown for multi-class scoring, where pretrained network models outperformed the baseline model. A class activation map is also presented to guide clinicians in interpreting LUS findings. Future work aims to further improve the multi-class assessment of B-line severity with a more diverse LUS dataset.
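
A minimal sketch of the transfer-learning recipe named above, using torchvision's ImageNet-pretrained ResNet-18 with its classification head replaced for the binary task; the optimizer and learning rate are illustrative assumptions, not the paper's exact setup.

    # Sketch: ImageNet-pretrained ResNet-18 adapted for binary B-line
    # classification of LUS frames (pathologic vs. normal). Hyperparameters
    # are assumptions, not the paper's fine-tuning recipe.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, 2)   # replace 1000-way head

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(frames, labels):
        """frames: (B, 3, 224, 224) tensor; labels: (B,), 0=normal, 1=pathologic."""
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)
        loss.backward()
        optimizer.step()
        return loss.item()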

Axon tracing and centerline detection using topologically-aware 3D U-nets

Published in:
44th Annual Int. Conf. of IEEE Engineering in Medicine & Biology Society (EMBC), 2022, pp. 238-242.

Summary

As advances in microscopy imaging provide an ever clearer window into the human brain, accurate reconstruction of neural connectivity can yield valuable insight into the relationship between brain structure and function. However, human manual tracing is a slow and laborious task that requires domain expertise. Automated methods are thus needed to enable rapid and accurate analysis at scale. In this paper, we explored deep neural networks for dense axon tracing and incorporated axon topological information into the loss function, with the goal of improving performance on both voxel-based segmentation and axon centerline detection. We evaluated three approaches using a modified 3D U-Net architecture trained on a mouse brain dataset imaged with light sheet microscopy, and achieved a 10% increase in axon tracing accuracy over previous methods. Furthermore, the addition of centerline awareness in the loss function outperformed the baseline approach across all metrics, including an 8% boost in Rand Index.
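
One way to add centerline awareness to a segmentation loss, sketched below in the spirit of clDice: a soft-skeleton Dice term is combined with the usual voxel-wise Dice. The soft-skeleton operator and the 0.5 weighting are assumptions about how such a loss could be composed, not the paper's exact formulation.

    # Sketch: voxel-wise Dice combined with a centerline (clDice-style) term
    # for 3D axon segmentation. Weighting and iteration count are assumptions.
    import torch.nn.functional as F

    def _erode(x):
        return -F.max_pool3d(-x, kernel_size=3, stride=1, padding=1)

    def _open(x):
        return F.max_pool3d(_erode(x), kernel_size=3, stride=1, padding=1)

    def soft_skeleton(x, iters=5):
        """Differentiable skeleton of a soft mask x with shape (B, 1, D, H, W)."""
        skel = F.relu(x - _open(x))
        for _ in range(iters):
            x = _erode(x)
            skel = skel + F.relu(x - _open(x)) * (1 - skel)
        return skel

    def dice(a, b, eps=1e-6):
        inter = (a * b).sum()
        return (2 * inter + eps) / (a.sum() + b.sum() + eps)

    def centerline_aware_loss(pred, target, w=0.5):
        """pred: soft segmentation in [0, 1]; target: binary ground truth."""
        voxel = 1 - dice(pred, target)
        centerline = 1 - dice(soft_skeleton(pred), soft_skeleton(target))
        return (1 - w) * voxel + w * centerline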

Wearable technology in extreme environments

Published in:
Chapter 2 in: Cibis, T., McGregor AM, C. (eds) Engineering and Medicine in Extreme Environments. Springer, Cham. https://doi.org/10.1007/978-3-030-96921-9_2

Summary

Humans need to work in many types of extreme environments where there is a need to stay safe and even to improve performance. Examples include medical providers treating infectious disease, people responding to other biological or chemical hazards, firefighters, astronauts, pilots, divers, and people working outdoors in extreme hot or cold temperatures. Wearable technology is ubiquitous in the consumer market, but solutions suited to extreme environments are still needed. For these applications, it is particularly challenging to meet the requirements of being actionable, accurate, acceptable, integratable, and affordable. To provide insight into these needs, possible solutions, and the technology trade-offs involved, several examples are provided. A physiological monitoring example is described for predicting and avoiding heat injury. A cognitive monitoring example is described for estimating cognitive workload, with broader applicability to a variety of conditions, such as cognitive fatigue and depression. Finally, eye tracking is considered as a promising wearable sensing modality with applications for both physiological and cognitive monitoring. Concluding thoughts are offered on the compelling need for wearable technology in the face of pandemics, wildfires, and climate change, but also for global projects that can uplift mankind, such as long-duration spaceflight and missions to Mars.

Detection of COVID-19 using multimodal data from a wearable device: results from the first TemPredict Study

Summary

Early detection of diseases such as COVID-19 could be a critical tool in reducing disease transmission by helping individuals recognize when they should self-isolate, seek testing, and obtain early medical intervention. Consumer wearable devices that continuously measure physiological metrics hold promise as tools for early illness detection. We gathered daily questionnaire data and physiological data using a consumer wearable (Oura Ring) from 63,153 participants, of whom 704 self-reported possible COVID-19 disease. We selected 73 of these 704 participants with reliable confirmation of COVID-19 by PCR testing and high-quality physiological data for algorithm training to identify onset of COVID-19 using machine learning classification. The algorithm identified COVID-19 an average of 2.75 days before participants sought diagnostic testing, with a sensitivity of 82% and specificity of 63%. The receiver operating characteristic (ROC) area under the curve (AUC) was 0.819 (95% CI [0.809, 0.830]). Including continuous temperature yielded an AUC 4.9% higher than without this feature. For further validation, we obtained SARS-CoV-2 antibody testing in a subset of participants and identified 10 additional participants who self-reported COVID-19 disease with antibody confirmation. The algorithm had an overall ROC AUC of 0.819 (95% CI [0.809, 0.830]), with a sensitivity of 90% and specificity of 80% in these additional participants. Finally, we observed substantial variation in accuracy based on age and biological sex. Findings highlight the importance of including temperature assessment, using continuous physiological features for alignment, and including diverse populations in algorithm development to optimize accuracy in COVID-19 detection from wearables.
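
As an illustration of the kind of feature the findings emphasize, the sketch below derives a nightly temperature deviation from a per-participant rolling baseline and feeds it to a simple classifier; the column names, the 21-day baseline window, and the model choice are assumptions, not the study's algorithm.

    # Sketch: nightly temperature deviation from a per-participant rolling
    # baseline, of the kind the paper highlights. Column names, window length,
    # and classifier are illustrative assumptions.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    def temperature_deviation(df):
        """df: nightly means with columns ['participant', 'date', 'temp'],
        sorted by date within each participant."""
        baseline = (df.groupby("participant")["temp"]
                      .transform(lambda s: s.rolling(21, min_periods=7).median()))
        return df["temp"] - baseline

    # X: feature matrix (e.g., temp deviation, resting HR, HRV); y: 1 if the
    # night falls in a confirmed pre-diagnostic COVID-19 window, else 0.
    # clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    # print(roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))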

Artificial intelligence for detecting COVID-19 with the aid of human cough, breathing and speech signals: scoping review

Summary

Background: Official tests for COVID-19 are time consuming, costly, can produce high false-negative rates, use up vital chemicals, and may violate social distancing laws. Therefore, a fast and reliable additional solution using recordings of cough, breathing, and speech data for preliminary screening may help alleviate these issues. Objective: This scoping review explores how artificial intelligence (AI) technology aims to detect COVID-19 disease by using cough, breathing, and speech recordings, as reported in the literature. Here, we describe and summarize attributes of the identified AI techniques and the datasets used for their implementation. Methods: A scoping review was conducted following the guidelines of PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews). Electronic databases (Google Scholar, Science Direct, and IEEE Xplore) were searched between 1st April 2020 and 15th August 2021. Terms were selected based on the target intervention (i.e., AI), the target disease (i.e., COVID-19), and acoustic correlates of the disease (i.e., speech, breathing, and cough). A narrative approach was used to summarize the extracted data. Results: 24 studies and 8 apps out of the 86 retrieved studies met the inclusion criteria. Half of the publications and apps were from the USA. The most prominent AI architecture used was a convolutional neural network, followed by a recurrent neural network. AI models were mainly trained, tested, and run on websites and personal computers rather than on phone apps. More than half of the included studies reported area-under-the-curve performance greater than 0.90 on symptomatic and negative datasets, while one study achieved 100% sensitivity in predicting asymptomatic COVID-19 from cough-, breathing-, or speech-based acoustic features. Conclusions: The included studies show that AI has the potential to help detect COVID-19 using cough, breathing, and speech samples. However, with further development and appropriate clinical testing, the proposed methods could also prove effective in detecting other diseases associated with respiratory and neurophysiological changes in the human body.
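
For a concrete picture of the most common architecture the review found, here is a minimal CNN over mel-spectrograms of cough/breath/speech clips; the layer sizes and the 64x128 input shape are illustrative assumptions, not drawn from any reviewed study.

    # Minimal sketch of the review's most common pattern: a small CNN over
    # mel-spectrograms. Layer sizes and input shape are illustrative assumptions.
    import torch.nn as nn

    class CoughCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(
                nn.Flatten(), nn.Linear(32 * 16 * 32, 64), nn.ReLU(),
                nn.Linear(64, 2),   # COVID-19 positive vs. negative
            )

        def forward(self, spec):    # spec: (B, 1, 64, 128) mel-spectrogram
            return self.head(self.features(spec))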