Publications

A neural network estimation of ankle torques from electromyography and accelerometry

Published in:
IEEE Trans. Neural Syst. Rehabil. Eng., Vol. 29, 2021, pp. 1624-1633.

Summary

Estimations of human joint torques can provide clinically valuable information to inform patient care, plan therapy, and assess the design of wearable robotic devices. Predicting joint torques into the future can also be useful for anticipatory robot control design. In this work, we present a method of mapping joint torque estimates and sequences of torque predictions from motion capture and ground reaction forces to wearable sensor data using several modern types of neural networks. We use dense feedforward, convolutional, neural ordinary differential equation, and long short-term memory neural networks to learn the mapping for ankle plantarflexion and dorsiflexion torque during standing, walking, running, and sprinting, and consider both single-point torque estimation and the prediction of a sequence of future torques. Our results show that long short-term memory neural networks, which consider incoming data sequentially, outperform dense feedforward, neural ordinary differential equation, and convolutional neural networks. Predictions of future ankle torques up to 0.4 s ahead also showed strong positive correlations with the actual torques. The proposed method relies on learning from a motion capture dataset, but once the model is built, the method uses wearable sensors that enable torque estimation without the motion capture data.
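
The sequence model described above can be pictured with a minimal sketch. The PyTorch snippet below maps a window of wearable-sensor channels (surface EMG plus accelerometer axes) to a short sequence of future ankle torques; the channel count, window length, and prediction horizon are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (not the authors' implementation): an LSTM that maps a window of
# wearable-sensor channels to a sequence of future ankle torque values.
# Channel counts, window length, and horizon are illustrative assumptions.
import torch
import torch.nn as nn

class TorqueLSTM(nn.Module):
    def __init__(self, n_channels=10, hidden=64, horizon=20):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, horizon)   # predict `horizon` future torque samples

    def forward(self, x):                        # x: (batch, time, n_channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])          # torque sequence from the last hidden state

model = TorqueLSTM()
window = torch.randn(8, 100, 10)                 # 8 windows of 100 samples x 10 channels
future_torque = model(window)                    # (8, 20) predicted torque trajectory
loss = nn.MSELoss()(future_torque, torch.zeros(8, 20))  # trained against inverse-dynamics torques
```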

Development of a field artificial intelligence triage tool: Confidence in the prediction of shock, transfusion, and definitive surgical therapy in patients with truncal gunshot wounds

Summary

BACKGROUND: In-field triage tools for trauma patients are limited by availability of information, linear risk classification, and a lack of confidence reporting. We therefore set out to develop and test a machine learning algorithm that can overcome these limitations by accurately and confidently making predictions to support in-field triage in the first hours after traumatic injury. METHODS: Using an American College of Surgeons Trauma Quality Improvement Program-derived database of truncal and junctional gunshot wound (GSW) patients (aged 1~0 years), we trained an information-aware Dirichlet deep neural network (field artificial intelligence triage). Using supervised training, field artificial intelligence triage was trained to predict shock and the need for major hemorrhage control procedures or early massive transfusion (MT) using GSW anatomical locations, vital signs, and patient information available in the field. In parallel, a confidence model was developed to predict the true-class probability (scale of 0-1), indicating the likelihood that the prediction made was correct, based on the values and interconnectivity of input variables.
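
The Dirichlet-based confidence idea can be illustrated with a generic evidential classification head: the network outputs non-negative evidence per class, which parameterizes a Dirichlet distribution whose total strength acts as a confidence proxy. The sketch below is not the published field artificial intelligence triage model; the feature count and layer sizes are assumptions.

```python
# Generic sketch of a Dirichlet (evidential) classification head, in the spirit of
# the confidence-aware network described above; NOT the published model, and the
# input features/dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialClassifier(nn.Module):
    def __init__(self, n_features=12, n_classes=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_classes))

    def forward(self, x):
        evidence = F.softplus(self.net(x))        # non-negative evidence per class
        alpha = evidence + 1.0                    # Dirichlet concentration parameters
        strength = alpha.sum(dim=-1, keepdim=True)
        prob = alpha / strength                   # expected class probabilities
        uncertainty = alpha.shape[-1] / strength  # high when total evidence is low
        return prob, uncertainty

x = torch.randn(4, 12)                            # e.g., vitals plus GSW-location indicators
prob, unc = EvidentialClassifier()(x)             # class probabilities with a per-case confidence proxy
```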

Health-informed policy gradients for multi-agent reinforcement learning

Summary

This paper proposes a definition of system health in the context of multiple agents optimizing a joint reward function. We use this definition as a credit assignment term in a policy gradient algorithm to distinguish the contributions of individual agents to the global reward. The health-informed credit assignment is then extended to a multi-agent variant of the proximal policy optimization algorithm and demonstrated on simple particle environments that have elements of system health, risk-taking, semi-expendable agents, and partial observability. We show significant improvement in learning performance compared to policy gradient methods that do not perform multi-agent credit assignment.
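
One plausible reading of the health-informed credit assignment is to scale each agent's share of the global advantage by a health-derived weight before the policy-gradient update. The sketch below illustrates that idea only; it is not the paper's exact formulation, and the weighting rule is a hypothetical choice.

```python
# Hedged sketch of a health-informed credit-assignment term in a policy-gradient
# update: each agent's advantage is scaled by a health-derived weight so agents
# receive different credit for the shared reward. One plausible reading of the
# idea above, not the paper's exact algorithm.
import torch

def health_informed_pg_loss(log_probs, global_advantage, agent_health):
    """log_probs, agent_health: (batch, n_agents); global_advantage: (batch,)."""
    credit = agent_health / agent_health.sum(dim=-1, keepdim=True)  # hypothetical per-agent weight
    per_agent_adv = global_advantage.unsqueeze(-1) * credit
    return -(log_probs * per_agent_adv.detach()).mean()

loss = health_informed_pg_loss(torch.randn(16, 3), torch.randn(16), torch.rand(16, 3) + 0.1)
```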

Beyond expertise and roles: a framework to characterize the stakeholders of interpretable machine learning and their needs

Published in:
Proc. Conf. on Human Factors in Computing Systems, 8-13 May 2021, article no. 74.

Summary

To ensure accountability and mitigate harm, it is critical that diverse stakeholders can interrogate black-box automated systems and find information that is understandable, relevant, and useful to them. In this paper, we eschew prior expertise- and role-based categorizations of interpretability stakeholders in favor of a more granular framework that decouples stakeholders' knowledge from their interpretability needs. We characterize stakeholders by their formal, instrumental, and personal knowledge and how it manifests in the contexts of machine learning, the data domain, and the general milieu. We additionally distill a hierarchical typology of stakeholder needs that distinguishes higher-level domain goals from lower-level interpretability tasks. In assessing the descriptive, evaluative, and generative powers of our framework, we find our more nuanced treatment of stakeholders reveals gaps and opportunities in the interpretability literature, adds precision to the design and comparison of user studies, and facilitates a more reflexive approach to conducting this research.
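
The framework's dimensions can be summarized as a small data structure: three knowledge types crossed with three contexts, plus a needs hierarchy that separates domain goals from interpretability tasks. The field and enum names below paraphrase the abstract and are not the paper's exact schema.

```python
# Illustrative data-structure sketch of the framework's dimensions as described
# above; names paraphrase the abstract and are not the paper's exact vocabulary.
from dataclasses import dataclass, field
from enum import Enum

class Knowledge(Enum):
    FORMAL = "formal"              # e.g., coursework or credentials
    INSTRUMENTAL = "instrumental"  # know-how gained by doing
    PERSONAL = "personal"          # lived experience and intuition

class Context(Enum):
    MACHINE_LEARNING = "machine learning"
    DATA_DOMAIN = "data domain"
    MILIEU = "general milieu"

@dataclass
class Stakeholder:
    name: str
    knowledge: dict[tuple[Knowledge, Context], str] = field(default_factory=dict)
    goals: list[str] = field(default_factory=list)   # higher-level domain goals
    tasks: list[str] = field(default_factory=list)   # lower-level interpretability tasks

clinician = Stakeholder(
    name="reviewing clinician",
    knowledge={(Knowledge.INSTRUMENTAL, Context.DATA_DOMAIN): "daily chart review"},
    goals=["decide whether to trust the model's risk score"],
    tasks=["inspect which features drove a specific prediction"],
)
```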

Ultrasound diagnosis of COVID-19: robustness and explainability

Published in:
arXiv:2012.01145v1 [eess.IV]

Summary

Diagnosis of COVID-19 at the point of care is vital to the containment of the global pandemic. Point-of-care ultrasound (POCUS) provides rapid imagery of lungs to detect COVID-19 in patients in a repeatable and cost-effective way. Previous work has used public datasets of POCUS videos to train an AI model for diagnosis that obtains high sensitivity. Because of the high-stakes application, we propose the use of robust and explainable techniques. We demonstrate experimentally that robust models have more stable predictions and offer improved interpretability. A framework of contrastive explanations based on adversarial perturbations is used to explain model predictions in a way that aligns with human visual perception.
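
The contrastive-explanation idea can be sketched generically: find a small adversarial perturbation that flips the classifier's prediction and present that perturbation as the "what would have to change" evidence. The snippet below is a standard PGD-style illustration, not the paper's exact framework; `model` stands in for any differentiable image classifier.

```python
# Sketch of the general idea above: a small adversarial (PGD-style) perturbation
# that pushes the model away from its current prediction serves as contrastive
# evidence. Generic illustration only; `model` is an assumed classifier.
import torch
import torch.nn.functional as F

def contrastive_perturbation(model, image, label, eps=0.03, steps=10, lr=0.01):
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(image + delta), label)   # push away from the current class
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()
            delta.clamp_(-eps, eps)                           # keep the change small
            delta.grad.zero_()
    return delta.detach()                                     # visualize as the contrastive explanation
```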

Ankle torque estimation during locomotion from surface electromyography and accelerometry

Published in:
2020 8th IEEE Intl. Conf. on Biomedical Robotics and Biomechatronics, BioRob, 29 November - 1 December 2020.

Summary

Estimations of human joint torques can provide quantitative, clinically valuable information to inform patient care, plan therapy, and assess the design of wearable robotic devices. Standard methods for estimating joint torques are limited to laboratory or clinical settings since they require expensive equipment to measure joint kinematics and ground reaction forces. Wearable sensor data combined with neural networks may offer a less expensive and less obtrusive estimation method. We present a method of mapping joint torque estimates obtained from motion capture and ground reaction forces to wearable sensor data. We use several different neural networks to learn the torque mapping for the ankle joints during standing, walking, running, and sprinting. Our results show that neural networks that consider time (recurrent and long short-term memory networks) outperform feedforward network architectures, producing results in the range of 0.005-0.008 N m/kg mean squared error (MSE) when compared to the inverse dynamics model on which they were trained. As a point of reference, the typical measurement errors from inverse dynamics models are in the range of 0.0004-0.0064 N m/kg MSE. Errors tended to increase with locomotion speed, with the highest errors during sprinting and the lowest during standing or walking. Future work may investigate model generalizability across sensor placements, subjects, locomotion variants, and usage duration. The proposed method relies on learning from a motion capture dataset, but once the model is built, the method uses wearable sensors that enable torque estimation without the motion capture data. These methods also have potential uses for the design and testing of wearable robotic systems outside of a laboratory environment.
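
As a worked example of the reported metric, the snippet below computes the mass-normalized torque mean squared error against an inverse-dynamics reference; the numbers are made up purely to show the computation and its scale relative to the ranges quoted above.

```python
# Worked example of the reported metric: mean squared error of mass-normalized
# torque between a model's estimate and the inverse-dynamics reference.
# Values below are invented solely to show the computation.
import numpy as np

body_mass_kg = 70.0
tau_inverse_dynamics = np.array([90.0, 60.0, 20.0])  # N*m, reference torques
tau_estimated = np.array([85.0, 63.0, 18.0])         # N*m, model estimates

err = (tau_estimated - tau_inverse_dynamics) / body_mass_kg
mse = np.mean(err ** 2)                               # ~0.0026, same order as the ranges reported above
print(mse)
```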

A multi-task LSTM framework for improved early sepsis prediction

Published in:
Proc. Artificial Intelligence in Medicine, AIME, 2020, pp. 49-58.

Summary

Early detection of sepsis, a high-mortality clinical condition, is important for improving patient outcomes. The performance of conventional deep learning methods degrades quickly as predictions are made several hours prior to the clinical definition. We adopt recurrent neural networks (RNNs) to improve early prediction of the onset of sepsis using time series of physiological measurements. Furthermore, physiological data is often missing and imputation is necessary. Absence of data might arise due to decisions made by clinical professionals, which itself carries information. Incorporating the missing-data patterns into the learning process can further guide how much trust to place on imputed values. A new multi-task LSTM model is proposed that takes informative missingness into account during training and effectively attributes trust to temporal measurements. Experimental results demonstrate our method outperforms conventional CNN and LSTM models on the PhysioNet/CinC 2019 early sepsis prediction challenge in terms of area under the receiver operating characteristic and precision-recall curves, and further improves upon calibration of prediction scores.
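
The informative-missingness idea can be sketched as an LSTM that sees each measurement alongside a binary missingness indicator and optimizes two heads, one for sepsis risk and one auxiliary head on the missingness pattern. The dimensions, heads, and imputation scheme below are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of the idea above: an LSTM that consumes values plus missingness
# indicators, with a sepsis-risk head and an auxiliary head on the missingness
# pattern. Illustrative assumptions only, not the published architecture.
import torch
import torch.nn as nn

class MultiTaskSepsisLSTM(nn.Module):
    def __init__(self, n_vitals=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(2 * n_vitals, hidden, batch_first=True)  # values + missingness mask
        self.sepsis_head = nn.Linear(hidden, 1)       # main task: hourly sepsis onset risk
        self.mask_head = nn.Linear(hidden, n_vitals)  # auxiliary task: which values were measured

    def forward(self, values, mask):
        x = torch.cat([values * mask, mask], dim=-1)  # zero-impute and expose the missingness pattern
        h, _ = self.lstm(x)
        return torch.sigmoid(self.sepsis_head(h)), torch.sigmoid(self.mask_head(h))

values, mask = torch.randn(4, 48, 8), torch.randint(0, 2, (4, 48, 8)).float()
risk_per_hour, mask_recon = MultiTaskSepsisLSTM()(values, mask)
```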

GraphChallenge.org triangle counting performance [e-print]

Summary

The rise of graph analytic systems has created a need for new ways to measure and compare the capabilities of graph processing systems. The MIT/Amazon/IEEE Graph Challenge has been developed to provide a well-defined community venue for stimulating research and highlighting innovations in graph analysis software, hardware, algorithms, and systems. GraphChallenge.org provides a wide range of preparsed graph data sets, graph generators, mathematically defined graph algorithms, example serial implementations in a variety of languages, and specific metrics for measuring performance. The triangle counting component of GraphChallenge.org tests the performance of graph processing systems to count all the triangles in a graph and exercises key graph operations found in many graph algorithms. In 2017, 2018, and 2019, many triangle counting submissions were received from a wide range of authors and organizations. This paper presents a performance analysis of the best performers of these submissions. These submissions show that their state-of-the-art triangle counting execution time, Ttri, is a strong function of the number of edges in the graph, Ne, which improved significantly from 2017 (Ttri ≈ (Ne/10^8)^(4/3)) to 2018 (Ttri ≈ Ne/10^9) and remained comparable from 2018 to 2019. Graph Challenge provides a clear picture of current graph analysis systems and underscores the need for new innovations to achieve high performance on very large graphs.
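
The kernel the challenge exercises has a compact linear-algebraic form: for a simple undirected graph with symmetric 0/1 adjacency matrix A, the triangle count equals the sum of the elementwise product of A with A·A, divided by six. The snippet below is a small reference-style illustration of that formulation, not a performant challenge submission.

```python
# Reference-style triangle count for an undirected, simple graph using the
# linear-algebraic formulation: triangles = sum(A .* (A @ A)) / 6, since each
# triangle is counted once per ordered pair of its three vertices.
import numpy as np
import scipy.sparse as sp

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]               # toy graph: one triangle plus a pendant edge
rows, cols = zip(*edges)
A = sp.csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(4, 4))
A = ((A + A.T) > 0).astype(np.int64)                   # symmetric 0/1 adjacency

triangles = int((A.multiply(A @ A)).sum() // 6)        # each triangle is counted 6 times
print(triangles)                                       # 1
```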

A framework to improve evaluation of novel decision support tools

Published in:
11th Intl. Conf. on Applied Human Factors and Ergonomics, AHFE, 16-20 July 2020.

Summary

Organizations that introduce new technology into an operational environment seek to improve some aspect of task conduct through technology use. Many organizations rely on user acceptance measures to gauge technology viability, though misinterpretation of user feedback can lead organizations to accept non-beneficial technology or reject potentially beneficial technology. Additionally, teams that misinterpret user feedback can spend time and effort on tasks that do not improve either user acceptance or operational task conduct. This paper presents a framework developed through efforts to transition technology to the U.S. Transportation Command (USTRANSCOM). The framework formalizes aspects of user experience with technology to guide organization and development team research and assessments. The USTRANSCOM transition effort is examined as a case study through the lens of the framework to illustrate how user-focused methodologies can be employed by development teams to systematically improve development of new technology, user acceptance of new technology, and assessments of technology viability.

This looks like that: deep learning for interpretable image recognition

Published in:
Neural Info. Process., NIPS, 8-14 December 2019.

Summary

When we are faced with challenging image classification tasks, we often explain our reasoning by dissecting the image and pointing out prototypical aspects of one class or another. The mounting evidence for each of the classes helps us make our final decision. In this work, we introduce a deep network architecture that reasons in a similar way: the network dissects the image by finding prototypical parts and combines evidence from the prototypes to make a final classification. The algorithm thus reasons in a way that is qualitatively similar to the way ornithologists, physicians, geologists, architects, and others would explain to people how to solve challenging image classification tasks. The network uses only image-level labels for training, meaning that there are no labels for parts of images. We demonstrate the method on the CIFAR-10 dataset and 10 classes from the CUB-200-2011 dataset.
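
The prototype reasoning described above centers on a prototype layer: learned prototype vectors are compared by squared distance to every spatial patch of a convolutional feature map, the closest match per prototype becomes a similarity score, and a linear layer turns the scores into class logits. The sketch below simplifies the published architecture (backbone, prototype counts per class, and training losses are omitted) and uses illustrative sizes.

```python
# Minimal sketch of the prototype-layer idea described above; simplified and with
# illustrative sizes, not the authors' full architecture or training procedure.
import torch
import torch.nn as nn

class PrototypeLayer(nn.Module):
    def __init__(self, n_prototypes=20, channels=128, n_classes=10):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, channels))
        self.classifier = nn.Linear(n_prototypes, n_classes, bias=False)

    def forward(self, feats):                              # feats: (batch, channels, H, W)
        b, c, h, w = feats.shape
        patches = feats.flatten(2).transpose(1, 2)         # (batch, H*W, channels)
        diffs = patches.unsqueeze(2) - self.prototypes.view(1, 1, -1, c)
        sq_dist = (diffs ** 2).sum(dim=-1)                 # (batch, H*W, n_prototypes)
        min_dist = sq_dist.min(dim=1).values               # best-matching patch per prototype
        similarity = torch.log((min_dist + 1) / (min_dist + 1e-4))  # high when some patch is close
        return self.classifier(similarity)                 # class logits from prototype similarities

logits = PrototypeLayer()(torch.randn(2, 128, 7, 7))       # (2, 10)
```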