Summary
Estimates of human joint torques can provide quantitative, clinically valuable information to inform patient care, plan therapy, and assess the design of wearable robotic devices. Standard methods for estimating joint torques are limited to laboratory or clinical settings because they require expensive equipment to measure joint kinematics and ground reaction forces. Wearable sensor data combined with neural networks may offer a less expensive and less obtrusive estimation method. We present a method for estimating joint torques from wearable sensor data by learning a mapping to torque estimates obtained from motion capture and ground reaction forces. We use several neural network architectures to learn this torque mapping for the ankle joints during standing, walking, running, and sprinting. Our results show that networks that model time (recurrent and long short-term memory networks) outperform feedforward architectures, producing mean squared errors (MSE) in the range of 0.005-0.008 N m/kg relative to the inverse dynamics estimates on which they were trained. As a point of reference, typical measurement errors from inverse dynamics models are in the range of 0.0004-0.0064 N m/kg MSE. Errors tended to increase with locomotion speed, with the highest errors during sprinting and the lowest during standing or walking. Future work may investigate model generalizability across sensor placements, subjects, locomotion variants, and usage duration. The proposed method relies on learning from a motion capture dataset, but once the model is trained, torque estimation requires only the wearable sensors, with no motion capture data. These methods also have potential uses for designing and testing wearable robotic systems outside of a laboratory environment.
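To make the learned mapping concrete, the sketch below shows one way a sequence model of the kind described above could be set up: an LSTM that maps windows of wearable-sensor samples to per-timestep mass-normalized ankle torque, trained against inverse dynamics labels with an MSE loss. This is not the authors' implementation; the framework (PyTorch), channel count, window length, hidden size, and learning rate are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): an LSTM regressor
# mapping wearable-sensor time series to mass-normalized ankle torque.
import torch
import torch.nn as nn

class TorqueLSTM(nn.Module):
    def __init__(self, n_channels=6, hidden=64):
        super().__init__()
        # Recurrent layer models the temporal context that feedforward
        # architectures lack; batch_first expects (batch, time, channels).
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # scalar torque in N m/kg

    def forward(self, x):  # x: (batch, time, n_channels)
        out, _ = self.lstm(x)
        return self.head(out)  # per-timestep torque: (batch, time, 1)

model = TorqueLSTM()
loss_fn = nn.MSELoss()  # matches the N m/kg MSE metric reported above
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One gradient step on synthetic data, purely for shape checking.
# Real targets would be inverse-dynamics torques from motion capture + GRF.
x = torch.randn(8, 100, 6)  # 8 windows, 100 time steps, 6 sensor channels
y = torch.randn(8, 100, 1)  # torque labels aligned with each time step
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```

At deployment time, only the forward pass over streaming sensor windows is needed, which is what lets the trained model estimate torque without motion capture or force plates.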