Estimates of human joint torques can provide clinically valuable information to inform patient care, plan therapy, and assess the design of wearable robotic devices. Predicting joint torques into the future can also be useful for anticipatory robot control design. In this work, we present a method of mapping joint torque estimates and sequences of torque predictions, derived from motion capture and ground reaction forces, to wearable sensor data using several modern types of neural networks. We use dense feedforward, convolutional, neural ordinary differential equation, and long short-term memory neural networks to learn the mapping for ankle plantarflexion and dorsiflexion torque during standing, walking, running, and sprinting, and consider both single-point torque estimation and the prediction of a sequence of future torques. Our results show that long short-term memory neural networks, which consume incoming data sequentially, outperform dense feedforward, convolutional, and neural ordinary differential equation networks. Predictions of future ankle torques up to 0.4 s ahead also showed strong positive correlations with the actual torques. The proposed method relies on learning from a motion capture dataset, but once the model is built, it uses wearable sensors alone, enabling torque estimation without motion capture data.
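To make the sequence-to-torque mapping concrete, the sketch below implements a minimal single-layer LSTM cell with a linear readout that maps a window of wearable-sensor samples to a single torque estimate. All details here are illustrative assumptions, not the paper's architecture: the weights are randomly initialized (untrained), the channel count (8) and window length (50) are hypothetical, and the readout produces one scalar; predicting a sequence of future torques would simply widen the readout to one output per future time step.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Minimal LSTM with a scalar linear readout (illustrative sketch).

    Maps a (T, n_in) window of sensor samples to one torque estimate.
    Weights are random placeholders; a real model would be trained on
    motion-capture-derived torque labels.
    """

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # Stacked gate weights: input, forget, candidate, output gates,
        # applied to the concatenated [input, previous hidden] vector.
        self.W = rng.normal(0.0, 0.1, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.w_out = rng.normal(0.0, 0.1, n_hidden)  # readout -> scalar torque
        self.n_hidden = n_hidden

    def forward(self, x_seq):
        h = np.zeros(self.n_hidden)  # hidden state
        c = np.zeros(self.n_hidden)  # cell state
        for x_t in x_seq:  # consume sensor samples sequentially
            z = self.W @ np.concatenate([x_t, h]) + self.b
            i, f, g, o = np.split(z, 4)
            i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
            c = f * c + i * np.tanh(g)   # gated cell-state update
            h = o * np.tanh(c)           # gated hidden-state output
        return float(self.w_out @ h)     # single-point torque estimate
```

The sequential update is what distinguishes this family from the feedforward and convolutional baselines in the abstract: each new sensor sample refines a running state rather than being processed as an independent window.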