Dr. Miriam Cha is a technical staff member in the Artificial Intelligence Technology Group. Her research centers on multimodal representation learning and cross-modal synthesis. She is interested in developing artificial intelligence that can interpret and translate multimodal information, similar to how humans naturally process and relate inputs from different sensory modalities (e.g., vision, hearing). She is currently investigating learning algorithms for multiple remote sensing modalities, in particular synthetic aperture radar.
Cha completed her PhD in computer science at Harvard University. She received BS and MS degrees in electrical and computer engineering from Carnegie Mellon University. She was a recipient of a National Science Foundation Graduate Research Fellowship, a National Defense Science and Engineering Graduate Fellowship, and a Lincoln Scholars Fellowship.