What does your job entail and what are you currently working on?
My job entails proposing interesting artificial intelligence (AI) projects, both internally and externally, to improve relevant mission areas; developing AI algorithms; managing the projects; and transitioning the developed technologies to sponsors. My research centers on multimodal representation learning and cross-modal synthesis. I am interested in developing AI that can interpret and translate multimodal information — similar to how humans naturally process and relate inputs from different sensory modalities (e.g., vision, hearing). My current research investigates learning algorithms for multiple sensing modalities, in particular multiple remote sensing sensors and various medical images and clinical reports.
How did you get interested in your field of study?
The first time I applied machine learning theory to a computer vision application was in Pattern Recognition Theory, a class taught by Professor Marios Savvides at Carnegie Mellon University. It was fascinating to learn how linear algebra could be applied to solving real-world problems such as face recognition.
What are some of your future goals?
My latest goal is strengthening collaboration between the MIT campus and the Laboratory. At MIT and Lincoln Laboratory, we have experts in various fields who mostly work independently. I aim to bring these experts together to create synergy between the two MIT communities.
What is something outside of your technical work that you are passionate about?
I’m becoming more interested in environmental change. Lately, I have been looking at remote sensing images of the Amazon rainforest. Despite international efforts to reduce deforestation, the world loses an area of forest the size of 40 football fields every minute!