Thomas Quatieri

Photo of Thomas Quatieri with a poster on autism spectrum disorder in the background.
For the future, I envision a collaborative translational center that bridges speech and language with other sensory indicators.

When did you join the Laboratory?

I joined in 1980 after earning my doctorate at MIT under Professor Alan Oppenheim, who invited me to help him launch a new multidimensional signal processing team at the Laboratory. The plan was to make this a two-year gig and then move on to a university faculty position, but my wonderful Lincoln Laboratory colleagues and the fusion of applied and academic research drew me in. And well, here I am in 2022.

What project have you enjoyed working on the most?

For about the last decade, I have worked in the Human Health and Performance Systems Group, applying human speech information and other sensory indicators to biomedical problems. It’s difficult to pinpoint any one project as my favorite because so many have been quite exciting, but three recent and personally motivating ones stand out: our COVID-19, autism spectrum disorder (ASD), and hearing enhancement efforts.

In our COVID-19 work, our goal is to use vocal cues to detect not only the presence of the virus but, more specifically, its location in the body, such as the upper or lower respiratory tract. For ASD, we are using speech and perception to help nonverbal individuals “find their own voice,” while our hearing effort uses a brain-computer interface to enhance the talker a listener is attending to. All three pursuits are admittedly high-risk and difficult to achieve with so many confounders, but these are the kinds of challenging problems I thrive on.

How did you get interested in your field of study?

One summer as an undergraduate, I studied the echolocation signals of bats and dolphins for the purpose of designing an echolocating cane for the blind. I simulated the generation and reception of the bat signal on an analog computer and learned for the first time the miracle of the production-perception loop: the bat chirp is generated and then received by a matched filter in its auditory system, allowing the bat to sense the position and velocity of tiny objects. This experience, and my early volunteer work with children with sensory disabilities, planted the seeds for where I am today.
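The production-perception loop described above can be illustrated with a minimal matched-filter sketch. This is not the original analog-computer simulation; it is a hypothetical digital analogue with assumed parameter values (sample rate, chirp duration, sweep frequencies, echo delay): a chirp is "emitted," a delayed echo returns in noise, and correlating the received signal with the time-reversed chirp recovers the echo's time of arrival, which is how range sensing works.

```python
import numpy as np

# Assumed parameters (illustrative only, not from the interview)
fs = 100_000                       # sample rate, Hz
t = np.arange(0, 0.002, 1 / fs)    # 2 ms chirp
f0, f1 = 40_000, 20_000            # downward frequency sweep, Hz

# Linear chirp: instantaneous frequency sweeps from f0 to f1
phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * t[-1]))
chirp = np.sin(phase)

# Simulated reception: an attenuated echo delayed by 1 ms, buried in noise
delay = int(0.001 * fs)
rx = np.zeros(4 * len(chirp))
rx[delay:delay + len(chirp)] += 0.5 * chirp
rx += 0.1 * np.random.default_rng(0).standard_normal(len(rx))

# Matched filter = convolution with the time-reversed chirp;
# the output peak marks the echo delay (and hence target range)
mf = np.convolve(rx, chirp[::-1], mode="valid")
est_delay = int(np.argmax(mf))
print(est_delay, delay)  # estimated vs. true delay, in samples
```

The matched filter maximizes output signal-to-noise ratio for a known waveform in white noise, which is why a chirp receiver built this way can pull a faint echo out of a noisy return.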