Summary
We describe a transduction-based, neurodynamic approach to estimating the amplitude-modulated (AM) and frequency-modulated (FM) components of a signal. We show that the transduction approach can be realized as a bank of constant-Q bandpass filters followed by envelope detectors and shunting neural networks, and that the resulting dynamical system is capable of robust AM-FM estimation. Our model is consistent with recent psychophysical experiments indicating that the AM and FM components of acoustic signals may be transformed into a common neural code in the brain stem via FM-to-AM transduction. The shunting network for AM-FM decomposition is followed by a contrast-enhancement shunting network that provides a mechanism for robustly selecting auditory filter channels as the FM of an input stimulus sweeps across multiple filters. The AM-FM output of the shunting networks may provide a robust feature representation and is being considered for applications in signal recognition and multi-component decomposition problems.
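The core FM-to-AM transduction mechanism can be illustrated with a minimal sketch. This is not the paper's model (which uses constant-Q filter banks and shunting network dynamics); it is a simplified, hypothetical single-channel example in which a constant-amplitude FM tone passes through one bandpass filter, and the slope of the filter's magnitude response converts the frequency modulation into amplitude modulation that an envelope detector then recovers. All parameter values (sample rate, carrier, modulation depth, filter band edges) are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 16000.0                              # sample rate in Hz (assumed)
t = np.arange(0, 0.5, 1.0 / fs)

# FM stimulus with constant amplitude: 1000 Hz carrier, +/-200 Hz
# sinusoidal frequency deviation at a 5 Hz modulation rate.
fc, fdev, fm = 1000.0, 200.0, 5.0
phase = 2 * np.pi * fc * t + (fdev / fm) * np.sin(2 * np.pi * fm * t)
x = np.cos(phase)

# One bandpass channel of a hypothetical filter bank, centered off the
# carrier so the filter's skirt transduces the FM into AM.
low, high = 1100.0, 1400.0
b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
y = filtfilt(b, a, x)

# Envelope detection via the analytic signal (Hilbert transform magnitude).
env = np.abs(hilbert(y))

# The input amplitude was constant, yet the channel envelope now
# fluctuates at the 5 Hz FM rate: the FM has become AM.
mid = env[int(0.1 * fs):int(0.4 * fs)]    # skip filter edge transients
depth = (mid.max() - mid.min()) / mid.max()
print(f"transduced AM depth: {depth:.2f}")
```

Because the instantaneous frequency (800 to 1200 Hz) sweeps along the filter's passband edge, the output envelope is deeply modulated even though the stimulus itself carries no amplitude modulation, which is the transduction effect the abstract refers to.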