FOURTH ANNUAL
ASAP '96 WORKSHOP



Scalable Portable Parallel
Algorithms for STAP

Prashanth B. Bhat, Young W. Lim, and Viktor K. Prasanna
University of Southern California
Department of Electrical Engineering
Los Angeles, CA 90089-2562
tel: (213) 740-4483
fax: (213) 740-4418
email: prasanna@usc.edu

Abstract: This presentation summarizes our analytical and experimental results in developing efficient parallel solutions for STAP on general-purpose massively parallel systems. In this context, our research consists of designing general algorithmic techniques, such as partitioning, mapping, and communication scheduling, to enable efficient execution on general-purpose HPC platforms. To illustrate these techniques, we have designed coarse-grained parallel algorithms for several partially adaptive STAP approaches, including HOPD (Higher Order Post-Doppler) as well as other Element-Space and Beam-Space approaches. The algorithms have been implemented in C and MPI (the Message Passing Interface standard), ensuring portability across several state-of-the-art HPC platforms. The parallel algorithms have been designed using a realistic model of general-purpose distributed-memory systems; the model helps us analyze the performance benefits of algorithmic techniques. For instance, data remapping enables scalable performance in the case of HOPD processing. We have implemented these algorithms on the IBM SP-2 and Cray T3D, and an implementation on the Intel Paragon is in progress. The algorithms have been tested using the ARPA Mountaintop STAP database. The experimental results indicate that performance scales linearly with system and problem sizes. We have also developed efficient data redistribution schemes to distribute the data received from the antenna arrays to the compute nodes; the number of compute nodes is typically much larger than the number of antenna elements. We use our model to design a pipelined communication schedule for this operation, and experimental results indicate improved performance over a straightforward scheduling scheme.
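
To make the data remapping idea concrete, the sketch below shows, under assumed cube dimensions and buffer layout, how a radar data cube that is block-distributed over range gates for Doppler processing could be redistributed over Doppler bins before adaptive weight computation, using an MPI all-to-all exchange. The dimensions, the interleaved real/imaginary storage, and the use of MPI_Alltoall are illustrative assumptions and are not taken from the implementation described above.

/*
 * Minimal sketch of a data remapping step between STAP stages (assumptions
 * only, not the reported implementation).  The cube of
 * channels x Doppler bins x range gates complex samples is block-distributed
 * over range gates for Doppler processing, then redistributed over Doppler
 * bins for adaptive weight computation.
 */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int p, rank;
    const int channels = 14, dopplers = 128, ranges = 1024;  /* assumed sizes */

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Assume p divides both the range and Doppler dimensions. */
    int my_ranges   = ranges   / p;
    int my_dopplers = dopplers / p;

    /* Each of the p blocks in sendbuf holds the Doppler bins owned by one
       destination, for every locally held range gate and channel
       (2 doubles per complex sample, interleaved real/imaginary). */
    int block = 2 * channels * my_dopplers * my_ranges;
    double *sendbuf = malloc((size_t)p * block * sizeof *sendbuf);
    double *recvbuf = malloc((size_t)p * block * sizeof *recvbuf);

    /* ... Doppler processing on the local range gates, then pack sendbuf
       (using rank to locate the local slab) so that the samples destined for
       processor d occupy sendbuf[d*block .. (d+1)*block - 1] ... */

    /* All-to-all remapping: afterwards each processor holds all range gates
       for its own contiguous group of Doppler bins. */
    MPI_Alltoall(sendbuf, block, MPI_DOUBLE,
                 recvbuf, block, MPI_DOUBLE, MPI_COMM_WORLD);

    /* ... unpack recvbuf and perform per-Doppler-bin adaptive weight
       computation and beamforming ... */

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

Similarly, the following hypothetical routine sketches a pipelined schedule for distributing input data from a small number of source nodes (attached to the antenna front end) to a much larger number of compute nodes: each source posts non-blocking sends of successive chunks in a rotated destination order so that several transfers are in flight at once. The chunking, the rotation rule, and the routine name pipelined_scatter are assumptions for illustration only.

#include <mpi.h>
#include <stdlib.h>

/* Called on a source node: send `chunks` chunks of `chunk_len` doubles to
   each of the compute nodes dest[0..ndest-1].  The data for destination d is
   assumed to lie contiguously at data + d*chunks*chunk_len.  `shift` (for
   example, the source node's index) rotates the destination order so that
   different sources target different compute nodes in each round. */
static void pipelined_scatter(const double *data, const int *dest, int ndest,
                              int chunks, int chunk_len, int shift,
                              MPI_Comm comm)
{
    MPI_Request *req = malloc((size_t)ndest * chunks * sizeof *req);
    int r = 0;

    for (int c = 0; c < chunks; c++) {
        for (int d = 0; d < ndest; d++) {
            int dd = (c + d + shift) % ndest;   /* rotated destination */
            const double *chunk = data + ((size_t)dd * chunks + c) * chunk_len;
            /* Non-blocking sends let several chunks be in flight at once,
               overlapping transfers to different compute nodes. */
            MPI_Isend(chunk, chunk_len, MPI_DOUBLE, dest[dd],
                      c /* tag = chunk index */, comm, &req[r++]);
        }
    }
    MPI_Waitall(r, req, MPI_STATUSES_IGNORE);
    free(req);
}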

 



MIT Lincoln Laboratory. All rights reserved.