K. Hwang, Z. Xu, and M. Arakawa
University of Southern California
Parallel Computing Research Laboratory
Los Angeles, CA 90089-2562
Abstract
We have converted the sequential C code of the STAP processor benchmark suite from MIT Lincoln Laboratory into parallel C code running on the 400-node IBM SP2 system at the Maui High-Performance Computing Center. This paper presents the program partitioning, STAP data distribution, internode message passing, performance measurements, and scalability analysis of the STAP programs on SP2 configurations ranging from 1 to 256 computing nodes. Only coarse-grain SPMD parallelism could be exploited efficiently on the SP2, owing to the high communication-to-computation ratio encountered.
The STAP programs have a high degree of parallelism (DOP). Both the APT and HO-PD programs are parallelized along the range-gate (RNG) and PRI dimensions (256 for APT and 1024 for HO-PD in both dimensions). The FFT has a maximal DOP of 8192 in APT and 49,156 in HO-PD, but we exploited only 256. The General benchmark has a maximal DOP of 8192, which we likewise limited to 256. On a 256-node SP2 system, the parallel APT, HO-PD, and General benchmarks execute in 0.16 s, 0.56 s, and 0.61 s, respectively. These benchmark runs sustain 2.5 to 18 Gflops, including all message-passing overheads. With dedicated use, the SP2 achieved 34% efficiency in executing the STAP programs.
The commercial SP2 was not specially tailored for real-time STAP applications. If the SP2 were modified for dedicated STAP use, a real-time OS would have to be developed. Using POWER2 processors with a fast switching network, a 128-node military version of the SP2 would be sufficient to yield a 0.5 s execution time and a sustained rate of 25 Gflops. The parallel STAP suite scales well with increasing SP2 machine size and radar parameters. The suite can also be ported to the Intel Paragon and Cray T3D with some modifications.
This research was supported by a research contract from MIT Lincoln Laboratory as part of the ARPA Mountaintop program.
Direct comments and questions to: email@example.com
© MIT Lincoln Laboratory. All rights reserved.