Achieving Low Range Sidelobes and Deep Nulls in Wideband ABF Systems

Daniel Rabideau and Peter Parker
MIT Lincoln Laboratory
244 Wood Street
Lexington, MA 02420
tel: (781) 981-2892
email: danr@ll.mit.edu

Presentation

 

Abstract Radars transmit wideband pulses to obtain high range resolution. To avoid extremely high peak transmit power, these pulses must be encoded. One of the most popular coding approaches is Linear Frequency Modulation (LFM). This is partly because LFM pulses can be decoded (i.e., compressed) via a technique known as Stretch processing, permitting great reductions in receiver cost and complexity.

At a recent ASAP workshop, Davis et al. formulated two adaptive beamforming (ABF) methods for Stretch-based array systems. Their two methods [referred to as the "Time Domain" and "Frequency Domain" Cancellers] were shown to offer equivalent rejection of jamming.

In this paper, we first show (through statistical analysis validated by simulation) that the Frequency Domain method offers superior performance from the point of view of target resolution. That is, the Frequency Domain Canceller results in inherently lower time sidelobes (i.e., range sidelobes). Methods for compensating the Time Domain Canceller are described, but such methods involve additional computation and/or reduced jammer cancellation. The problem is not unique to Stretch receivers: we also show how similar time-sidelobe issues arise in a novel ABF architecture designed for Stepped Frequency Waveforms.

The second part of the presentation considers other practical issues with Stretch ABF. For example, it seems apparent that proper channel equalization and flatness are required to achieve low time sidelobes and deep jammer nulls. However, adaptive equalization techniques compatible with Stretch receivers have yet to appear in the literature. Thus, we offer a novel approach to channel equalization that matches wideband channels via narrowband EQ filters (i.e., one that is compatible with the Stretch receiver).
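As a rough illustration of the Stretch (dechirp) operation central to this abstract, the sketch below mixes a delayed LFM echo with a reference chirp so that target delay maps to a beat frequency recoverable with a narrowband FFT. All parameters (sample rate, bandwidth, delay) are invented for illustration; this is not the authors' processing chain, and pulse edge effects are ignored.

```python
import numpy as np

fs = 1e6          # sample rate (Hz), illustrative
T = 1e-3          # pulse length (s)
B = 200e3         # swept bandwidth (Hz)
k = B / T         # chirp rate (Hz/s)
t = np.arange(int(T * fs)) / fs

tau = 100e-6      # round-trip delay of a point target (s)
rx = np.exp(1j * np.pi * k * (t - tau) ** 2)   # delayed LFM echo (edge effects ignored)
ref = np.exp(1j * np.pi * k * t ** 2)          # reference chirp

beat = rx * np.conj(ref)          # dechirp: a tone at beat frequency -k*tau (plus a fixed phase)
spec = np.abs(np.fft.fft(beat))
f = np.fft.fftfreq(len(beat), 1 / fs)
f_est = abs(f[np.argmax(spec)])   # estimated |beat frequency|, proportional to range delay
```

In a real Stretch receiver the beat signal occupies far less bandwidth than the transmitted pulse, which is the source of the receiver cost and complexity savings noted above.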


Capon Redux: New Formulas, Geometries, and Computational Efficiencies

Louis Scharf
Colorado State University
Ft. Collins, CO 80523-1373
tel: (970) 491-2979
email: scharf@engr.colostate.edu

L. Todd McWhorter
Mission Research
3665 JFK Parkway, Bldg 1, Suite 206
Ft. Collins, CO 80526
tel: (970) 282-4400, ext 25
email: mcwhorter@mrcmry.com

S. Kraut
Duke University
Dept of ECE 
Box 90291
Durham, NC 27708-0291
tel: (919) 660-5419
email: kraut@ee.duke.edu

Presentation

 

Abstract In this paper we derive a new formula for the Capon beamformer, illustrating that it measures the principal angle between two subspaces, one spanned by the colored steering vector and the other by the columns of the colored generalized sidelobe canceller. This finding forms the basis for low-order approximations that are computationally efficient adjustments to the Bartlett beamformer. All of these results extend to the case of data-deficient and multi-rank beamforming. Moreover, they produce a generalized Capon beamformer that accounts for wavefront coherence between successive array snapshots, when there is such coherence to be exploited. Based on the new formula for the Capon beamformer, we analyze various known factorizations for efficiently approximating the Capon beamformer with low computational complexity. While the basis for the generalized sidelobe canceller in these factorizations does not matter for a full-rank computation of the Capon beamformer, it does matter for reduced-rank approximations. Nonetheless, we discuss a particularly simple beamformer, using a fixed FFT basis, followed by pivoting rules based on Schur complements, for finding a good rank-r approximation to the Capon beamformer. These approximations to the Capon beamformer are robust in the sense that they are never worse than the Bartlett. They apply to arbitrary array geometries.
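For readers who want a baseline to experiment against, a minimal sketch of the Bartlett and Capon spectra for a uniform linear array follows. The paper's new subspace formula and rank-r approximations are not reproduced here; array size, source angle, and SNR are all illustrative.

```python
import numpy as np

n, d = 10, 0.5                      # elements, spacing in wavelengths (illustrative)
a = lambda th: np.exp(2j * np.pi * d * np.arange(n) * np.sin(th))

rng = np.random.default_rng(0)
th_src = np.deg2rad(20.0)           # one source at 20 degrees
snaps = (a(th_src)[:, None] * (rng.standard_normal(200) + 1j * rng.standard_normal(200))
         + 0.1 * (rng.standard_normal((n, 200)) + 1j * rng.standard_normal((n, 200))))
R = snaps @ snaps.conj().T / 200    # sample covariance
Ri = np.linalg.inv(R)

grid = np.deg2rad(np.linspace(-90, 90, 361))
bartlett = np.array([np.real(a(t).conj() @ R @ a(t)) for t in grid])       # a^H R a
capon = np.array([1.0 / np.real(a(t).conj() @ Ri @ a(t)) for t in grid])   # 1 / (a^H R^-1 a)

peak_b = np.rad2deg(grid[np.argmax(bartlett)])
peak_c = np.rad2deg(grid[np.argmax(capon)])
```

Both spectra peak at the source, but the Capon mainlobe is markedly narrower, which is the gap the paper's efficient rank-r adjustments to the Bartlett beamformer aim to close.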


The Performance of the Parametric Vector AR Adaptive Beamformer

Peter Parker and Michael Zatman
MIT Lincoln Laboratory
244 Wood Street
Lexington, MA 02420
tel: (781) 981-3233
email: pparker@ll.mit.edu

Presentation


Abstract The parametric vector autoregressive (PVAR) technique has been proposed for adaptive beamforming and space-time adaptive processing (STAP). This algorithm utilizes the structure inherent in many array-processing applications to improve convergence and reduce computation. For adaptive beamforming it exploits the structure of uniform linear arrays; for STAP it exploits the structure of uniform sampling in time, of uniform linear arrays, or both, to improve convergence. This paper concentrates on using the PVAR technique for adaptive beamforming.

In the paper it is shown that the PVAR adaptive beamformer is equivalent to estimating the direction of arrival of the jamming signals with the spatially smoothed root-MUSIC algorithm, and then placing deterministic nulls in the estimated directions. Using this information, the asymptotic expected signal to interference plus noise ratio performance of the PVAR adaptive beamformer is derived. The theoretical results are confirmed using simulated data. 

The deterministic nulling that is employed by the PVAR algorithm is based on the assumption that the array manifold is perfectly known and in the ideal case the algorithm has rapid convergence. However, this algorithm (along with any deterministic nulling algorithm) is very sensitive to errors in the array manifold. This paper also quantifies the performance loss due to inadequate knowledge of the array manifold. 

The performance of the PVAR algorithm is verified and compared to other STAP techniques using simulated data that is meant to emulate real radar data. This simulated data includes array calibration errors and varies the power across range bins in a manner similar to real radar data.
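The estimate-then-null structure that the paper analyzes can be sketched as follows. Here a dominant-eigenvector scan stands in for spatially smoothed root-MUSIC, and all scenario parameters are invented; perturbing the manifold `a` before forming `w` would mimic the paper's array-manifold-error study.

```python
import numpy as np

n, d = 8, 0.5
a = lambda th: np.exp(2j * np.pi * d * np.arange(n) * np.sin(th))

rng = np.random.default_rng(1)
th_j = np.deg2rad(30.0)                         # true jammer direction (illustrative)
X = (10 * a(th_j)[:, None] * (rng.standard_normal(500) + 1j * rng.standard_normal(500))
     + rng.standard_normal((n, 500)) + 1j * rng.standard_normal((n, 500)))
R = X @ X.conj().T / 500

# DOA estimate: scan the dominant eigenvector of R against the array manifold
# (a stand-in for the spatially smoothed root-MUSIC step described above)
w_dom = np.linalg.eigh(R)[1][:, -1]
grid = np.deg2rad(np.linspace(-90, 90, 1801))
th_hat = grid[np.argmax([abs(a(t).conj() @ w_dom) for t in grid])]

# Deterministic null: project the estimated jammer out of the steering vector
aj = a(th_hat)
a_s = a(np.deg2rad(0.0))                        # look direction
w = a_s - aj * (aj.conj() @ a_s) / (aj.conj() @ aj)
w = w / (a_s.conj() @ w)                        # unit gain on the look direction

null_depth = abs(w.conj() @ a(th_j))            # residual response toward the true jammer
```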


Beam-Based Adaptive Processing for a Volumetric Array

Henry Cox and Kevin Heaney
ORINCON Corporation
4350 N. Fairfax Drive
Arlington, VA 22203
tel: (703) 351-4440 ext 102
email: hcox@east.orincon.com

Presentation



Abstract A beam-based adaptive algorithm was developed to efficiently perform adaptive processing for a volumetric array. The array considered has 320 elements arranged in 40 vertical line staves of 8 elements each. The array is dense in the horizontal, reducing aliasing at the cost of reduced horizontal resolution. Extensive simulations, including the effects of full-field acoustic propagation, surface noise, moving interferers, array dynamics, and finite sampling, were performed to address the key technical issues. These issues include: the value of vertical aperture; the dependence of beam patterns on frequency; the possibility of summing vertical staves to form an array of horizontally focused elements as a pre-processor; the additivity of vertical and horizontal gain; and the effectiveness of beam-based ABF after 14:1 compression relative to element-based processing. Comparisons of the beam-based and element-based approaches for a fixed number of snapshots and degrees of freedom indicate that the beam-based approach is less sensitive to motion. It is shown that the beam-based processor is effective for rejecting moving interferers and detecting quiet targets, and can be implemented simply and economically.
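A toy version of beam-based adaptation (element data compressed into a few conventional beams, then MVDR in beamspace) might look like the following. The 32-element line array and 32:5 compression are illustrative stand-ins for the paper's 320-element volumetric array and 14:1 compression.

```python
import numpy as np

n, d = 32, 0.5
a = lambda th: np.exp(2j * np.pi * d * np.arange(n) * np.sin(th)) / np.sqrt(n)

rng = np.random.default_rng(2)
th_i = np.deg2rad(8.0)                   # interferer in a sidelobe of the look beam
X = (30 * a(th_i)[:, None] * (rng.standard_normal(300) + 1j * rng.standard_normal(300))
     + (rng.standard_normal((n, 300)) + 1j * rng.standard_normal((n, 300))) / np.sqrt(2))
R = X @ X.conj().T / 300

# Beam bank: 5 adjacent conventional beams (a crude 32 -> 5 compression)
angles = np.deg2rad(np.arange(-10.0, 15.0, 5.0))
T = np.column_stack([a(t) for t in angles])

a0 = a(0.0)                              # look direction
Rb = T.conj().T @ R @ T                  # beamspace covariance
g = T.conj().T @ a0
wb = np.linalg.solve(Rb, g)
wb /= (g.conj() @ wb)                    # MVDR normalization in beamspace
w = T @ wb                               # equivalent element-space weights

quiescent = abs(a0.conj() @ a(th_i))     # conventional beam response toward the interferer
adapted = abs(w.conj() @ a(th_i))
rej = adapted / quiescent                # adaptive rejection relative to the quiescent beam
```

Only a 5x5 covariance is inverted, illustrating why adapting after beam compression needs far less sample support and computation than element-based processing.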


Robust MVDR Beamforming Using Worst-Case Performance Optimization

Sergiy Vorobyov, Alex Gershman, and Zhi-Quan Luo
McMaster University
1280 Main St. W., Hamilton
Ontario, L8S 4K1 Canada 
tel: (905) 5259140 ext.24094
email: svor@mail.ece.mcmaster.ca

Presentation



Abstract Adaptive beamforming methods are known to degrade if some of the underlying assumptions on the environment, sources, or sensor array are violated. In particular, if the desired signal is present in the training snapshots, the adaptive array performance may be quite sensitive even to slight mismatches between the presumed and actual signal steering vectors (spatial signatures). Such mismatches can occur as a result of environmental nonstationarities, look-direction errors, imperfect array calibration or distorted antenna shape, as well as distortions caused by medium inhomogeneities, near-far mismatch, source spreading and fading, and local scattering. A similar type of performance degradation can occur when the signal steering vector is known exactly but the training sample size is small.

In this paper, we develop a new approach to robust adaptive beamforming in the presence of an arbitrary unknown signal steering vector mismatch. Our approach is based on the optimization of worst-case performance. It turns out that the natural formulation of this adaptive beamforming problem involves minimization of a quadratic function subject to infinitely many nonconvex quadratic inequality (soft) constraints. We show that this originally intractable problem can be reformulated in a convex form as the so-called Second-Order Cone (SOC) program and solved efficiently (in polynomial time) using the well established interior point method. Computer simulations with several frequently encountered types of signal steering vector mismatches show a substantially better performance of our robust beamformer as compared to existing adaptive beamforming algorithms. Moreover, even in the case of exactly known signal steering vector, the proposed algorithm is shown to enjoy a significantly better performance and faster convergence rate than other known adaptive beamforming techniques.
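Worst-case robust beamformers of this kind are known to act like an adaptively chosen diagonal loading of the sample covariance. The sketch below shows that mechanism with a fixed loading level (not the paper's SOC solver) under an invented 2-degree look-direction error, with the desired signal present in the training data.

```python
import numpy as np

n, d = 10, 0.5
a = lambda th: np.exp(2j * np.pi * d * np.arange(n) * np.sin(th))

rng = np.random.default_rng(3)
th_s, th_i = np.deg2rad(0.0), np.deg2rad(25.0)
s = rng.standard_normal(60) + 1j * rng.standard_normal(60)     # desired signal IS in the snapshots
X = (a(th_s)[:, None] * s
     + 5 * a(th_i)[:, None] * (rng.standard_normal(60) + 1j * rng.standard_normal(60))
     + rng.standard_normal((n, 60)) + 1j * rng.standard_normal((n, 60)))
R = X @ X.conj().T / 60

a_pres = a(np.deg2rad(2.0))       # presumed steering vector: 2 deg look-direction error

def mvdr(Rm, v):
    w = np.linalg.solve(Rm, v)
    return w / (v.conj() @ w)     # distortionless toward the presumed direction

w_plain = mvdr(R, a_pres)
w_dl = mvdr(R + 10.0 * np.eye(n), a_pres)    # diagonal loading as the robustness mechanism

gain_plain = abs(w_plain.conj() @ a(th_s))   # response to the ACTUAL signal direction
gain_dl = abs(w_dl.conj() @ a(th_s))
```

The unloaded beamformer partially self-nulls the mismatched desired signal, while the loaded one preserves most of its gain, which is the degradation-versus-robustness trade the paper's worst-case optimization handles in a principled way.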


Statistical Signal Processing Algorithms for Time-Varying Sensor Arrays

David Rieken and Daniel Fuhrmann
Washington University in St. Louis
St. Louis, MO 63130
tel: (314) 935-7551
email: rieken@essrl.wustl.edu

Presentation 



Abstract In this talk we will consider signal processing algorithms for arrays in which the positions of the individual sensors change during the observation interval. These time-varying arrays cannot use the same algorithms that time-invariant arrays use, because those algorithms are designed for stationary random processes and the output of a time-varying array is nonstationary. In this talk we will discuss several novel beamforming, direction-of-arrival estimation, and spatial spectrum estimation algorithms for time-varying arrays of sensors. Key to all of these applications is estimation of the covariance matrix, for if it is known, existing algorithms that make use of the covariance matrix may be applied. Since the covariance matrix associated with the nonstationary data stream is also time-varying, we have designed an algorithm to estimate a sequence of covariance matrices. We will also discuss a modification we have made to the MUSIC algorithm that makes use of the resulting matrix sequence and offers improved performance in direction-of-arrival estimation. Furthermore, it will be shown that time-varying arrays offer improved performance in spatial-spectrum estimation by using the covariance matrix sequence, or by a maximum-likelihood spectrum estimation algorithm that uses the raw array output.
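One simple way to obtain a sequence of covariance estimates for nonstationary data is an exponentially weighted recursive update. This sketch (invented parameters, and not necessarily the authors' estimator) updates the estimate snapshot by snapshot, producing one matrix per time step.

```python
import numpy as np

rng = np.random.default_rng(4)
n, lam = 4, 0.98                      # array size and forgetting factor, illustrative
R_hat = np.eye(n, dtype=complex)      # initial covariance estimate
for k in range(3000):
    # one complex snapshot; here the true covariance is the identity throughout
    x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    R_hat = lam * R_hat + (1 - lam) * np.outer(x, x.conj())   # recursive update

err = np.linalg.norm(R_hat - np.eye(n))   # Frobenius error of the final estimate
```

The forgetting factor trades tracking speed against variance: smaller `lam` follows a moving array faster but averages fewer effective snapshots per matrix in the sequence.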


Super-Resolution Source Localization through Data-Adaptive Regularization

Dmitry Malioutov, Mudjat Cetin, John Fisher III, and Alan Willsky
MIT
77 Massachusetts Avenue
Cambridge, MA 02139
tel: (617) 975-0481
email: dmm@mit.edu

Presentation



Abstract We address the problem of source localization using a novel non-parametric data-adaptive approach based on a regularized linear inverse problem solution with sparsity constraints. The notion of sparsity in this context refers to assuming a small number of sources with each source being localized spatially, which fits well into a standard source localization scenario. We express the problem in a variational framework and use a particular class of non-quadratic cost functionals, which favor solutions with few non-zero entries (sparse). We present a computationally efficient technique for the numerical solution of the ensuing optimization problem similar in spirit to the half-quadratic regularization method used in image processing.

In comparison to conventional source localization methods, the proposed approach provides increased resolution, reduced sidelobes, and better robustness to noise and to a limited number of time samples. In addition, the method works equally well for the case of coherent sources, and has a lower sensitivity to mismatches in source frequency for narrowband signals, and to the coarseness of the search grid (e.g., if the actual locations of the sources fall outside the grid of searched locations).

First we develop the method for a basic narrowband, far-field problem, and then extend it to other scenarios, namely near-field and broadband. The experimental results are obtained via computer simulation as well as by applying the algorithm to acoustic data collected using a microphone array. Our current work, which we also aim to present at the workshop, involves the extension of the variational framework to achieve self-calibration and source localization simultaneously.
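The sparsity-constrained inverse-problem formulation can be prototyped with a plain iterative soft-thresholding (ISTA) solver in place of the authors' half-quadratic method. The grid, array size, regularization weight, and noise level below are all illustrative.

```python
import numpy as np

n, d = 8, 0.5
grid = np.deg2rad(np.linspace(-90, 90, 181))   # 1-degree angular grid
# overcomplete steering dictionary: columns are unit-norm steering vectors
A = np.exp(2j * np.pi * d * np.outer(np.arange(n), np.sin(grid))) / np.sqrt(n)

rng = np.random.default_rng(5)
true_idx = 110                                  # source on the grid at 20 degrees
y = 5 * A[:, true_idx] + 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

lam_reg = 0.5                                   # sparsity weight, illustrative
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
x = np.zeros(A.shape[1], dtype=complex)
for _ in range(500):
    g = x - (A.conj().T @ (A @ x - y)) / L                             # gradient step
    x = np.exp(1j * np.angle(g)) * np.maximum(np.abs(g) - lam_reg / L, 0)  # complex soft-threshold

est_idx = int(np.argmax(np.abs(x)))             # location of the dominant recovered source
```

The l1 penalty concentrates the solution energy on very few grid cells, which is the mechanism behind the reduced sidelobes and super-resolution claimed above.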


Threshold Region Performance of Maximum Likelihood DOA Estimation for a Single Source

Fredrik Athley 
Department of Signals and Systems
Chalmers University of Technology
SE-412 96 Goteborg
Sweden
tel: +46 31 772 8062
email: athley@s2.chalmers.se

Presentation



Abstract This paper presents a performance analysis of Maximum Likelihood (ML) Direction-Of-Arrival (DOA) estimation using sensor arrays for the case of a single signal in white Gaussian noise. Particular attention is paid to the threshold effect that is common in nonlinear estimation. The threshold effect of ML estimation is caused by large errors that stem from peaks in the likelihood function far away from the true peak. Local bounds like the Cramer-Rao Bound (CRB) do not capture this effect and are consequently far from tight bounds in the threshold region. Since the DOA estimation performance rapidly deteriorates below the threshold, it is important to have an analysis tool that can predict when this effect appears. This is of particular interest in applications where sparse arrays are employed, since the threshold may appear at a relatively high Signal-to-Noise Ratio (SNR) due to the spatial undersampling.

The paper presents approximations to the probability of outlier and Mean Square estimation Error (MSE) of the ML estimator. Both the stochastic and deterministic signal models are treated. Simulations are used to verify that the derived approximations are able to accurately predict the performance of the ML estimator for a wide range of SNRs. It is also shown that, for a single snapshot, the stochastic ML DOA estimator cannot reach the CRB as the SNR tends to infinity. This is due to the fact that, in this case, the MSE contribution from outliers does not go to zero any faster than the local MSE does. Hence, the effect of outliers on the overall MSE cannot be neglected, not even at very high SNR.
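For a single source, the ML DOA estimate reduces to the peak of |a(theta)^H x|^2 over angle; outliers occur when a distant sidelobe peak exceeds the mainlobe, which is the mechanism behind the threshold effect analyzed above. A minimal sketch (illustrative parameters, SNR chosen high enough that no outlier occurs):

```python
import numpy as np

n, d = 10, 0.5
a = lambda th: np.exp(2j * np.pi * d * np.arange(n) * np.sin(th))

rng = np.random.default_rng(6)
th0 = np.deg2rad(10.0)                          # true DOA
# single snapshot at high SNR in white complex Gaussian noise
x = 3 * a(th0) + (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

grid = np.deg2rad(np.linspace(-90, 90, 3601))
ml = np.array([abs(a(t).conj() @ x) ** 2 / n for t in grid])   # ML criterion, ||a||^2 = n
th_hat = np.rad2deg(grid[np.argmax(ml)])
```

Lowering the signal amplitude in this sketch and repeating over many noise draws would reproduce the outlier-dominated MSE behavior below threshold, especially for sparse (undersampled) geometries with high sidelobes.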


Fast Subspace Updating Using the Multi-Stage Eigenspace Estimation Filter

Yung Lee
SAIC
1710 SAIC Drive
McLean, VA 22102
tel: (703) 676-6512
email: yung@osg.saic.com

Presentation


Abstract A new fast algorithm for updating eigenvalues, eigenvectors, and the dimension of the signal subspace using a nested structure similar to the multi-stage Wiener filter is presented. The multi-stage Wiener filter has a pyramid-like structured decomposition. First, it decomposes a block of observed data into two subspaces: a response in the desired steering direction and a residual in the direction orthogonal to the steering direction. Next, a normalized cross-correlation vector is calculated between the residual and the main response. In the case of interference sidelobe leakage into the main response, the normalized cross-correlation vector defines the direction of an interferer. The normalized cross-correlation vector is then taken as the desired steering direction for the next stage, and the residual is then decomposed again in the same manner. Stage by stage, a sequence of interference vectors and their associated responses is calculated. In the new multi-stage subspace updating algorithm, the initial steering vector points toward the noise space obtained from the previous estimation, and the new "interference" vectors are estimated from the multi-stage filter. These then form the signal subspace for the current estimation. For a simulation of a large sonar towed array, the new algorithm has been shown to outperform several subspace tracking approaches of similar computational complexity.


Space-Time Adaptive Processing with a Distorted Linear Array in Inhomogeneous Clutter Environments

Jeffrey Krolik and Vijay Varadarajan
Duke University
Box 90291
Durham, NC 27708
tel: (919) 660-5274
email: jk@ee.duke.edu

Presentation



Abstract This paper presents a technique for improving space-time adaptive processing (STAP) performance with a distorted linear array when clutter inhomogeneity seriously limits the number of snapshots available to estimate the adaptive weights. A distorted linear array geometry may occur, for example, in active sonar towed-array or conformal-array applications. With perfectly linear arrays, the required sample support for estimating the clutter covariance is lower bounded by the well-known Brennan's rule. For a distorted linear array, however, the clutter covariance matrix rank, derived here as a function of the effective apertures co-linear and orthogonal to the array axis, is shown to be at least twice that of the undistorted linear array. Thus conventional STAP with a distorted array is subject to a much greater SINR loss due to limited sample support relative to the undistorted case. In this paper, a non-adaptive space-time transformation is presented which significantly reduces the limited-sample-support SINR loss for a distorted array. The transformation is derived by minimizing the mean-square error between the distorted-array clutter and that which would be received by a virtual linear array, subject to a constraint that preserves the target response. Simulation results are presented which demonstrate that transformed distorted-array STAP converges with much less sample support to its asymptotic SINR and space-time array pattern than does conventional processing.


Extraction of Multiple-Bounce Ghosting Artifacts in Array Imaging

David Garren
SAIC
4501 Daly Drive
Suite 400
Chantilly, VA  20151-3707
tel: (703) 814-8277
email: david.a.garren@saic.com

 

Presentation



Abstract This analysis develops an innovative array image formation algorithm that separates direct-scatter echoes in an image from echoes that are the result of multiple bounces, and then maps each set of reflections to a metrically correct image space. Current processing schemes place the multiple-bounce (MB) echoes at incorrect (i.e., ghost) locations due to fundamental assumptions implicit in conventional array processing. Two desired results are achieved by use of this new Image Reconstruction Algorithm for Multi-bounce Scattering (IRAMS). First, ghost returns are eliminated from the primary image space, thereby improving the relationship between the image pattern and the physical distribution of the scatterers. Second, a higher dimensional image space containing only multi-bounce echoes is created which possesses characteristic information about the scene being imaged. This auxiliary image space offers the potential of dramatically improving target detection and identification capabilities.

IRAMS computes frequency-domain back-projection functions based upon the measured phase-history data that contain contributions due to MB scattering events. The back-projection functions decompose these phase history measurements into contributions due to single-bounce (SB) scattering events and MB interactions. An enhanced quality image uncorrupted by MB ghosting artifacts is constructed by retaining only the SB contributions. In addition, the extracted MB ghosting artifacts are projected into a higher-dimensional space that can be used to enhance object detection and identification. Because this algorithm incorporates a physical model that explicitly allows for MB effects in the image formation process, it is applicable to any image formation technology where such phenomena are found. Thus, IRAMS offers the potential of improving image quality and target detection and identification performance in many domains, including: a) real aperture radar imaging, b) synthetic aperture radar (SAR) imaging, c) inverse SAR (ISAR) imaging, and d) active sonar underwater acoustic mapping.



Improved Target Classification at Low SNR with Beamspace HDI

Duy Nguyen, Gerald Benitz, John Kay, Bradley Orchard, and Robert Whiting
MIT Lincoln Laboratory
244 Wood Street
Lexington, MA 02420
tel: (781) 981-2079
email: duy@ll.mit.edu

Presentation


Abstract Lincoln Laboratory has previously demonstrated that a two-dimensional automatic target recognition (ATR) system employing high definition vector imaging (HDVI) processed synthetic aperture radar (SAR) images outperformed those systems whose SAR images had been processed with the weighted fast Fourier transform (FFT). However, when a target is moving, the only signature that can be reliably obtained from the target is its high range resolution (HRR) profile. Under the moving target exploitation (MTE) program, Lincoln Laboratory successfully demonstrated that a one-dimensional ATR system employing HRR profiles can alternatively be used to provide target ID. Although a single HRR profile does not provide the performance of SAR imagery, the HRR performance can provide adequate probability of correct classification (Pcc) given a few looks at the target while reducing the overall integration time. By applying HDVI to the HRR profiles, we successfully demonstrated improved target recognition performance compared to traditional image processing techniques (i.e., the weighted FFT).

Recently, Lincoln Laboratory has integrated the HRR ATR into a traditional kinematics tracker by using the target ID information to help match target tracks with sensor reports for situations in which the targets under track exhibit similar kinematics. For operational reasons, this classification-aided tracker (CAT) must operate at low signal-to-noise ratio (SNR), in the range of 15-25 dB. Unfortunately, the HDVI-processed HRR ATR classifiers, necessary for CAT in addition to providing target ID, suffer significant performance degradation as the SNR is decreased from 35 to 15 dB. An additional objective, driven by operational considerations, is to reduce other radar resources (i.e., bandwidth) required to achieve a desired probability of correct classification (Pcc) and probability of false classification (Pfc).

In this talk, we present recent results at low SNR using a new super-resolution technique known as beamspace high definition imaging (BHDI). The enhanced one-dimensional ATR system using BHDI-processed HRR profiles exhibits significantly improved target recognition performance at low SNR compared to the conventional weighted FFT and super-resolution HRR methods, such as HDVI and spatially variant apodization (SVA).  In addition, less bandwidth is needed to achieve results comparable to those obtained at higher bandwidth using more conventional HRR processing methods. 


Detection Algorithms for Hyperspectral Imaging Data Exploitation

Dimitris Manolakis, Traci Latlippe, and David Marden
MIT Lincoln Laboratory
244 Wood Street
Lexington, MA 02420
tel: (781) 981-0524
email: dmanolakis@ll.mit.edu

Presentation 


Abstract Detection and identification of military and civilian targets from airborne platforms using hyperspectral sensors is of great interest. Relative to multispectral sensing, hyperspectral sensing can increase the detectability of pixel and subpixel size targets by exploiting finer detail in the spectral signatures of targets and natural backgrounds. A multitude of adaptive detection algorithms for resolved and subpixel targets, with known or unknown spectral characterization, in a background with known or unknown statistics, theoretically justified or ad hoc, with low or high computational complexity, have appeared in the literature or have found their way into software packages and end-user systems. The purpose of this paper is threefold. First, we present a unified mathematical summary of most adaptive matched filter detectors using common notation, and we state clearly the underlying theoretical assumptions. Whenever possible, we express existing ad hoc algorithms as computationally simpler versions of optimal methods. Second, we present a comparative performance analysis of the basic algorithms using theoretically obtained performance characteristics. We focus on algorithms characterized by theoretically desirable properties, practically desired features, or implementation simplicity. A primary goal is to identify best-of-class algorithms for detailed performance evaluation. Finally, we illustrate the practical performance of the most promising algorithms using hyperspectral data from the HYDICE sensor.
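As a concrete instance of the adaptive matched filter family surveyed here, the classical AMF statistic t(x) = (s^T R^-1 x)^2 / (s^T R^-1 s) can be sketched as follows. The target signature, band count, background model, and fill factor are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
p = 20                                     # number of spectral bands, illustrative
s = np.linspace(0.2, 1.0, p)               # assumed target spectral signature
C = 0.1 * np.eye(p)                        # true background covariance (white, for simplicity)
bg = rng.multivariate_normal(np.zeros(p), C, size=500)
R = bg.T @ bg / 500                        # background covariance estimated from training pixels
Ri = np.linalg.inv(R)

def amf(x):
    # adaptive matched filter detection statistic
    return (s @ Ri @ x) ** 2 / (s @ Ri @ s)

x_bg = rng.multivariate_normal(np.zeros(p), C)                 # background-only pixel
x_tgt = 0.8 * s + rng.multivariate_normal(np.zeros(p), C)      # subpixel target, fill factor 0.8
```

Comparing `amf(x)` against a threshold yields the detector; the denominator normalization is what gives the statistic its (approximately) constant false-alarm rate across background levels.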


Optimized Multichannel Waveforms with Application to Polarimetrics

Unnikrishna Pillai
Polytechnic University
Six Metrotech Center
Brooklyn, NY 11021
tel: (718) 260-3732
email: pillai@hora.poly.edu

Joseph Guerci
DARPA / SPO
3701 North Fairfax Drive
Arlington, VA 22203
tel: (703) 248-1548
email: jguerci@darpa.mil

Presentation



Abstract Recent research has highlighted the potential benefits of polarization diverse radar systems to enhance the separation of targets from clutter, and aid in target ID [1]. To achieve maximum gain, however, the waveform and polarization should be jointly designed. This requires a rigorous theoretical framework for multichannel waveform optimization.

In this paper, a recently developed multichannel waveform optimization framework is presented. Unlike previous single-input, single-output (SISO) approaches [2], or methods that produce non-causal waveforms [3], this framework produces strictly causal results, and allows the designer the flexibility of controlling uncompressed pulse lengths while preserving resolution (i.e., compression). Although the problem is nonlinear when clutter is present, a very efficient and rapidly converging iterative algorithm has been developed, which makes the approach attractive for both offline and potentially online multichannel waveform optimization. The efficacy of the approach is illustrated with applications to varying target-clutter scenarios.


A General Framework for Space-Time Coding in MIMO Wireless Communications Systems

A. Lee Swindlehurst
Brigham Young University
459 Clyde Building
Provo, UT  84604
tel: (801) 378-4343
email: swindle@ee.byu.edu

Presentation

 

Abstract The advantages of using multiple antennas at both the transmit and receive ends of a wireless communications link have recently been noted.  A number of space-time codes (STCs) have been proposed that exploit the potential for increased throughput and diversity that such systems offer.  In this talk, it is shown that most of these codes can be described as special cases of a general STC framework in which a given data sequence is linearly precoded prior to transmission from each source antenna.  If training data is employed, then the precoding is affine.

Classifying STCs under this framework facilitates code comparisons and potentially allows for optimal code and training sequence design.  In addition, it permits the design of modular receivers that can be applied to a wide variety of systems.  The focus of this presentation is how the proposed STC framework can be exploited for blind (or semi-blind) equalization and direct sequence estimation.  In particular, a set of channel-independent linear equations is derived whose solution simultaneously yields the transmitted data sequence and a vector containing all possible zero-forcing receivers.  Conditions on the linear precoders and training that guarantee unique solutions are also described.  While the details of the algorithms are presented for the single-user flat-fading case, extensions to situations involving frequency selective fading and multiple users are discussed, along with modifications of the algorithm required when there are more transmit than receive antennas or the channel is rank deficient.
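The Alamouti code is perhaps the simplest member of the linear STC framework described above: two symbols are linearly precoded across two antennas and two time slots, and the code's orthogonality allows linear ML decoding. A noiseless, single-receive-antenna, flat-fading sketch (symbols and channel are invented):

```python
import numpy as np

rng = np.random.default_rng(8)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)   # two QPSK symbols

# Code matrix: rows = time slots, columns = transmit antennas
S = np.array([[s1,           s2],
              [-np.conj(s2), np.conj(s1)]])

h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)  # flat-fading channel
y = S @ h                                   # one received sample per time slot (noiseless)

# Linear ML decoding exploiting the code's orthogonality
r1 = np.conj(h[0]) * y[0] + h[1] * np.conj(y[1])
r2 = np.conj(h[1]) * y[0] - h[0] * np.conj(y[1])
g = np.abs(h[0]) ** 2 + np.abs(h[1]) ** 2   # channel energy (diversity gain)
s1_hat, s2_hat = r1 / g, r2 / g             # recovered symbols
```

Viewed through the framework above, the precoder mapping (s1, s2) to the matrix S is linear in the symbols and their conjugates, which is exactly what enables the channel-independent linear decoding equations discussed in the talk.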


Synthetic Aperture Geolocation of Cellular Phones in the Presence of Multiple Access Interference

Daniel Bliss
MIT Lincoln Laboratory
244 Wood Street
Lexington, MA 02420
tel: (781) 981-
email: bliss@ll.mit.edu

Nicholas Chang
Princeton University
Princeton, NJ
tel: 
email: nchang@princeton.edu

Amanda Chan
University of Michigan
Ann Arbor, MI
tel: 
email: amchan@eecs.umich.edu

Presentation



Abstract Motivated by the cellular phone geolocation needs of law enforcement and emergency response personnel, a precision geolocation technique is proposed in this paper. This technique, employing moving receivers, is termed synthetic aperture multipath imaging (SAMI). This approach is an extension of frequency difference of arrival (FDOA) and time difference of arrival (TDOA) techniques. The technique addresses the three significant issues associated with precision geolocation: resolution, interference, and computational complexity. Employing a synthetic aperture, very fine angular resolution is achievable. In complicated scattering environments, the multipath arriving from multiple directions can overwhelm angle of arrival (AOA) estimation using a small antenna array. The relatively large synthetic aperture improves the likelihood of resolving these modes, enabling geolocation in more complicated environments. The proposed technique uses temporal interference mitigation to overcome multiple access interference. Finally, the technique employs a mix of parametric and nonparametric approaches, exploiting computationally efficient FFTs when possible. The proposed technique is exercised using experimental data in the presence of interfering users. Transmitters at known locations are used for calibration.
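One TDOA building block underlying techniques like the one proposed here is delay estimation by cross-correlation peak-picking. The sketch below uses invented signals and an integer-sample delay; real systems must also handle fractional delays, Doppler (FDOA), and the multipath modes discussed above.

```python
import numpy as np

rng = np.random.default_rng(9)
N, true_delay = 4096, 37                            # samples; illustrative values
sig = rng.standard_normal(N)                        # stand-in for the phone's emission
rx1 = sig + 0.1 * rng.standard_normal(N)            # receiver 1
rx2 = np.roll(sig, true_delay) + 0.1 * rng.standard_normal(N)   # receiver 2, delayed copy

xc = np.correlate(rx2, rx1, mode='full')            # lags run from -(N-1) to (N-1)
lag = int(np.argmax(xc)) - (N - 1)                  # estimated delay in samples
```

Each delay estimate constrains the emitter to a hyperbola; combining several such estimates (and their FDOA counterparts) over the moving receivers' synthetic aperture is what yields a point geolocation.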


RF Tags: Using Radar as a Communications Channel

Patrick Bidigare and Majid Nayeri
Veridian Systems
P.O. Box 134008
Ann Arbor, MI 48113-4008
tel: (734) 994-1200 x2792
email: bidigare@erim-int.com

Presentation



Abstract Over the past 10 years, a number of government agencies including DARPA have been developing active radar transponders (RF Tags) for covert communication applications. These devices communicate information through an ISR radar collection (SAR or GMTI) by receiving radar pulses, modulating them in some way, and retransmitting them back to the radar. The RF tags embed signals into the radar data stream that can be extracted and decoded into the information they represent. RF tag communication is inherently very covert because the EM emissions from the tags are "covered" both by the radar pulses and the echoes from objects in the scene. 

RF tags present a unique communications problem because radar provides a mixed continuous time (within a pulse) and discrete space-time (between pulses and subarrays) channel. The dominant source of channel noise is the clutter returns from the illuminated scene. The well-studied spatio-temporal correlation properties of clutter returns are fundamentally important in achieving high RF tag data rates.

There are two main objectives of this paper. The first is to derive an analytic expression for the space-time correlation of a homogeneous clutter scene. This expression depends only on the transmit and receive antenna aperture lengths and weights and the antenna phase center displacements. This result is used in deriving the channel capacity of the radar system.

The second objective is to present an adaptive approach to clutter suppression and tag signal extraction. Classic STAP approaches to clutter suppression for GMTI target detection are not useful here because they corrupt the tag signals. We present an algorithm that adaptively estimates the clutter correlation and produces an unbiased estimate of the tag signals embedded in the scene. This algorithm compensates very effectively for channel imbalances due to transfer function differences and non-identical antenna patterns.


Cost Optimized Antenna Arrays in CDMA Cellular Networks

Bruce McGuffin
MIT Lincoln Laboratory
244 Wood Street
Lexington, MA 02420
tel: (781) 981-4849
email: mcguffin@ll.mit.edu

Presentation



Abstract Code division multiple access (CDMA) wireless networks are designed to provide data or voice communications in the desired coverage area at a specified probability of link closure. A cellular system splits the area into multiple cells, with a shared base station serving users in each cell. The base station distinguishes between users by their spreading codes.
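The code-division idea above can be sketched with a toy synchronous example. The sketch below uses hypothetical orthogonal Walsh codes and invented symbol values purely for illustration; it is not the network model analyzed in the paper.

```python
import numpy as np

def walsh(m):
    """Build 2**m orthogonal Walsh codes via the Hadamard construction."""
    h = np.array([[1.0]])
    for _ in range(m):
        h = np.block([[h, h], [h, -h]])
    return h

codes = walsh(3)                      # 8 chips per symbol, 8 orthogonal codes
users = [0, 3, 5]                     # code indices assigned to three users
bits = np.array([+1.0, -1.0, +1.0])   # one symbol per user

# Synchronous up-link: all users' chips superimpose at the base station
rx = sum(b * codes[u] for b, u in zip(bits, users))

# Despreading: correlate with each user's code, normalize by the chip count
est = np.array([rx @ codes[u] / codes.shape[1] for u in users])
print(est)   # each user's symbol is recovered exactly
```

Because the codes are orthogonal and the toy channel is noise-free and synchronous, despreading separates the users exactly; MAI arises once codes lose orthogonality through asynchrony or multipath.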

The wireless propagation environment is highly variable, with multipath fading and shadowing. Networks use closed-loop power control to reduce fluctuations and prevent excessive co-channel interference. Antenna diversity helps reduce fades. Another approach is to use an adaptive array antenna, or "smart antenna", which can null strong interferers. Array interference canceling increases base station capacity with or without closed-loop power control. Although smart antennas reduce the number of cells required, they increase the cost of each base station.

This paper describes the optimal trade-off between antenna array cost and the number of base stations under some simplifying assumptions, using a simple statistical propagation model. Monte Carlo simulations were used to find the relationship between base station capacity and array size for both optimal diversity and antenna nulling in different propagation environments.

Using these relationships, the minimum cost per user was found as a function of the antenna array size. For a diversity receiver, the design is optimal when the ratio of array cost (antennas plus receivers) to fixed base station cost is a constant determined by the propagation model and link closure requirements. For smart antennas, the design is optimal when the ratio of the beamforming weight estimation cost to fixed base station cost equals 1/2 when using a recursive least squares (RLS) weight estimation algorithm, and 1 when using a least mean squares (LMS) algorithm. This relationship holds for all propagation models and link closure requirements.
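The flavor of this kind of optimality condition can be checked numerically with a toy model. Everything below is invented for illustration and is not the paper's model: assume base-station capacity grows as k(M) = a·M^b with array size M, so cost per user is proportional to (C0 + c·M)/k(M); at the minimum, the ratio of array cost to fixed cost depends only on the capacity exponent b.

```python
import numpy as np

# Hypothetical capacity model and costs (not from the paper)
a, b = 4.0, 0.5          # capacity k(M) = a * M**b
C0, c = 100e3, 2e3       # fixed base-station cost, per-element array cost

M = np.linspace(1, 200, 100000)
cost_per_user = (C0 + c * M) / (a * M**b)   # up to a constant factor
M_opt = M[np.argmin(cost_per_user)]

# At the optimum, array cost / fixed cost is a constant set only by b:
# c*M_opt / C0 = b / (1 - b), independent of a, c, and C0 themselves.
ratio = c * M_opt / C0
print(ratio, b / (1 - b))   # both are approximately 1.0 for b = 0.5
```

Differentiating (C0 + cM)M^{-b} and setting the result to zero gives cM(1 - b) = bC0, which is the "cost ratio is a constant set by the propagation model" structure the abstract describes.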

An example shows that although processing cost is very low compared to antenna and receiver cost, it can limit the optimal array size at realistic update rates. 


A Novel Blind Adaptive Broadband Beamformer for Multi-Channel Multi-Rate CDMA

Emanuela Falletti, Mario Micciche and Fabrizio Sellone 
Dip. Elettronica
Politecnico di Torino
C.so Duca degli Abruzzi, 24
10129 Torino (TO)
ITALY
tel: +39 011 564 4196
email: falletti@polito.it

Presentation



Abstract The capacity of conventional wireless communications systems based upon Direct-Sequence Code Division Multiple Access (DS-CDMA) schemes, such as IMT-2000 and the European UMTS, is mainly limited by Multiple Access Interference (MAI); methods that limit such interference are therefore essential to increasing the capacity of these systems. Smart Antennas are a promising technology for improving the performance of high capacity mobile communications systems, because they can increase the SINR of the received signals by adaptively modifying the equivalent array beam pattern, which can track the users moving within the cell.

We propose a novel blind adaptive beamforming algorithm tailored for the up-link of multi-channel multi-rate DS-CDMA wireless communication systems. The principal feature of the proposed technique, based upon a modified Constant Modulus (CM) criterion, is its ability to overcome the main drawback of classic CM algorithms: the difficulty of ensuring convergence to the desired user rather than to an interferer. Without requiring additional conditions to guarantee convergence to the correct solution, the proposed algorithm exploits user-specific information intrinsically known at the receiver: the code uniquely associated with each user. Neither knowledge of the spatio-temporal propagation channel nor any training sequence is required. Furthermore, the overall algorithm complexity is independent of the number of simultaneously accessing users, because only the desired user's code must be known.
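For readers unfamiliar with the CM criterion, the following is a minimal sketch of the classic, unmodified constant-modulus array algorithm — the baseline whose capture problem the proposed code-aided variant addresses, not the authors' algorithm itself. The scene (array size, steering, noise level, step size) is entirely invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def cma(X, mu=0.005, iters=3000):
    """Classic constant-modulus algorithm: stochastic-gradient descent
    on E[(|y|^2 - 1)^2], with beamformer output y = w^H x."""
    n, K = X.shape
    w = np.zeros(n, complex)
    w[0] = 1.0                       # simple single-element initialization
    for i in range(iters):
        x = X[:, i % K]
        y = np.vdot(w, x)            # y = w^H x
        w -= mu * (abs(y)**2 - 1.0) * np.conj(y) * x
    return w

# Toy up-link scene: one constant-modulus user on a 6-element array
n, K = 6, 400
v = np.exp(1j * np.pi * np.arange(n) * 0.3)          # assumed steering vector
s = np.exp(1j * 2 * np.pi * rng.random(K))           # unit-modulus symbols
noise = 0.05 * (rng.standard_normal((n, K)) + 1j * rng.standard_normal((n, K)))
X = np.outer(v, s) + noise

w = cma(X)
y = w.conj() @ X
print(np.std(np.abs(y)))     # small: the output modulus is held nearly constant
```

With a single user the plain CMA restores the constant modulus; with several constant-modulus sources present, nothing in this cost function prefers the desired user over an interferer, which is exactly the ambiguity the abstract's code-exploiting modification removes.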

We develop a broadband beamformer structure able to temporally re-align uncorrelated clusters of multipaths, as well as spatially recombine correlated multipaths belonging to the same cluster, using a computationally non-intensive algorithm. In contrast to many proposed Space-Time RAKE receiver solutions, this scheme requires only one despreading device after the filtering structure.

We provide computer simulations showing that the proposed technique can effectively exploit the path diversity and multipath correlation provided by wireless channels to increase the SINR at the receiver, outperforming classic Space-Time RAKE receiver techniques proposed in the literature. Furthermore, the performance achieved by the novel algorithm is shown to be close to the theoretical optimum SINR.


Superresolution Techniques in Time of Arrival Estimation for Precise Geolocation 

Gary Hatke
MIT Lincoln Laboratory
244 Wood Street
Lexington, MA 02420
tel: (781) 981-3364
email: hatke@ll.mit.edu

Presentation



Abstract Precise geolocation of uncooperative electromagnetic emitters has long been a goal in many areas of signals intelligence. Numerous methods, such as intersecting line-of-bearing (LOB), time-difference-of-arrival (TDOA), frequency-difference-of-arrival (FDOA), and joint TDOA/FDOA processing, have been tried. With stationary interceptor platforms and stationary target emitters, FDOA techniques are not applicable. In addition, LOB techniques typically require well-calibrated antenna array equipment, and their position accuracy degrades linearly with range to the emitting source. TDOA techniques, therefore, are often the most viable for situations such as moderate to long-range geolocation of stationary sources by ground-based assets.

Unfortunately, in many urban environments signal multipath can seriously degrade the performance of TDOA techniques. Often, the direct path signal from the emitting source is highly attenuated, and multipath components dominate the received signal energy. The use of conventional cross-correlation techniques leads to biased position estimates, where the error in range to a target from a sensor can be as large as c/BW (here, c is the speed of light and BW is the bandwidth of the emitter signal). 
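The bias mechanism described above is easy to reproduce. The sketch below uses invented numbers, integer-sample delays, and circular shifts for simplicity; it shows a conventional cross-correlation TDOA estimate locking onto a dominant multipath arrival rather than the attenuated direct path.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
s = rng.standard_normal(n)          # wideband emitter waveform (white)

def delay(x, k):
    return np.roll(x, k)            # integer-sample delay (circular, for simplicity)

true_tdoa = 40                      # true inter-sensor delay, in samples
x1 = s                              # reference sensor
x2 = 0.2 * delay(s, true_tdoa) \
     + 1.0 * delay(s, true_tdoa + 25)   # weak direct path + strong multipath

# Conventional TDOA estimate: lag of the cross-correlation peak
xc = np.correlate(x2, x1, mode="full")
lag = np.argmax(xc) - (n - 1)
print(lag)   # 65, not 40: the peak locks onto the dominant multipath
```

The 25-sample bias here corresponds to the multipath excess delay; in continuous terms the resulting range error scales like the delay error times c, consistent with errors on the order of c/BW for closely spaced arrivals.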

This paper addresses this problem by applying superresolution techniques to the TDOA problem when the signal waveform can be well estimated, as in cases such as cellular telephony or radios with constant amplitude signals. We discuss two methods. The first method is applicable when multiple arrays of sensors are available at distinct locations. This method is a variant of the HDVI algorithm proposed by Benitz for synthetic aperture radar (SAR) imaging. The second technique requires only a single sensor at each location, and can be thought of as an extension of array-based superresolution techniques to one-dimensional data streams. We will present the algorithms, along with results on simulated data. In addition, an attempt to quantify the benefits of these algorithms on real-world data will be made using measurements taken by the DARPA-sponsored novel antenna program (NAP) data acquisition system.


Adaptive Processing in a SBR Bistatic Adjunct for Surveillance and Engagement

Mark Davis 
AFRL / SN
26 Electronic Parkway
Rome, NY 13441-4514
tel: (315) 330-2211 
email: mark.davis@rl.af.mil



Braham Himed
AFRL / SNRT
26 Electronic Parkway
Rome, NY 13441-4514
tel: (315) 330-2551
email: braham.himed@rl.af.mil



Michael Hartnett
Emergent, Inc.
1300-B Floyd Avenue
Rome, NY 13440
tel: (315) 339-6184
email: michael.hartnett@emergent-IT.com

Presentation Not Available

Abstract Airborne bistatic adjuncts to a monostatic Space Based Radar (SBR) provide the ability to extend detection range, attain better tracking accuracy, and sense lower velocity targets. Effective adjuncts require multiple simultaneous receive beams and adaptive processing to obtain the desired system performance. Both requirements demand advances in array processing to achieve the system sensitivity and signal characteristics needed to meet the operational requirements.

AFRL has been investigating several critical issues in Bistatic CONOPS for surveillance and engagement applications, including: Range-Doppler compensation of bistatic STAP algorithms, estimation of processing requirements for multiple beams, and characteristics of array processing algorithms and their implementation on high performance computers.

Several system operational concepts involving airborne platforms are considered. The demands on digital beamforming and Space-Time Adaptive Processing (STAP) are presented for these future surveillance systems. Considerations of antenna size, number of simultaneous receive beams, and bistatic geometries are presented, along with their influence on detection probability and minimum detectable velocity (MDV).

The clutter spectrum observed from an SBR system is shown to be both range and geometry dependent. As such, mitigating the clutter using traditional monostatic STAP approaches is not appropriate. The clutter characteristics are much more complicated in bistatic spaceborne applications than in monostatic systems. STAP algorithms must not only take into account geometry-induced dispersions, but must also accommodate rapidly changing clutter and target characteristics as dictated by the SBR orbit. We propose to compensate for these geometry-induced dispersions through a two-dimensional transformation matrix that aligns the clutter spectral centers. Since the clutter spectrum also varies in time, the demands on temporal updates of the adaptive weights are discussed. Finally, an estimate of the throughput requirements for a real-time processor is presented.


Bistatic Clutter Suppression and Target Detection Analysis for the RADARSAT / GMTI Experiment

Carl Pearson and Stephen Pohlig
MIT Lincoln Laboratory
244 Wood Street
Lexington, MA 02420
tel: (781) 981-4118
email: pearson@ll.mit.edu

Presentation Not Available

Abstract An experiment is being designed to validate the extension of traditional monostatic geometry STAP algorithms to bistatic geometries (1). Theoretical work has shown that both the Derivative Based Updating (DBU) technique and the recently developed Higher Order Doppler Warping (HODW) technique can achieve near-ideal STAP performance (2). In this paper, we examine in detail the performance of these algorithms in the context of the RADARSAT GMTI experiment planned for late Spring 2002. We examine the effects of finite bandwidth and aliasing (range and Doppler) on algorithm performance. We also quantify the possible improvements available from suitable design of the engagement geometry to minimize the clutter spread ("clutter tuning" (3)), including the associated reduction in sensitivity to target motion ("GDOP"). We conclude with a summary of the system geometry and timing requirements for a successful demonstration.


Bistatic Radar Clutter Suppression Error Sensitivity

Jacob Griesbach
MIT Lincoln Laboratory
244 Wood Street
Lexington, MA 02420
tel: (781) 981-2954
email: jgriesba@ll.mit.edu

Presentation Not Available

Abstract The primary difficulty in airborne bistatic radar clutter suppression is that the clutter statistics are nonstationary in range, making it difficult to train an adaptive algorithm. However, implementable clutter suppression algorithms have recently been proposed for bistatic radar. Higher Order Doppler Warping (HODW) is one such algorithm, in which the data are aligned prior to adaptive processing (STAP) in order to reduce the nonstationarity. While HODW has been shown to perform well in numerical simulations, its clutter suppression performance in the presence of real-world measurement errors has not been analyzed. This paper seeks to fill this gap by estimating the sensitivity of SINR loss for HODW STAP with respect to system errors such as position and velocity errors. The analysis is conducted using an analytic eigenvalue decomposition of the covariance matrices generated by HODW. Since the covariance matrices are functions of the bistatic geometry, the SINR loss sensitivity depends on the transmitter and receiver radars' physical locations. Numerical simulations are included which support the analytic results. This analysis is being done to predict the performance of bistatic experiments planned for the spring and summer of 2002.
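As a generic illustration of the SINR loss metric being analyzed — not the HODW-specific analysis in the paper — the sketch below computes SINR loss for an adaptive beamformer whose steering model is perturbed, e.g. by a platform position error. Array size, interferer power, and error values are all invented.

```python
import numpy as np

n = 16
def steer(u):                         # ULA response, u = sin(angle)
    return np.exp(1j * np.pi * np.arange(n) * u) / np.sqrt(n)

# Noise plus one strong interferer (invented scene)
vj = steer(0.48)
R = np.eye(n) + 1000 * np.outer(vj, vj.conj())
Ri = np.linalg.inv(R)

u0 = 0.10                             # true target direction (in u-space)
losses = []
for err in (0.0, 0.005, 0.02):        # steering error, e.g. from a position error
    v_true = steer(u0)
    w = Ri @ steer(u0 + err)          # adaptive weights built with wrong steering
    sinr = abs(np.vdot(w, v_true))**2 / np.real(np.vdot(w, R @ w))
    sinr_opt = np.real(np.vdot(v_true, Ri @ v_true))
    losses.append(10 * np.log10(sinr / sinr_opt))
print(losses)   # 0 dB with no error; increasingly negative as the error grows
```

SINR loss is the achieved SINR divided by the optimum v^H R^{-1} v, so it is 0 dB for a perfectly modeled steering vector and strictly negative under mismatch; a sensitivity analysis like the paper's characterizes how fast it falls off with each error source.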


A Look at Bistatic STAP from the Viewpoint of SAR Image Formation

Gerald Benitz 
MIT Lincoln Laboratory
244 Wood Street
Lexington, MA 02420
tel: (781) 981-4665
email: benitz@ll.mit.edu

Presentation Not Available

Abstract This presentation provides a common analytical viewpoint for bistatic SAR and STAP. It arises from investigations into multiple-function bistatic radars, and from a demonstration of GMTI with UHF SAR imagery. This common viewpoint relies on the wavenumber representation of radar data employed in SAR image formation, e.g., the SAR polar format algorithm. Radar returns are viewed as samples from the Fourier transform of the reflectance function. Wavenumbers, in addition to providing focusing information, also define the data alignment required to achieve clutter cancellation. This principle is derived analytically, and then illustrated in the examples of DPCA, STAP, and SAR-GMTI. Extension to bistatic data is straightforward, and shows how GMTI can be performed in bistatic SAR imagery, in a jammer-free case. When there are jammers along with clutter, a practical STAP solution entails a resampling, or warping, of the wavenumber data. This is shown to be equivalent to Higher-Order Doppler Warping (HODW). An example is presented illustrating the warping, its effect on the image point response, and filters to correct the point response and maximize STAP performance.


FOPEN GMTI Using Multi-Channel Adaptive SAR

Ali Yegulalp
MIT Lincoln Laboratory
244 Wood Street
Lexington, MA 02420 
tel: (781) 981-0886
email: yegulalp@ll.mit.edu

Presentation Not Available

Abstract The intelligence, surveillance, and reconnaissance need for total battlefield awareness has motivated a growing interest in low-frequency radar technology to detect moving military targets under foliage. In the last year, DARPA sponsored a first-effort data collection at the Aberdeen Proving Grounds to examine feasibility and explore some of the basic technical issues. Due to the low radar frequency (UHF) and limited array size (4 meters), all ground targets of interest compete against strong main beam clutter. An additional complication is that the array is uncalibrated and suffers from abundant airframe multipath. This talk will describe some novel adaptive SAR-based GMTI techniques developed at Lincoln Laboratory and applied to the Aberdeen data. The general framework for SAR-GMTI will be explained, as well as the specific adaptive algorithms used for detecting targets in main beam clutter. Algorithm performance will be quantified in two ways: processing gain and probability of detection versus false alarm density curves.


The Performance of the Parametric Vector AR STAP with Experimental Data

Peter Parker and Michael Zatman
MIT Lincoln Laboratory
244 Wood Street
Lexington, MA 02420
tel: (781) 981-3233
email: pparker@ll.mit.edu

Presentation Not Available


Abstract The parametric vector autoregressive (PVAR) technique for space-time adaptive processing (STAP) has been shown to have good performance using simulated data.  This presentation compares PVAR to other well-known STAP algorithms using experimental data.  The data was collected on a 12 element linear array as part of the foliage penetration experiment at Aberdeen Proving Grounds in September 2000.  One variant of PVAR (which requires a uniform linear array) is shown to have poor performance because of calibration errors.  The presentation also shows that pre-Doppler STAP and spatially unstructured PVAR have comparable performance and both outperform the eigencanceller in most cases.


Rapid Adaptive Interference Cancellation for Passive Sonar

Nigel Lee, Brian Tracey, and Lisa Zurk
MIT Lincoln Laboratory
244 Wood Street
Lexington, MA 02420
tel: (781) 981-2908
email: nigel@ll.mit.edu

Presentation Not Available

 

Abstract Sample-covariance-based adaptive array processors suffer from interference motion during the time interval required to estimate the covariance matrix for adaptive weight formation. Moving interferers are not properly nulled by sample covariance matrices, which represent only "average" interferer positions over time. As a result, sidelobes from strong, moving interferers can obscure detection of weak targets, even for adaptive beamformers.

Almost all algorithms designed to mitigate the effects of moving interferers are "rapid adaptation" techniques, in which interferers are cancelled on a (nearly) snapshot-by-snapshot basis. One type of rapid adaptation is the so-called derivative-based updating (DBU) algorithm, in which interference motion is assumed to be approximately captured by the first derivative of a time-varying adaptive weight vector. A second type of rapid adaptation projects nulls on an estimate of the interference subspace at each snapshot. Two examples of the latter are eigenvector-based nulling, which estimates the interference subspace using data eigenvectors, and model-based nulling, which estimates the interference subspace using prior knowledge of the interferer location.
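A minimal sketch of the DBU idea described above, under invented parameters: each snapshot is augmented with a time-scaled copy of itself, so a single augmented MVDR solve yields weights that vary linearly over the interval. Because the constant-weight solution is a special case of the linear-in-time model, the DBU training residual can never exceed the conventional one.

```python
import numpy as np

rng = np.random.default_rng(2)
n, K = 8, 256
t = np.linspace(-1, 1, K)                  # normalized snapshot times

def steer(u):                              # ULA steering, u = sin(angle)
    return np.exp(1j * np.pi * np.arange(n) * u) / np.sqrt(n)

# Strong interferer sweeping across the aperture during the averaging interval
X = np.empty((n, K), complex)
for k in range(K):
    amp = rng.standard_normal() + 1j * rng.standard_normal()
    noise = 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    X[:, k] = 100 * amp * steer(0.5 + 0.4 * t[k]) + noise

v = steer(-0.4)                            # look direction

# Conventional MVDR: one constant weight vector from the average covariance
R = X @ X.conj().T / K
w = np.linalg.solve(R, v)
w /= np.vdot(v, w)
res_const = np.mean(np.abs(w.conj() @ X) ** 2)

# DBU: stack [x_k ; t_k * x_k] and model w(t) = w0 + t * w1
Xa = np.vstack([X, X * t])
Ra = Xa @ Xa.conj().T / K
va = np.concatenate([v, np.zeros(n)])
wa = np.linalg.solve(Ra, va)
wa /= np.vdot(va, wa)
res_dbu = np.mean(np.abs(wa.conj() @ Xa) ** 2)

print(res_dbu, res_const)   # DBU residual is never larger; smaller here
```

The augmented constraint vector [v; 0] keeps the time-varying weight distortionless toward the look direction at the interval center, while the t-scaled block lets the adapted null track the interferer's first-order motion.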

This paper examines the problem of moving interferers in passive sonar detection. First, conditions under which interference motion is a problem are established; these depend on array aperture, frequency, signal-to-interference-plus-noise ratio, interferer velocity, and the beamforming algorithm. Simulation results are presented which demonstrate the loss of performance in the standard adaptive beamformer when interferers are moving. Second, the rapid adaptation techniques (DBU and projection nulling) are applied to the same set of simulations to determine what improvement they provide and how sensitive that improvement is to algorithm assumptions. Finally, the rapid adaptation algorithms are applied to passive sonar towed array data and their performance improvement over a standard adaptive beamformer is quantified.


Adaptive Reverberation Mitigation for the MK-48 Torpedo

Nicholas Pulsone
MIT Lincoln Laboratory
244 Wood Street
Lexington, MA 02420
tel: (781) 981-0268
email: pulsone@ll.mit.edu

Presentation Not Available



Abstract The MK-48 heavyweight torpedo is undergoing the Common Broadband Advanced Sonar System (CBASS) upgrade effort. This effort is focused on improving counter-countermeasure (CCM) performance in the shallow waters of the littoral environment, increasing the operating bandwidth, and providing frequency-agile flexibility. Adaptive processing techniques will be a major part of this effort.

This work serves as an evaluation of wideband adaptive processing techniques for the MK-48 torpedo, leveraging processing techniques currently used in radar applications for the broadband active sonar problem. Wideband adaptive array processing techniques are well understood in the radar community and have been successfully implemented to suppress wideband RF jamming energy and backscatter from the local terrain, i.e., clutter. For example, space-time adaptive processing techniques are used in the airborne radar surveillance problem to exploit spatial-temporal characteristics inherent in clutter and interference. Many challenges in airborne radar surveillance are similar to problems in a torpedo active sonar system. Therefore, adaptive techniques that provide dramatic performance gains in radar are also potentially capable of providing significant gains in underwater sonar applications.

However, several challenges are evident in the sonar application to a degree not found in radar. For example, the characteristics of reverberation noise vary rapidly from sample to sample, so rapid adaptation techniques are needed to track the noise statistics. Furthermore, because of the rapid noise variation, there are typically limited training samples available to estimate the noise structure. In this case, adaptive training strategies may include the test data within the noise training set, and consequently target self-nulling issues arise. Robustness to target self-nulling can be improved with a number of signal processing techniques.

This work will include a description of a candidate adaptive processing architecture for reverberation cancellation and a performance evaluation with in-water data.


Adaptive Beamformers for the SURTASS TwinLine Towed Array

Walter Allensworth, Robert Zeskind, and Ben Eldridge
Applied Hydro-Acoustics Research, Inc.
Suite 135
15825 Shady Grove Road
Rockville, MD 20850
tel: (301) 840-9722
email: walt@aharinc.com



Ronald Warren and Jeffrey Strauss
Digital System Resources, Inc.
180 N. Riverview Drive
Suite 300
Anaheim Hills, CA 92808
tel: (714) 922-2115
email: rwarren@dsrnet.com



Not Available for Publication.


Application of Covariance Matrix Filtering for Adaptive Beamforming with Moving Interference

Bruce Newhall
The Johns Hopkins University
Applied Physics Laboratory
11100 Johns Hopkins Road
Laurel, MD 20723-6099
tel: (240) 228-4287
email: bruce.newhall@jhuapl.edu

Presentation Not Available


Abstract A covariance matrix filtering approach has been developed for adaptive beamforming for mobile sonars operating in an environment with moving interference from surface shipping. The approach is reviewed and its application to passive sonar towed arrays in shallow water is examined. An analytic expression for the ensemble mean covariance has been obtained. In practice, the parameters of each interferer are not known with sufficient precision to use this modeled ensemble mean as a basis for adaptive beamforming. Hence, techniques are developed to accurately estimate the ensemble mean from covariance data samples. The two primary motion parameters, bearing rate and range rate, are readily estimated in the covariance matrix frequency domain. For a uniform horizontal line array, Toeplitz averaging can provide increased robustness in this estimation process, since the ensemble mean must be Toeplitz. Once the motion parameters are known, the time-varying covariance matrix can be reliably estimated and adaptive beamforming employed. The techniques are applied to both realistic simulations and measured ocean acoustic data.
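The Toeplitz-averaging step mentioned above can be sketched directly: averaging a sample covariance along its diagonals is the least-squares projection onto the set of Toeplitz matrices, and since the ensemble mean for a uniform line array is Toeplitz, the projection can only move the estimate closer to it. The scene parameters below are invented for illustration.

```python
import numpy as np

def toeplitz_average(R):
    """Average a covariance estimate along its diagonals: the least-squares
    projection onto Toeplitz matrices."""
    n = R.shape[0]
    T = np.zeros_like(R)
    for k in range(-(n - 1), n):
        T += np.mean(np.diagonal(R, k)) * np.eye(n, k=k)
    return T

rng = np.random.default_rng(3)
n, K = 8, 50
v = np.exp(1j * np.pi * np.arange(n) * 0.3)            # ULA plane-wave response
R_true = np.eye(n) + 10 * np.outer(v, v.conj()) / n    # Hermitian Toeplitz mean

# Draw K snapshots from the ensemble and form the sample covariance
L = np.linalg.cholesky(R_true)
X = L @ (rng.standard_normal((n, K)) + 1j * rng.standard_normal((n, K))) / np.sqrt(2)
R_hat = X @ X.conj().T / K
T = toeplitz_average(R_hat)

# Projection can only reduce the distance to the (Toeplitz) ensemble mean
print(np.linalg.norm(T - R_true) <= np.linalg.norm(R_hat - R_true))   # True
```

The improvement follows from the projection argument alone, so it holds for any snapshot count; the paper's use of the structure within a motion-parameter estimation loop is, of course, more involved than this sketch.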


Robust Eigenvector Adaptive Beamforming for the TB-16 Array


Stephen Kogon and Vincent Premus
MIT Lincoln Laboratory
244 Wood Street
Lexington, MA 02420
tel: (781) 981-3275
email: kogon@ll.mit.edu

Thomas Phipps and Richard Gramann
ARL, University of Texas at Austin
P.O. Box 8209
Austin, TX 78713-8209
tel: (512) 835-3692
email: phipps@arlut.utexas.edu

Presentation Not Available


Abstract In most passive sonar applications, the goal of a tactical towed array is the detection of submarines via their own radiated broadband and narrowband acoustic signals. The primary source of interference for a passive sonar is commercial shipping. In the littoral environment, shipping density can be high, and loud merchant ships can often obscure the presence of quiet submarines. The loss in performance due to these loud interferers motivates the use of adaptive beamforming to facilitate the detection of weak target signals. Since passive sonar by its very nature implies that all signals are present at all times, an adaptive beamformer must contend with the lack of training data free of the signal of interest. When target signals are contained in the array covariance matrix, target self-nulling becomes a significant issue due to mismatch between the true and assumed array response vectors in the adaptive beamformer. Causes of mismatch include multipath, array errors, and pointing errors. To effectively detect target signals, any adaptive beamformer must therefore incorporate robustness measures that limit target self-nulling in the presence of mismatch.
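As a concrete illustration of WNG and one common robustness measure — diagonal loading, which may or may not be among the specific measures the paper adopts — the sketch below shows a mismatched MVDR beamformer self-nulling a target contained in its covariance, with loading raising the WNG and limiting the self-null. All parameters are invented.

```python
import numpy as np

n = 16
u = np.arange(n)
v_true = np.exp(1j * np.pi * u * 0.20) / np.sqrt(n)    # actual target response
v_model = np.exp(1j * np.pi * u * 0.22) / np.sqrt(n)   # mismatched assumed response

# No signal-free training: the covariance contains the (strong) target itself
R = np.eye(n) + 100 * np.outer(v_true, v_true.conj())

def mvdr(R, v, loading=0.0):
    """Distortionless MVDR weights, optionally with diagonal loading."""
    w = np.linalg.solve(R + loading * np.eye(len(v)), v)
    return w / np.vdot(v, w)

results = []
for loading in (0.0, 10.0):
    w = mvdr(R, v_model, loading)
    wng = 1.0 / np.real(np.vdot(w, w))        # white noise gain (at most n)
    target_gain = abs(np.vdot(w, v_true))**2  # power response to the true target
    results.append((loading, wng, target_gain))
    print(loading, wng, target_gain)
```

With no loading, the mismatched beamformer buys interference-style rejection of its own target at the price of a tiny WNG; loading constrains the weights toward the quiescent beam, raising WNG and restoring most of the target response, which is why WNG serves as a convenient self-nulling diagnostic.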

In this paper, we discuss the design methodology for a new adaptive beamformer aimed at optimizing target detection. The target array in this study is the TB-16 tactical submarine towed array. The goal of the adaptive beamformer is not only to null interference but also to limit targets in bearing extent as much as possible. An adaptive beamforming algorithm can be decomposed into several components: covariance matrix estimation, adaptive weight criteria, and robustness measures employed to minimize self-nulling. Another concern for an adaptive beamformer is the ability to rapidly adapt to highly dynamic environments, which mostly impacts covariance estimation. Directly linked to the covariance estimation task is the selection of degrees of freedom. A popular vehicle for determining degrees of freedom is principal components, provided by a singular value decomposition. Algorithms that fall into this category include Dominant Mode Rejection (Owsley) and Principal Components Inverse (Tufts). We extend the unifying framework presented by Cox (1998) to address covariance matrix estimation with principal components. The SVD also provides a convenient framework in which to provide robustness. Each component is individually interrogated for its effect on white noise gain (WNG), a common measure of robustness to target self-nulling. Finally, we address the issue of adaptive weight criteria. In order to provide beamshape protection, linear constraints are employed. Beamshape protection is a crucial concern when a limited number of beams are used.