## Abstract

The need to successfully navigate in the absence of GNSS has grown in recent years. In particular, light aircraft such as UAVs are growing in popularity for a variety of applications vulnerable to GPS denial. The research presented here develops a GPS-denied navigation scheme for light aircraft employing images formed from a synthetic aperture radar system. Past research has explored the utility of radar telemetry in GPS-denied systems. This research advances previous work by exploiting radar images to obtain range and cross-range position measurements. Images are formed using the Range-Doppler Algorithm, an efficient image formation algorithm ideal for the sometimes limited processing packages available to light aircraft. An inertial navigation and radar processing system is implemented using both real and simulated radar images to aid in estimating an aircraft’s state in a GPS-denied environment. The results show that navigation in the absence of GPS using synthetic aperture radar is feasible with converging and bounded estimation errors.

- GPS-denied
- inertial navigation
- Range-Doppler Algorithm
- synthetic aperture radar
- UAV

## 1 INTRODUCTION

The popularity of using light aircraft such as UAVs is growing in a variety of applications from military use to urban consumer use. Regardless of the application, precision navigation is necessary and usually dependent on a global navigation satellite system (GNSS). Unfortunately, such systems, GPS included, are easily denied. GPS can, for example, be jammed or spoofed either accidentally or intentionally. It can also be blocked or subject to multipath in the presence of natural or manmade structures obscuring the sky. The ease with which GPS can be denied motivates the field of GPS-denied navigation.

GPS-denied navigation can be accomplished in a variety of ways and typically involves an inertial navigation system supplemented by auxiliary sensors such as cameras, lidar, radar, etc. (Balamurugan et al., 2016). These supplementary sensors provide measurements that replace information lost by GPS denial. Some current research explores the feasibility of using synthetic aperture radar (SAR) as an auxiliary sensor to GPS. SAR is a method of processing radar data to form images of a landscape, typically from an aircraft or spacecraft. SAR is self-illuminating and can be used in all weather conditions and at any time of day. The research presented in this paper expands on previous research to further explore the utility of SAR in aiding GPS-denied navigation.

Specifically, images are created using the Range-Doppler Algorithm (RDA), which is an efficient Fourier-based image formation technique. Navigation measurements are extracted from images through a comparison with known target location data. Target comparison is performed via a 2D cross-correlation after autofocusing to remove image blur due to velocity and attitude state estimation errors. Measurements are applied to a local coordinate frame and take the form of range and cross-range measurements.

A 6DOF flight simulation and a radar simulation are implemented and verified via Monte Carlo analysis. Algorithms developed in the simulated system are applied to real navigation and radar data collected in Logan, Utah, by the Space Dynamics Laboratory at Utah State University. Results for both the simulated data and real data show successful GPS-denied navigation using RDA SAR images.

### 1.1 Literature Review

GPS is integral in most navigation systems. Without GPS, the position, velocity, and attitude estimates would accumulate error over time. GPS denial occurs in a variety of situations. For example, GPS is easily jammed or spoofed. GPS can also be denied by natural or manmade structures obscuring the sky. Without GPS, a vehicle must rely on information from other sensors to precisely navigate.

GPS-denied navigation is typically performed using accelerometers and gyroscopes coupled with a selection of supplementary sensors from which auxiliary measurements are collected. Examples of possible supplementary sensors include lidar, cameras, radar, range finders, etc. (Balamurugan et al., 2016). Auxiliary sensors attempt to supply knowledge sufficient to replace the knowledge lost by the lack of GPS.

This paper explores using synthetic aperture radar (SAR) as a supplementary sensor to aid in navigation. Using SAR as an aid to navigation is advantageous because radar pulses can penetrate cloud cover and the sensor actively illuminates the scene. As such, images of a landscape using SAR can be formed during stormy weather and at night. This presents an advantage over other imaging systems that employ cameras, which cannot image at night or during inclement weather, or lidar, which may have trouble in inclement weather depending on illumination wavelengths.

Methods for using radar during navigation have been a topic of research interest and have been generally classified as methods of either relative navigation or absolute navigation. Relative navigation is concerned with localizing the vehicle relative to some reference location. Scannapieco (2019) and Scannapieco et al. (2018) detected ground targets using a constant false alarm rate and tracked their position using the global nearest neighbor in range-compressed data. Quist and Beard (2016) exploited the fact that, for linear flight paths, prominent reflectors manifest themselves as hyperbolic curves in the range-compressed vs. time image (Niedfeldt et al., 2014). Features were tracked using a hyperbolic Hough transform, after which, successive pairs of range measurements were used to determine changes in horizontal position. Mostafa et al. (2018) developed a combined radar odometry and visual odometry method and demonstrated performance on real data.

Simultaneous localization and mapping (SLAM) is another example of relative navigation. Where odometry approaches contain only states of the vehicle, SLAM approaches augment the vehicle state vector with the position of the reflectors in an extended Kalman filter (EKF) framework. The SLAM approach was studied over a period of five years by a group from the Air Force Institute of Technology (Kauffman, 2009; Kauffman et al., 2010a, 2010b, 2011a, 2011b, 2013a, 2013b). They developed a SLAM architecture that utilized the range-compressed radar data to detect, track, then estimate both the vehicle and reflector states culminating in a hardware implementation with highly reflective targets. Quist et al. (2016) extended research relying on the Hough transform by implementing a more efficient recursive RANSAC algorithm for data association and the tracking of reflectors. Here, reflector states were included in the state vector, resulting in a SLAM framework that was demonstrated with real outdoor flight data.

Autofocus algorithms for synthetic aperture radar (SAR) images may also be considered as a form of relative navigation in which velocity errors are estimated to yield an improved SAR image (Samczynski, 2012; Samczynski & Kulpa, 2010).

In contrast with relative approaches, absolute navigation localizes the vehicle in a global coordinate system such as latitude, longitude, and altitude, and does not suffer from an unbounded growth of navigation errors. Terrain contour matching is one approach to absolute navigation which exploits variations in terrain height along the flight path of an aircraft to determine the location of the vehicle in a global reference frame. Variations in terrain can be measured from radar nadir bounces or from interferometric SAR. These methods have been explored by Hollowell (1990), Bergman et al. (1999), Nordlund and Gustafsson (2009), Long Zhao et al. (2015), and Jiang et al. (2018), each approach varying in complexity from grids of Kalman filters to marginalized particle filters.

Of note is the work done by Kim et al. (2018), where measurements from an interferometric radar altimeter (IRA) provided the range to the nearest point to the radar vehicle from the underlying terrain. This measurement is processed together with information from a digital elevation model (DEM) in a particle filter to predict a unique trajectory. Simulation results show less than 3 meters root-mean-square (RMS) of absolute estimation error, with the major factor in navigation accuracy being the accuracy of the DEM.

Absolute radar-aided navigation using SAR images has been explored by the SARINA project (Fabbrini et al., 2011; Greco et al., 2011a, 2011b, 2012; Nitti et al., 2015; Sjanic & Gustafsson, 2010). Assuming knowledge of the aircraft attitude, the SARINA project correlated salient features in the online-generated SAR images with geolocated features in a reference map.

Consistent with results from the SARINA project, research performed by Christensen et al. (2019) concluded that navigation estimation errors over time induce specific error characteristics in SAR images that can be exploited to perform GPS-denied navigation. This research demonstrates a potential method for how this exploitation can be achieved.

### 1.2 Contributions

The research in this paper is an extension of these previous projects exploring GPS-denied navigation. The results presented here build upon those results, leveraging SAR imagery to obtain precise navigation information. Specifically, a system was developed that forms SAR images in a GPS-denied environment, where accumulated navigation errors degrade the quality of the images. The poorly formed SAR images are then used to infer the navigation errors.

Furthermore, images were created using the Range-Doppler Algorithm, showing that navigation errors can be inferred using images that leverage efficient Fourier methods. This result suggests that SAR systems with lightweight processing packages, such as those on an unmanned aerial vehicle, could perform GPS-denied navigation.

The navigation architecture developed in this research is an absolute, SAR image-aided navigation approach that does not require the presence of significant terrain variations, nor knowledge of the aircraft attitude. A radar-aided, inertial navigation system (INS) based on an indirect EKF was developed. To bound the growth of INS errors in the range and cross-range directions, the EKF exploited the apparent shifts, after blur correction, of SAR images generated onboard the aircraft with respect to an error-free reference SAR image. The EKF was further aided by altimeter measurements to constrain the growth of vertical errors. A measurement model was proposed for the EKF that relates estimation errors in the aircraft position to range and cross-range shifts of the generated SAR image.

As a whole, this research strengthens the position of GPS-denied navigation using SAR imagery. A working system was demonstrated and validated using both real and simulated data as part of this work.

The remainder of the paper is organized as follows. Section 2 provides a tutorial on SAR image formation using the RDA paired with a mapdrift autofocus algorithm. Section 3 develops an inertial navigation system (INS) using SAR image measurements incorporated via an EKF. Section 4 validates the EKF via Monte Carlo analysis, presents the results of real and simulated data, and provides discussion. Section 5 concludes the document and suggests areas of future work.

## 2 SYNTHETIC APERTURE RADAR IMAGING

As the formation of synthetic aperture radar (SAR) images may be unfamiliar to the typical navigation engineer, this section contains a discussion of SAR image formation. The discussion ultimately leads to a derivation of the Range-Doppler Algorithm (RDA), the imaging algorithm employed in this research.

SAR is an imaging technique based on the transmission of radar pulses at intervals along a portion of an aircraft’s trajectory. This portion of the trajectory is called the synthetic aperture. Return pulses are collected along the synthetic aperture and are processed into an image. Fundamentally, forming a SAR image is a process of matched filtering the return signals in the range and cross-range dimensions. Matched filtering is essentially a correlation operation, where return data are correlated with a copy of the expected return pulse. Peaks in the filter output indicate high correlation and, thus, represent target returns of the original radar pulse.
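The matched-filtering-as-correlation idea can be sketched numerically. The following fragment is a toy illustration (a random stand-in pulse in noise, not radar data): correlating the received signal with a copy of the transmitted pulse produces a peak at the echo's delay.

```python
import numpy as np

# Matched filtering is essentially a correlation: correlate the received
# signal with a copy of the transmitted pulse and look for the peak.
rng = np.random.default_rng(0)
pulse = rng.standard_normal(64)            # stand-in for a transmitted pulse
delay = 100
rx = np.zeros(512)
rx[delay:delay + 64] = pulse               # echo of the pulse at the delay
rx += 0.1 * rng.standard_normal(512)       # receiver noise

# 'valid' correlation: output index k compares rx[k:k+64] against the pulse
corr = np.correlate(rx, pulse, mode="valid")
print(int(np.argmax(corr)))                # peak index recovers the delay
```

The peak of the correlator output sits at the true delay, which is the principle behind target returns appearing as peaks in the filtered radar data.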

Several algorithms can be used to process SAR data into images. Differences in image formation algorithms boil down to differences in how matched filtering is performed. For example, images can be formed using the Back-Projection Algorithm (BPA) where images are processed with a time domain matched filter on hyperbolic curves in the data. The time domain matched filter is flexible and can accommodate complicated flight trajectories, but it is computationally expensive and sensitive to navigation errors (Duersch & Long, 2015).

The RDA is an image formation algorithm that performs efficient matched filtering in the frequency domain where the filtering operation of convolution is equivalent to a complex point-wise multiplication of two vectors. Though the RDA is less expensive, its most basic implementation requires straight trajectories to form accurate images.

The research in this paper uses the RDA to form images and to derive SAR-based navigation measurements. This research is aimed toward the navigation of light aircraft such as UAVs. As such, the RDA was chosen due to its computational efficiency, which is attractive for the potentially smaller processing packages available to UAVs.

SAR images are dependent on vehicle position and velocity. This dependence means that errors in trajectory estimation will result in errors in the final SAR image. For example, position estimation errors cause imaged targets to shift in the image, and some types of velocity or attitude estimation errors cause imaged targets to both shift and blur (Lindstrom et al., 2022). In this research, the errors induced on the image by trajectory estimation errors are exploited to recover navigation information lost during GPS denial.

### 2.1 The Range-Doppler Algorithm

The RDA is a SAR image formation technique that takes advantage of the Fourier domain to perform efficient matched filtering and can be summarized into the following steps (Cumming & Wong, 2005):

1. Perform range compression on the backscattered radar signal
2. Compute the fast Fourier transform (FFT) in the azimuth dimension
3. Compensate for range cell migration
4. Perform azimuth compression
5. Return the data to the spatial domain using the inverse FFT
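The steps above can be sketched structurally as follows. This is a schematic pipeline on synthetic data, with illustrative function and argument names (the filters are assumed to be supplied already in their respective frequency domains, and RCMC is reduced to nearest-cell row shifts):

```python
import numpy as np

def rda_image(raw, g_freq, h_freq, rcmc_shift):
    """Schematic RDA pipeline; raw has shape (azimuth, range).

    g_freq     : range matched filter, already in the range-frequency domain
    h_freq     : azimuth matched filter, already in the Doppler domain
    rcmc_shift : per-Doppler-bin range shift (in cells) for RCMC
    """
    # 1) range compression: pointwise multiply in the range-frequency domain
    s_rc = np.fft.ifft(np.fft.fft(raw, axis=1) * g_freq, axis=1)
    # 2) azimuth FFT into the Range-Doppler domain
    S = np.fft.fft(s_rc, axis=0)
    # 3) range cell migration correction: re-index each Doppler row
    for k, d in enumerate(rcmc_shift):
        S[k] = np.roll(S[k], -int(round(d)))   # nearest-cell interpolation
    # 4) azimuth compression: pointwise multiply by the azimuth filter
    S = S * h_freq[:, None]
    # 5) inverse azimuth FFT back to the spatial domain
    return np.fft.ifft(S, axis=0)

# Smoke test: unit filters and zero migration leave the data unchanged
img = rda_image(np.ones((8, 16), dtype=complex),
                np.ones(16), np.ones(8), np.zeros(8))
print(img.shape)   # (8, 16)
```

Each stage maps one-to-one onto the numbered list; a real implementation would additionally handle interpolation accuracy in step 3 and windowing of the matched filters.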

The process of forming RDA images begins by transmitting radar pulses at multiple points along a vehicle trajectory. The backscattered pulses are collected by the radar receiver as raw SAR data. Radar pulses can have various structures but are commonly of a linear frequency modulated (LFM) form. LFM signals, or *chirp* signals, are characterized by an instantaneous frequency that varies linearly with time. Radar data is inherently complex and is represented by a complex exponential. The transmitted LFM signal *s*_{tx}(*t*) is therefore written as:

$$s_{tx}(t) = \mathrm{rect}\!\left(\frac{t}{T}\right)\exp\!\left(j2\pi f_0 t + j\pi K t^2\right) \tag{1}$$

where *f*_{0} is the initial frequency, *K* is the linear FM rate in Hertz per second, and *T* is the pulse duration. The bandwidth of the transmitted pulse is equal to the product of the linear FM rate and the pulse duration, which can be interpreted as the range of frequencies swept through during a single LFM pulse.

The return LFM signal is denoted *s*_{rx}(*t*). The return signal is written using a stop-and-hop assumption, which assumes the vehicle transmits and receives each radar pulse without traversing any distance:

$$s_{rx}(t) = A\, s_{tx}(t - \tau) \tag{2}$$

In this equation, *A* is the attenuation of the signal and *τ* is the time delay between the transmitted and received signal.

After collecting raw radar data, the data is fed through a matched filter in the range dimension to perform range compression. The matched filter is a time-reversed conjugate version of the transmitted signal *s*_{tx}(*t*), which is denoted *g*(*t*). For efficiency, the matched filter can be applied in the frequency domain. Let a capital letter denote a signal in the frequency domain. The result of matched filtering is denoted *s*_{rc}(*t*) and is equal to:

$$s_{rc}(t) = \mathrm{IFFT}_t\!\left\{S_{rx}(f)\, G(f)\right\} \tag{3}$$

Each output of the matched filter is collected together to form the aggregate range compressed data, which will be denoted by augmenting *s*_{rc}(*t*) to *s*_{rc}(*t, η*), where *η* is the variable for azimuth time, as opposed to range time denoted by *t*. Visually, range compressed data appears as a composition of several hyperbolic arcs overlaid on top of each other. Each hyperbolic arc represents a ground target. This is illustrated in Figure 1.
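Range compression per Equation (3) can be demonstrated on a synthetic echo; the chirp parameters here are illustrative:

```python
import numpy as np

# Range compression sketch: matched filtering in the frequency domain.
N = 1024
n = np.arange(256)
chirp = np.exp(1j * np.pi * 1e-3 * n**2)      # discrete LFM pulse
delay = 300
rx = np.zeros(N, dtype=complex)
rx[delay:delay + 256] = chirp                  # echo from a single target

# Matched filter: conjugate of the pulse spectrum, applied pointwise
G = np.conj(np.fft.fft(chirp, N))
s_rc = np.fft.ifft(np.fft.fft(rx) * G)

print(int(np.argmax(np.abs(s_rc))))            # compressed peak at the delay
```

The long chirp collapses to a sharp peak at the target's delay, which is why targets trace out thin hyperbolic arcs in the aggregate range compressed data.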

Azimuth compression begins by taking an FFT of the range compressed data in the azimuth dimension. The data in the range dimension is left in the time domain. The azimuth-frequency range-time domain is referred to as the Range-Doppler Domain. The data in the Range-Doppler Domain is denoted:

$$S_{rc}(t, f_\eta) = \mathrm{FFT}_\eta\!\left\{s_{rc}(t, \eta)\right\} \tag{4}$$

Note that the *η* subscript on FFT_{η} denotes the FFT in the azimuth dimension. Range cell migration correction must be applied before applying a matched filter to range-Doppler data. Range cell migration refers to the hyperbolic shape of the targets in the data. Each of these hyperbolic arcs must be reformatted to form a straight line within the data. Without correcting for range cell migration, the final image will blur, and resolution will decrease. Figure 2 shows the data in the Range-Doppler Domain before and after range cell migration correction.

Range cell migration correction can be performed by interpolating and re-indexing each row of the range-Doppler data according to a correction factor. The extent and method of interpolation required is system dependent and is typically determined during the design of the radar system. The amount of correction required for each range cell is denoted Δ*R*(*f*_{η}) and is equal to:

$$\Delta R(f_\eta) = \frac{\lambda^2 R_0 f_\eta^2}{8 V_r^2} \tag{5}$$

where *λ* is the wavelength of the radar pulse's center frequency, *R*_{0} is the range of closest approach to a given target, *f*_{η} is the azimuth Doppler frequency, and *V*_{r} is the forward velocity of the radar vehicle (Cumming & Wong, 2005). The above expression is derived from the hyperbolic range equation. The hyperbolic shape of the radar data can be described by:

$$R(\eta) = \sqrt{R_0^2 + V_r^2 \eta^2} \approx R_0 + \frac{V_r^2 \eta^2}{2 R_0} \tag{6}$$

where the approximation comes from a Taylor series expansion and an assumption that *R*_{0} ≫ *V*_{r}*η*. Similar to the FM rate, *K*, in the range dimension, the azimuth FM rate is denoted *K*_{a} and is approximated by:

$$K_a \approx \frac{2 V_r^2}{\lambda R_0} \tag{7}$$

The azimuth frequency *f*_{η} is related to the azimuth FM rate (Cumming & Wong, 2005) by:

$$f_\eta = -K_a \eta \tag{8}$$

Using Equations (7), (8), and the quadratic portion of Equation (6), the range cell migration correction factor can be derived as:

$$\Delta R(f_\eta) = \frac{V_r^2 \eta^2}{2 R_0} = \frac{\lambda^2 R_0 f_\eta^2}{8 V_r^2} \tag{9}$$
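The size of the migration correction in Equation (9) is easy to evaluate numerically; the system parameters below are illustrative, not those of the paper's radar:

```python
import numpy as np

# Evaluating the RCMC correction factor of Equations (5)/(9).
lam = 0.03        # wavelength [m]
R0 = 5000.0       # range of closest approach [m]
Vr = 50.0         # forward velocity [m/s]

Ka = 2 * Vr**2 / (lam * R0)                  # azimuth FM rate, Equation (7)
f_eta = np.linspace(-100.0, 100.0, 5)        # azimuth Doppler frequencies [Hz]

dR = lam**2 * R0 * f_eta**2 / (8 * Vr**2)    # migration per Doppler bin [m]
print(dR.max())
```

For these values the migration reaches roughly a couple of meters at the band edges, which is why each Doppler row must be re-indexed before azimuth compression.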

The data after the range cell migration correction is denoted *S*_{rcmc}(*t, f*_{η}). After range cell migration correction, a matched filter is applied in the azimuth dimension. The azimuth matched filter is denoted *h*(*η*) and is a phase-only filter equal to the following:

$$h(\eta) = \exp\!\left(-j\frac{2\pi V_r^2 \eta^2}{\lambda R_0}\right) \tag{10}$$

Using Equation (7), the azimuth matched filter can be rewritten as:

$$h(\eta) = \exp\!\left(-j\pi K_a \eta^2\right) \tag{11}$$

which is of a similar form to the range matched filter, which is the conjugate of Equation (1). The only differences between the range and azimuth matched filters are the FM rates, the time index being used, and the center frequency, which in the azimuth case is equal to zero.

Azimuth filtering is then the multiplication of the range cell migration corrected data and the Fourier transformed matched filter, *H*(*f*_{η}). The data after this step is denoted *S*_{ac}(*t, f*_{η}):

$$S_{ac}(t, f_\eta) = S_{rcmc}(t, f_\eta)\, H(f_\eta) \tag{12}$$

The final step in forming images using the RDA is to take the inverse FFT in the Doppler dimension of *S*_{ac}(*t, f*_{η}):

$$s_{ac}(t, \eta) = \mathrm{IFFT}_\eta\!\left\{S_{ac}(t, f_\eta)\right\} \tag{13}$$

Figure 3 shows a fully formed SAR image after azimuth compression. The image in this case is formed and displayed in the slant plane. To display the image in the ground plane, the SAR image can be projected down onto either a uniform plane representing the ground or a digital elevation map of the radar scene (Cumming & Wong, 2005).

### 2.2 Mapdrift Autofocus

For a variety of reasons (such as motion error, FM rate estimation error, assumption violations, etc.), a SAR image can be blurred or distorted upon completion of the image formation algorithm. This warrants the use of an autofocusing algorithm. Autofocusing algorithms modify the matched filters used in the image formation process to produce sharper images.

There are several methods of autofocusing images ranging from contrast maximization to a generalized phase gradient algorithm. Autofocusing algorithms can be classified as either magnitude- or phase-based and iterative or non-iterative (Blacknell et al., 1992; Evers, 2019). The research in this paper performs a magnitude-based non-iterative mapdrift technique derived from the text by Cumming and Wong (2005), which is sometimes called the Azimuth Misregistration Algorithm.

The mapdrift technique is based on multilook processing and azimuth misregistration. Multilook processing is a method of using only a portion of the collected radar data to form the image of a target. For example, one could split the return radar data into two sets and apply a separate matched filter to each set. If done correctly, doing so will create two distinct images of the same target area.

In the presence of phase errors in the matched filter or errors in the estimated azimuth FM rate, imaged targets can become blurred and unfocused. In the context of multilook processing, blurred targets result in a misregistration, or location shift, of a target between each of the separate looks. This misregistration can be used to estimate a phase correction for the matched filter and fix any blurring induced by phase errors.

For the research in this paper, azimuth FM rate errors were introduced into the SAR imaging system when forward velocity or attitude errors (which induce additional velocity errors) were present. Such errors result in a blurred image if not corrected by an autofocusing algorithm (Lindstrom et al., 2022). To express azimuth FM rate errors, let *K*_{amf} be the azimuth FM rate with an error included such that *K*_{amf} = *K*_{a} + Δ*K*_{a}, where Δ*K*_{a} is the amount of error present in *K*_{amf}.

The matched filter is adjusted to produce two different looks of the same target. This is done by selecting two disjoint frequency bands within the frequency range of the original matched filter. This means that rather than using *f*_{η} from Equation (8) in the matched filter, two separate frequencies are chosen, denoted *f*_{η,1} and *f*_{η,2}, where Δ*f*_{a} = *f*_{η,1} − *f*_{η,2} is the difference between the two frequencies. The amount of misregistration between the two separate looks is denoted Δ*η* and is defined as:

$$\Delta\eta = -\frac{\Delta K_a}{K_a^2}\,\Delta f_a \tag{14}$$

Δ*η* can be measured from the data by cross-correlating the two separate looks produced by the *f*_{η,1} and *f*_{η,2} matched filters. The peak of the cross-correlation shifts according to the amount of misregistration present.
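The misregistration measurement can be sketched on synthetic looks; the 1D azimuth profiles and parameters below are toy stand-ins for the two look images:

```python
import numpy as np

# Mapdrift sketch: measure the misregistration between two looks of the
# same target by cross-correlation; Equation (15) then gives the FM error.
az = np.arange(256)
target = np.exp(-0.5 * ((az - 128) / 3.0) ** 2)   # focused target response

shift = 4                                          # true misregistration [samples]
look1 = target
look2 = np.roll(target, shift)

corr = np.correlate(look2, look1, mode="full")
measured = int(np.argmax(corr)) - (len(az) - 1)    # peak offset = shift
print(measured)                                    # 4

# With the measured shift (scaled to azimuth time) as delta_eta, the FM
# rate error follows Equation (15): dKa = -Ka**2 * delta_eta / dfa
```

The peak of the full cross-correlation sits at an offset equal to the inter-look shift, which is the quantity Δ*η* used in Equation (15).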

By rearranging Equation (14), the azimuth FM rate error can be expressed as:

$$\Delta K_a = -\frac{K_a^2\,\Delta\eta}{\Delta f_a} \tag{15}$$

The matched filter from Equation (11) can be rewritten with the azimuth FM rate correction represented by Equation (15) as:

$$h(\eta) = \exp\!\left(-j\pi \left(K_a + \Delta K_a\right)\eta^2\right) \tag{16}$$

This new azimuth matched filter was used to perform azimuth compression. The result was a focused image that could then be used for the navigation measurements described in subsequent sections. Figure 4 provides a demonstration of the mapdrift autofocusing technique. The first subfigure is a blurred SAR image resulting from a forward velocity error. The second subfigure shows the same SAR image after mapdrift autofocus is performed.

## 3 NAVIGATION SYSTEM AND MONTE CARLO FRAMEWORK DEVELOPMENT

This section develops the framework of an aided inertial navigation system (INS) applicable to the short data collection times typical of SAR imagery. The INS framework utilizes concepts discussed in Farrell (2008), Grewal et al. (2020), and Savage et al. (2000), and is most directly related to the so-called *Tangent Frame* kinematic model (Farrell, 2008), with assumptions of constant gravity and a non-rotating Earth, both of which are often applicable over the short timeframe typical of an airborne SAR data collection.

In the development that follows, the truth and navigation states are defined with the associated differential equations. Consistent with an extended Kalman filter framework, the truth state differential equations are linearized about the estimated navigation state to derive the differential equations of the estimation errors, or error states. Models are presented for the SAR-based measurement and subsequently linearized. Finally, the covariance propagation, update, and navigation state correction steps are defined.

### 3.1 Truth and Navigation Models

The truth state vector is comprised of the true position (**p**^{n}), velocity (**v**^{n}), and attitude quaternion (**q**) of the vehicle, in addition to biases for the accelerometer **b**_{a}, gyroscope **b**_{g}, altimeter *b*_{h}, radar range *b*_{r}, and radar cross-range measurements *b*_{c}:

$$\mathbf{x} = \left[\,\left(\mathbf{p}^n\right)^T\ \ \left(\mathbf{v}^n\right)^T\ \ \mathbf{q}^T\ \ \mathbf{b}_a^T\ \ \mathbf{b}_g^T\ \ b_h\ \ b_r\ \ b_c\,\right]^T \tag{17}$$

where *n* and *b* refer to the navigation and body frame, respectively. The body frame origin is coincident with the navigation center of the inertial measurement unit (IMU), with the axes aligned and rotating with the vehicle body axes. The navigation frame is defined with the *x*-axis pointing north, the *y*-axis pointing east, and the *z*-axis pointing in the direction of gravity.

Since the multiplicity of attitude quaternion conventions is often a point of confusion, an explanation sufficient for this paper is provided. For more detail, the interested reader is referred to excellent discussions on the topic (Farrell, 2008; Savage et al., 2000; Zanetti, 2019).

The attitude quaternion is a typical choice in inertial navigation systems to efficiently represent the attitude of the vehicle without the singularities associated with Euler angles. Another singularity-free representation is the attitude transformation matrix, which is a 3 × 3 matrix that transforms a vector expressed in the *b* frame to the same vector expressed in the *n* frame (i.e., passive interpretation of a rotation [Zanetti, 2019]).

The quaternion is the generalization of the imaginary numbers to three dimensions and consists of a scalar term, *a*, and a vector term, [*b c d*]^{T}, in the *i*, *j*, and *k* directions, respectively:

$$q = a + b\,i + c\,j + d\,k \tag{18}$$

The scalar and vector portions are commonly packaged in a 4 × 1 vector:

$$\mathbf{q} = \begin{bmatrix} q \\ \boldsymbol{q} \end{bmatrix} \tag{19}$$

where *q* = *a* and **q** = [*b c d*]^{T}. Analytical relationships exist with the corresponding attitude transformation matrix (Savage et al., 2000). More importantly, the composition of transformation matrices:

$$C_a^c = C_b^c\, C_a^b \tag{20}$$

is equivalently performed using quaternions:

$$\mathbf{q}_a^c = \mathbf{q}_b^c \otimes \mathbf{q}_a^b \tag{21}$$

where quaternion multiplication is defined as:

$$\mathbf{q} \otimes \mathbf{p} = \begin{bmatrix} q\,p - \boldsymbol{q}^T\boldsymbol{p} \\ q\,\boldsymbol{p} + p\,\boldsymbol{q} + \boldsymbol{q} \times \boldsymbol{p} \end{bmatrix} \tag{22}$$

Analogous to the transposition rule for transformation matrices:

$$C_b^a = \left(C_a^b\right)^T \tag{23}$$

the direction of an attitude quaternion is reversed via its complex conjugate, or the negation of the vector portion:

$$\mathbf{q}_b^a = \left(\mathbf{q}_a^b\right)^{*} = \begin{bmatrix} q \\ -\boldsymbol{q} \end{bmatrix} \tag{24}$$

In contrast with transformation matrices, many quaternion conventions unfortunately exist (Zanetti, 2019). The work described in this manuscript utilizes the conventions in Savage et al. (2000).
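The quaternion operations of Equations (21)-(24) can be sketched compactly; the scalar-first packaging follows Equation (19), and the particular sign convention below is chosen for illustration rather than taken from Savage et al. (2000):

```python
import numpy as np

def qmult(q, p):
    """Quaternion product q ⊗ p, scalar part first."""
    qs, qv = q[0], q[1:]
    ps, pv = p[0], p[1:]
    return np.concatenate((
        [qs * ps - qv @ pv],
        qs * pv + ps * qv + np.cross(qv, pv),
    ))

def qconj(q):
    """Reverse the direction of the rotation (complex conjugate)."""
    return np.concatenate(([q[0]], -q[1:]))

# A 90-degree rotation about z composed with its inverse is the identity
half = np.deg2rad(90.0) / 2.0
qz = np.array([np.cos(half), 0.0, 0.0, np.sin(half)])
print(qmult(qz, qconj(qz)))   # ≈ [1, 0, 0, 0]
```

Composing a rotation with its conjugate recovering the identity quaternion mirrors the transposition rule for transformation matrices.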

The truth state differential equations of position, velocity, and attitude are modeled via kinematic relationships, while the sensor biases are modeled, per modern best practice, as first-order Gauss-Markov (FOGM) processes with associated time constants *τ* and process noise *w* (Carpenter & D'Souza, 2018):

$$\dot{\mathbf{p}}^n = \mathbf{v}^n, \qquad \dot{\mathbf{v}}^n = C_b^n\,\mathbf{a}^b + \mathbf{g}^n, \qquad \dot{\mathbf{q}}_b^n = \frac{1}{2}\,\mathbf{q}_b^n \otimes \begin{bmatrix} 0 \\ \boldsymbol{\omega}^b \end{bmatrix}, \qquad \dot{b}_i = -\frac{1}{\tau_i}\,b_i + w_i \tag{25}$$

The vector **g**^{n} is the local gravity expressed in the navigation frame. The vectors **a**^{b} and **ω**^{b} are the true accelerometer and gyroscope measurements, free of noise or bias. Realistic measurements from accelerometers and gyroscopes are corrupted by noise, **n**, and bias, **b**, and are denoted with a tilde:

$$\tilde{\mathbf{a}}^b = \mathbf{a}^b + \mathbf{b}_a + \mathbf{n}_a \tag{26}$$

$$\tilde{\boldsymbol{\omega}}^b = \boldsymbol{\omega}^b + \mathbf{b}_g + \mathbf{n}_g \tag{27}$$
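The FOGM bias model used in Equation (25) is easily simulated in discrete time; the time constant and variance below are arbitrary illustrative values:

```python
import numpy as np

# Discrete simulation of a first-order Gauss-Markov sensor bias.
rng = np.random.default_rng(1)
tau = 300.0          # time constant [s]
sigma = 0.05         # steady-state standard deviation
dt = 0.01            # IMU sample period [s]

phi = np.exp(-dt / tau)                # exact discrete-time transition
qd = sigma**2 * (1.0 - phi**2)         # discrete noise variance
b = np.zeros(100)
for k in range(99):
    b[k + 1] = phi * b[k] + np.sqrt(qd) * rng.standard_normal()
print(b[-1])
```

The exponential transition and matched noise variance keep the discrete process consistent with the continuous FOGM model, so the bias wanders but remains bounded at the steady-state variance.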

The navigation states, distinguished from the truth states by a hat decoration (e.g., $\hat{\mathbf{p}}^n$), are defined identically to the truth states but are propagated using the compensated, noisy accelerometer and gyroscope measurements and without the process noise for the FOGM bias states:

$$\dot{\hat{\mathbf{p}}}^n = \hat{\mathbf{v}}^n, \qquad \dot{\hat{\mathbf{v}}}^n = \hat{C}_b^n\left(\tilde{\mathbf{a}}^b - \hat{\mathbf{b}}_a\right) + \mathbf{g}^n, \qquad \dot{\hat{\mathbf{q}}}_b^n = \frac{1}{2}\,\hat{\mathbf{q}}_b^n \otimes \begin{bmatrix} 0 \\ \tilde{\boldsymbol{\omega}}^b - \hat{\mathbf{b}}_g \end{bmatrix}, \qquad \dot{\hat{b}}_i = -\frac{1}{\tau_i}\,\hat{b}_i \tag{28}$$
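One propagation step of the navigation state in Equation (28) can be sketched as a simple Euler update; the function and variable names are illustrative, not flight code:

```python
import numpy as np

def skew(a):
    """Cross-product (×) operator as a matrix."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def propagate(p, v, C_bn, a_meas, w_meas, b_a_hat, b_g_hat, g_n, dt):
    # bias-compensated specific force rotated into the navigation frame
    a_n = C_bn @ (a_meas - b_a_hat) + g_n
    p_new = p + v * dt
    v_new = v + a_n * dt
    # attitude update from the compensated angular rate (small-angle)
    C_new = C_bn @ (np.eye(3) + skew(w_meas - b_g_hat) * dt)
    return p_new, v_new, C_new

# A level, stationary vehicle: the accelerometer senses the reaction to
# gravity, so the propagated velocity stays zero (z-axis points down).
g_n = np.array([0.0, 0.0, 9.81])
p, v, C = propagate(np.zeros(3), np.zeros(3), np.eye(3),
                    np.array([0.0, 0.0, -9.81]), np.zeros(3),
                    np.zeros(3), np.zeros(3), g_n, 0.01)
print(v)   # [0. 0. 0.]
```

With perfect bias estimates and a stationary vehicle, the gravity term cancels the sensed specific force, illustrating how bias errors in the real system translate directly into velocity drift.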

### 3.2 Linear Error Model

The estimation error, or error state vector, is comprised of perturbations in position, δ**p**^{n}, velocity, δ**v**^{n}, attitude, δ**θ**^{n}, and sensor biases, δ**b**:

$$\delta\mathbf{x} = \left[\,\left(\delta\mathbf{p}^n\right)^T\ \ \left(\delta\mathbf{v}^n\right)^T\ \ \left(\delta\boldsymbol{\theta}^n\right)^T\ \ \delta\mathbf{b}_a^T\ \ \delta\mathbf{b}_g^T\ \ \delta b_h\ \ \delta b_r\ \ \delta b_c\,\right]^T \tag{29}$$

Attitude in the linear error model is represented as a three-component vector rather than a quaternion. The vector *δθ ^{n}* contains the north, east, and down rotation angle errors synonymous with roll, pitch, and yaw angle errors, respectively.

For all but the attitude states, the difference is defined by a simple subtraction. For the attitude quaternion, the difference is defined by a quaternion product:

$$\delta\mathbf{p}^n = \mathbf{p}^n - \hat{\mathbf{p}}^n \tag{30}$$

$$\delta\mathbf{v}^n = \mathbf{v}^n - \hat{\mathbf{v}}^n \tag{31}$$

$$\delta\mathbf{b}_i = \mathbf{b}_i - \hat{\mathbf{b}}_i \tag{32}$$

$$\delta\mathbf{q} = \mathbf{q}_b^n \otimes \left(\hat{\mathbf{q}}_b^n\right)^{*} \tag{33}$$

where the subscript *i* on δ*b*_{i} is an index indicating any of the various bias terms. It is also convenient to define the attitude errors in terms of the true and estimated transformation matrices:

$$C_b^n = \left(I_3 - \left[\delta\boldsymbol{\theta}^n \times\right]\right)\hat{C}_b^n \tag{34}$$

where the cross operator, ×, is equivalent to a cross-product and is defined formally as:

$$\left[\mathbf{a}\times\right] = \begin{bmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{bmatrix}$$

Linearization of Equation (25) about the estimated state results in the error state differential equation:

$$\delta\dot{\mathbf{x}} = F\,\delta\mathbf{x} + B\,\mathbf{w} \tag{35}$$

where the error state dynamics matrix *F* is defined as:

$$F = \begin{bmatrix}
0_{3\times3} & I_3 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\
0_{3\times3} & 0_{3\times3} & F_{v\theta} & F_{vb} & 0_{3\times3} & 0_{3\times3} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & F_{\theta b} & 0_{3\times3} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times3} & -\tfrac{1}{\tau_a} I_3 & 0_{3\times3} & 0_{3\times3} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & -\tfrac{1}{\tau_g} I_3 & 0_{3\times3} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & F_b
\end{bmatrix} \tag{36}$$

$$F_{v\theta} = \left[\left(\hat{C}_b^n\left(\tilde{\mathbf{a}}^b - \hat{\mathbf{b}}_a\right)\right)\times\right] \tag{37}$$

$$F_{vb} = F_{\theta b} = -\hat{C}_b^n \tag{38}$$

$$F_b = \mathrm{diag}\!\left(-\tfrac{1}{\tau_h},\ -\tfrac{1}{\tau_r},\ -\tfrac{1}{\tau_c}\right) \tag{39}$$

The process noise coupling matrix *B* is given by:

$$B = \begin{bmatrix} B_{\mathrm{imu}} & 0_{9\times9} \\ 0_{9\times6} & I_9 \end{bmatrix} \tag{40}$$

$$B_{\mathrm{imu}} = \begin{bmatrix} 0_{3\times3} & 0_{3\times3} \\ -\hat{C}_b^n & 0_{3\times3} \\ 0_{3\times3} & -\hat{C}_b^n \end{bmatrix} \tag{41}$$

The vector **w** is the noise vector and is comprised of accelerometer and gyroscope measurement noise as well as the process noise driving the sensor bias states:

$$\mathbf{w} = \left[\,\mathbf{n}_a^T\ \ \mathbf{n}_g^T\ \ \mathbf{w}_a^T\ \ \mathbf{w}_g^T\ \ w_h\ \ w_r\ \ w_c\,\right]^T \tag{42}$$

The error state covariance matrix propagates according to:

$$\dot{P} = F P + P F^T + B Q B^T \tag{43}$$

where the *Q* matrix is the power spectral density of the noise vector **w**, defined as:

$$Q = \mathrm{diag}\!\left(\sigma_{vrw}^2 I_3,\ \sigma_{arw}^2 I_3,\ \tfrac{2\sigma_a^2}{\tau_a} I_3,\ \tfrac{2\sigma_g^2}{\tau_g} I_3,\ \tfrac{2\sigma_h^2}{\tau_h},\ \tfrac{2\sigma_r^2}{\tau_r},\ \tfrac{2\sigma_c^2}{\tau_c}\right) \tag{44}$$

where σ_{vrw} and σ_{arw} are defined, respectively, as the velocity random walk and angular random walk of the accelerometer and gyroscope. Each σ_{i}^{2} term is the steady-state variance of the FOGM model governing each bias state, and each *τ*_{i} term is the time constant of the FOGM models.
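In a discrete implementation, Equation (43) is typically propagated with a first-order transition matrix. The sketch below uses random placeholder matrices of the correct structure, not the filter's actual *F*, *B*, and *Q*:

```python
import numpy as np

# First-order discrete covariance propagation of the continuous equation
# P' = F P + P F^T + B Q B^T, via Phi ≈ I + F*dt and Qd ≈ B Q B^T * dt.
rng = np.random.default_rng(2)
n, m = 6, 3
F = rng.standard_normal((n, n)) * 0.1
B = rng.standard_normal((n, m))
Q = np.diag(rng.uniform(0.1, 1.0, m))   # PSD of the noise vector w
P = np.eye(n)
dt = 0.01

Phi = np.eye(n) + F * dt                # first-order state transition
P = Phi @ P @ Phi.T + B @ Q @ B.T * dt  # one covariance time step

# Covariance must remain symmetric positive definite
print(np.allclose(P, P.T))   # True
```

Preserving symmetry and positive definiteness at each step is essential; production filters often re-symmetrize *P* or use factorized (e.g., square-root) forms for the same reason.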

### 3.3 Measurement Modeling

This section discusses the measurements provided to the EKF developed in the previous section. In general, the measurement model used to design the EKF differs from the true measurement model used to synthesize measurements. In the case of the altimeter, the truth and design models were identical and modeled as the vertical position of the vehicle corrupted by bias and noise:

$$\tilde{h} = -p_z^n + b_h + n_h \tag{45}$$

where the sign accounts for the downward-pointing *z*-axis of the navigation frame.

In the case of the SAR image-derived measurements, the truth and design models differ; a detailed discussion regarding each model is included in what follows. This measurement formulation assumes a roughly straight flight. For the vast majority of SAR systems, including the system used in this research, this assumption is very likely to be true during data collection. Furthermore, the measurement model assumes a local coordinate system centered on an origin located within the trajectory of the current SAR image’s subaperture. For extended flights, the local coordinate system changes with the formation of each new SAR subaperture.

From each SAR image, range and cross-range measurements were taken. The range and cross-range measurements provided information on apparent shifts of the in-flight-generated SAR image in the range and cross-range directions as compared to reference information of the imaged area. For the purposes of this research, the reference information is in the form of SAR images with known geolocation data.

In calculating range and cross-range shifts, this research assumed the availability of a geo-referenced SAR image transformed to the slant plane containing targets of interest or distinctive features. Methods for identifying suitable scenes in a geo-referenced SAR image database and subsequent transformation to the slant plane are outside the scope of this research, which instead focuses on the measurement formulation and its incorporation into the EKF. The interested reader is referred to the many publications on this topic (Bhanu & Lin, 2003; Gao et al., 2009; Hounam & Wagel, 2001; Srinivas et al., 2014; Zhao & Principe, 2001).

In this research, the reference image was formed in the slant plane using the truth trajectory of the vehicle and the methods outlined in Section 2. For the simulation-based results, the truth trajectory was easily obtained from the truth state of the simulation. Simulated RDA SAR images were generated from backscattered radiofrequency signals of randomly placed point targets on the ground. For demonstration on the real flight data, the truth trajectory was obtained by post-processing the IMU, GPS, and base station data using the NovAtel Waypoint Explorer software. The result was a trajectory estimate accurate to approximately 2 cm which was used to generate the reference imagery.

The range and cross-range shifts of the in-flight-generated SAR image were computed using a two-dimensional cross-correlation (Bourke, 1996) with the reference SAR image defined by:

$$ \left( X \star Y \right)(i,j) = \sum_{m=1}^{M} \sum_{n=1}^{N} X(m,n)\, \overline{Y}(m-i,\, n-j) \tag{46} $$

where *X* and *Y* are two *M* × *N* images and the overbar denotes complex conjugation. The location of the peak of the cross-correlation was used to find the offset from the true target location. The offsets were used to calculate the range and cross-range measurements. This process is illustrated in Figure 5.
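The peak search can be sketched with an FFT-based two-dimensional cross-correlation. The function name, image sizes, and single-point-target scene below are illustrative, not the authors' implementation:

```python
import numpy as np

def crosscorr_shift(X, Y):
    """Estimate the (row, col) shift of image X relative to reference Y
    from the peak of their 2-D cross-correlation, computed via FFTs."""
    M, N = X.shape
    # Zero-pad to avoid wrap-around, then correlate: IFFT{ FFT(X) * conj(FFT(Y)) }
    C = np.fft.ifft2(np.fft.fft2(X, (2 * M, 2 * N)) *
                     np.conj(np.fft.fft2(Y, (2 * M, 2 * N))))
    i, j = np.unravel_index(np.argmax(np.abs(C)), C.shape)
    # Map circular peak indices to signed pixel offsets
    if i >= M:
        i -= 2 * M
    if j >= N:
        j -= 2 * N
    return int(i), int(j)

# Toy scene: one point target shifted by 3 range and -2 cross-range pixels
ref = np.zeros((64, 64)); ref[30, 40] = 1.0
img = np.zeros((64, 64)); img[33, 38] = 1.0
print(crosscorr_shift(img, ref))  # → (3, -2)
```

In practice the complex images would contain many scatterers, and the magnitude of the correlation surface would be interpolated around the peak for sub-pixel accuracy.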

It is important to note that the developed method does not depend on the presence of a particular target in the SAR image, thus avoiding the complexity of target recognition in SAR imagery as done by Zhao and Principe (2001) or Sun et al. (2007). Rather, the in-flight-generated SAR image was cross-correlated with a geolocated reference image to produce the range and cross-range measurements. The accuracy of the derived measurements was dependent on the reference image content. Reference images with distinct features produce more accurate results because the cross-correlation produces a more distinctive peak without ambiguity.

In general, navigation errors cause both shifting and blurring of the in-flight-generated SAR image (Lindstrom et al., 2022). While shifting is easily detected using the above-mentioned cross-correlation, the presence of blurring can yield inaccurate results. The autofocusing algorithm described in Section 2 was, therefore, performed prior to cross-correlation to remove blurring and increase the accuracy of the derived measurement.

The design model of the shift measurement in the range direction is:

$$ \tilde{z}_r = \left\lVert \mathbf{p}_{targ} - \mathbf{p}^{n}(\eta_0) \right\rVert + b_r + n_r \tag{47} $$

where $\mathbf{p}_{targ}$ is the known location of the imaged target and $\mathbf{p}^{n}(\eta_0)$ is the position of the aircraft at the time of closest approach, $\eta_0$. The time of closest approach is the point in time during flight when the aircraft is closest to the target position (i.e., the time of minimum range between the aircraft and target). The measurement was corrupted by bias $b_r$ and noise $n_r$.

The design model of the cross-range shift measurement was defined as the along-track distance traveled by the aircraft from a reference point $\mathbf{p}_{ref}$, corrupted by bias $b_c$ and noise $n_c$:

$$ \tilde{z}_c = \hat{\mathbf{u}}^{\mathsf{T}} \left( \mathbf{p}^{n}(\eta_0) - \mathbf{p}_{ref} \right) + b_c + n_c \tag{48} $$

where:

$$ \hat{\mathbf{u}} = \frac{\mathbf{p}^{n}(\eta_0) - \mathbf{p}^{n}(\eta_{apt})}{\left\lVert \mathbf{p}^{n}(\eta_0) - \mathbf{p}^{n}(\eta_{apt}) \right\rVert} \tag{49} $$

represents the along-track, or cross-range, direction of the aircraft and $\mathbf{p}^{n}(\eta_{apt})$ represents the position of the aircraft at the beginning of the synthetic aperture. While no constraints were placed on the reference point $\mathbf{p}_{ref}$, the numerical value during an individual measurement was chosen to be equal to the position of the vehicle at the beginning of the synthetic aperture. It follows that when $\mathbf{p}_{ref} = \mathbf{p}^{n}(\eta_{apt})$, the equation for $\tilde{z}_c$ collapses to a form similar to Equation (47).

As part of the measurement processing in the Kalman update, the altitude, range, and cross-range measurements must be predicted using state estimates. The form of the estimated measurement equations is nearly identical to the design models with the exception of the unknown measurement noise.
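As a concrete sketch, the two SAR-derived design models (slant range at closest approach, and along-track distance from a reference point) can be coded directly. The function names, NED geometry, and noise-free values below are hypothetical:

```python
import numpy as np

def range_measurement(p_targ, p_ca, b_r=0.0, n_r=0.0):
    # Slant range from the aircraft position at closest approach to the target
    return float(np.linalg.norm(p_targ - p_ca)) + b_r + n_r

def cross_range_measurement(u_hat, p_ca, p_ref, b_c=0.0, n_c=0.0):
    # Along-track distance traveled from the reference point,
    # projected onto the track direction u_hat
    return float(u_hat @ (p_ca - p_ref)) + b_c + n_c

# Hypothetical NED geometry: target 1 km east, aircraft 300 m north of p_ref
p_targ = np.array([0.0, 1000.0, 0.0])
p_ca = np.array([300.0, 0.0, -500.0])   # position at closest approach
p_ref = np.array([0.0, 0.0, -500.0])    # start of the synthetic aperture
u_hat = np.array([1.0, 0.0, 0.0])       # along-track (north) unit vector

z_r = range_measurement(p_targ, p_ca)              # ≈ 1157.6 m
z_c = cross_range_measurement(u_hat, p_ca, p_ref)  # 300.0 m
```

Note that when the reference point is taken as the start of the aperture, the cross-range measurement reduces to the distance flown along the track, consistent with the design model above.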

$$ \hat{z}_h = \hat{p}_D + \hat{b}_h \tag{50} $$

$$ \hat{z}_r = \left\lVert \mathbf{p}_{targ} - \hat{\mathbf{p}}^{n}(\eta_0) \right\rVert + \hat{b}_r \tag{51} $$

$$ \hat{z}_c = \hat{\mathbf{u}}^{\mathsf{T}} \left( \hat{\mathbf{p}}^{n}(\eta_0) - \mathbf{p}_{ref} \right) + \hat{b}_c \tag{52} $$

To process a Kalman update, the EKF must have a model of the first-order sensitivity of the measurement to estimation errors:

$$ \delta z = \tilde{z} - \hat{z} = H\, \delta\mathbf{x} + \gamma \tag{53} $$

where *γ* is the measurement noise of the measurement under consideration. The quantity *H* is the sensitivity matrix and is derived by taking the Jacobian of design models in Equations (45), (47), and (48), and evaluating it at the current state estimate. The sensitivities for each measurement are as follows:

$$ \frac{\partial \tilde{z}_h}{\partial \mathbf{p}^{n}} = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} \tag{54} $$

$$ \frac{\partial \tilde{z}_r}{\partial \mathbf{p}^{n}} = -\,\hat{\mathbf{u}}_r^{\mathsf{T}} \tag{55} $$

$$ \frac{\partial \tilde{z}_c}{\partial \mathbf{p}^{n}} = \hat{\mathbf{u}}^{\mathsf{T}} \tag{56} $$

where $\hat{\mathbf{u}}_r = \dfrac{\mathbf{p}_{targ} - \hat{\mathbf{p}}^{n}(\eta_0)}{\left\lVert \mathbf{p}_{targ} - \hat{\mathbf{p}}^{n}(\eta_0) \right\rVert}$ is the unit line-of-sight vector from the aircraft to the target.
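The position sensitivity of the range measurement (the negated unit line-of-sight vector) can be verified numerically with a central finite-difference check; the geometry values below are illustrative:

```python
import numpy as np

def slant_range(p_ac, p_targ):
    # Range design model evaluated at an aircraft position
    return np.linalg.norm(p_targ - p_ac)

p_targ = np.array([0.0, 1000.0, 0.0])
p_ac = np.array([300.0, 0.0, -500.0])

# Analytic Jacobian of the range w.r.t. aircraft position: -u_r^T,
# the negated unit line-of-sight vector from aircraft to target
u_r = (p_targ - p_ac) / slant_range(p_ac, p_targ)
H_analytic = -u_r

# Central finite-difference approximation of the same Jacobian
eps = 1e-6
H_fd = np.array([
    (slant_range(p_ac + eps * e, p_targ) -
     slant_range(p_ac - eps * e, p_targ)) / (2 * eps)
    for e in np.eye(3)
])
print(np.allclose(H_fd, H_analytic, atol=1e-6))  # True
```

The same perturbation test applies to the altimeter and cross-range rows of the sensitivity matrix.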

It is important to note that in the derivation of Equation (55), two approximations were made, the validity of which is confirmed in the Monte Carlo simulations. First, the dependence of the unit direction vector on the estimated state was neglected and the corresponding Jacobian was not evaluated. In the Monte Carlo verification that follows, however, this quantity was evaluated using the estimated change in position, so as to not inform the filter of truth state values. Second, the approximation was made that the position errors at the time of closest approach were approximately equal to the position errors at the end of the synthetic aperture, i.e., $\delta \mathbf{p}^{n}(t_{toc}) \approx \delta \mathbf{p}^{n}(t)$. While this approximation is valid across the small synthetic apertures considered in this study, generalization to larger apertures requires explicit modeling of the time dependence:

$$ \delta \mathbf{p}^{n}(t_{toc}) = \Phi(t_{toc},\, t)\, \delta \mathbf{p}^{n}(t) \tag{57} $$

via the state transition matrix $\Phi$ and a modification of the Kalman update to account for the correlation between previous state errors and measurement noise (Brown & Hwang, 2012).

### 3.4 Measurement Update

When a measurement becomes available from the altimeter or radar imaging system, a Kalman update is performed. To perform a Kalman update, the Kalman gain is first calculated:

$$ K = P^{-} H^{\mathsf{T}} \left( H P^{-} H^{\mathsf{T}} + R \right)^{-1} \tag{58} $$

where $P^{-}$ is the error state covariance before the Kalman update and $R$ is the covariance of the noise on the available measurement. The matrix $H$ varies between $H_h$ and a concatenation of $H_r$ and $H_c$, depending on the type of measurement available.

The Kalman gain is then used to update the error state covariance using the Joseph form:

$$ P^{+} = \left( I - KH \right) P^{-} \left( I - KH \right)^{\mathsf{T}} + K R K^{\mathsf{T}} \tag{59} $$

The Kalman gain is used together with the measurement residual to update the estimate of the error state. The measurement residual is defined as the difference between the true and estimated measurements. The quantity contained inside the brackets in Equation (58) is referred to as the residual covariance. The update of the error state estimate then becomes:

$$ \delta \hat{\mathbf{x}}^{+} = \delta \hat{\mathbf{x}}^{-} + K \left( \tilde{\mathbf{z}} - \hat{\mathbf{z}} \right) \tag{60} $$

After updating the estimate of the error state vector, the navigation state is corrected according to a rearrangement of Equations (30) through (33):

$$ \hat{\mathbf{x}}^{+} = \hat{\mathbf{x}}^{-} + \delta \hat{\mathbf{x}}^{+} \tag{61} $$
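The update sequence of this subsection (gain, Joseph-form covariance update, error state correction) amounts to a few lines of linear algebra. The following sketch uses a toy two-state system with a scalar altimeter-like measurement; all numerical values are illustrative:

```python
import numpy as np

def kalman_update(P, H, R, dx, residual):
    """Joseph-form measurement update: gain, covariance, error state."""
    S = H @ P @ H.T + R                      # residual covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    I_KH = np.eye(P.shape[0]) - K @ H
    # Joseph form preserves symmetry and positive semi-definiteness
    P_new = I_KH @ P @ I_KH.T + K @ R @ K.T
    dx_new = dx + K @ residual               # error state update
    return P_new, dx_new

# Toy example: scalar measurement of the first state
P = np.diag([9.0, 4.0])
H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])
dx = np.zeros(2)
residual = np.array([2.0])
P_new, dx_new = kalman_update(P, H, R, dx, residual)
print(dx_new)           # first component pulled toward the residual
print(P_new[0, 0])      # first-state variance reduced from 9.0
```

Stacking $H_r$ and $H_c$ into a single $H$ with a block-diagonal $R$ handles the case in which both SAR measurements arrive together.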

## 4 RESULTS

The system described above was implemented on both simulated and real data. Results of both types of data are given in the subsections that follow.

### 4.1 Simulated Data

A 6DOF inertial navigation and radar processing simulation was created to test the feasibility of the work described in the previous sections. The true flight trajectory was a straight and level flight flying northward. The radar scene was a random field of radar scatterers dense enough to ensure that a few targets would appear in every SAR image. The flight was 50 s long and was split into 50 one-second subapertures, with each subaperture formed during straight flight. The plot in Figure 6 illustrates the general geometry of the simulation. This figure is similar in construct to Figure 1: the blue line on the left represents the flight trajectory, and the ground targets are placed on the right side of the image. In this case, the targets are represented by a red bounding box inside of which the ground targets are randomly placed.

The INS was initialized approximately equal to the truth states, but with estimation errors consistent with the initial covariance matrix defined in Table 1. The values in the initial covariance matrix were produced by the trajectory estimation of the data in the presence of GPS measurements. Such a setup represents a scenario in which GPS measurements were available for a portion of the flight and were subsequently denied, making SAR measurements necessary.

Once initialized, the simulation proceeded as follows:

1. The INS used noisy accelerometer and gyroscope measurements to propagate the navigation states according to Equation (28). IMU noise parameters are specified in Table 2.
2. The Kalman filter was updated by noisy SAR and altimeter measurements. The noise parameters for the filter measurements are, again, specified in Table 2.
3. Throughout the subaperture, the flight computer collected radar data with the parameters specified in Table 3. After one second of flight, a SAR image was formed using the RDA and the estimated vehicle state.
4. The range and cross-range shift measurements were extracted from the in-flight and reference SAR images and processed in the EKF alongside an altimeter measurement.
5. Finally, the true trajectory and the estimated trajectory produced by the EKF were compared and estimation errors were calculated.

Within the Monte Carlo framework, Steps 1 through 5 were repeated 200 times.
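The trial structure above can be skeletonized as follows. This is a hypothetical one-dimensional stand-in for the authors' 6DOF simulation, with made-up noise values, intended only to show the propagate/update/compare shape of a Monte Carlo trial:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_trial(n_subapertures=50):
    """One Monte Carlo trial: propagate with a noisy rate measurement
    (IMU stand-in), update with a noisy position-like measurement
    (SAR/altimeter stand-in), and return the final estimation error."""
    x_true, x_est, P = 0.0, 0.0, 1.0
    q, r = 0.01, 1.0  # process / measurement noise variances (assumed)
    for _ in range(n_subapertures):
        # Steps 1-2: propagate truth and estimate; covariance grows
        u = 1.0
        x_true += u
        x_est += u + rng.normal(0.0, np.sqrt(q))
        P += q
        # Steps 3-4: form a measurement and apply a scalar Kalman update
        z = x_true + rng.normal(0.0, np.sqrt(r))
        K = P / (P + r)
        x_est += K * (z - x_est)
        P *= (1.0 - K)
    # Step 5: compare estimated and true trajectories
    return x_est - x_true

errors = [run_trial() for _ in range(200)]
print(len(errors))  # 200 trials, as in the Monte Carlo analysis
```

The ensemble of final errors would then be compared against the filter's covariance to check that the errors are zero mean and properly bounded.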

The results of the simulated Monte Carlo analysis are provided in Figures 7 through 10. Figure 7 shows the residuals for the altimeter, range, and cross-range measurements. It is observed that the residuals were zero mean, white, and properly bounded by the EKF’s estimate of the residual covariance. This result suggests that the design models developed in Section 3.3 adequately captured the sensitivities and uncertainties of the measurements under consideration.

Figure 8 shows the estimation error of the aircraft position, calculated as the difference between the true and estimated trajectories. Figures 9 and 10 show similar information for the aircraft velocity and attitude, respectively. In all cases, the estimation errors were zero mean and properly bounded by the EKF’s estimate of uncertainty.

The EKF was able to either maintain the initial level of errors or substantially improve the estimates, as was the case for north (along-track) and down positions. Similar results were observed for velocity. In the case of attitude, the horizontal components were observable, as indicated by the slight decrease in uncertainty, while the vertical component was not. The lack of heading observability is common and expected for a straight and level flight with measurements sensitive only to position.

In summary, the simulation results show that the proposed navigation and radar system are capable of estimating the aircraft state in the absence of GPS. The measurement residuals were white and zero mean. The estimation error converged and remained bounded over the course of the trajectory. The position was estimated in all dimensions to less than 3 meters. Velocity was estimated to within 0.5 meters per second. North and east attitudes were estimated within 2 milli-radians, and the down component of attitude remained within 7 milli-radians of error.

### 4.2 Real Data

A scenario similar to the simulated data was analyzed for the case of real flight and radar data. As with simulated data, the real flight length was 50 seconds, which was split into 50 sub-apertures, with each sub-aperture formed during straight flight collection periods.

Flight and radar data were collected and provided by the Space Dynamics Lab at Utah State University in Logan, Utah. The data were collected on the FlexSAR system, a low-cost, high-quality prototyping SAR instrument (Jensen et al., 2016), paired with a NovAtel SPAN CPT7 tactical grade IMU (Novatel, 2020). A GPS-based navigation solution was available for the trajectory, which was taken to be the truth trajectory. This solution was achieved via post-processing in Waypoint software by NovAtel and is accurate to less than 2 cm. Altimeter measurements were synthesized using parameters based on the Honeywell Precision Barometer HPB (Honeywell, 2007).

Tables 4 and 5 list the initial state uncertainties along with the IMU and measurement noise parameters applicable to the flight tests. In contrast with the results in Section 4.1 from a well-controlled simulation environment, results from real flight tests must account for the effects of Earth’s rotation.

While the rotation of Earth impacts the propagation of velocity and attitude, the effects are constant in the Tangent Frame discussed in Farrell (2008). For an aircraft flying straight and level, the attitude of the aircraft is approximately constant. Hence, the effect of Earth’s rotation manifests as a bias in the accelerometer and gyroscope measurements. Given the aircraft velocity, Coriolis accelerations were approximately 0.001 g, less than the accelerometer bias of 0.01 g. The gyroscope bias specification, however, had to be increased to accommodate the 15 deg/hr rotation of the Earth. Table 6 lists the radar parameters specific to the FlexSAR system and aircraft.
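The magnitudes quoted above can be sanity-checked with a quick back-of-the-envelope computation; the 70 m/s cruise speed is an assumed value, not taken from the flight data:

```python
import math

OMEGA_E = 7.292115e-5   # Earth rotation rate, rad/s
G = 9.80665             # standard gravity, m/s^2
v = 70.0                # assumed aircraft cruise speed, m/s

# Worst-case horizontal Coriolis acceleration: 2 * Omega * v
a_coriolis = 2.0 * OMEGA_E * v
print(round(a_coriolis / G, 4))             # ≈ 0.001 g, below the 0.01 g accel bias
print(round(math.degrees(OMEGA_E) * 3600))  # ≈ 15 deg/hr Earth rate
```

At this speed the Coriolis term is an order of magnitude below the accelerometer bias, which is why only the gyroscope bias specification needed to be enlarged.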

For demonstration of the developed EKF on real data, residuals and estimation errors were analyzed for a single flight line. Accelerometer and gyroscope measurements were provided from the SPAN CPT7 system and were used to propagate the navigation state forward in time. At the end of each subaperture, a SAR image was formed and measurements were extracted.

The SAR-based measurements, along with synthesized altimeter measurements, were then processed in the EKF. Finally, estimation errors were computed throughout the flight and are displayed as was the case for the simulation results.

The estimation results for the case of real flight data are shown in Figures 11 through 14. The measurement residuals are shown in Figure 11 for the altimeter, range, and cross-range measurements. Although a small amount of time correlation is observed in the cross-range component, the residuals are observed to be approximately zero mean and adequately bounded by the EKF’s estimate of residual covariance.

The estimation error plots of Figures 12, 13, and 14 are consistent with the simulation results, with the corresponding errors properly bounded by the filter's estimated covariance. Over the course of the 50-second GPS-denied flight, position errors stayed bounded within approximately 3 meters. Velocity errors stayed bounded within approximately 0.4 meters per second. Attitude errors stayed bounded within approximately 7 milli-radians in the north and east case, and 15 milli-radians in the down case.

To highlight the effect of the radar measurements on estimation errors, Figure 15 has been provided. This figure plots estimation error bounds for north, east, and down positions. Each sub-figure shows two bounds on a logarithmic scale. The dotted line represents the estimation error 3σ bound when no SAR measurements were processed. The solid line shows the same bound when SAR measurements were processed. As expected, the omission of SAR measurements degrades the north and east estimates and they begin to grow without bound. The down estimate stays bounded due to altimeter measurements.

## 5 CONCLUSION

This paper investigates an absolute SAR-image-based method for GPS-denied navigation. Navigation information was extracted from RDA SAR images formed using erroneous navigation state estimates. Range and cross-range measurements were extracted via correlation between online SAR images and reference imagery with known target locations. Measurements were then processed in an indirect extended Kalman filter. This method was demonstrated on both simulated and real data. Estimation using simulated data was shown to have converging errors bounded within the 3σ values of the state covariance matrix. Estimation errors using the real data were similarly bounded. In both cases, errors in position were estimated within approximately 3 meters of truth.

These results indicate that navigation using SAR images is a viable solution in GPS-denied situations.

### 5.1 Future Work

The SAR-based measurements in this paper are based on identifying the location of the cross-correlation peak between the online-generated and reference SAR images. The precision with which this was achieved, and the resulting measurement noise, were dependent on the presence of salient features in the area imaged by the radar. For the simulated data, a level of measurement noise was added to model errors in locating the center of the correlation peak. In the case of real data, however, the measurement noise changed based on the prominence of reflectors and the uniqueness of the scene. Further investigation is required to determine the relationship between SAR image content and the resulting range and cross-range measurement noise.

Any implementation of this research in a real system requires knowledge of the area of GPS denial, whether that be terrain knowledge or target location knowledge. In this research, it was assumed that geo-referenced SAR images were available to correlate against. For a real-time system, other methods of obtaining area information should be explored, such as previously collected SAR images, optical satellite images, SAR images synthesized from known terrain and target locations, and digital maps.

## HOW TO CITE THIS ARTICLE

Lindstrom, C., Christensen, R., Gunther, J., & Jenkins, S. (2022) GPS-denied navigation aided by synthetic aperture radar using the Range-Doppler Algorithm. *NAVIGATION, 69*(3). https://doi.org/10.33012/navi.533

## PRODUCT ENDORSEMENT DISCLAIMER

Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favor by Utah State University, the Space Dynamics Laboratories, Sandia National Laboratories, the Department of Energy, or the US government.

## CONFLICT OF INTEREST

The authors declare no potential conflict of interests.

## ACKNOWLEDGEMENTS

This research was funded in part by Sandia National Laboratories under Grants 202136 and 202854. Raw SAR and navigation data was provided by the Space Dynamics Laboratory at Utah State University.

## Footnotes

**Present address**Randall Christensen, 4120 Old Main Hill, Logan, UT 84322-4120

This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.