Summary
The Autonomous and Resilient Management of All-source Sensors (ARMAS) framework monitors residual-space test statistics across unique sensor-exclusion banks of filters (known as subfilters) to provide a resilient, fault-resistant all-source navigation architecture with assurance. A critical assumption of this architecture, the importance of which is demonstrated in this paper, is fully overlapping state observability across all subfilters. All-source sensors, particularly those that provide only partial state information (altimeters, TDoA, AOB, etc.), do not intrinsically meet this requirement. This paper presents a novel method to monitor real-time overlapping position state observability and introduces an observability bank within the ARMAS framework, known as stable observability monitoring (SOM). SOM uses a monitoring-epoch stability analysis to give ARMAS intrinsic awareness of the capabilities of its fault detection and exclusion (FDE) functionality. We define navigation resilience as the ability to maintain consistent all-source FDE and to recover failed sensors. A resilient FDE capability is one that is aware of when it requires more sensor information to protect the consistency of the FDE and integrity functions from corruption. SOM is the first demonstration of such a system for all-source sensors that the authors are aware of. A multi-agent 3D environment simulating both GNSS and position and velocity alternative navigation sensors was created, and individual GNSS pseudorange sensor anomalies were used to demonstrate the capabilities of the novel algorithm. This paper demonstrates that SOM integrates seamlessly within the ARMAS framework, provides timely prompts to augment the framework with new sensor information from other agents, and indicates when framework stability and preservation of all-source navigation integrity are achieved.
1 INTRODUCTION
Introduced in 2018, the Autonomous and Resilient Management of All-Source Sensors (ARMAS) framework has provided a generalized framework for the real-time management of heterogeneous, asynchronous all-source sensors (Jurado & Raquet, 2019). This framework is resilient to corruption from mis-modeled, uncalibrated, and faulty sensors, which is accomplished by combining sensor validation, fault detection and exclusion (FDE), recalibration, and remodeling modes into a single architecture. Sensor-agnostic all-source residual monitoring (SAARM) was designed to provide all-source FDE and navigation integrity functions within the ARMAS framework (Jurado et al., 2020b). In this context, all-source navigation resiliency is the ability to maintain consistent all-source FDE operations with the ability to recover failed sensors. The pluggable Bayesian filters provided by the Scorpion estimation architecture afforded the needed flexibility to spawn, propagate, and remove estimation filters on the fly (Kauffman et al., 2020). SAARM required the designer to maintain a set of unique navigation subfilters (each unique subfilter excludes measurements from a different sensor) to maintain resilience to a single simultaneous sensor failure. A primary assumption of the FDE and integrity functions is the framework's ability to maintain overlapping position state observability. This paper presents a novel method to monitor real-time overlapping position state observability and introduces an observability bank within the ARMAS framework. These additions to ARMAS use real-time observability analysis at the Layer 2 subfilter level (each unique Layer 2 subfilter excludes measurements from two different sensors). This is used to provide a timely indication to augment the framework with new sensor data, thus protecting the consistency of ARMAS FDE and integrity functions from corruption. The cumulative residual-space test statistic epoch approach espoused by this paper for all-source FDE differs from previous sub-subfilter approaches that used snapshot methods (e.g., solution-separation) to provide GNSS FDE (Call et al., 2006).
2 BACKGROUND
2.1 Autonomous Resilient Management of All-Source Sensors (ARMAS)
The ARMAS framework was designed to gracefully recover from multiple types of failure modes (bias, model mismatch, and/or sensor miscalibration) while attempting to maintain a consistent, uncorrupted navigation estimate. ARMAS employs a set of Scorpion pluggable extended Kalman filter (EKF) estimators to address the following nonlinear navigation problem:
$\begin{bmatrix} \dot{\mathbf{x}}(t) \\ \dot{\boldsymbol{\alpha}}(t) \end{bmatrix} = \mathbf{f}\left[\mathbf{x}(t), \boldsymbol{\alpha}(t), \mathbf{u}(t), t\right] + \mathbf{G}(t)\,\mathbf{w}(t)$  (1)
where x is an N × 1 state vector of a vehicle's position, velocity, and attitude. The measurement error state vector α is of dimension M × 1, u is the control input vector, G is an (N + M) × Q linear operator, and w is a Q × 1 white noise process defined by a Q × Q continuous process noise strength matrix, Q.
State estimates are propagated through optimally combining the state process model, sensor-specific calibration parameters, and measurement updates from j = 1… J available all-source sensors. The measurement model for the j-th sensor at time step k is described by:
$\mathbf{z}_k^{[j]} = \mathbf{h}^{[j]}\left[\mathbf{x}_k, \boldsymbol{\alpha}_k^{[j]}, \mathbf{p}^{[j]}\right] + \mathbf{v}_k^{[j]}$  (2)
where h[j] is the nonlinear measurement function for the j-th sensor, α[j] is an L × 1 subset of α which contains additional error states needed to process sensor measurements, p[j] is a P × 1 user-selectable model parameter vector for h[j], and vk is a Z × 1 discrete white noise process with its covariance defined by the matrix $\mathbf{R}_k^{[j]}$.
The Z × 1 measurement residual for sensor j, $\mathbf{r}_k^{[j]}$, is defined by:
$\mathbf{r}_k^{[j]} = \mathbf{z}_k^{[j]} - \mathbf{h}^{[j]}\left[\hat{\mathbf{x}}_k^{-}, \hat{\boldsymbol{\alpha}}_k^{[j]-}, \mathbf{p}^{[j]}\right]$  (3)
where $\hat{\mathbf{x}}_k^{-}$ and $\hat{\boldsymbol{\alpha}}_k^{[j]-}$ are estimated quantities. Assuming white Gaussian noise, the measurement residual from Equation (3) is expected to follow the distribution:
$\mathbf{r}_k^{[j]} \sim \mathcal{N}\left(\mathbf{0}, \mathbf{S}_k^{[j]}\right)$  (4)
$\mathbf{S}_k^{[j]} = \mathbf{H}_k^{[j]}\,\mathbf{P}_k^{-}\,\mathbf{H}_k^{[j]\mathsf{T}} + \mathbf{R}_k^{[j]}$  (5)
where $\mathbf{P}_k^{-}$ is the (N + M) × (N + M) state estimate covariance matrix at time $t_k$ and $\mathbf{H}_k^{[j]}$ is the Z × (N + M) Jacobian of $\mathbf{h}^{[j]}$.
Sensors are initialized in one of two modes: trusted or untrusted. Untrusted sensors are required to enter a sensor Validation Mode prior to being brought into Monitoring Mode (Jurado et al., 2020a). In the Validation Mode, the ARMAS framework employs a likelihood function to monitor the statistical distribution of a user-defined monitoring period composed of recent Kalman pre-update residuals. A χ2 test statistic is used to detect excursions outside a user-defined threshold across the sampling period. Sensors in Validation Mode are excluded from impacting the main state estimates using a Schmidt partial update (Brink, 2017). Trusted sensors are directly brought online into Monitoring Mode. In Monitoring Mode, sensor measurements are allowed to update the main state estimates. The ARMAS framework employs the same pre-update residual likelihood function used in the Validation Mode to monitor sensor performance. A detailed explanation of Monitoring Mode, including FDE and integrity functions, is given in Section 2.2.
Once a fault is detected, the sensor is no longer trusted and is quarantined from affecting the core navigation state estimate, $\hat{\mathbf{x}}$. The ARMAS framework attempts to reinitialize the sensor via Validation Mode. If this fails, ARMAS attempts to repair and recover the faulty sensor via two separate modes: sensor calibration and remodeling. In Calibration Mode, user-selectable sensor parameters, p[j] and/or α[j], are estimated using residual monitoring from trusted sensors that provide observability of x. If there is a single calibration parameter, the ARMAS framework attempts to correct the calibration using residual monitoring and sends the sensor back to Validation Mode. If linked extrinsic calibration parameters exist (e.g., camera lever arm and camera orientation within p[j] or α[j]), they are estimated individually and sequenced based on the convergence of the state covariance matrix to maintain state observability.
If the recalibrated sensor fails to pass sensor validation, the sensor enters Remodeling Mode where the ARMAS framework attempts to modify the measurement model, h[j], based on 1...S user-defined measurement models. S concurrent filters are spawned (each with a unique measurement model) and an epoch of measurement residuals is gathered against the core navigation estimate x. The winning sensor measurement model is selected based on which filter best matches the prescribed distribution from Equation (4) during the residual epoch. The sensor then enters Validation Mode. If the Remodeling Mode does not result in a new model selection and Resilient Sensor Recovery (RSR) is activated, the sensor periodically re-enters Validation Mode after a user-selectable time period in an attempt to overcome a temporal anomaly (Jurado & Raquet, 2019). Figure 1 is a state transition diagram depiction of these modes. The result is a framework compatible with heterogeneous, asynchronous all-source sensors with the benefit of resilience against various sensor calibration, modeling, and temporal faults.
ARMAS framework state diagram (Jurado & Raquet, 2019): A sensor begins at point O (origin) and is trusted or untrusted. SAARM and the new contribution SOM reside within the Monitoring (M) Mode.
One assumption that is not explicitly discussed in Jurado and Raquet (2019) is that ARMAS requires overlapping state observability (Ham & Brown, 1983) to detect anomalous sensor behavior. As discussed above, the system monitors Kalman pre-update residuals between sensor measurements and subfilter estimates to continuously judge whether sensor measurements adhere to the distribution prescribed by the sensor model. Anomalous sensor behaviors (e.g., bias, gain, model mismatch, and high noise) are only observable if there are other sensors with comparable observability in the state estimate. If anomalous behavior is detected, the ARMAS framework attempts to recover the sensor through recalibration, remodeling, and re-validation. Without overlapping state observability, it is impossible to determine if a sensor is misbehaving or if it can be re-validated. The criticality of this assumption for ARMAS is highlighted in the following analysis.
2.2 Sensor-Agnostic All-Source Residual Monitoring (SAARM)
SAARM assumes a system form of:
$\dot{\mathbf{x}}(t) = \mathbf{f}\left[\mathbf{x}(t), \mathbf{u}(t), t\right] + \mathbf{G}(t)\,\mathbf{w}(t)$  (6)
SAARM estimates system states with J separate subfilters. At time t = tk, the system state vector and state estimation covariance matrix are defined by $\hat{\mathbf{x}}^{[j]}(t_k)$ and $\mathbf{P}^{[j]}(t_k)$ for j = 1…J separate subfilters.
Each of these subfilters is informed by a subset of I – 1 sensors. At t = tk, the i-th sensor provides measurements given by:
$\mathbf{z}^{[i]}(t_k) = \mathbf{h}^{[i]}\left[\mathbf{x}(t_k), \mathbf{u}(t_k), t_k\right] + \mathbf{v}^{[i]}(t_k)$  (7)
where h[i] is the nonlinear measurement function, u(tk) is the control input function, and v[i](tk) is a discrete white noise process of dimension Z × 1 defined by covariance matrix R[i](tk). The pre-update measurement estimate for sensor i from filter j is defined by:
$\hat{\mathbf{z}}^{[i,j]}(t_k^{-}) = \mathbf{h}^{[i]}\left[\hat{\mathbf{x}}^{[j]}(t_k^{-}), \mathbf{u}(t_k), t_k\right]$  (8)
where the estimated covariance matrix is defined by:
$\hat{\mathbf{P}}_{\hat{z}}^{[i,j]}(t_k^{-}) = \mathbf{H}^{[i]}(t_k)\,\mathbf{P}^{[j]}(t_k^{-})\,\mathbf{H}^{[i]\mathsf{T}}(t_k)$  (9)
Using Equation (8) and Equation (9), the pre-update residual vector between sensor i and filter j, $\mathbf{r}^{[i,j]}(t_k)$, and its covariance matrix, $\boldsymbol{\Sigma}^{[i,j]}(t_k)$, are defined as:
$\mathbf{r}^{[i,j]}(t_k) = \mathbf{z}^{[i]}(t_k) - \hat{\mathbf{z}}^{[i,j]}(t_k^{-})$  (10)
$\boldsymbol{\Sigma}^{[i,j]}(t_k) = \hat{\mathbf{P}}_{\hat{z}}^{[i,j]}(t_k^{-}) + \mathbf{R}^{[i]}(t_k)$  (11)
Fault detection relies on computing a moving average of recent residual-space test statistics formed by pre-update residual vectors from Equation (10) and Equation (11). The ARMAS framework is designed to detect any sensor behavior that is inconsistent with the stated measurement model within the limitations of the stated significance level, α. Although not examined in this paper specifically, the ARMAS framework provides additional options for the user to provide candidate models and/or calibration schemes that could be used to validate and recover a failed sensor. The likelihood function focuses on a single residual-space statistic derived from the Mahalanobis distance, d, given by:
$d = \sqrt{\left(\mathbf{r} - \boldsymbol{\mu}\right)^{\mathsf{T}} \boldsymbol{\Sigma}^{-1} \left(\mathbf{r} - \boldsymbol{\mu}\right)}$  (12)
where μ and Σ are the mean and covariance of a $Z_i$-dimensional Gaussian distribution. It is known that a sum of T independent d² distances follows a χ² distribution with $TZ_i$ degrees of freedom (De Maesschalck et al., 2000) given by:
$\sum_{t=1}^{T} d_t^2 \sim \chi^2_{TZ_i}$  (13)
$D_k = \sum_{t=k-M+1}^{k} \mathbf{r}_t^{\mathsf{T}}\,\boldsymbol{\Sigma}_t^{-1}\,\mathbf{r}_t$  (14)
The set of pre-update residuals is known to be a zero-mean Gaussian sequence (Maybeck & Siouris, 1980). The fault detection test for T pre-residuals is composed of the following hypotheses:
$H_0: \; D_k \le \chi^2_{1-\alpha}\left(MZ_i\right)$  (15)
$H_1: \; D_k > \chi^2_{1-\alpha}\left(MZ_i\right)$  (16)
where α is the probability of false alarm, M is the number of averaged pre-update residual samples, and $\chi^2_{1-\alpha}(MZ_i)$ denotes the (1 − α) quantile of the χ² distribution with $MZ_i$ degrees of freedom. H0 refers to the hypothesis in which the fault is not present in filter j. H1 refers to the hypothesis in which a fault is present in filter j. The resulting hypothesis test forms the basis of the fault detection algorithm.
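For illustration, the windowed residual test above can be sketched in a few lines of Python. The helper below is hypothetical (not the ARMAS implementation): it sums squared Mahalanobis distances over one monitoring window and compares the result to a χ² quantile at the stated false-alarm probability using SciPy.

```python
# Illustrative sketch of the windowed residual-space chi-square test; the
# function name and interface are assumptions, not ARMAS code.
import numpy as np
from scipy.stats import chi2

def residual_fault_test(residuals, covariances, alpha=0.01):
    """residuals: list of M pre-update residual vectors (NumPy arrays, length Z);
    covariances: list of M residual covariance matrices (Z x Z);
    alpha: Type I error rate (probability of false alarm)."""
    # Sum of squared Mahalanobis distances over the monitoring window
    statistic = float(sum(r @ np.linalg.solve(S, r)
                          for r, S in zip(residuals, covariances)))
    dof = sum(len(r) for r in residuals)      # M*Z degrees of freedom
    threshold = chi2.ppf(1.0 - alpha, dof)    # (1 - alpha) quantile; reject H0 above it
    return statistic, threshold, statistic > threshold
```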
Once a fault is detected, an agreement of all other subfilters is utilized to exclude the faulty sensor. With J = I subfilters, SAARM can only exclude single faults within each residual monitoring epoch (i.e., T-sample moving average). In this scenario, each subfilter is informed by a different subset of I − 1 sensors (i.e., each subfilter is missing a single sensor). SAARM also assumes that all states are observable by all subfilters. In addition to J = I subfilters, a main filter is maintained to generate a full navigation state estimate for user output. Accordingly, cross-covariance terms between the main filter and any other filters are not used for any computation. For this scenario, SAARM provides an axiom for fault exclusion: Under the assumption that, at most, one sensor can fail simultaneously, at least one of the J subfilters will be completely unaffected by faulty measurements (Jurado & Raquet, 2019).
The fault consensus is tallied in a T-matrix of dimension I × J and uses the following convention:
$\mathbf{T}_{i,j} = \begin{cases} 1, & H_1 \text{ declared for the residual between sensor } i \text{ and subfilter } j \\ 0, & \text{otherwise } (H_0) \end{cases}$  (17)
Figure 2 shows the relationship between I sensors and J subfilters required for fault agreement sensor exclusion. The rows correspond to the i = 1…I sensors and the columns correspond to the j = 1…J subfilters. Each row contains measurements, Z[i], and the measurement covariance matrix, R[i], from the i-th sensor. Each column contains the estimated measurement, $\hat{\mathbf{z}}^{[i,j]}$, and its covariance matrix, $\boldsymbol{\Sigma}^{[i,j]}$.
SAARM T-matrix for i = 1…I all-source sensors (Jurado et al., 2020b)
Based on the stated convention, a fault is declared when T contains at least one non-zero entry (i.e., at least one subfilter detected H1). After a fault is declared, SAARM waits for an agreement from the remaining subfilters until a single fault-free subfilter remains. It is assumed that the last remaining fault-free subfilter is the one that does not contain the faulty sensor. After fault exclusion, the fault-free subfilter is elevated to Main Filter status and the pre-update residual monitoring epoch is restarted with I − 1 total sensors and J − 1 subfilters. Each newly spawned subfilter now contains I − 2 sensors. The faulty sensor is removed from Monitoring Mode and follows the state diagram shown in Figure 1. For example, if Sensor 3 in Figure 2 were faulty, each subfilter that includes that sensor would report a fault (filling row three) and only Subfilter 3 would remain consistent. Subfilter 3 would then be promoted to the Main Filter level and a new bank of FDE subfilters must be populated. Of note, SAARM is able to detect the occurrence of multiple simultaneous faults but is only able to provide multiple-fault exclusion when initialized with additional subfilter layers.
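The agreement rule implied by the T-matrix convention can be sketched as follows. This is a minimal reading of the logic described above (using the convention that subfilter j excludes sensor j), not the ARMAS source code.

```python
# Hypothetical sketch of T-matrix fault agreement; rows are sensors, columns
# are subfilters, and entry (i, j) is 1 when the sensor i / subfilter j
# residual test declares H1.
import numpy as np

def identify_faulty_sensor(T):
    """Return the index of the sensor to exclude once exactly one fault-free
    subfilter remains; return None while agreement is still pending."""
    clean_columns = np.flatnonzero(~T.any(axis=0))   # subfilters with no H1 flags
    if clean_columns.size != 1:
        return None                                  # agreement not yet unanimous
    # Under the single-fault assumption, the lone consistent subfilter is the
    # one that excludes the faulty sensor, so its index names that sensor.
    return int(clean_columns[0])
```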
In summary, SAARM provides all-source sensor FDE and integrity for various sensor fault types. To provide resiliency to a single fault, the ARMAS framework is required to instantiate and maintain a quantity of subfilters equal to the quantity of all-source sensors. A separate main filter is maintained strictly for user output. Fault identification is based on a sequence of χ² statistical tests of pre-update measurement residuals. Fault exclusion is based on a subfilter agreement approach that is tallied in a novel T matrix. If every subfilter has position state observability, then SAARM provides a method for all-source position integrity via the union of all subfilter position covariance estimates. This integrity concept is based on the assumption that the framework is able to maintain at least one uncorrupted subfilter. The next section describes the motivation for a novel layer in the ARMAS framework used for real-time observability analysis.
2.3 Motivation for Stable Observability Monitoring (SOM)
The ARMAS framework with SAARM was originally conceived and simulated with basic linear 2D position and velocity sensors and assumed fully overlapping state observability within the FDE layer. All-source sensors, particularly those that only provide partial state information, do not intrinsically exhibit this characteristic. In collaborative navigation scenarios, retention of autonomy is desirable as long as a stable, resilient solution can be maintained. Another key motivation for stable observability monitoring (SOM) is the ability to determine when to augment with collaborative information and develop a method to measure the sufficiency of the collaborative information. In early 2020, the ARMAS framework was applied to a flight test data set (Appleget et al., 2021; Figure 3). The flight was conducted by the Autonomy and Navigation Technology (ANT) Center at the Air Force Institute of Technology (AFIT) on October 12, 2018, at Camp Atterbury, Indiana, where a small unmanned aerial system (sUAS) took off and landed at Himsel Army Airfield (HAA). During the 27-minute data set, the aircraft flew patterns at approximately 250 and 100 meters above the local surface. The aircraft used the ANT Center’s Scorpion framework (Kauffman et al., 2020) to provide a GNSS/INS-coupled truth navigation solution. This analysis consists of individual pseudorange measurements extracted from six Global Positioning System (GPS) satellite vehicles (SVs) for nonlinear processing in ARMAS as individual sensors.
sUAS flight test data from October 12, 2018, at Camp Atterbury, Indiana
For analysis, we configured the ARMAS framework with a sensor package consisting of six individual pseudorange sensors, one for each visible SV. This means each filter in the FDE layer was equipped with a unique combination of five sensors. The latter half of the flight test data set contains numerous pseudorange sensor dropouts. During analysis, it was observed that sensor dropouts tended to cause spurious behavior in ARMAS. This behavior occurred when fewer than five SVs were visible, leaving each Layer 1 subfilter with fewer than four SVs and, thus, unable to maintain a stable position state estimate. Further analysis revealed that SAARM is unable to provide an agreement to identify a failed sensor when the FDE subfilter layer loses overlapping position state observability. In other words, SAARM can detect a sensor fault but cannot exclude the faulty sensor if even a single FDE-layer subfilter loses position state observability due to dropout, poor geometry, etc. Because the initial simulation of the ARMAS framework was performed with fully overlapping position state observability in the FDE layer, this deficiency was overlooked.
Consider a scenario in which one SV of a six-SV constellation, GPS 15, is excluded by ARMAS due to a simulated pseudorange bias at t = 100 sec. Figure 4 shows a local observability analysis for this scenario with the remaining observability subfilters, FDE layer subfilters, and Main Filter. Note there are 10 Layer 2 subfilters remaining (from the original 15) in the observability layer, corresponding to the five remaining SVs (after GPS 15 is excluded). Note that each Layer 2 subfilter in the observability layer is informed by exactly four unique pseudorange sensors until GPS 15 is removed. Since four unique pseudoranges are required to constrain a stable 3D position solution with clock bias estimation, the unbounded position state covariance matrix indicates a complete loss of observability after t = 100 sec. As mentioned, a single pseudorange sensor dropout in this scenario would result in the loss of position state observability, evidenced by an increase in the position state covariance estimate. If an additional sensor failure occurs, the decision-making FDE layer of the ARMAS framework would struggle to provide a subfilter agreement to determine the culprit sensor due to a reduction in position state observability. This analysis forms the genesis of local observability monitoring at the Layer 2 sub-subfilter level to preserve the consistency of the Layer 1 subfilter FDE and integrity functions of ARMAS.
Trace of a-posteriori Layer 1 subfilter position covariance for a simulated single GPS 15 pseudorange sensor failure
Trace of a-posteriori Layer 2 subfilter position covariance for a simulated single GPS 15 pseudorange sensor failure
3 OBSERVABILITY LAYER 2 SUB-SUBFILTER BANK
In the previous section, we motivated monitoring observability at a layer deeper than the FDE and integrity operations to preserve estimation consistency and framework resiliency. Similar sub-subfilter approaches have previously been devised using solution-separation snapshot methods to provide GNSS-based FDE. A ubiquitous, commercially successful example is Honeywell's Inertial/GPS Hybrid (HIGH) concept (Call et al., 2006). The uncorrupted-subfilter guarantee provided by the ARMAS framework for a single simultaneous fault enables SAARM to extend a guarantee for all-source position integrity (Jurado et al., 2020b). The previously stated fault exclusion axiom is, thus, extended:
Assuming at least one of the subfilters is informed entirely by properly modeled, uncorrupted sensors, then at least one subfilter contains consistent state estimation error statistics (Jurado et al., 2020b). If the states of interest in each Layer 2 subfilter are observable and stabilizable, then each Layer 1 subfilter inherits these properties.
This means that the physical region encompassed by the position covariance estimates of all Layer 1 subfilters contains the true navigation state within the statistical significance of the fault detection tests. That said, SAARM requires overlapping position observability across all Layer 1 subfilters to perform consistent FDE operations and guarantee the preservation of at least one uncorrupted subfilter for position integrity. For SAARM to guarantee position state observability at the Layer 1 subfilter level, an additional layer of subfilters is required (Figure 6). Each unique subfilter in the observability layer excludes measurements from two sensors. The purpose of this layer is to provide a means for observability analysis one layer deeper than the decision-making FDE layer to maintain resiliency to a single simultaneous sensor fault.
ARMAS framework with novel observability layer for resiliency to one simultaneous sensor failure
For example, if a failed sensor is excluded, the FDE layer will be repopulated with new subfilters, each missing a single unique sensor and the failed sensor. Prior to the sensor exclusion, a subset of the observability layer contains the set of filters needed to spawn the new FDE subfilter layer after the exclusion. The purpose of monitoring the observability one layer deeper than the decision-making FDE layer is to provide a mechanism to warn the user in the event that a single sensor failure could jeopardize the consistency of the FDE and integrity operations. This warning comes in the form of a user observability warning to add an additional sensor to the framework. If only a single subfilter in the FDE layer loses overlapping position observability, SAARM is unable to provide the subfilter agreement required to identify and exclude the failed sensor. Once overlapping position observability is lost in the FDE layer, the ARMAS framework can no longer guarantee at least one consistent uncorrupted subfilter that is required to preserve solution integrity. Due to potentially variable Fisher information available from all-source navigation sensors, it is critical that a resilient all-source navigation framework contains a method to monitor observability prior to the corruption of the decision-making FDE layer.
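A minimal sketch of this repopulation step is shown below. It assumes the observability bank is indexed by the pair of sensors each Layer 2 subfilter excludes; the data structure and names are illustrative only.

```python
# Hypothetical sketch: promote the Layer 2 subfilters that already exclude the
# failed sensor into the new Layer 1 (FDE) bank, avoiding respawn downtime.
def repopulate_fde_layer(layer2_bank, failed_sensor):
    """layer2_bank: dict mapping frozenset({i, j}) -> Layer 2 subfilter object,
    keyed by the two sensors that subfilter excludes."""
    new_layer1 = {}
    for excluded_pair, subfilter in layer2_bank.items():
        if failed_sensor in excluded_pair:
            # The promoted subfilter now excludes only the other sensor of the pair.
            (other_sensor,) = excluded_pair - {failed_sensor}
            new_layer1[other_sensor] = subfilter
    return new_layer1
```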
A single simultaneous sensor failure is defined as a single failure within the time-span of a single ARMAS monitoring epoch. To maintain resilience to F simultaneous sensor failures, the number of concurrent Layer 2 subfilters in the observability filter bank, N, required to monitor observability for I sensors is:
$N = \binom{I}{F+1} = \dfrac{I!}{(F+1)!\,(I-F-1)!}$  (18)
As one might expect, the processing requirements to monitor each Layer 2 subfilter position state covariance are non-trivial. For example, an eight-sensor system requires 28 concurrent SAARM Layer 2 subfilters for resiliency to a single simultaneous sensor failure. When summed with the Main Filter and eight traditional subfilters, a total of 37 concurrent estimation filters must be maintained. This method monitors overlapping position observability at the processing expense of combinatorial growth in the required quantity of concurrent estimation filters. A major benefit of this approach is evident in the event of a sensor failure. Since a subset of the observability filter bank will form the new FDE layer, maintenance of these filters in the observability layer eliminates the FDE and integrity operation downtime normally required during FDE layer re-initialization. Additionally, if we monitor the magnitude of the state estimate variance in each Layer 2 subfilter, we can determine which sensors provide critical Fisher information about our state(s) of interest prior to fault detection and exclusion events.
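Equation (18) is a binomial coefficient, so the bank size is easy to verify; the snippet below simply evaluates it for the eight-sensor example quoted above.

```python
# Evaluate Equation (18) for the observability bank size (illustrative check only).
from math import comb

def observability_bank_size(num_sensors, num_failures=1):
    return comb(num_sensors, num_failures + 1)

# Eight sensors, single-failure resilience: 28 Layer 2 subfilters, which with the
# Main Filter and 8 Layer 1 subfilters totals 37 concurrent estimation filters.
assert observability_bank_size(8, 1) == 28
```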
4 STABLE OBSERVABILITY MONITORING (SOM)
The ability to maintain stable a-posteriori estimates of system states is a primary indicator of overall estimator stability (Ham & Brown, 1983). A primary goal of observability analysis is to measure the influence that measurements have on system states (Maybeck & Siouris, 1980). Observability analysis has been applied to fused estimation with a variety of approaches including information matrix (Hong et al., 2008), error covariance analysis (Tang et al., 2009), and others (Li et al., 2013b; Hermann & Krener, 1977; Le Cadre, 1998). It is well understood that a discrete linear time-varying system is globally observable if its observability matrix M has full rank (i.e., rank equal to the number of states) for all time indices k. The degree of local observability can be defined as a measure of the singularity of M over a finite set of k (Chen, 1991). For linear systems, the observability Gramian can be obtained as a solution of the Lyapunov equation (Hag Elamin & Taha, 2013).
Directly related to the observability Gramian is the Fisher information matrix which is a measure of the certainty of the state estimate due to measurement data alone (Li et al., 2013a; Powel & Morgansen, 2020; Roy et al., 2009). The discrete recursive definition of the Fisher information matrix F is:
$\mathcal{F}(t_k) = \boldsymbol{\Phi}^{-\mathsf{T}}(t_k, t_{k-1})\,\mathcal{F}(t_{k-1})\,\boldsymbol{\Phi}^{-1}(t_k, t_{k-1}) + \mathbf{H}^{\mathsf{T}}(t_k)\,\mathbf{R}^{-1}(t_k)\,\mathbf{H}(t_k)$  (19)
where $\boldsymbol{\Phi}(t_k, t_{k-1})$ is the state transition matrix and the Fisher information contained in a single update at $t_k$ is $\mathbf{H}^{\mathsf{T}}(t_k)\,\mathbf{R}^{-1}(t_k)\,\mathbf{H}(t_k)$, which is the same term used to generate:
$\left(\mathbf{P}^{+}(t_k)\right)^{-1} = \left(\mathbf{P}^{-}(t_k)\right)^{-1} + \mathbf{H}^{\mathsf{T}}(t_k)\,\mathbf{R}^{-1}(t_k)\,\mathbf{H}(t_k)$  (20)
This relationship can be leveraged for observability analysis in a nonlinear estimator. If the system model is stochastically controllable and observable, then the post-update state estimate covariance $\mathbf{P}^{+}(t_k)$ is uniformly bounded from above (Maybeck & Siouris, 1980). Stabilizable states have a unique, positive-definite steady-state covariance (Maybeck & Siouris, 1980). By monitoring post-update covariance matrices over time, we can ascertain if the signal-to-noise ratio (SNR) in the system enables stabilized estimation. For the purpose of this paper, we focus on the stability of the position states because we are particularly interested in preserving the consistency of the navigation integrity solution provided by the FDE sublayer in the ARMAS framework.
The user-defined monitoring epoch for the sum of Mahalanobis distances in Equation (13) adjusts the sensitivity of the SAARM test within the ARMAS framework (Quartararo & Langel, 2020). This is a moving average of recent residual-space test statistics formed by pre-update residual vectors from Equation (10) and Equation (11). The length of monitoringTime (M Δt) is an ARMAS framework tuning parameter that is used to adjust for detection sensitivity for temporal anomalies. The ARMAS monitoringTime parameter is designed to contain a sufficient quantity of samples to meet central limit theorem (Rouaud, 2017) criteria for the desired α, known as the Type I error rate (probability of false alarm). The pluggable estimation architecture provided by Scorpion enables propagation of multiple layers of subfilters. We recorded and monitored the post-update position covariances in each n = 1…N Layer 2 subfilter in the observability bank. An observability flag $O_k^{[n]}$ was set for Layer 2 subfilter n at $t_k$ according to:
$O_k^{[n]} = \begin{cases} 1, & \operatorname{tr}\left(\mathbf{P}_{pos}^{[n]}(t_k^{+})\right) - \operatorname{tr}\left(\mathbf{P}_{pos}^{[n]}(t_{k-M}^{+})\right) > \operatorname{tr}\left(\mathbf{P}_{pos,\max}\right) \\ 0, & \text{otherwise} \end{cases}$  (21)
where $\mathbf{P}_{pos}^{[n]}(t_k^{+})$ is the most recent post-update position covariance matrix for Layer 2 subfilter n, $\mathbf{P}_{pos}^{[n]}(t_{k-M}^{+})$ is the post-update position covariance matrix for Layer 2 subfilter n exactly M samples prior to $t_k$, and $\mathbf{P}_{pos,\max}$ is a user-defined limit for the maximum steady-state position state estimate covariance. The trace is the sum of the diagonal elements of the matrix $\mathbf{P}_k$, which represent the variances of the system state estimates. If the trace of $\mathbf{P}_{pos}^{[n]}(t_k^{+})$ converges, then the individual position estimate variances also converge. When applying Equation (21), it is important to ensure that the units are identical across the grouped states.
By measuring the difference between the trace of post-update position covariance matrices, we can determine if the position state information contained in the $\mathbf{H}^{\mathsf{T}}(t_k)\mathbf{R}^{-1}(t_k)\mathbf{H}(t_k)$ terms of the previous M measurements has resulted in a stable mean estimate of all position elements in that subfilter. The user sets $\mathbf{P}_{pos,\max}$ as a tuning parameter for acceptable steady-state covariance estimation. To maintain resilience to a sensor failure, a user prompt to augment the ARMAS framework with an additional sensor is triggered if at least a single Layer 2 subfilter observability test is set to 1 (Equation [22]). The newly added sensor would directly enter Monitoring Mode if it was considered trusted or would need to pass through sensor validation if untrusted.
$\sum_{n=1}^{N} O_k^{[n]} \ge 1$  (22)
where N is the quantity of Layer 2 subfilters.
Once a new sensor is successfully added into ARMAS Monitoring Mode, each Layer 2 subfilter gains another sensor. The results of Equation (21) are ignored until the newly requested sensor completes an ARMAS monitoring period after entering Monitoring Mode. If post-update variance stability is regained after the requisite ARMAS monitoring period, then Equation (21) will be set to 0 for each stable Layer 2 subfilter and the observability warning is rescinded. This method flags the presence of an information deficiency with respect to the estimated states of interest in real time across multiple Layer 2 subfilters in a novel observability bank that is intended to preserve the consistency of the FDE and integrity operations in the ARMAS framework.
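The monitoring logic of Equations (21) and (22) can be sketched as follows. The trace-difference test reflects our reading of Equation (21), and the function and variable names are illustrative rather than taken from the ARMAS implementation.

```python
# Hypothetical sketch of the SOM stability check across the observability bank.
import numpy as np

def som_flags(position_cov_histories, M, P_pos_max):
    """position_cov_histories: list (one entry per Layer 2 subfilter) of lists of
    post-update 3x3 position covariance matrices, newest last;
    M: number of samples in the ARMAS monitoring epoch;
    P_pos_max: user-defined maximum steady-state position covariance matrix."""
    limit = np.trace(P_pos_max)
    flags = []
    for history in position_cov_histories:
        if len(history) <= M:
            flags.append(0)       # ignored until a full monitoring period is available
            continue
        growth = np.trace(history[-1]) - np.trace(history[-1 - M])
        flags.append(1 if growth > limit else 0)    # Equation (21)
    request_augmentation = sum(flags) >= 1          # Equation (22)
    return flags, request_augmentation
```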
5 MAXIMUM STATE ESTIMATE COVARIANCE LIMIT
The maximum state estimate covariance limit is simply an upper bound for state estimate covariance and is designed to set a minimum steady-state information threshold for the framework. The validation gate employed by SAARM is a moving window of residual-space test statistics in the form of Mahalanobis distances. The power of SAARM’s chi-squared distributed hypothesis test is dependent on the ARMAS framework’s ability to produce a stable and consistent pre-update residual covariance matrix, Σ (see Equation [14]). By operating SOM’s state estimate covariance monitoring scheme in the Layer 2 sublayer (See Figure 6), we are able to peek forward at framework stability in the event of a single unknown sensor failure.
Since the pre-update residual covariance matrix is a function of the state estimate covariance and the observation model in Equation (9), the post-update state estimate covariance P+ is a direct indicator for estimator observability and stabilizability (Maybeck & Siouris, 1980). The least stabilizable Layer 2 subfilter is informed by exactly one fewer sensor than the least stabilizable Layer 1 subfilter. Monitoring the Layer 2 subfilter level allows for a sensor augmentation request before a potential loss of stability in FDE operations. Since T-matrix exclusion operations employed by SAARM require a unanimous fault agreement to exclude a failed sensor (see Equation [17]), it is particularly important that the stability of the Layer 1 subfilter estimates is ensured. Furthermore, since the ARMAS framework's navigation integrity is provided by the union of the Layer 1 subfilter position covariance ellipses, position integrity can be stabilized if the position states are selected as the SOM states of interest.
6 SIMULATION
This section describes a set of four example 3D scenarios designed to assess the impacts of SOM on ARMAS FDE operations and untrusted sensor validation. Individual SVs were arranged in random stationary GNSS constellations in a local tangential frame (LTF). In scenarios 1–4, we initialized three aircraft operating a standard EKF, ARMAS, and ARMAS-SOM (see Equation [21]) with four, five, six, and seven trusted pseudorange sensors, respectively. We assumed the remaining untrusted SVs would be available for augmentation via SOM. The availability of additional untrusted sensor information can be analogous to offboard augmentation in a collaborative navigation scenario. Each constellation contained one SV directly overhead and nine SVs evenly distributed in azimuth with discrete random uniform elevations between 45 and 63.4 degrees (see Figure 7). Each scenario consisted of two sensor anomalies introduced at fixed times: (a) a growing pseudorange bias (linear ranging ramp) from t = 240 to t = 330 sec on a trusted sensor and (b) validation of an untrusted pseudorange sensor with a constant 40-meter bias at t = 360 sec. The growing pseudorange bias on the trusted sensor was used to assess FDE in Monitoring Mode (i.e., how large the bias is before it is detected and excluded). The constant bias was used to measure the probability of detection in Validation Mode after recovery from sensor exclusion.
Sample of 10-SV stationary constellation skyplot with discrete random uniform elevations and one satellite directly overhead
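For reference, a constellation matching this description can be generated as follows. The orbital radius and the discrete elevation set {45, 54.2, 63.4} degrees are assumptions made for illustration; the paper does not list the exact discrete values.

```python
# Illustrative construction of the 10-SV stationary constellation in the LTF:
# nine SVs evenly spaced in azimuth with discrete random elevations, plus one
# SV directly overhead. The elevation set and radius are assumed values.
import numpy as np

def make_constellation(radius_m=2.0e7, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    azimuths = np.deg2rad(np.arange(9) * 40.0)                   # even azimuth spacing
    elevations = np.deg2rad(rng.choice([45.0, 54.2, 63.4], 9))   # discrete uniform draws
    azimuths = np.append(azimuths, 0.0)                          # tenth SV at zenith
    elevations = np.append(elevations, np.pi / 2.0)
    east = radius_m * np.cos(elevations) * np.sin(azimuths)
    north = radius_m * np.cos(elevations) * np.cos(azimuths)
    up = radius_m * np.sin(elevations)
    return np.column_stack((east, north, up))                    # one row per SV
```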
Consider a 3D example with a single vehicle obtaining multiple navigation solutions from an EKF within the ARMAS framework:
$\begin{bmatrix} \dot{\mathbf{x}}_p(t) \\ \dot{\mathbf{x}}_v(t) \\ \dot{\mathbf{x}}_a(t) \end{bmatrix} = \begin{bmatrix} \mathbf{0}_{3\times3} & \mathbf{I}_{3\times3} & \mathbf{0}_{3\times3} \\ \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{I}_{3\times3} \\ \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & -\frac{1}{\tau_a}\mathbf{I}_{3\times3} \end{bmatrix} \begin{bmatrix} \mathbf{x}_p(t) \\ \mathbf{x}_v(t) \\ \mathbf{x}_a(t) \end{bmatrix} + \begin{bmatrix} \mathbf{0}_{3\times3} \\ \mathbf{0}_{3\times3} \\ \mathbf{I}_{3\times3} \end{bmatrix} \mathbf{w}(t)$  (23)
where xp is the vehicle's position (m), xv is the vehicle's velocity (m/s), xa is the vehicle's acceleration (m/s²), and τa = 90 seconds is a time constant associated with a first-order Gauss-Markov (FOGM) process. A 3D white noise process is given by w(t), where E[w(t) w(t + τ)ᵀ] = Qδ(τ) and:
24
The model from Equation (23) was used to generate randomly initialized vehicle trajectories for each trial. Initial velocities and accelerations were normally distributed with σaccel = 1 × 10⁻² (m/s²) and σvel = 5 (m/s). Figure 8 shows 300 sample truth trajectories for this scenario.
300 runs of sample truth trajectories
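Trajectory generation from Equation (23) can be sketched with a simple first-order (Euler) discretization. The step size, duration, and driving-noise strength below are placeholders rather than the paper's values; the initial-condition standard deviations match those stated above.

```python
# Illustrative Euler discretization of the FOGM acceleration model in Equation (23).
import numpy as np

def generate_truth_trajectory(duration_s=600.0, dt=1.0, tau_a=90.0,
                              sigma_vel=5.0, sigma_accel=1e-2, rng=None):
    rng = np.random.default_rng(1) if rng is None else rng
    steps = int(duration_s / dt)
    pos = np.zeros(3)
    vel = rng.normal(0.0, sigma_vel, 3)        # random initial velocity (m/s)
    acc = rng.normal(0.0, sigma_accel, 3)      # random initial acceleration (m/s^2)
    trajectory = np.empty((steps, 3))
    for k in range(steps):
        trajectory[k] = pos
        pos = pos + vel * dt
        vel = vel + acc * dt
        # First-order Gauss-Markov acceleration; driving noise strength is a placeholder.
        acc = acc * (1.0 - dt / tau_a) + rng.normal(0.0, sigma_accel, 3) * np.sqrt(dt)
    return trajectory
```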
The vehicle was initialized with state and covariance estimates:
25
26
The aircraft received discrete measurements from a constellation of stationary satellite vehicles (SVs). Although there were 10 SVs (labeled GPS 1–10), the aircraft were initialized with a trusted subset of this constellation varying from four to seven SVs. An additional fixed satellite with a random uniform elevation between 45 and 63.4 degrees was introduced at t = 360 sec and was corrupted with a constant 40-meter pseudorange bias. This constellation was designed to provide coverage at high elevation angles between approximately 45 degrees and 63.4 degrees with a single satellite directly overhead (Figure 7).
Individual pseudorange measurements are performed according to Equation (27):
$\rho_i = \sqrt{\left(X_{SV_i} - X_u\right)^2 + \left(Y_{SV_i} - Y_u\right)^2 + \left(Z_{SV_i} - Z_u\right)^2} + b_u$  (27)
where ρi is the pseudorange to SV i with fixed coordinates (XSVi, YSVi, ZSVi), (Xu, Yu, Zu) are the estimated user coordinates, and bu is the estimated GNSS receiver clock bias. The pseudorange measurement covariance is RSV = 10² m². The receiver clock bias bu is independently estimated as an additional state in each EKF. All coordinates are expressed in the LTF.
Pseudorange measurement estimates are computed according to Equation (27) using the estimated position states (Xu, Yu, Zu, bu) within $\hat{\mathbf{x}}$. The measurement Jacobian H is:
$\mathbf{H}_i = \begin{bmatrix} \dfrac{X_u - X_{SV_i}}{d_i} & \dfrac{Y_u - Y_{SV_i}}{d_i} & \dfrac{Z_u - Z_{SV_i}}{d_i} & 1 \end{bmatrix}$  (28)
where:
$d_i = \sqrt{\left(X_{SV_i} - X_u\right)^2 + \left(Y_{SV_i} - Y_u\right)^2 + \left(Z_{SV_i} - Z_u\right)^2}$  (29)
is the distance from the platform to an SV.
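A compact sketch of the pseudorange prediction and Jacobian in Equations (27)-(29) follows; it is written against our reconstruction of those equations rather than the ARMAS source.

```python
# Predicted pseudorange and Jacobian row with respect to (X_u, Y_u, Z_u, b_u).
import numpy as np

def pseudorange_and_jacobian(sv_pos, user_pos, clock_bias_m):
    """sv_pos, user_pos: length-3 LTF position vectors (m); clock_bias_m: meters."""
    delta = np.asarray(user_pos, dtype=float) - np.asarray(sv_pos, dtype=float)
    d = float(np.linalg.norm(delta))          # Equation (29): geometric range
    rho_hat = d + clock_bias_m                # Equation (27): range plus clock bias
    H_row = np.append(delta / d, 1.0)         # Equation (28): unit line-of-sight and clock term
    return rho_hat, H_row
```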
In addition to the initial pseudorange sensors, additional unbiased pseudorange sensors could be requested automatically by SOM at any time during the scenario. The other two approaches (legacy ARMAS, EKF) are limited to the sensors provided during initialization. In this respect, SOM has a clear advantage over the two legacy approaches. The point of this analysis is to show that ARMAS-SOM can detect a threat to navigation resilience, provide timely sensor augmentation, and successfully preserve the consistency of the FDE and validation operations in the ARMAS framework. In the middle of the scenario, an insidious, growing pseudorange bias was injected into a single trusted pseudorange sensor to test the ARMAS FDE process. The size of the pseudorange bias at exclusion was recorded. Near the end of the scenario, an untrusted pseudorange sensor with a fixed bias was added into validation at t = 360 sec to test ARMAS sensor validation.
7 NUMERICAL RESULTS
The following section presents the results for a single aircraft operating three different FDE strategies (EKF, ARMAS, and ARMAS-SOM equipped) in four scenarios with four, five, six, and seven initial trusted pseudorange sensors. We assumed the remaining untrusted SVs were available for augmentation via SOM. The maximum position state estimate covariance threshold was set to Ppos, max = diag3×3(20² m²) (see Equation [21]) for all runs.
In each scenario, a single trusted pseudorange sensor experiences a sensor anomaly (linear ramp) from t = 240 to 330 sec and an untrusted biased sensor (40 meters) is added at t = 360 sec. The EKF simply trusts all information provided, has no ability to detect faults nor perform sensor validation, and is included only as a performance baseline. The ARMAS framework is equipped with FDE capabilities and validation of untrusted sensors. The ARMAS recalibration and remodeling modes were not active. ARMAS-SOM includes all of the aforementioned ARMAS capabilities and adds the ability to request additional untrusted pseudorange sensors at any time. SOM monitors the stability of the state covariance estimates in the observability subfilters (Layer 2) according to Equation (21). Since each observability layer subfilter excludes two sensors, the minimum number of pseudorange sensors at the user output level (Layer 0) is six. If one of the trusted sensors is excluded, the quantity of sensors at the top level must remain at no less than six to ensure stability at the Layer 2 subfilter level.
7.1 Scenario 1
In Scenario 1, the aircraft was initialized with four trusted pseudorange sensors receiving information from a random stationary constellation of SVs in the LTF. A summary of the results is shown in Table 1. Clearly, ARMAS-SOM outperforms both ARMAS and the EKF, especially in terms of root sum square (RSS) error and detection rate for a biased sensor. This was expected because ARMAS-SOM augments itself with two additional untrusted pseudorange sensors using the ARMAS validation process prior to the sensor anomaly event at t = 240 sec. Once the spoofed sensor had been excluded, ARMAS-SOM requested one additional untrusted sensor to stabilize the observability layer prior to validation of the untrusted biased sensor. This resulted in three added pseudorange sensors for a total of seven sensors. With the ARMAS framework, the exclusion of the spoofed sensor resulted in only three pseudorange sensors, which was insufficient to provide a stable 3D position estimate with clock bias estimation. This is evidenced by the large growth in estimated standard error at approximately t = 245 sec in Figure 9. Figure 10 shows the mean RSS error and standard deviation for 1,000 Monte Carlo trials. It is clear that ARMAS does not recover well from the exclusion of the spoofed sensor, with a bias detection rate of 0.01, and is clearly outperformed by ARMAS-SOM with a detection rate of 0.99. The EKF simply trusts all information provided and the navigation solution is carried off by the linear ramp sensor anomaly.
Scenario 1 Results: RSS Error, GPS4 Exclusion Magnitude, Detection Rate, and Quantity of Augmented Sensors
Scenario 1 state estimation error for one run with a sensor anomaly from t = 240 to 330 sec and a biased sensor added at t = 360 sec
Scenario 1 mean 3D RSS error ± 1-σ for 1,000 runs with a sensor anomaly from t = 240 to 330 sec and a biased sensor added at t = 360 sec
7.2 Scenario 2
In Scenario 2, the aircraft was initialized with five trusted pseudorange sensors. A summary of the results is shown in Table 2. ARMAS-SOM outperformed both ARMAS and the EKF. Figure 12 shows the improvement in estimation performance achieved by ARMAS-SOM sensor augmentation at approximately t = 70 sec.
Scenario 2 Results: RSS Error, GPS4 Exclusion Magnitude, Detection Rate, and Quantity of Augmented Sensors
Scenario 2 Mean 3D RSS error ± 1-σ for 1,000 runs with a sensor linear pseudorange ramp bias from t = 240 to 330 sec and a biased sensor added at t = 360 sec
This is also visible at approximately t = 70 sec in the standard error estimates in Figure 11. Once the spoofed sensor was excluded, ARMAS-SOM requested one additional untrusted sensor to stabilize the observability layer prior to validation of the untrusted biased sensor. This resulted in two added pseudorange sensors for a total of seven sensors. With the ARMAS framework, the exclusion of the spoofed sensor resulted in four pseudorange sensors, which is sufficient to provide a stable 3D position estimate with clock bias estimation at the Main Filter level. Figure 12 shows the mean RSS error and standard deviation for 1,000 Monte Carlo trials. The ARMAS and ARMAS-SOM performance was closer in this scenario; while the ARMAS framework maintained a stable solution, it had difficulty with proper validation of the untrusted biased sensor, with a detection rate of 0.58, and was outperformed by ARMAS-SOM with a detection rate of 0.99. The EKF simply trusts all information provided and the navigation solution is, thus, exploited by the biased information.
Scenario 2 state estimation error for one run with a linear pseudorange ramp bias from t = 240 to 330 sec and a biased sensor added at t = 360 sec
7.3 Scenario 3
In Scenario 3, the aircraft was initialized with six trusted pseudorange sensors. A summary of the results is shown in Table 3. ARMAS-SOM outperformed both the ARMAS framework and the EKF. Figures 13 and 14 show that ARMAS and ARMAS-SOM performance is nearly identical until ARMAS-SOM augments with an additional untrusted sensor at approximately t = 310 sec after spoofed sensor exclusion. This resulted in one added pseudorange sensor for a total of seven sensors. With the ARMAS framework, the exclusion of the spoofed sensor resulted in five pseudorange sensors at the Main Filter level, which is sufficient to provide a stable 3D position estimate with clock bias estimation in each Layer 1 subfilter (each containing four pseudorange sensors). With an increase in information, it is clear that ARMAS performance is closer to ARMAS-SOM than in Scenario 2. ARMAS maintained a stable solution but had difficulty with the proper validation of the untrusted biased sensor with a detection rate of 0.85 and was outperformed by ARMAS-SOM with a detection rate of 1.00. The EKF simply trusts all information provided and the navigation solution is exploited by the biased information.
Scenario 3 Results: RSS Error, GPS4 Exclusion Magnitude, Detection Rate, and Quantity of Augmented Sensors
Scenario 3 state estimation error for one run with a sensor linear pseudorange ramp bias from t = 240 to 330 sec and a biased sensor added at t = 360 sec
Scenario 3 mean 3D RSS error ± 1-σ for 1,000 runs with a sensor linear pseudorange ramp bias from t = 240 to 330 sec and a biased sensor added at t = 360 sec
7.4 Scenario 4
In Scenario 4, the aircraft was initialized with seven trusted pseudorange sensors. A summary of the results is shown in Table 4. ARMAS-SOM performed identically to the ARMAS framework and both outperformed the EKF. Figures 15 and 16 show that ARMAS and ARMAS-SOM performance is practically identical. ARMAS-SOM did not request any sensor augmentation, for a total of seven pseudorange sensors. The EKF simply trusts all information provided and the navigation solution is exploited by the biased information.
Scenario 4 Results: RSS Error, GPS4 Exclusion Magnitude, Detection Rate, and Quantity of Augmented Sensors
Scenario 4 state estimation error for one run with a sensor linear pseudorange ramp bias from t = 240 to 330 sec and a biased sensor added at t = 360 sec
Scenario 4 mean 3D RSS error ± 1-σ for 1,000 runs with a sensor linear pseudorange ramp bias from t = 240 to 330 sec and a biased sensor added at t = 360 sec
8 PAPER SUMMARY AND FUTURE WORK
This paper addresses a critical vulnerability of the Autonomous and Resilient Management of All-Source Sensors (ARMAS) framework and provides a convenient method to monitor real-time navigation resilience and eliminate subfilter respawn downtime in the event of a sensor failure. This method presents a novel observability bank operating in a layer containing a full combination of unique subfilters which each exclude measurements from two sensors. These additions to the ARMAS framework provide real-time stable observability analysis via monitoring of the Layer 2 subfilter a-posteriori covariance matrices.
The ARMAS framework was originally developed with linear 2D position and velocity sensors that provided fully overlapping position observability. Initial analysis of ARMAS with GNSS pseudorange data from a sUAS flight test at Camp Atterbury, Indiana, showed that ARMAS operations could become inconsistent if the FDE layer subfilters lose overlapping position estimation observability. SOM monitors each Layer 2 subfilter for both observability and stabilizability.
To maintain resilience to a single simultaneous sensor failure, we must assume that a single sensor may fail at any time. Since the observability bank contains a subset of subfilters that will form the new FDE layer after a sensor exclusion, the observable and stabilizable properties guaranteed by SOM are inherited by the newly formed FDE layer. Furthermore, SOM provides the user with a timely warning to augment with additional sensor data and provides a notification when the augmented sensor information is sufficient for resilience to a single simultaneous sensor failure. A Monte Carlo analysis of four example scenarios showed that a loss of overlapping position observability in the FDE layer could result in an inability to exclude a failed sensor and inadvertent validation of a corrupted sensor, resulting in undetected corruption of the main navigation solution. SOM is shown to provide intrinsic awareness of the underlying overlapping observability assumptions made by ARMAS. With sensor augmentation, these assumptions can be preserved to guarantee ARMAS framework resilience to a single simultaneous sensor failure, as demonstrated by the preservation of the ARMAS FDE and validation processes.
HOW TO CITE THIS ARTICLE
Gipson, J. S., & Leishman, R. C. (2022). Resilience monitoring for multi-filter all-source navigation framework with assurance. NAVIGATION, 69(4). https://doi.org/10.33012/navi.550
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.