Abstract
This study proposes three novel integrity monitoring algorithms based on Bayesian Receiver Autonomous Integrity Monitoring (BRAIM). Two problems of integrity monitoring for land-based applications in GNSS-challenged environments are explored: the requirement for sufficient measurement redundancy and the presence of large biases. The need for measurement redundancy was mitigated by using BRAIM. This enabled the employment of a Fault Detection and Exclusion (FDE) algorithm without the usually required minimum availability of six measurements. To increase the estimated integrity, a Spatial Feature Constraint (SFC) algorithm was implemented to constrain solutions to feasible locations within a road feature. The performance of the proposed FDE+BRAIM, SFC+BRAIM and FDE+SFC+BRAIM algorithms was evaluated for GPS and multi-sensor data. For the non-Gaussian measurement error distribution and under the test conditions, the best achieved probability of misleading information was on the order of 10^−8 for road-level requirements. The results provide an initial proof-of-concept for non-Gaussian, non-linear, multi-sensor integrity monitoring algorithms.
- Bayesian Receiver Autonomous Integrity Monitoring
- GNSS
- Local Positioning System
- multi-sensor positioning
- Particle Filter
- Spatial Feature Constraint
1 INTRODUCTION
Integrity of the positioning system solution can be defined as a measure of trust one can put in the value of the estimated position (European GNSS Agency, 2015; Salós et al., 2010; Zhu et al., 2018). Imparato (2016) writes that, practically, integrity means “a guarantee of safety.” As such, Global Navigation Satellite System (GNSS) position integrity monitoring was first used in civil aviation (Brown, 1988, 1992; Ochieng et al., 2003). Integrity has now become a key performance metric for intelligent transport systems (ITS) given the ongoing efforts made to develop more robust integrity monitoring algorithms for land-based applications in GNSS-challenged environments (Imparato et al., 2018; Zhu et al., 2018).
However, “guarantee of safety” cannot be given without any risk of misleading information associated with it. This risk exists due to the different error sources that impact the GNSS signal and needs to be limited to a specified tolerable level that differs depending on the application.
Unlike civil aviation, land-based operations are more affected by biases such as multipath due to the presence of reflective surfaces like buildings, vehicles and trees (Hofmann-Wellenhof et al., 2001; Ward et al., 2006). Furthermore, due to the inability of GNSS signals to propagate through obstacles, the availability of GNSS data is reduced in urban environments. Because of this, algorithms like the Receiver Autonomous Integrity Monitoring (RAIM) algorithm developed for civil aviation may not be appropriate for such challenging environments. A literature review was carried out by Zhu et al. (2018) in which different classic RAIM algorithms were discussed from the perspective of land-based applications. One of the main highlighted problems of classic RAIM algorithms is that they require sufficient measurement redundancy to estimate integrity. When it comes to GNSS data, a minimum of four measurements are required for position estimation, five for integrity estimation and fault detection, and a minimum of six if fault exclusion is required as well (Hofmann-Wellenhof et al., 2001; Zhu et al., 2018). The effect of this problem has been experimentally shown by Gabela, Kealy et al. (2020), where integrity could not be estimated due to insufficient GPS measurement redundancy almost 50% of the time in downtown Melbourne, Australia. When the measurement redundancy requirement was satisfied, integrity was available ∼2% of the time. In this case, the use of multi-GNSS would reduce the occurrence of insufficient measurement redundancy. However, in severe urban canyons found at the centers of some large cities, degradation of GNSS-only solutions might still occur.
To mitigate the problem of insufficient measurement redundancy, Bayesian RAIM (BRAIM), as shown by Pesonen (2011), has been adapted here with a Particle Filter (PF) as the underlying position estimator. Unlike the classic RAIM algorithms, where integrity is estimated in the measurement domain, BRAIM estimates integrity in the position domain. This is relevant since the integrity is estimated based on an a posteriori error distribution, which is always available, as opposed to the indirect estimation through observations as in classic RAIM algorithms (Pesonen, 2011). The performance of BRAIM presented by Pesonen (2011) was evaluated on simulated GNSS data for six satellites. In this paper, the performance of BRAIM is evaluated on real-world GPS and multi-sensor system data collected in Sydney, Australia. The multi-sensor system is based on the fusion of real-world GPS data and real-world terrestrial ranges measured from the Local Positioning System (LPS). The LPS is set up as a network of static anchors or infrastructure nodes used to aid positioning of users on a road. The nodes of the LPS network can be equipped with a variety of sensors capable of measuring ranges to the dynamic nodes. For example, Ultra Wide Band radios can be used for ranging by measuring time difference of arrival (TDOA), or Wi-Fi access points can be used for ranging by measuring received signal strength. In (Gabela, Kealy et al., 2020), we have shown the advantage of such a system compared to stand-alone GNSS in terms of integrity availability in the urban environment.
Although BRAIM does not require measurement redundancy, the fusion of GNSS pseudoranges and terrestrial ranges should positively affect the a posteriori distribution (shown in Gabela et al., 2019) and, consequently, the integrity. The fusion of GNSS and terrestrial ranges (as defined and used in this paper) in integrity monitoring of land-based applications is not common, and we were able to find only two studies that have done this (J. Liu et al., 2020; Xiong et al., 2019). It is also worth mentioning that similar efforts to ensure sufficient measurement redundancy have previously been made for civil aviation. For example, Pervan (1996) and Enge (1999) demonstrate a navigation integrity method for aircraft precision landing using GNSS and ground-based pseudolites, which ensure sufficient measurement redundancy. To the authors’ best knowledge, the most common multi-sensor integrity solutions for land vehicles are based on the integration of GNSS, Inertial Measurement Units (IMU), and odometer or speedometer (Han et al., 2020; W. Liu et al., 2020; El-Mowafy & Kubo, 2017; Toledo-Moreo et al., 2010). Maaref et al. (2018) propose a system based on cellular long-term evolution (LTE) signals and IMU (without GNSS) for integrity monitoring in urban environments. The absence of GNSS in their system is valid in terms of applicability in the complex urban environment where GNSS signals are sparse. However, in remote areas, where it is fair to assume that cellular signals weaken and the network density of the LPS (used in this paper) decreases, fusion with GNSS is necessary since GNSS will probably become the primary source of reliable measurements.
Having mitigated the challenges of measurement availability and insufficient redundancy in the urban environment, the effect of large biases created by multipath needs to be addressed. Although BRAIM estimates integrity no matter the number of measurements, the a posteriori distribution is still negatively affected by large biases (Ward et al., 2006). We hypothesize that the integrity estimate and availability will be improved by employing a Fault Detection and Exclusion (FDE) algorithm in addition to multi-sensor BRAIM (i.e., the FDE+BRAIM algorithm). The FDE algorithm would, as in some classic RAIM methods, such as (Zhu et al., 2018), use measurement residuals to detect and exclude multiple faulty measurements. A comparison of different FDE methods is provided by Zabalegui et al. (2020) and Knight and Wang (2009) (e.g., subset testing, sequential local test, forward-backwards test, Danish method). J. Liu et al. (2020); Xiong et al. (2019); Han et al. (2020); Maaref et al. (2018); El-Mowafy et al. (2020) propose and implement different FDE methods for the purpose of integrity monitoring. However, much of the available literature requires a measurement redundancy of at least six measurements to remove one fault (Han et al., 2020; El-Mowafy et al., 2020) or can only consider one fault at each time-stamp (J. Liu et al., 2020; Maaref et al., 2018). An FDE method capable of excluding a single fault even when classic RAIM is not possible is proposed by J. Liu et al. (2020). This is necessary for the urban environment where measurement redundancy cannot be expected (Gabela, Kealy et al., 2020). Cooperative Integrity Monitoring is proposed by Xiong et al. (2019) in addition to an FDE algorithm that is capable of better detection of GNSS faults than classic RAIM, and of detecting multiple faulty cooperative measurements.
In this paper, multiple fault FDE is possible without limitation of the number of measurements that can be removed. Because classic RAIM depends on sufficient redundant measurements, the exclusion of faulty measurements usually stops when there are less than six measurements left. However, when the FDE and BRAIM algorithms are used, the exclusion of faulty measurements can proceed past this point since the problem of having sufficient measurements is addressed by using the BRAIM algorithm.
Given that the vehicles are constrained to roads, a Spatial Feature Constraint (SFC) algorithm (from Gabela, Majic et al., 2020) was implemented to further remove the effect of faulty measurements and to decrease the integrity risk. Unlike the classic Map Matching (MM) algorithm where a position solution is snapped back to the spatial feature, SFC eliminates the particles (i.e., hypotheses) of the PF that are not within the spatial feature. The particles bounded by the spatial feature are re-weighted. Usually, the GNSS integrity and the integrity of MM are considered separately (Li et al., 2013). On the contrary, Toledo-Moreo et al. (2010) propose a method where both positioning and MM are performed simultaneously by the PF, thus including both processes in the integrity estimate. In addition to MM where classic maps are usually used, recently, 3D maps have been used for characterizing/detecting GNSS errors in the urban environment and aiding the integrity monitoring (El-Mowafy et al., 2020; Kbayer & Sahmoudi, 2017). This paper will use a 2D map to represent spatial features, and the position integrity estimated by the proposed algorithms will also include the integrity of the SFC algorithm.
Next, the integrity requirements are considered. Given that they depend on the application, generalized applications are considered for performance evaluation in this paper: road-level and lane-level positioning. These applications require levels of integrity that guarantee the knowledge of the road the vehicle is on and the lane the vehicle is in, respectively. The minimum required integrity probability used in this paper is 1 − 1×10^−7 per epoch, with required horizontal (2D) alarm limits of 5 m and 1.1 m for the road- and lane-level applications, respectively (ARRB Project Team, 2013; Ochieng et al., 2003). As a prerequisite to integrity monitoring, it is important to have a positioning system capable of satisfying the stringent integrity requirements. As mentioned before, BRAIM is implemented with the PF, for which we have demonstrated that the average expected accuracy of the multi-sensor PF is 0.27 m (Gabela et al., 2019). Given that the positioning system used in this paper is capable of achieving positioning accuracy well under the required horizontal alarm limit, it is expected that the frequency of misleading information will be reduced.
This paper aims to explore the integrity capabilities of three novel algorithms: FDE+BRAIM (a combination of FDE and BRAIM), SFC+BRAIM (a combination of SFC and BRAIM) and FDE+SFC+BRAIM (a combination of FDE, SFC and BRAIM). It represents the initial body of work done for the development of these algorithms and provides an initial proof-of-concept. The capabilities of the novel algorithms will be compared to the existing BRAIM (Pesonen, 2011). Compared to (Pesonen, 2011), where simulated stand-alone GNSS test data were used, in this paper, the proposed algorithms and BRAIM will be evaluated using the fusion of real-world GPS and terrestrial range data. In addition, the paper explores the difference in integrity monitoring performance when terrestrial ranges are fitted with a more appropriate distribution compared to when the commonly made Gaussian assumption is used.
The main research questions are: What is the best achievable probability of misleading information for the multi-sensor positioning system based on PF? What is the rate of success of the proposed methods? How does the probability of misleading information and integrity availability change depending on the algorithm conditions? Are proposed algorithms sufficient for road-level or lane-level applications?
The main contribution of this paper is the development of three novel integrity monitoring algorithms that can improve the integrity estimation and reduce the false alarm rates compared to the existing ones. Another important contribution is that the performance analysis is done for fitted real-world non-Gaussian LPS measurement errors: neither the commonly made Gaussian error model assumption nor linearization of the measurement model is used. Furthermore, the proposed FDE algorithm can detect and eliminate multiple faults without the need to retain the limit of six redundant measurements, which has constrained previous algorithms. Another novelty is that SFC integrity monitoring is performed simultaneously with the position integrity monitoring with the use of BRAIM. Lastly, the experimental evaluation of the proposed methods was done with a prototype multi-sensor positioning system where GNSS and terrestrial ranging devices are integrated for the purpose of positioning and integrity monitoring. Although prototype multi-sensor positioning systems can be found in the literature (e.g., Gunning et al., 2019; Toledo-Moreo et al., 2010), a prototype based on the integration of GNSS and a non-GNSS ranging device has not been found in the literature.
The rest of the paper is organized as follows. First, the mathematical framework is presented for multi-sensor positioning and integrity monitoring. The next section describes the data and criteria necessary for the algorithm evaluation. The results are presented in Section 4, followed by the discussion in Section 5 and then a Conclusions section.
2 MATHEMATICAL FRAMEWORK
The implementation of the multi-sensor PF is first explained. Following the PF, a definition of BRAIM is given since all proposed algorithms are based on it. Finally, FDE+BRAIM, SFC+BRAIM, and FDE+SFC+BRAIM algorithms are explained and corresponding pseudocodes are provided.
2.1 Multi-sensor positioning
Due to the non-linearity of the measurement model based on the fusion of the GNSS and terrestrial range data, it has been experimentally shown in (Gabela et al., 2019) that the PF performs better than the Extended Kalman Filter (EKF). Because of this, a PF is used in this paper as the underlying positioning algorithm, given that the same multi-sensor positioning system is employed.
Within the state vector 𝑋k, 10 states are being estimated: three-dimensional (3D) user’s position 𝑟k at time epoch k, 3D user’s velocity 𝑣k at time k, 3D user’s acceleration 𝑎k at time k and GNSS receiver clock bias given in units of length at time k (by multiplying the clock bias with the speed of light). The presented algorithm is for a single GNSS constellation. Hence, making it into a multi-GNSS algorithm would require an additional clock bias or incorporating satellite constellation time offset as shown by Wang et al. (2011).
X_k = [r_k, v_k, a_k, cb_k]^T    (1)
As shown in Equation 2, the measurement vector Z_k can consist of N_IN terrestrial ranges (i.e., LPS ranges) d_k to the infrastructure (i.e., anchor) nodes and N_S received satellite pseudoranges ρ_k. Hence, as in (Gabela et al., 2019), the user’s (i.e., dynamic node) state vector is updated with the LPS ranges and GNSS pseudoranges. At every time instant k, the total number of available measurements is N_IN + N_S.
Z_k = [d_k^1, …, d_k^{N_IN}, ρ_k^1, …, ρ_k^{N_S}]^T    (2)
A constant acceleration model in Equation 3 is chosen to describe the changes of a state vector, which are given in Equation 4 as a state evolution (i.e., system model). This is chosen due to use of the dynamic node data with low and stable accelerations. If needed, a constant velocity model can be defined by using the first formula in Equation 3.
ṙ = v,    v̇ = a    (3)
X_k = F_k X_{k−1} + ω_k    (4)
ṙ denotes velocity, or the change of position r over time, and v̇ denotes acceleration, or the change of velocity v over time. The process noise is denoted by ω_k. In this case, the process noise is assumed to be normally distributed with zero mean and covariance matrix Q_k. The covariance matrix consists of the acceleration noise ω_a and the GNSS receiver clock bias noise with variance ω_cb. In the case of a constant velocity model, Q_k would consist of velocity noise in addition to ω_cb. X_{k−1} is the user’s state vector from the previous time epoch k−1. Finally, F_k is the transition matrix calculated as
F_k = [ I   δt·I   (δt²/2)·I   0
        0    I       δt·I      0
        0    0        I        0
        0    0        0        1 ]    (5)
where δt denotes the time increment between two epochs and I is an identity matrix of appropriate dimensions. In this paper, an increment of 1 s is used.
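Under the assumed state ordering (3D position, 3D velocity, 3D acceleration, clock bias), the constant-acceleration transition matrix of Equation 5 can be sketched as follows (a minimal NumPy sketch, not the paper's implementation):

```python
import numpy as np

def transition_matrix(dt: float) -> np.ndarray:
    """Constant-acceleration transition matrix F_k (Equation 5).

    State ordering assumed: 3D position, 3D velocity, 3D acceleration,
    GNSS receiver clock bias (10 states in total).
    """
    I3 = np.eye(3)
    F = np.eye(10)
    F[0:3, 3:6] = dt * I3             # position updated by velocity
    F[0:3, 6:9] = 0.5 * dt ** 2 * I3  # position updated by acceleration
    F[3:6, 6:9] = dt * I3             # velocity updated by acceleration
    return F                          # clock bias propagates unchanged
```

A constant-velocity variant would simply drop the acceleration blocks.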
A set of N_h particles (i.e., hypotheses) {X_k^h, w_k^h}, h = 1, …, N_h, is formed, where each particle h is assigned a weight w_k^h in addition to the predicted state X_k^h. This is done due to the unavailability of a general closed-form expression for the a posteriori distribution of the user’s state in the PF (unlike for an EKF). The a posteriori distribution is approximated by the set of N_h particles with associated weights (Gabela et al., 2019). As in (Sottile et al., 2011), the importance density is chosen to be the a priori density p(X_k | X_{k−1}^h), which is drawn from the system model. In practice, this means that the set of particles is propagated through the system model using Equations 4 and 5. Particle weights are then updated based on the measurement likelihood as shown in Equation 6. In this paper, N_h = 300,000 was used. The initial particle weights are uniformly distributed (i.e., each particle is given the same weight), and the initial particles are drawn from a Normal distribution centered on the initial state vector.
w_k^h ∝ w_{k−1}^h · p(Z_k | X_k^h)    (6)
where h ∈ {1, 2, …, N_h} and p(Z_k | X_k^h) denotes the likelihood of the measurements Z_k, which is calculated for every particle h. Since the positioning system is based on the fusion of multi-sensor data (LPS ranges and GNSS pseudoranges), the measurement likelihood is calculated as a product of the Probability Density Functions (PDFs) for the different data types. Examples of this have previously been shown by Gabela et al. (2019) and Sottile et al. (2011). Furthermore, as in (Sottile et al., 2011) and as shown in Equation 7, it is assumed that LPS and GNSS ranges are statistically independent of each other (within and across each measurement type). If p_i denotes the probability of the LPS range i and p_j denotes the probability of the GNSS pseudorange j, where i ∈ {1, 2, …, N_IN} and j ∈ {1, 2, …, N_S}, the resulting measurement likelihood for particle h is as follows,
p(Z_k | X_k^h) = ∏_{i=1}^{N_IN} p_i · ∏_{j=1}^{N_S} p_j    (7)
p_i is calculated from the difference between the measured range d_k^i and the predicted range d̂_k^{h,i} from particle h to the infrastructure node i. The predicted range is calculated for every particle, as follows,
d̂_k^{h,i} = √[(x_k^h − x^i)² + (y_k^h − y^i)² + (z_k^h − z^i)²]    (8)
The coordinates of the particle h at time epoch k are (x_k^h, y_k^h, z_k^h). The coordinates of the infrastructure node i are (x^i, y^i, z^i). Since the infrastructure nodes are static, their positions have been predetermined and are given in the Earth-centred Earth-fixed (ECEF) frame, which is a Cartesian global reference coordinate system.
p_j is calculated from the difference between the measured GNSS pseudorange ρ_k^j and the predicted GNSS pseudorange ρ̂_k^{h,j} from particle h to the satellite j. The predicted pseudorange is calculated for every particle, as follows,
ρ̂_k^{h,j} = √[(x_k^h − x_k^j)² + (y_k^h − y_k^j)² + (z_k^h − z_k^j)²] + cb_k^h    (9)
The coordinates of the GNSS satellite j are (x_k^j, y_k^j, z_k^j). cb_k^h denotes the GNSS receiver clock bias for particle h. The positions of the satellites over all epochs have been calculated based on the available ephemeris data and are given in ECEF.
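The predicted range and pseudorange of Equations 8 and 9 can be sketched as follows (function names are illustrative, not from the paper):

```python
import math

def predicted_range(particle_xyz, node_xyz):
    """Predicted LPS range (Equation 8): Euclidean distance between a
    particle and a static infrastructure node, both in ECEF."""
    return math.dist(particle_xyz, node_xyz)

def predicted_pseudorange(particle_xyz, sat_xyz, clock_bias_m):
    """Predicted GNSS pseudorange (Equation 9): geometric range plus the
    particle's receiver clock bias expressed in units of length."""
    return math.dist(particle_xyz, sat_xyz) + clock_bias_m
```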
As shown in (Gabela et al., 2019), p_i can be assumed to be normally distributed with zero mean and variance σ_IN² (Equation 10), or a better approximation of the actual error distribution can be used. In (Gabela et al., 2019), a better approximation of the LPS measurement error distribution was found to be a three-component GMM (the PDF is given in Equation 11 and the component values are shown in Table 1). The details of the error distribution fitting can be found in (Gabela et al., 2019).
p_i = N(d_k^i − d̂_k^{h,i}; 0, σ_IN²)    (10)
p_i = Σ_{c=1}^{N_C} w_c · N(d_k^i − d̂_k^{h,i}; μ_c, σ_c²)    (11)
where the GMM components c are described by the component weight w_c, mean μ_c and variance σ_c², for c ∈ {1, …, N_C}.
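The GMM likelihood of Equation 11 can be sketched as follows; the component values of Table 1 would be supplied by the caller, and the function name is illustrative:

```python
import math

def gmm_pdf(x: float, components) -> float:
    """Gaussian Mixture Model PDF (Equation 11) evaluated at a range
    residual x. `components` is a list of (w_c, mu_c, sigma_c) tuples,
    e.g. the three components fitted to the LPS errors."""
    total = 0.0
    for w_c, mu_c, sigma_c in components:
        norm = 1.0 / (sigma_c * math.sqrt(2.0 * math.pi))
        total += w_c * norm * math.exp(-0.5 * ((x - mu_c) / sigma_c) ** 2)
    return total
```

With a single component (w = 1, μ = 0, σ = σ_IN), this reduces to the Gaussian case of Equation 10.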
p_j is assumed to be normally distributed with zero mean and variance σ_S². In this paper, σ_S = 5.7 m was used according to the published 1σ User Equivalent Range Error (UERE) in (US Department of Defense, 2020).
p_j = N(ρ_k^j − ρ̂_k^{h,j}; 0, σ_S²)    (12)
After the particle weights are updated (Equation 6), they are normalized so that Σ_{h=1}^{N_h} w_k^h = 1. Finally, the user’s state at time epoch k is estimated as
X̂_k = Σ_{h=1}^{N_h} w_k^h · X_k^h    (13)
Before continuing the state estimation for time epoch k + 1, resampling is done according to the method described by Gordon et al. (1993): a random sample is drawn from the uniform distribution and equal weights are set for the new samples (i.e., particles). Different resampling methods exist to avoid “weight degeneracy.” Overviews of different methods can be found in (Li et al., 2015) and (Douc & Cappe, 2005).
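The weight update (Equation 6), normalization, and Gordon et al. (1993)-style multinomial resampling can be sketched as follows (a minimal sketch with illustrative names, not the paper's implementation):

```python
import numpy as np

def update_and_resample(weights, likelihoods, rng):
    """One PF weight update (Equation 6), normalization, and multinomial
    resampling with replacement, after which all particles carry equal
    weight. Returns the resampled particle indices and the reset weights."""
    w = weights * likelihoods                    # w_k^h ∝ w_{k-1}^h * p(Z_k | X_k^h)
    w /= w.sum()                                 # normalize so weights sum to one
    idx = rng.choice(len(w), size=len(w), p=w)   # draw N_h particles with replacement
    new_w = np.full(len(w), 1.0 / len(w))        # equal weights after resampling
    return idx, new_w
```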
2.2 Integrity monitoring algorithms
There are many different versions of RAIM algorithms used for the integrity estimation. An overview of different RAIM algorithms is given by Zhu et al. (2018) where it has been shown that they are often adaptations of classic RAIM methods. The biggest disadvantage of applying classic RAIM algorithm (e.g., WLS RAIM) and similar algorithms in the urban environment is the determination of the integrity in the measurement domain based on the test statistic calculated from the available measurements. As measurement unavailability often occurs in such challenging environments, integrity estimate also becomes unavailable (Gabela, Kealy et al., 2020). Alternatively, algorithms such as BRAIM estimate integrity in the position domain by solving “the state-space estimation problem by finding the a posteriori distribution of the state ” (Pesonen, 2011). This means that the integrity estimate will be available even if the measurements are not. However, without any measurements, the a posteriori distribution, and the integrity estimate, would mainly be based on the a priori state distribution, which will affect the trustworthiness of such a result.
This section will first present BRAIM as presented by Pesonen (2011). This will be followed by the novel FDE+BRAIM, SFC+BRAIM and FDE+SFC+BRAIM algorithms.
2.2.1 BRAIM
The BRAIM algorithm estimates the probability of misleading information p_MI, which is the probability of the position error being outside the bounds of the required alarm limit (AL). It can be estimated by integrating the a posteriori distribution over a set S (as shown in Equations 14 and 15) (Pesonen, 2011). The BRAIM algorithm and the equations below were first presented by Pesonen (2011).
1 − p_MI = ∫_S p(X_k | Z_1:k) dX_k ≈ ∫_S Σ_{h=1}^{N_h} w_k^h · δ(X_k − X_k^h) dX_k    (14)
where
S = { X_k : ‖X_{k,1:3} − X̂_{k,1:3}‖ < AL }    (15)
Equation 15 shows that S is the subset of the state space containing the states whose distance from the estimated position is smaller than the required AL. Hence, if a particle h is part of this subset, its position is within AL of the estimated position, and its weight contributes to the sum 1 − p_MI. δ denotes the Dirac delta function. The index 1:3 indicates that the first three states, corresponding to the 3D position as shown in Equation 1, are used to calculate the distance.
Equation 14 can be simplified as shown in Equation 16. Here, it is clear that 1 − p_MI is directly calculated as the sum of particle weights over the subset of particles J, where J is the set of indices of the particles satisfying Equation 15. Unless all the particles are within AL of the estimated position, 1 − p_MI will never be equal to one.
1 − p_MI = Σ_{h∈J} w_k^h    (16)
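The p_MI computation of Equations 14–16 can be sketched as follows, assuming normalized particle weights (a minimal sketch; names are illustrative):

```python
import numpy as np

def p_mi(positions, weights, est_pos, AL):
    """Probability of misleading information (Equations 14-16).

    positions: (N_h, 3) array of particle 3D positions;
    weights:   normalized particle weights (sum to one);
    est_pos:   estimated 3D position X_hat_{k,1:3};
    AL:        alarm limit in metres.
    """
    dist = np.linalg.norm(positions - est_pos, axis=1)
    inside = dist < AL                   # subset J of Equation 15
    return 1.0 - weights[inside].sum()   # p_MI = 1 - sum of inside weights
```

If every particle lies within AL of the estimated position, the function returns 0, i.e., 1 − p_MI = 1.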
In addition to AL, integrity risk is another integrity requirement. Once p_MI is estimated, it is compared with the required integrity risk (IR) probability p_IR. This is discussed further in Section 3. The pseudocode for BRAIM is given in Algorithm 1.
Given the way p_MI is calculated, it is suggested that the particle set is treated as the complete representation of the a posteriori distribution. The PF is usually considered to be a sub-optimal estimator because it is not analytically tractable, and the a posteriori distribution is represented by the numerically propagated particles (Gabela et al., 2019). As discussed in (Gabela et al., 2019), the larger N_h is, the closer the a posteriori distribution is to the analytically tractable solution, and the PF approaches the optimal estimator. In this paper, a large sample of N_h = 300,000 is used. However, in the future, an analysis of the sampling uncertainty as a function of N_h is needed.
Algorithm 1 currently keeps running even after integrity becomes unavailable. If this algorithm were to be implemented for a real-world ITS application, the user would either stop the operation and re-initialize the PF or would switch to an alternative positioning system if available. In this paper, this was not done, as one of the main goals was to demonstrate how 𝑝𝑀𝐼 changes with the new incoming measurements.
The presented BRAIM is similar to the multiple hypotheses approach in (Pervan et al., 1998). In this paper, every particle represents a hypothesis with the assigned weight given the measurement likelihoods. The approach in (Pervan et al., 1998) is “founded on the direct evaluation of integrity risk under the unified consideration of all single-element failure hypotheses and the no-failure hypothesis.” As here, 𝑝𝑀𝐼 (or in their case 𝑝𝐼𝑅) is defined as the likelihood that unknown position error exceeds 𝐴𝐿. As in this paper, Pervan et al. (1998) determine integrity unavailability when 𝑝𝑀𝐼 exceeds the required integrity risk.
Gupta and Gao (2019) present a RAIM method based on PF where each particle is considered to be a hypothesis as well. Because the PF is not analytically tractable and the particles in it are numerically propagated, the PF is able to “hypothesize” about the correctness of different measurement subsets through the process of weighting the particles based on the likelihood of the measurements. As explained by Gupta and Gao (2019), BRAIM is able to test multiple hypotheses and their likelihoods by distributing integrity risk, or as used in this paper, 𝑝𝑀𝐼, among all possible hypotheses. This, in turn, provides a robust estimation of 𝑝𝑀𝐼 since multiple hypotheses are considered and not just one with respect to the presence of faulty measurements (as is done in the classic RAIM methods) (Gupta & Gao, 2019). Hence, the BRAIM method will be able to deal with presence of faulty measurements in a more robust manner due to its use of multiple hypotheses.
An additional difference from classical RAIM methods is the fact that 𝑝𝑀𝐼 is estimated and not the protection level PL. However, there are RAIM methods such as the one presented by Salós et al. (2014) that have adapted classical RAIM methods to the urban environment by equating PL and AL. The idea of this method was to increase the integrity estimate availability since PL could not exceed AL by definition (i.e., they were equated). With that, Salós et al. (2014) have made the integrity estimate always available (i.e., 𝐴𝐿 = 𝑃𝐿). Because of this, previously constant probability of false alarm became variable and dependent on the available measurements. Although different, a parallel could be drawn with the BRAIM method where 𝑝𝑀𝐼 is adaptable depending on robust multiple hypotheses and PL can, for practical purposes, be considered to be equal to 𝐴𝐿.
2.2.2 FDE+BRAIM
Despite BRAIM’s advantage of a robust p_MI estimate, the estimate may benefit from integrating BRAIM with a simple FDE algorithm. FDE algorithms are always paired with integrity methods as part of the pre-screening process aimed at removing faulty measurements before an integrity availability estimate is made.
Due to estimating integrity in the position domain, the integrity estimate is always available, and it does not depend on measurement availability/redundancy. However, error sources like multipath can cause large outliers in the available measurements, which are then introduced into the particle weight update process and, consequently, the position and integrity estimation. Although (Gabela et al., 2019) shows that the implemented PF (see Section 2.1) deals with larger errors better than the EKF, they still affect the positioning performance and, therefore, the integrity. Hence, to minimize the instances of misleading information that can occur due to larger positioning errors, an FDE is performed in the measurement domain before estimating p_MI. It is expected that the FDE algorithm (i.e., FDE+BRAIM) will improve the overall integrity and reduce the number of false negatives/positives.
The chosen FDE algorithm is a simple outlier test described by Martineau (2008) and Jiang et al. (2011). Some small modifications have been made here to adapt to the PF framework presented in Section 2.1. Instead of using the least-squares estimated position, the PF estimated position is used. The difference between the vector of measurements Zmeasured and predicted measurements Zpredicted is given as,
ΔZ = Z_measured − Z_predicted    (17)
Z_predicted is calculated using the Euclidean formula for the 3D distance between the user’s estimated position (from the estimated state vector X̂_k) and the appropriate infrastructure node or satellite. The residual vector res can be calculated as follows,
res = (I − H (Hᵀ H)⁻¹ Hᵀ) ΔZ    (18)
where 𝐼 denotes the identity matrix and 𝐻 denotes a measurement matrix.
H = [ H_IN^1; …; H_IN^{N_IN}; H_S^1; …; H_S^{N_S} ]    (19)
H_IN^i = [ (x̂_k − x^i)/d̂_k^i   (ŷ_k − y^i)/d̂_k^i   (ẑ_k − z^i)/d̂_k^i   0 ]    (20)
H_S^j = [ (x̂_k − x_k^j)/ĝ_k^j   (ŷ_k − y_k^j)/ĝ_k^j   (ẑ_k − z_k^j)/ĝ_k^j   1 ]    (21)
As in (Gabela et al., 2019), the measurement matrix H describes the linearized relation between the measurements (i.e., the LPS ranges and GNSS satellite pseudoranges) and the state estimate X̂_k. Due to the non-linearity of the measurement model (as shown in Equations 8 and 9), the measurement matrix H consists of Jacobian matrices derived for every range (Equation 20) and pseudorange measurement (Equation 21). The Jacobians are the first-order partial derivatives of Equations 8 and 9 with respect to the estimated state X̂_k. Equation 22 shows the test statistic for fault detection calculated based on the measurement residual res and the degrees of freedom DOF.
test_stat = √(resᵀ res / DOF)    (22)
DOF = n − m for n = N_IN + N_S measurements and m estimated states (in this case m = 4: the part of the state vector consisting of the 3D position and the receiver clock bias).
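The construction of one row of H from the partial derivatives of Equations 8 and 9 can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def jacobian_row(est_pos, target_xyz, is_pseudorange):
    """One row of the measurement matrix H (Equations 20-21).

    The row holds the partial derivatives of the predicted (pseudo)range
    with respect to the estimated 3D position and the receiver clock
    bias; the clock-bias entry is 1 for a GNSS pseudorange and 0 for an
    LPS range, since only pseudoranges depend on the clock bias.
    """
    diff = np.asarray(est_pos, dtype=float) - np.asarray(target_xyz, dtype=float)
    geom_range = np.linalg.norm(diff)          # geometric range to node/satellite
    return np.append(diff / geom_range, 1.0 if is_pseudorange else 0.0)
```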
The threshold is calculated as,
Th = σ_o · T_chi    (23)
where 𝑇𝑐ℎ𝑖 denotes a normalized chi-squared distribution threshold calculated using the 𝐷𝑂𝐹 and the probability of false alarm (FA) 𝑝𝐹𝐴 as the probability of exceeding the threshold under normal conditions, which incorrectly alerts the user. Since 𝑝𝐹𝐴 is related to continuity risk, a value 𝑝𝐹𝐴 = 10−5 as defined in (European GNSS Agency, 2015) was chosen. 𝜎𝑜 is the a priori standard deviation of the measurements.
If test_stat ≤ Th, no fault is detected; otherwise, a fault is detected. Once a fault is detected, the measurement with the highest residual res is removed, and the residuals, the test statistic and the threshold are recalculated. This process is repeated as long as DOF > 0 and test_stat > Th. Due to the use of multiple measurement systems (i.e., LPS and GNSS), choosing a common value of Th and comparing it with a test statistic calculated from the joint H matrix would result in the FDE being either too loose or too stringent for one or both measurement types. In this paper, to avoid this issue, separate test statistics and threshold values have been determined for the satellite and LPS measurements. For both types, σ_o = 1 was used. DOF values were also determined separately. Then, the presence of faults was detected separately for each measurement type as explained above.
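One pass of the residual-based outlier test (Equations 17–23) can be sketched as follows. This is a minimal sketch, not the paper's implementation: the normalized chi-squared threshold T_chi is assumed to be computed externally for the current DOF and p_FA (e.g., with scipy.stats.chi2), and names are illustrative. The caller would loop, excluding one measurement per pass, until no fault is flagged or DOF reaches zero:

```python
import numpy as np

def fde_outlier_test(dZ, H, T_chi, sigma_o=1.0):
    """One pass of the outlier test (Equations 17-23).

    dZ:      measured-minus-predicted measurement vector (Equation 17);
    H:       Jacobian measurement matrix, shape (n, m);
    T_chi:   normalized chi-squared threshold for the current DOF/p_FA;
    sigma_o: a priori standard deviation of the measurements.

    Returns the index of the measurement to exclude, or None if no
    fault is detected (or DOF is exhausted).
    """
    n, m = H.shape
    dof = n - m
    if dof <= 0:
        return None
    P = np.eye(n) - H @ np.linalg.inv(H.T @ H) @ H.T  # residual projector
    res = P @ dZ                                      # Equation 18
    test_stat = np.sqrt(res @ res / dof)              # Equation 22
    if test_stat <= sigma_o * T_chi:                  # Equation 23: no fault
        return None
    return int(np.argmax(np.abs(res)))                # largest residual excluded
```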
If any faults are detected and faulty measurements excluded, the PF a posteriori state is re-estimated since the measurement likelihood function and, consequently, particle weights change. Now, 𝑝𝑀𝐼 is calculated and integrity availability assessment is made. The pseudocode for FDE+BRAIM is given in Algorithm 2.
The chosen method of FDE is the simplest method of fault detection and exclusion. It should be noted that the aim of this paper is to demonstrate a proof-of-concept for the proposed ideas and algorithm combinations, not to propose a robust FDE method for these two measurement types. The proposed concept of combining FDE with BRAIM can work with any FDE algorithm. An analysis of different FDE algorithms was conducted by Knight and Wang (2009), who compared the outlier test with 10 different robust FDE methods. FDE methods designed for the PF would be ideal, since no linear approximations or Gaussian assumptions would be made. For example, Wang et al. (2018) use a bank of auxiliary PFs to detect faults.
2.2.3 SFC+BRAIM
The previous algorithm, FDE+BRAIM, does not depend on measurement availability/redundancy (owing to the nature of BRAIM) and also mitigates measurement outliers in the measurement domain (FDE). However, a third component needs to be considered: the implementation of the BRAIM algorithm itself (the effect of summarizing the particle weights, as shown in Figure 1). Because integrity is assessed from the sum of weights of the particle subset determined as in Equation 15, the integrity directly depends on the measurement likelihood of those particles. In theory, looking at Equation 16, correct classification of integrity availability depends on having as many particles as possible within the AL of the estimated position. Otherwise, the particles that are bounded by the AL and closer to the true position need to be assigned a higher weight. The additional information necessary for increasing these particle weights is derived from the Spatial Feature Constraint (SFC) method.
To achieve this, the fact that most vehicles drive on roads was used. A simple SFC algorithm as presented in (Gabela, Majic et al., 2020) was applied by utilizing a map of the road network. As in (Ray et al., 2018), the measurement likelihood can now be redefined with a binary weighting scheme,
𝑝(𝐳𝑘 | 𝐱𝑘𝑖) = 𝜖 if particle 𝐱𝑘𝑖 lies outside the road polygon, and the likelihood of Equation 7 otherwise,    (24)
where particles that are outside of the predefined road polygon get assigned a very small or zero likelihood 𝜖 and the particles that are within the road polygon get assigned the likelihood calculated according to Equation 7. In this paper 𝜖 = 0. Particle weights can be updated based on the measurement likelihood as shown in Equation 6, and consequently normalized to sum up to 1.
An example of how SFC works is shown in Figure 2. It is expected that by effectively eliminating the particles outside of the road network (𝜖 = 0) and increasing the weights of the particles within it, the weight of some particles bounded by the AL will also increase. Hence, the integrity estimate is expected to improve. Algorithm 3 presents the pseudocode for SFC+BRAIM.
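A minimal sketch of the binary weighting step, assuming a simple ray-casting point-in-polygon test (function names are ours; the paper's actual implementation is given in Algorithm 3):

```python
import numpy as np

def point_in_polygon(pt, poly):
    """Ray-casting test: is 2D point pt inside polygon poly (list of (x, y))?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                   # edge crosses the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def sfc_reweight(particles, weights, road_poly, eps=0.0):
    """Binary SFC weighting (cf. Eq. 24): particles outside the road polygon
    get their likelihood scaled by eps (zero here); weights are then
    renormalized to sum to 1."""
    w = np.array([wi if point_in_polygon(p[:2], road_poly) else wi * eps
                  for p, wi in zip(particles, weights)])
    return w / w.sum()
```

With 𝜖 = 0, the weight mass of the off-road particles is redistributed onto the on-road particles by the normalization, which is exactly the mechanism expected to raise the weights of particles bounded by the AL.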
Map data quality must be considered for global scalability of an MM or SFC integrity monitoring algorithm (Toledo-Moreo et al., 2010). Gabela, Majic et al. (2020) show the effect of using real maps, which are often affected by problems such as missing attribute data or mapping errors, on positioning and integrity estimation. By utilizing OpenStreetMap data in (Gabela, Majic et al., 2020) as an example of globally available map data, we have experimentally demonstrated the capabilities of currently available maps for globally applicable integrity monitoring.
To avoid the problems shown in (Gabela, Majic et al., 2020), a map created specifically for the test area is used in this paper (described in Section 3). Additionally, so that the proposed method does not rest on the assumption of a “perfect map,” a buffer area was created around the map to account for map errors; the map accuracy was assumed to be 1 m. The effect of the SFC process is thereby weakened, because more particles will fall within the buffered map feature. Theoretically, this will increase 𝑝𝑀𝐼 (i.e., make it worse), since particles and their weights (i.e., hypotheses) will be dispersed further than when a “perfect map” is used. This should, however, provide a more realistic position integrity estimate, i.e., 𝑝𝑀𝐼 estimate. In the results, the SFC method implemented with this added buffer is denoted “map + 1 m.”
As mentioned earlier, 𝜖 in Equation 24 can be assigned a very small or zero likelihood (the latter is done in this paper). Because of this, the proposed SFC method offers many options for defining the buffer areas. Here, the maximum assumed map error of 1 m defined the buffer area around the “perfect map.” In a different implementation, multiple buffer areas could be defined around the “perfect map” to better reflect the map error distribution, with the likelihood parameter decreased incrementally as the distance from the “perfect map” increases.
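One hypothetical way to realize such an incremental scheme is a tiered likelihood multiplier; the tier boundaries and values below are illustrative only and do not come from the paper:

```python
def graded_epsilon(dist, tiers=((1.0, 1.0), (2.0, 0.5), (3.0, 0.1))):
    """Hypothetical multi-buffer scheme: return the likelihood multiplier for
    a particle at distance dist (m) from the "perfect map" edge. Each tier is
    (upper_bound_m, multiplier); beyond the outermost buffer the multiplier
    drops to zero, as with the binary scheme's eps = 0."""
    for upper, value in tiers:
        if dist <= upper:
            return value
    return 0.0
```

The binary scheme of Equation 24 is the special case with a single tier whose multiplier is 1 inside the (buffered) polygon and 𝜖 outside.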
2.2.4 FDE+SFC+BRAIM
Finally, to mitigate the problems addressed previously, FDE, SFC and BRAIM are combined into the FDE+SFC+BRAIM algorithm. The SFC binary weighting scheme is implemented (see Algorithm 3) during the re-estimation of the particle weights 𝑤k and the user's state in Algorithm 2. This combination is expected to provide the best results, since both measurement outlier mitigation (FDE) and the increase of the weights of particles within the road feature (SFC) are implemented in addition to the BRAIM algorithm.
3 EXPERIMENTAL VALIDATION AND PERFORMANCE EVALUATION METRICS
The previous section defined the methodology used to estimate the integrity of multi-sensor positioning systems based on the PF algorithm. This section deals with initial algorithm validation, which has been identified as one of the challenges in (European GNSS Agency, 2015). First, the real-world data collection is described, followed by the criteria and metrics used to evaluate the performance of the defined algorithms.
3.1 Data collection for algorithm validation
The performances of the algorithms proposed in Section 2 are experimentally evaluated on real-world data collected in September 2018 in Sydney, Australia. This section provides an overview of the experimental setup relevant to this paper and briefly describes the sensors that were used. More details about the full measurement campaign and sensors can be found in (Gabela, Smith et al., 2020).
In addition to the GNSS receiver, the dynamic node (shown in Figure 3) was equipped with a Wireless Ad-hoc System for Positioning (WASP) ranging device. These data are used for the integrity monitoring algorithm performance evaluation. Survey-grade Leica Viva GS15 GNSS receivers (more information in (Leica Geosystems, 2015)) were used to record raw GNSS data. L1 C/A code pseudoranges, uncorrected for atmospheric effects, are later used for stand-alone GPS and multi-sensor GPS+WASP position and integrity estimation. The WASP Wireless Sensor Network (WSN) platform was used for ranging to the anchor nodes of the LPS. Developed by the Commonwealth Scientific and Industrial Research Organisation (CSIRO), WASP units determine ranges from the measured Time-Of-Arrival (TOA) of a wideband pulse. Details of the WASP development and performance can be found in (Sathyan et al., 2011).
All anchor nodes were equipped with WASP units. The network of anchor nodes was set up around an improvised road while the dynamic node (i.e., the user) moved along that “road” (Figure 4). Figure 5 shows the data collection area with the user's trajectory (red) and the anchor node network (white pins). The network of 14 anchor nodes spans an area of approximately 30 m × 60 m. The road polygon (i.e., map feature) used for SFC was created by vectorizing the satellite image around the improvised road (polygon shown in Figure 5). On average, the velocity of the dynamic node was ∼1 m/s (see Figure 6). During the data collection, 12 to 14 WASP measurements and nine GPS pseudoranges were available at each epoch (Gabela, Smith et al., 2020). The ECEF positions of the anchor nodes, as well as the user's ground-truth trajectory, were determined with sub-centimeter precision by post-processing the dual-frequency GPS data using the open-source software package RTKLib (http://www.rtklib.com/).
3.2 Performance evaluation metrics
As shown in Algorithm 1, BRAIM, FDE+BRAIM, SFC+BRAIM and FDE+SFC+BRAIM provide binary results: “Integrity is available!” or “Integrity is not available!” In the algorithms presented in this paper, the AL and the IR are the integrity requirements: the AL is used to bound the particles for integrity estimation, and the IR is used to decide whether integrity is available. The IR 𝑝𝐼𝑅 can be defined as the maximum allowed probability that the estimated position exceeds the required AL. If the estimated probability of misleading information 𝑝𝑀𝐼 is smaller than the required 𝑝𝐼𝑅, integrity is estimated to be available (Pervan et al., 1998); otherwise, integrity is unavailable.
The integrity value and its availability are estimated in real time. To evaluate the capability and performance of the presented algorithms, the integrity and position estimates are compared against the ground truth (unavailable to the real-time system), which yields the position error (PE).
Given the binary output of the integrity monitoring in real-time, when compared to PE, there are four possible outcomes:
Integrity is available. Integrity is correctly classified as available when 𝑃𝐸 ≤ 𝐴𝐿 and 𝑝𝑀𝐼 ≤ 𝑝𝐼𝑅 are true.
Integrity is not available. Integrity is correctly classified as unavailable when 𝑃𝐸 > 𝐴𝐿 and 𝑝𝑀𝐼 > 𝑝𝐼𝑅 are true.
False negative (i.e., false alarm). Integrity is incorrectly classified as unavailable when 𝑃𝐸 ≤ 𝐴𝐿 and 𝑝𝑀𝐼 > 𝑝𝐼𝑅 are true. The user is alerted when it should not have been.
False positive. Integrity is incorrectly classified as available when 𝑃𝐸 > 𝐴𝐿 and 𝑝𝑀𝐼 ≤ 𝑝𝐼𝑅 are true. The user is not alerted when it should have been.
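The four outcomes above can be expressed as a small classification routine (the function and label names are ours):

```python
def integrity_outcome(pe, p_mi, al, p_ir):
    """Classify one epoch's integrity-monitoring outcome by comparing the
    post-hoc position error PE against the AL, and the real-time estimate
    p_MI against the integrity risk p_IR."""
    available = p_mi <= p_ir      # real-time integrity decision
    within_al = pe <= al          # ground-truth check
    if available and within_al:
        return "integrity available"
    if not available and not within_al:
        return "integrity unavailable"
    if not available and within_al:
        return "false negative (false alarm)"
    return "false positive"
```

For example, with 𝐴𝐿 = 5 m and 𝑝𝐼𝑅 = 1⋅10−7, an epoch with 𝑃𝐸 = 6 m and 𝑝𝑀𝐼 = 1⋅10−9 is a false positive: the user is not alerted when they should have been.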
The difference between a false negative and a false positive can be illustrated with an anti-collision system. In the case of a false negative, the user is told that integrity is unavailable when it is available, causing the vehicle to brake or stop unnecessarily. In the case of a false positive, the user is not alerted about the integrity unavailability, which could result in a collision.
In this paper, one of the ways the performance will be evaluated is through the comparison of different estimated values of the probability of misleading information. In addition to this, the rate of integrity availability will be estimated, as described in this section, with four different outcomes in mind: integrity available, integrity unavailable, false negative/false alarm, false positive. It is common to use the Overall Correct Detection Rate (OCDR) as in (Binjammaz et al., 2016). However, this rate includes both the correct identification of availability and the unavailability of integrity. In this paper, rates of availability and unavailability will be presented separately, as our aim is not just to increase OCDR but also to increase availability. Further, in addition to reducing 𝑝𝑀𝐼 and improving integrity availability, the aim of the implementation of algorithms like the FDE and SFC is to reduce false negative and false positive rates.
4 RESULTS
The goal of this section is to present the performance of the proposed integrity monitoring algorithms (i.e., FDE+BRAIM, SFC+BRAIM and FDE+SFC+BRAIM) and compare it with the existing BRAIM algorithm. The performance is evaluated in terms of the estimated probability of misleading information. The values of 5 m and 1.1 m are chosen for the road- and lane-level horizontal AL (𝐻𝐴𝐿), respectively (according to ARRB Project Team, 2013), with an IR of 1⋅10−7 per epoch and 𝑝𝐹𝐴 = 1⋅10−5 (according to European GNSS Agency, 2015).
The error distribution of the GPS measurements is assumed to be Gaussian with zero mean and a standard deviation of 5.7 m (US Department of Defense, 2020). The error distribution of the WASP measurements is either assumed to be Gaussian with zero mean and a standard deviation of 0.9 m, or the three-component GMM defined in Table 1 is used as a better approximation of the measurement error distribution (Gabela et al., 2019). Acceleration noise and GNSS receiver clock bias noise were both assumed to be Gaussian with zero mean and variances randomly drawn from the standard normal distribution N(0, 1) at every time epoch for all particles. Every PF was run 10 times with 300,000 particles. The proposed algorithms were validated offline rather than in real time with an incoming stream of data. This also means that there was no need for a system to relay the predetermined global coordinates of the LPS nodes to the user. In a real-time system, this would have to be done through internet access to a database containing information about the infrastructure nodes, or alternatively using protocols such as Dedicated Short Range Communication (DSRC) (Ndashimye et al., 2017).
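To illustrate why the choice of error model matters for large errors, the sketch below compares a fat-tailed three-component mixture against the single zero-mean Gaussian. The mixture parameters are placeholders, since the Table 1 values are not reproduced here:

```python
import math

def gmm_pdf(e, components):
    """Probability density of a ranging error e under a Gaussian mixture;
    components is a list of (weight, mean, sigma) tuples."""
    return sum(w * math.exp(-0.5 * ((e - mu) / s) ** 2)
               / (s * math.sqrt(2.0 * math.pi))
               for w, mu, s in components)

# Placeholder mixture (NOT the Table 1 values) vs. the assumed N(0, 0.9 m):
gmm = [(0.70, 0.0, 0.3), (0.25, 0.0, 1.0), (0.05, 0.0, 3.0)]
gaussian = [(1.0, 0.0, 0.9)]

# A 5-m error is orders of magnitude more probable under the fat-tailed
# mixture, so particles consistent with such an error retain weight.
```

This is the mechanism behind the results below: under the GMM, rare large errors are not "explained away" as virtually impossible, which makes the resulting 𝑝𝑀𝐼 estimates larger but safer.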
The experiments were conducted on GPS data and on the fusion of GPS and WASP data (i.e., GPS+WASP). Since the data were collected in an environment without obstacles (as detailed in Section 3), large errors were injected into the measurements. For timestamps 1-50 and 40-90, ranging errors of 25 m and 10 m were injected for two different LPS nodes, respectively. This created a 10-s overlap (timestamps 40-50) during which two large errors were present in the measurement vector. In addition to the LPS errors, errors were injected for the two satellites with the lowest elevations (both under 20◦ in this experiment). For timestamps 80-120, a random error of size 3𝜎𝑆 to 6𝜎𝑆 (as in Knight & Wang, 2009) was injected for both satellites. For 10 s (timestamps 80-90), three large injected errors were present in the system.
The goal of this section is to study the performance of the proposed methods through the change of 𝑝𝑀𝐼 over time. Consequently, the integrity estimation was not terminated when 𝑝𝑀𝐼 exceeded the IR.
Although positioning accuracy is not the focus of this paper, it is briefly presented and discussed for a better understanding of the algorithms and their effects on positioning. This is followed by a discussion of the results of the different integrity monitoring algorithms for the road- and lane-level requirements.
4.1 Positioning performance
As expected, the implementation of the different integrity monitoring algorithms resulted in different horizontal positioning errors (HPE). Table 2 details the average, standard deviation, and maximum HPEs for the GPS-only and GPS+WASP (under the Gaussian assumption and with the GMM) data sets for every proposed algorithm. All values in the table are calculated over all 10 runs of each PF. Although Section 2 presents algorithms capable of estimating 3D PE and AL, results in this paper are presented for horizontal positions.
For GPS data, an improvement of the average HPE is observed with all of the novel algorithms. The average HPE for BRAIM is 4.32 m, which is due to the injected errors. The best performance is observed for the FDE+SFC+BRAIM algorithm (with both the “perfect map” and the map with the error buffer); here, map error did not significantly affect positioning performance. The average HPE for the FDE+SFC+BRAIM algorithm is comparable to the performance of the PF in (Gabela et al., 2019) for the same GPS data when no errors were injected. In the case of GPS+WASP data (both when Gaussianity of the WASP data is assumed and when the GMM is used), the average HPE drops to ∼0.30 m and does not change significantly between algorithms. On average, in this paper, the different integrity monitoring algorithms did not have a significant impact on positioning performance. As shown in (Gabela et al., 2019), the proposed positioning system based on multi-sensor fusion of GPS and WASP data is capable of achieving the positioning accuracy required for road- and lane-level applications (ARRB Project Team, 2013). The standard deviations and maximum errors of GPS+WASP when the actual WASP error distribution is approximated with the GMM are larger than those under the Gaussian assumption. This is likely due to the GMM assigning higher probability to particles with larger errors (Equation 11) because of its “fat tails.” It may also be due to the large errors injected into the data, as described in Section 4. Nevertheless, more research into this is necessary in the future.
4.2 Integrity performance
This section presents the results of integrity monitoring for the required 𝐻𝐴𝐿 = 5 m (road-level) and for the required 𝐻𝐴𝐿 = 1.1 m (lane-level). The results of integrity monitoring will be presented in terms of the change of estimated 𝑝𝑀𝐼 (i.e., the integrity risk estimate) depending on the algorithm used for GPS only and for GPS+WASP multi-sensor data. The performance of GPS+WASP data is tested under the Gaussian assumption and for the GMM approximation for WASP measurements. Even though three-component GMM (Equation 11) with parameters in Table 1 was shown to be a better approximation of WASP error distribution, it is valuable to see how 𝑝𝑀𝐼 is affected by the commonly made assumption of Gaussianity.
Table 3 presents the results of the integrity monitoring for 𝐻𝐴𝐿 = 5 m. The median was chosen as the metric of the estimated 𝑝𝑀𝐼 since it provides a better idea of the typical integrity value. This is demonstrated in Table 3, where a difference of two to six orders of magnitude is observed between the average and median 𝑝𝑀𝐼 for GPS+WASP data. In these cases, the average is skewed by a few large values of 𝑝𝑀𝐼 (e.g., the maximum 𝑝𝑀𝐼 column). For example, when the majority of integrity estimates are on the order of 10−10 and the worst estimate is on the order of 10−3, an average on the order of 10−5 is not a good representation of the integrity performance.
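The skew described above can be reproduced with a toy series of epoch-wise 𝑝𝑀𝐼 values (the numbers below are illustrative only, not from Table 3):

```python
import statistics

# 99 well-behaved epochs near 1e-10 plus a single bad epoch at 1e-3:
p_mi = [1e-10] * 99 + [1e-3]

mean_p_mi = sum(p_mi) / len(p_mi)      # ~1e-5: dominated by the one outlier
median_p_mi = statistics.median(p_mi)  # 1e-10: the typical performance
```

A single outlying epoch drags the mean up by five orders of magnitude while leaving the median untouched, which is why the median is reported as the typical integrity value.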
The results presented for GPS data show an improvement of the median estimated 𝑝𝑀𝐼 with the implementation of the different methods. The best result is, as expected, shown for the FDE+SFC+BRAIM algorithm. However, the improvement is not significant, and the result does not satisfy the required IR of 1⋅10−7. The values in Table 3 indicate that the proposed methods with GPS alone are not feasible for any ITS application requiring an HAL of 5 m or better.
The performance of GPS+WASP data (under the Gaussian assumption) is shown in Figure 7. The left y-axis shows the change of the median integrity estimate 𝑝𝑀𝐼 (blue circle marker) depending on the implemented algorithm; the algorithms are shown on the x-axis. With the fusion of GPS and WASP data, the required level of integrity (i.e., 𝑝𝑀𝐼) is achieved. The implementation of the different methods results in a notable improvement of the integrity estimate. The median 𝑝𝑀𝐼 for BRAIM is 3.31⋅10−10, which improves to 1.33⋅10−12 when both FDE and SFC are implemented with the BRAIM algorithm (i.e., FDE+SFC+BRAIM). Notably, the FDE algorithm does not significantly affect the median 𝑝𝑀𝐼; the SFC algorithm shows the biggest impact on the estimated 𝑝𝑀𝐼. As expected, 𝑝𝑀𝐼 increases when the map with the buffer is used, but the impact of accounting for map error was not significant in this experiment for the map error model defined in Section 2.2.
The right y-axis of Figure 7 shows the worst estimated integrity (orange diamond marker) which is also shown in the maximum 𝑝𝑀𝐼 column of Table 3. This value is important as it shows the capability of proposed algorithms to mitigate the effect of measurement outliers (i.e., FDE) and capability to improve integrity by improving the weights of particles within the map feature (i.e., SFC) (as discussed in Section 2). The worst estimated 𝑝𝑀𝐼 for GPS+WASP data is 8.26 ⋅ 10−3 for the BRAIM algorithm. With the implementation of FDE, this is improved to 1.84 ⋅ 10−6, which is further reduced to 2.97 ⋅ 10−7 and 1.52 ⋅ 10−6 once SFC was added (i.e., for FDE+SFC+BRAIM).
Figure 8 shows the result of integrity monitoring for the positioning system where ranging errors are distributed with three-component GMM (noted with GPS+WASP(GMM)). Although the average HPE of this positioning system is comparable with the one where Gaussianity is assumed, the integrity estimation is not. Compared to the results for the GPS+WASP data under Gaussian assumption, the integrity estimate is worse by three to four orders of magnitude. The assumption is that due to the long and fat tails of the three-component GMM distribution, the measurements with larger errors have higher probability and the particles that are further away from the estimated position have larger weights than in the case of Gaussian assumption. This is important for BRAIM methods since only weights of particles bounded by HAL are used for 1−𝑝𝑀𝐼 estimation. However, to confirm this explanation, a study of a posteriori distributions resulting from the Gaussian and the GMM assumptions is necessary in the future.
The best median 𝑝𝑀𝐼 is estimated for the SFC+BRAIM algorithm (with the “perfect map”), where 2.94⋅10−8 was achieved. The worst estimates of 𝑝𝑀𝐼 are significantly larger than those of the Gaussian counterpart. Unlike the previous results, where the implementation of FDE slightly improved the median 𝑝𝑀𝐼 and significantly reduced the worst 𝑝𝑀𝐼, here the opposite happens for FDE+BRAIM. A way to rectify this may be to implement an FDE method designed for the PF.
Lastly, the performance of the proposed methods for 𝐻𝐴𝐿 = 1.1 m is shown in Table 4. The best achieved median 𝑝𝑀𝐼 when Gaussianity of the LPS ranges is assumed is 2.86⋅10−2, for the FDE+SFC+BRAIM algorithm. When Gaussianity of the WASP ranges is not assumed, the best achieved median 𝑝𝑀𝐼 is 3.80⋅10−3. The differences between the median integrity estimates of the different algorithms are not as large as they were for 𝐻𝐴𝐿 = 5 m (Table 3). The results in Table 4 demonstrate that the current implementation of BRAIM and the proposed algorithms is not appropriate for lane-level applications, probably because the actual error behavior is worse than what is needed to achieve 𝐻𝐴𝐿 = 1.1 m. To achieve this level of integrity, other studies have used Precise Point Positioning (PPP) (Gunning et al., 2019) or Real-Time Kinematic (RTK) (El-Mowafy & Kubo, 2017). Additional data, such as Inertial Measurement Unit (IMU) data, may be necessary, as both of these studies fused GPS with an IMU.
5 DISCUSSION
So far, the estimated probability of misleading information has been presented and discussed for different algorithms, different measurement error distributions, and two generalized applications (road- and lane-level). This section demonstrates and discusses the availability of integrity as defined in Section 3. The effect of the different 𝑝𝑀𝐼 estimates shown in the previous section needs to be viewed in terms of integrity availability from the point of view of ITS applications. Although our goal is to provide an initial assessment of the proposed algorithms, integrity availability rates are important metrics that need to be considered. According to El-Mowafy and Kubo (2017), to ensure reliable positioning performance, the integrity availability rate should exceed 99%. This rate was achieved by El-Mowafy and Kubo (2017) for applications with 𝐻𝐴𝐿 = 0.5 m to 1 m with an IR of 1⋅10−7. However, the required availability rate depends on the application; for autonomous vehicles, it would likely have to be higher than 99%.
Given the results of integrity monitoring for GPS only for road-level requirements and the results for GPS+WASP for lane-level requirements, the false alarm rate for both exceeds 99%. The availability of those two cases will not be discussed here. As mentioned earlier, this is likely due to the measurement error behavior being worse than necessary to satisfy the application requirements. Furthermore, integration with different sensors, or use of PPP or RTK, may be beneficial to the integrity estimates. This was demonstrated for road-level requirements where 𝑝𝑀𝐼 improved significantly when GPS was integrated with WASP.
Figure 9 shows the integrity availability assessments for GPS+WASP data under the Gaussian assumption for the required 𝐻𝐴𝐿 = 5 m. As defined in Section 3, the four outcomes of integrity monitoring compared to HPE are: integrity available, integrity not available, false alarm (false negative), and false positive. The integrity unavailability and false positive rates are 0% and are not shown in Figure 9.
The integrity availability rate for GPS+WASP data increases with every additional algorithm component. Comparing the FDE+BRAIM and SFC+BRAIM algorithms in Figures 7 and 9, it is clear that SFC affects BRAIM more than FDE: the FDE algorithm improved the integrity availability of BRAIM by 1.73%, while an ∼6.4% improvement was shown when SFC was implemented with BRAIM. Integrity availability exceeding 99% (i.e., 99.89%) was achieved once FDE+SFC+BRAIM was implemented (using the map with the error buffer). This means that for FDE+SFC+BRAIM, 99.83% of the time integrity is correctly classified as available, with the HPE below the horizontal AL (HAL) and the estimated integrity 𝑝𝑀𝐼 better than the IR 𝑝𝐼𝑅. The rest of the time, integrity is incorrectly classified as unavailable (i.e., a false alarm). As shown in Table 2, the HPE never exceeds the HAL; however, the integrity estimate exceeds the IR in some cases (as shown in Table 3). The false alarm rates decrease when the FDE and SFC algorithms are integrated with BRAIM due to their effect on the magnitude of the estimated integrity.
The integrity availability assessment for GPS+WASP (where the WASP error distribution is approximated with the GMM) is shown in Figure 10. Due to the higher (worse) 𝑝𝑀𝐼 estimates, integrity availability is significantly reduced when GPS+WASP(GMM) is used. The best integrity availability is 64.58% and the lowest false alarm rate is 35.42%, both achieved for the SFC+BRAIM algorithm. It should be noted that integrity is correctly classified as unavailable for one (0.06%) and two (1.11%) time instants when FDE+SFC+BRAIM is used with the map with the error buffer and with the “perfect map,” respectively. Due to their small magnitude, these cases are not visible in the figure.
The results presented in this section and the previous one indicate that GPS+WASP data under the Gaussian assumption appear much better than GPS+WASP when the WASP measurement distributions are approximated with the GMM. The results under the Gaussian assumption are, however, invalid, since the GMM is a better fit to the WASP error distribution (Gabela et al., 2019). As discussed in (Gabela et al., 2019), the GMM has fatter tails and thus represents large, rare-event errors more appropriately. Therefore, it is fair to conclude that the GPS+WASP integrity assessment based on the GMM distribution of WASP errors is a better approximation of the actual integrity and is safer to use. In the case of a large rare-event error, if Gaussianity is assumed, the algorithm may be overconfident and indicate integrity availability when it is not present.
6 CONCLUSIONS
This paper aimed to test the capabilities of novel FDE+BRAIM, SFC+BRAIM and FDE+SFC+BRAIM integrity monitoring algorithms with stand-alone GPS and multi-sensor PF as underlying position estimators. The performances of the proposed algorithms were compared to the capabilities of the existing BRAIM. Furthermore, as the continuation of the work done in (Gabela et al., 2019), where the effect of linear approximation and Gaussian assumption for WASP measurements was tested for positioning performance, we have now demonstrated the effect of the Gaussian assumption on the integrity performance.
First, the results demonstrate that GPS-based integrity monitoring algorithms do not achieve the required integrity for road-level positioning requirements. Although the achieved accuracy met the required HAL of 5 m, the estimated probability of misleading information was too high (order of magnitude of 10−1) in comparison to the required IR of 1⋅10−7. This challenge was approached using multi-sensor fusion.
Results for the multi-sensor positioning system (i.e., GPS+WASP), where all the measurement error distributions are assumed to be Gaussian, showed high integrity levels for all four algorithms. Using the BRAIM and the FDE+BRAIM algorithms, levels of 10−10 have been achieved. With the addition of SFC to the BRAIM and FDE+BRAIM algorithms, the integrity of 10−12 magnitude was achieved. As it was expected, the FDE and SFC algorithms were successful in improving the overall integrity.
When Gaussianity is not assumed and a better approximation of the actual WASP measurement error distribution is used, the estimated values of 𝑝𝑀𝐼 increase (i.e., the integrity estimate worsens). For BRAIM and FDE+BRAIM, integrity levels of 10−7 were achieved, improved by an order of magnitude when the SFC algorithm is used.
The difference in these results raises an important point about the commonly made Gaussian assumption. In this instance, because the fitted three-component GMM has “fat” tails, it is theoretically able to represent large errors better than the assumed Gaussian distribution. It can be concluded that, if the Gaussian assumption is made, the algorithm may be overconfident in the presence of large errors; when it comes to integrity monitoring, this is precisely what we want to avoid. Consequently, the integrity levels estimated when Gaussianity is not assumed for WASP are “safer” and provide a more trustworthy approximation of the integrity. Multi-sensor systems are a very common solution to the problem of integrity monitoring in urban environments, yet the Gaussian assumption is rarely questioned. Questioning it is especially important for non-traditional signals such as those of ranging devices like WASP and UWB. All our future work will be based on the fitted error distributions to provide robust integrity estimates, given the safety implications of ITS applications.
The capability of the same algorithms for the multi-sensor positioning system was tested for lane-level integrity monitoring (HAL of 1.1 m and the same integrity risk). The results show that the proposed multi-sensor positioning system paired with the proposed integrity monitoring algorithms is not appropriate for lane-level integrity monitoring. For this, further study of the error distributions is necessary, as well as the potential integration of PPP or RTK GPS measurements, cooperative network measurements, IMU, odometer, signals of opportunity, etc.
Finally, based on the integrity availability analysis, the practicality of the proposed methods needs to be mentioned. As an initial body of work, this paper aimed to provide a proof-of-concept for the proposed methods. We have been successful in that: our novel methods improved the integrity estimates compared with the existing BRAIM, achieving estimated 𝑝𝑀𝐼 levels on the order of 10−8, and conclusions have been drawn regarding the appropriate error distributions. It is important to note that these results were produced under the test conditions and assumptions presented in the paper; to further confirm this claim, more testing needs to be done with different data and in different conditions. The analysis of integrity availability in the previous section indicated that we still need to work on the practicality of the algorithms: while they would be sufficient for payment-critical and regulatory-critical requirements (see European GNSS Agency, 2015), they are not yet practical for safety-critical applications.
Our future work will focus on improving this for the proposed multi-sensor system under the non-Gaussian approximation. A more robust FDE method will be employed that considers the risk that the wrong measurements were excluded. All proposed algorithms need to be extensively tested and validated on many different real-world data sets before a claim is made about their practicality for safety-critical applications. Furthermore, these algorithms need the capability to reinitialize after integrity is estimated to be unavailable; this was not done in this paper so that the 𝑝𝑀𝐼 estimation could be studied.
HOW TO CITE THIS ARTICLE
Gabela J, Kealy A, Hedley M, Moran B. Case study of Bayesian RAIM algorithm integrated with Spatial Feature Constraint and Fault Detection and Exclusion algorithms for multi-sensor positioning. NAVIGATION. 2021;68(2):333–351. https://doi.org/10.1002/navi.433
- Received June 25, 2020.
- Revision received March 30, 2021.
- Accepted April 16, 2021.
- Copyright © 2021 Institute of Navigation
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.