Abstract
Precise point positioning (PPP) uses precise satellite orbits, clock corrections and biases derived from a global network of reference stations to enable accurate positioning worldwide. Natural Resources Canada’s Canadian Spatial Reference System (CSRS) PPP is a free Web service offering automated PPP processing. A critical factor limiting the adoption of PPP in many applications is the convergence time needed to reach centimeter-level accuracies. To address this issue, CSRS-PPP now implements PPP with ambiguity resolution (PPP-AR). This feature required the development of new algorithms, such as sequential normal stacking for least-squares filtering/smoothing, and the weighted integer decision concept for ambiguity validation. New satellite product lines (ultra-rapid, rapid, final) also have been deployed to enable PPP-AR processing with various latencies. This analysis demonstrates that sub-centimeter horizontal accuracies can be obtained in less than one hour for both static and kinematic modes. Using product lines with longer latencies is beneficial, although improvements are typically within the reported uncertainties.
1 INTRODUCTION
The Canadian Spatial Reference System Precise Point Positioning (CSRS-PPP) service was launched in 2003 (Mireault et al., 2008). It allows users of Global Navigation Satellite Systems (GNSS) to collect data in the field and upload this data to Natural Resources Canada (NRCan) servers. Within minutes, users receive an estimate of their position, or trajectory, along with quality estimates and a report for visual quality control.
CSRS-PPP uses precise satellite orbits, clock corrections, equipment delay bias estimates and Earth orientation parameters derived from a global network of GNSS receivers to determine accurate user positions. This strategy differs from online positioning services based on differential positioning, that is, using nearby base stations (Ghoddousi-Fard & Dare, 2006). This positioning service offered by NRCan has been used more than five million times since its inception. In 2020, the system processed more than 2,500 daily user requests from approximately 7,000 active users. Nearly 83% of submissions were processed in static mode, and 43% contained 24 hours of observations. For datasets shorter than 24 hours (56%), the mean session length was 2.9 hours. The geographical distribution of users suggests that CSRS-PPP is being used not only for surveying purposes, but also for the oil and gas industry, geophysical analyses, glaciology, road construction, and more (Klatt & Johnson, 2017). Despite the overall usage of the service, the convergence time required to reach centimeter-level accuracies is a limiting factor for further adoption of PPP in many applications.
In the past decade, algorithms for PPP with ambiguity resolution (PPP-AR) have matured from experimental to operational solutions (Geng et al., 2019; Loyer et al., 2012; Schaer et al., 2018; Strasser et al., 2019). Ambiguity resolution offers significant benefits for users by transforming ambiguous carrier-phase observations into precise ranges. As a result, centimeter-level accuracies often can be obtained more rapidly (Choy et al., 2017). Furthermore, due to satellite geometry, resolving carrier-phase ambiguities is especially beneficial for the longitude (East) component (Ge et al., 2008; Santerre, 1991).
On October 20, 2020, NRCan released CSRS-PPP version 3, implementing the PPP-AR methodology. The purpose of this paper is to describe the main algorithms supporting this service: the functional and stochastic models, the estimation process, and the ambiguity resolution strategy. These algorithms were developed to cope with challenges associated with an online positioning service, such as reliability and timeliness in delivering the results. Next, the different product lines (i.e., ultra-rapid, rapid and final) enabling PPP-AR are described. A comparison of CSRS-PPP versions 2 and 3 is then presented in both static and kinematic modes for various session lengths, and the performance achieved with the different products is analyzed. The conclusion briefly introduces the roadmap for future releases of CSRS-PPP.
2 IMPLEMENTATION
This section describes the current implementation of PPP-AR algorithms supporting CSRS-PPP version 3. It covers the functional and stochastic models, estimation process, and integer ambiguity estimation and ambiguity validation strategies.
2.1 Functional and stochastic models
CSRS-PPP uses uncombined carrier-phase (𝐿) and code (𝐶) observables described, in units of meters, as:

$$\bar{L}^{s}_{g,m,f} = \boldsymbol{e}^{s}_{\boldsymbol{x}}\,\delta\boldsymbol{x} + dT_{g} + m^{s}_{w}\,\delta T_{z} + m^{s}_{g}\left[G_{N}\cos(a^{s}) + G_{E}\sin(a^{s})\right] - \mu_{f}\,I^{s} + \lambda_{f}\,N^{s}_{g,f} + b_{g,f} \qquad (1)$$

$$\bar{C}^{s}_{g,m,f} = \boldsymbol{e}^{s}_{\boldsymbol{x}}\,\delta\boldsymbol{x} + dT_{g} + m^{s}_{w}\,\delta T_{z} + m^{s}_{g}\left[G_{N}\cos(a^{s}) + G_{E}\sin(a^{s})\right] + \mu_{f}\,I^{s} + B_{g,m,f} \qquad (2)$$

where the overbar symbol indicates misclosures. The superscript 𝑠 designates a satellite and subscripts 𝑔, m and 𝑓 represent GNSS, modulation and frequency band, respectively. The direction cosine vector between the satellite and user is expressed as 𝒆𝒙 and multiplies the vector of corrections to a priori coordinates (𝛿𝒙). An independent receiver clock parameter (𝑑𝑇) is set up for every satellite system. This distinction is essential, since satellite clock corrections for different GNSS can be based on different reference clocks. Even with a common reference clock, receiver-induced, constellation-specific equipment delays can also occur. The residual tropospheric zenith delay (𝛿𝑇𝑧) is multiplied by the wet Vienna Mapping Function 1 (VMF1) (Boehm et al., 2006), denoted $m^{s}_{w}$. Tropospheric gradients in the North and East directions (𝐺𝑁 and 𝐺𝐸) are also included in the model, scaled by the gradient mapping function ($m^{s}_{g}$) and a trigonometric function of the azimuth (𝑎) (Chen & Herring, 1997). The slant ionospheric delay (𝐼) is multiplied by a frequency-dependent scaling factor $\mu_{f} = F_{1}^{2}/F_{f}^{2}$, with 𝐹 being the frequency of carrier 𝑓. The carrier-phase ambiguity (𝑁) is multiplied by the carrier wavelength (𝜆). The receiver phase and code biases are symbolized by 𝑏 and 𝐵, respectively. It should be noted that phase biases are considered to be independent of modulation, as specified in the RINEX standard (IGS, 2021). In the event that phase observations have non-integer offsets among modulations of the same frequency, as determined by a simple comparison of measurements from the RINEX file, ambiguity resolution is deactivated. Error sources such as observation noise and multipath are neglected here for simplicity.
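As a simple illustration of the scaling convention above, the following Python snippet evaluates $\mu_{f}$ for the GPS L1/L2 pair; the function name and frequency dictionary are ours and are not part of the CSRS-PPP software:

```python
# Illustrative sketch: frequency-dependent ionospheric scaling factor,
# assuming the convention mu_f = F1**2 / Ff**2 described in the text.
F_GPS = {"L1": 1575.42e6, "L2": 1227.60e6, "L5": 1176.45e6}  # carrier frequencies in Hz

def iono_scale(f_ref: float, f: float) -> float:
    """Scaling applied to the slant ionospheric delay on frequency f."""
    return f_ref**2 / f**2

mu_L1 = iono_scale(F_GPS["L1"], F_GPS["L1"])  # 1.000
mu_L2 = iono_scale(F_GPS["L1"], F_GPS["L2"])  # ~1.647
print(mu_L1, mu_L2)
```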
All quantities on the right-hand side of Equations (1) and (2) are parameters to be estimated in the PPP filter. The first column of Table 1 provides an overview of the parameters included in the solution at a given epoch. A typical dataset, containing dual-frequency GPS and GLONASS carrier-phase and code measurements on one modulation per frequency, would have 59 active parameters if we consider 15 visible satellites. Unobserved ambiguities are removed from the system as soon as possible, and slant ionospheric delay parameters are kept for up to five minutes to enable faster re-convergence (Geng et al., 2010).
The misclosures in Equations (1) and (2) are obtained as:

$$\bar{L}^{s}_{g,m,f} = L^{s}_{g,m,f} - \rho^{s} + dt^{s} + b^{s}_{g,f} - T^{s} - \Delta^{s}_{L} \qquad (3)$$

$$\bar{C}^{s}_{g,m,f} = C^{s}_{g,m,f} - \rho^{s} + dt^{s} + B^{s}_{g,m,f} - T^{s} - \Delta^{s}_{C} \qquad (4)$$

The a priori range between the satellite and receiver positions is denoted by $\rho^{s}$. The satellite clock correction is expressed as $dt^{s}$, and the satellite phase and code biases are $b^{s}_{g,f}$ and $B^{s}_{g,m,f}$, respectively. The a priori tropospheric zenith delay for the dry and wet components, mapped into the slant direction for each satellite ($T^{s}$), is computed using the VMF1 grids following Kouba (2008). The last terms in Equations (3) and (4), $\Delta^{s}_{L}$ and $\Delta^{s}_{C}$, include corrections for other physical effects such as: antenna phase center variations (Schmid et al., 2005), carrier-phase windup (Wu et al., 1993), special relativity and Shapiro effects (Ashby, 2003), as well as Earth tides, ocean tides and pole tides (IERS Conventions, 2010). In the current implementation, higher-order ionospheric corrections (Bassiri & Hajj, 1993) are not being applied.
Due to linear dependencies between the receiver clock and code-bias parameters, additional constraints must be set up in the filter to remove rank deficiencies. For each GNSS constellation, the receiver code biases for one modulation on each of the first two frequency bands (e.g., C1W and C2W for GPS) are fixed to zero. This process is performed at every epoch so that different modulations can be selected if the list of tracked signals changes from one epoch to the next. All phase-bias parameters are also fixed to zero on the forward run to remove the rank deficiency with carrier-phase ambiguity parameters. Additional details on this aspect are provided in Subsection 2.3.
It is well known that frequency division multiple access (FDMA) causes inter-frequency biases for GLONASS. Since ambiguity resolution is not attempted for this system, apparent inter-frequency phase biases can be absorbed by the ambiguity parameters. For code observations, setting up one inter-frequency code bias (IFCB) parameter per satellite and signal is a rigorous means of modelling this error source. However, this leads to a significant increase in the number of parameters. To avoid this drawback, an alternate method is used in which GLONASS IFCBs are determined using an on-the-fly running average of GLONASS code residuals (Kozlov et al., 2000). The computed values are then removed from the GLONASS code observations and, as a consequence, the functional model of Equation (2) also applies to GLONASS.
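The sketch below illustrates the running-average idea described above; the class and its simple cumulative mean are assumptions made for demonstration only and do not reproduce the actual CSRS-PPP implementation:

```python
# Illustrative sketch: per-satellite, per-signal running average of GLONASS
# code residuals used as an inter-frequency code bias (IFCB) estimate.
from collections import defaultdict

class IfcbEstimator:
    def __init__(self):
        self._sum = defaultdict(float)
        self._count = defaultdict(int)

    def update(self, sat: str, signal: str, code_residual_m: float) -> float:
        """Accumulate a residual and return the current IFCB estimate (meters)."""
        key = (sat, signal)
        self._sum[key] += code_residual_m
        self._count[key] += 1
        return self._sum[key] / self._count[key]

    def correction(self, sat: str, signal: str) -> float:
        """Value removed from the GLONASS code observation before filtering."""
        key = (sat, signal)
        return self._sum[key] / self._count[key] if self._count[key] else 0.0
```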
The stochastic model uses nominal standard deviations at zenith of 0.0035 m for carrier-phase and 0.6 m for code measurements. These values are pessimistic, considering the tracking performance of modern geodetic receivers, but may be optimistic for low-cost receivers or users located in challenging environments. The impact of an incorrect stochastic model on ambiguity resolution is discussed in Subsection 2.4. Standard deviations at zenith are increased by 1/sin(𝑒) where 𝑒 is the elevation angle of the satellite above the horizon. The weight of GLONASS code measurements is further reduced by a factor of two to account for a possible mis-modeling of inter-frequency code biases. Since satellite clock corrections are provided at 30-second intervals, measurements can be contaminated by clock interpolation errors when observations are collected at a faster rate. For this reason, observation standard deviations are increased using the satellite clock’s Allan deviation (Hesselbarth & Wanninger, 2008). This clock characterization allows assigning more weight to observations associated with stable satellite clocks, since it is expected that interpolation errors are smaller for these satellites.
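A minimal sketch of this stochastic model is given below, assuming the stated zenith standard deviations, the 1/sin(e) inflation and a doubled variance for GLONASS code; the clock-stability (Allan deviation) inflation is omitted and the function signature is ours:

```python
import math

# Illustrative sketch of the elevation-dependent stochastic model described
# above (zenith sigmas of 0.0035 m for phase and 0.6 m for code).
SIGMA_PHASE_ZENITH = 0.0035  # m
SIGMA_CODE_ZENITH = 0.6      # m

def obs_sigma(elev_deg: float, is_phase: bool, is_glonass_code: bool = False) -> float:
    sigma = (SIGMA_PHASE_ZENITH if is_phase else SIGMA_CODE_ZENITH) / math.sin(math.radians(elev_deg))
    if is_glonass_code and not is_phase:
        sigma *= math.sqrt(2.0)  # halving the weight doubles the variance
    return sigma

print(obs_sigma(30.0, is_phase=True))                          # ~0.007 m
print(obs_sigma(30.0, is_phase=False, is_glonass_code=True))   # ~1.697 m
```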
Identifying outliers and carrier-phase cycle slips is critical for accurate positioning. For this purpose, a geometry-based, multi-step approach using time-differenced observations is utilized. First, code observations are validated for blunders. Second, large cycle slips are flagged on a satellite-by-satellite basis. Finally, all satellites are used together in an integrated adjustment to identify smaller cycle slips. This process is detailed by Banville and Langley (2013).
2.2 Estimation
Since its inception, CSRS-PPP has used a sequential least-squares filter with the addition of process noise (Kouba & Héroux, 2001). Smoothed parameter estimates in kinematic processing were obtained from a simplified summation of forward- and backward-reduced normal equations (Kouba, 2004). In CSRS-PPP version 2, a Rauch-Tung-Striebel (RTS) smoother (Rauch et al., 1965) replaced this algorithm, leading to slightly improved kinematic solutions. However, given the possibility of numerical instabilities in the RTS smoother (Vaclavovic & Dousa, 2015), version 3 adopts a different approach, herein termed sequential normal stacking.
The adopted sequential normal stacking process is equivalent to a batch least-squares adjustment but carried out sequentially. A sequential adjustment simplifies outlier detection, since blunders can be eliminated at their epoch of occurrence rather than contaminating parameters from several epochs. The approach differs from the traditional least-squares with back-substitution, since it allows for process noise to be added directly within the normal equations (Ge et al., 2006). It also distinguishes itself from sequential least-squares and Kalman filtering in the sense that it is carried out in normal space rather than covariance space. It improves on the numerical stability of the RTS smoother by avoiding subtractions, and it is simpler to implement than the square root filter (smoother), as it does not require orthogonalization schemes. Furthermore, it allows recovering the full covariance matrix of epoch parameters, which was not possible in the original single-parameter elimination proposed by Ge et al. (2006). Having access to epoch covariance matrices is critical for ambiguity resolution.
Sequential normal stacking requires both forward and backward runs to obtain the final estimates and covariance matrices for the parameters. On the forward run, at epoch i, the normal equations are defined as:
$$\left(\bar{\mathbf{N}}_{i} + \mathbf{N}_{i}\right)\hat{\mathbf{x}}_{i} = \mathbf{U}_{i} \qquad (5)$$

where 𝑵𝑖 and 𝑼𝑖 are the epoch's normal matrix and vector, respectively, and vector $\hat{\mathbf{x}}_{i}$ contains corrections to a priori parameters estimated at the current epoch. Matrix $\bar{\mathbf{N}}_{i}$ is the predicted normal matrix and will be described hereafter. The normal matrix and vector can be further decomposed into:
$$\mathbf{N}_{i} = \mathbf{A}_{i}^{T}\,\mathbf{P}_{i}\,\mathbf{A}_{i} \qquad (6)$$

$$\mathbf{U}_{i} = \mathbf{A}_{i}^{T}\,\mathbf{P}_{i}\,\mathbf{w}_{i} \qquad (7)$$

where 𝑨𝑖 is the design matrix containing the partial derivatives of the misclosures of Equations (1) and (2) with respect to the estimated parameters, and vector 𝒘𝑖 contains these misclosures. Matrix 𝑷𝑖 is the diagonal weight matrix, which depends on the precision of the observables, discussed in the previous section. To save memory and processing time, the design matrix, weight matrix and misclosure vectors are never explicitly formed; rather, the system is built directly into the normal matrix and vector of Equation (5). At the first epoch, matrix $\bar{\mathbf{N}}_{1}$ contains initial constraints on the parameters. The values adopted for these constraints are presented in Table 1.
TABLE 1 Initial parameter constraints and process noise. n_g is the number of constellations, n_s the total number of visible satellites, n_f the number of frequencies per constellation, and n_m the number of modulations per frequency
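To illustrate how the system can be built directly into the normal matrix and vector, as described above, the following sketch accumulates Equations (6) and (7) one observation at a time; it is a generic least-squares illustration rather than CSRS-PPP code:

```python
import numpy as np

# Illustrative sketch: each observation row a (partial derivatives), misclosure
# w and standard deviation sigma contribute a rank-one update to N and U,
# so the full design matrix is never formed explicitly.
def accumulate(N: np.ndarray, U: np.ndarray, a: np.ndarray, w: float, sigma: float) -> None:
    p = 1.0 / sigma**2          # diagonal weight of this observation
    N += p * np.outer(a, a)     # N = A^T P A, built incrementally
    U += p * w * a              # U = A^T P w, built incrementally

# Toy usage with three parameters and two observations
n_par = 3
N = np.zeros((n_par, n_par))
U = np.zeros(n_par)
accumulate(N, U, np.array([1.0, 0.5, 0.0]), w=0.02, sigma=0.007)
accumulate(N, U, np.array([1.0, 0.0, 1.0]), w=-0.01, sigma=0.007)
```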
The system of Equation (5) is solved at every epoch, and outliers are identified and removed using the detection, identification and adaptation (DIA) procedure (Teunissen, 1990). Once the epoch processing is complete, the a priori parameter values, misclosures, observation standard deviations and partial derivatives are saved to a file for the backward run. This approach reduces computational load by minimizing duplicate processing, at the expense of increased disk space usage.
The predicted normal matrix for the next epoch is produced by adding process noise following the detailed algorithm provided by Ge et al. (2006). Table 1 provides the process noise values for each parameter type. Parameters are also removed or added to provide consistency with active parameters at the next epoch. The predicted normal matrix $\bar{\mathbf{N}}_{i+1}$ is saved to a temporary file for the backward run. Since a priori parameters are updated at every epoch, there is no need to store the predicted normal vector.
Once all epochs have been processed, the final parameter estimates and covariance matrices are obtained, in reverse chronological order (i.e., the backward run), from the combination of three systems of normal equations:
$$\left(\bar{\mathbf{N}}_{i} + \mathbf{N}_{i} + \bar{\mathbf{N}}^{b}_{i}\right)\hat{\mathbf{x}}_{i} = \mathbf{U}_{i} + \bar{\mathbf{U}}^{b}_{i} \qquad (8)$$

$$\mathbf{Q}_{\hat{\mathbf{x}}_{i}} = \left(\bar{\mathbf{N}}_{i} + \mathbf{N}_{i} + \bar{\mathbf{N}}^{b}_{i}\right)^{-1} \qquad (9)$$

$$\hat{\mathbf{x}}_{i} = \mathbf{Q}_{\hat{\mathbf{x}}_{i}}\left(\mathbf{U}_{i} + \bar{\mathbf{U}}^{b}_{i}\right) \qquad (10)$$

At each epoch, the normal matrix (𝑵𝑖) and vector (𝑼𝑖) are formed again following Equations (6) and (7) using the partial derivatives, standard deviations and misclosures saved on the forward run. The smoothed epoch covariance matrix is obtained using two additional normal matrices: the predicted normal matrix saved on the forward run ($\bar{\mathbf{N}}_{i}$), and the predicted normal matrix from the backward run ($\bar{\mathbf{N}}^{b}_{i}$). The latter is initialized with the null matrix and, for subsequent epochs, contains the sum of the epoch normal matrices 𝑵𝑖, but only using epochs following epoch i to ensure that data are only used once. Process noise is also added to $\bar{\mathbf{N}}^{b}_{i}$ at every epoch, in a similar fashion as for $\bar{\mathbf{N}}_{i}$ on the forward run.
The smoothed epoch estimates require vector $\bar{\mathbf{U}}^{b}_{i}$. Similar to matrix $\bar{\mathbf{N}}^{b}_{i}$, it is equal to the null vector at the first epoch of the backward run and is successively built by accumulating normal vectors 𝑼𝑖. In this case, process noise is added to this vector from one epoch to the next, following the algorithm of Ge et al. (2006). For the simple addition of normal vectors in Equation (10) to work, a priori values for the parameters must match. Since both $\bar{\mathbf{N}}_{i}$ and 𝑵𝑖 are based on the same a priori values, we apply the following correction to $\bar{\mathbf{U}}^{b}_{i}$ (Brockmann, 1997):

$$\Delta\mathbf{U}_{i} = \bar{\mathbf{N}}^{b}_{i}\,\Delta\mathbf{x}_{0} \qquad (11)$$

where $\Delta\mathbf{x}_{0}$ contains the difference between the forward and backward a priori estimates. An example providing additional insights into sequential normal stacking is included in the Appendix.
2.3 Integer ambiguity estimation
To improve accuracy and reduce position convergence times, CSRS-PPP version 3 implements ambiguity resolution. This process can be divided into two concepts: 1. the estimation of carrier-phase ambiguities having integer properties, and 2. the validation of integer ambiguities. This section explains the first concept, while ambiguity validation will be described in the next section.
As previously explained, sequential normal stacking requires a forward run where parameters are estimated using a sequential filter. Then, back-substitution is performed on the backward run to obtain the final parameter values. To avoid singularities, phase-bias parameters are fixed on the forward run. However, to obtain integer ambiguities, these constraints are removed on the backward run and an ambiguity datum is defined (Collins et al., 2010). This concept is implemented by fixing, a priori, one ambiguity per phase-bias parameter. These datum ambiguities propagate into phase-bias and ambiguity parameters, thereby revealing their integer properties. A single ambiguity datum is required for the entire solution unless all ambiguities are re-initialized simultaneously. In this case, new datum ambiguities are selected again.
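A toy numerical illustration of the ambiguity datum concept is given below: with a common receiver phase bias, only the sums of bias and ambiguity are observable, and fixing one datum ambiguity exposes the integer character of the remaining ambiguities relative to it. All values are invented for the example and do not come from the paper:

```python
# Toy illustration: with a receiver phase bias b common to all satellites of a
# constellation, only the sums b + N^s are observable. Fixing one "datum"
# ambiguity a priori reveals integer differences for the other ambiguities.
b = 12.25  # receiver phase bias, in cycles (arbitrary non-integer value)
observable = {"G05": b + 7, "G12": b - 3, "G21": b + 15}  # observable b + N^s

datum_sat, datum_value = "G05", 0                    # fix N_G05 to 0 cycles
bias_estimate = observable[datum_sat] - datum_value  # absorbs b + N_G05
ambiguities = {sat: obs - bias_estimate for sat, obs in observable.items()}

print(bias_estimate)  # 19.25 (= b + 7, not an integer quantity)
print(ambiguities)    # {'G05': 0.0, 'G12': -10.0, 'G21': 8.0} -> integers
```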
Ambiguity resolution is only performed during the backward run to ensure that all information is available prior to resolving ambiguities. Starting with the last epoch successfully processed on the forward run, ambiguity validation is attempted using the algorithm described in the next section. To further improve the reliability of ambiguity validation, integer ambiguities are only deemed resolved when a minimum of four satellites have simultaneously resolved ambiguities. Ambiguities resolved as integers are then removed from the system of Equation (8) after adjusting their a priori values using a correction similar to Equation (11). The remaining ambiguities are left as float values in the system of normal equations.
Subsequent epochs are then combined in reverse order (from last to first epoch) using Equations (8) to (10). When the current epoch contains a new ambiguity (with respect to the previous epoch on the backward run), ambiguity validation is again performed using all unresolved ambiguities observed at the current epoch. This sequential ambiguity resolution is not theoretically optimal: Attempting ambiguity resolution on all ambiguities within the session simultaneously should yield better results. However, it is subject to significant computational load constraints for datasets with numerous (e.g., hundreds or more) ambiguity parameters. The sequential algorithm adopted adds negligible processing time and typically provides results identical to the optimal algorithm.
For longer sessions, it is likely that the selected datum ambiguities are not observed for the whole duration of the dataset. As ambiguities from other satellites are resolved, the datum is maintained and implicitly propagated through all epochs. Even when no ambiguities are resolved and datum ambiguities are removed from the normal equations, the system is still solvable, since phase biases are modeled as constant parameters. However, in this case, ambiguities are more likely to be resolved in their single-differenced form (Dach et al., 2007). This implementation further acts as a safeguard when biases contaminate datum ambiguities, since single-differenced ambiguities cancel such biases.
Hence, only when no datum ambiguities and no resolved uncombined ambiguities remain in the normal equations does CSRS-PPP version 3 form single-differenced ambiguities:

$$\nabla\hat{\mathbf{N}} = \mathbf{D}\,\hat{\mathbf{N}} \qquad (12)$$

$$\mathbf{Q}_{\nabla\hat{\mathbf{N}}} = \mathbf{D}\,\mathbf{Q}_{\hat{\mathbf{N}}}\,\mathbf{D}^{T} \qquad (13)$$

where vector $\hat{\mathbf{N}}$ contains the float ambiguities and is obtained from Equation (10). Similarly, the ambiguity covariance matrix $\mathbf{Q}_{\hat{\mathbf{N}}}$ can be extracted from Equation (9). Each row of matrix 𝑫 has two non-zero entries consisting either of +1 or -1 values, allowing for the definition of single-differenced ambiguities. For single-differenced ambiguities identified as integers, constraints are added to Equation (8) using pseudo-observations with large weights (Brockmann, 1997). Finally, since the addition of constraints associated with resolved ambiguities incorporated new information into the system of equations, the sequential normal stacking process is repeated one last time to obtain the final parameter estimates and their covariance matrix.
2.4 Ambiguity validation
Ambiguity validation plays a critical role in ensuring that reliable estimates are obtained from the positioning filter. This task is especially challenging for PPP due to the ionosphere (Collins et al., 2012). Without precise external ionospheric corrections, a PPP user must rely on noisy code observations to estimate this error source. Consequently, successful ambiguity validation requires multiple epochs, which adds complexity to stochastic modelling. When neglecting time-correlated errors, the covariance matrix obtained from GNSS data processing is known to become overly optimistic (El-Rabbany & Kleusberg, 2003). This issue is exacerbated in an online positioning service, since various receiver types, data sampling rates and data collection environments lead to heterogeneous stochastic properties. It is well known that changes in the stochastic model impact ambiguity validation (Teunissen, 1997).
Partial ambiguity resolution is also essential for PPP-AR, since ambiguities observed for a short duration will be poorly defined and are likely to corrupt validation of the full set. Several approaches to partial ambiguity resolution were proposed in the literature. Sequentially excluding satellites allows eliminating problematic ambiguities but can be computationally expensive (Li & Zhang, 2015). Using the integer bootstrapping success rate to identify a subset of fixable ambiguities (Cao, 2009) is a theoretically rigorous approach but can be impacted by imperfections in the stochastic model. Another approach to partial ambiguity estimation is to compute a weighted average of integer candidates, as initially proposed by Blewitt (1989). When GNSS observations follow a normal distribution, the weights can be obtained from the probability density function of the multivariate Gaussian distribution (Teunissen, 2005). However, due to unmodelled errors, it is likely that misclosures do not follow a normal distribution, which impacts the validity of this definition for the weights (Duong et al., 2020).
From the above discussion, it is clear that the stochastic model plays a critical role in ambiguity validation and that partial ambiguity resolution is essential to a successful identification of integer vectors. For these reasons, a new approach called Weighted Integer Decision (WID) is implemented in CSRS-PPP version 3. As with most ambiguity validation methods, it uses the ambiguity residual norm, defined as:
$$\Omega_{j} = \left(\hat{\mathbf{N}} - \mathbf{z}_{j}\right)^{T}\mathbf{Q}_{\hat{\mathbf{N}}}^{-1}\left(\hat{\mathbf{N}} - \mathbf{z}_{j}\right) \qquad (14)$$

The ambiguity residual norm can be computed for any integer vector $\mathbf{z}_{j}$ and is a measure of its proximity to the float ambiguity vector $\hat{\mathbf{N}}$ in the metric of the ambiguity covariance matrix. The integer vector minimizing Equation (14), herein labelled $\mathbf{z}_{1}$, is the integer least-squares solution (Teunissen, 1995).
Specifically, the WID algorithm begins by identifying a set of relevant integer vectors $\mathbf{z}_{j}$, denoted by Θ. Then, for each ambiguity parameter, a scan of all vectors in Θ is performed. If a single integer value is common to all integer vectors, then this integer is deemed validated. Since the scan is performed independently for each ambiguity, partial ambiguity resolution is enabled.
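As an illustration only, the following sketch implements the scan described above for a given candidate subset Θ; the construction of Θ itself, based on the weights introduced below, is not reproduced here:

```python
import numpy as np

# Illustrative sketch of the WID scan: an ambiguity is validated only if all
# candidate integer vectors in Theta agree on its value.
def wid_scan(theta: np.ndarray) -> dict[int, int]:
    """theta: (n_candidates, n_ambiguities) array of integer vectors.
    Returns {ambiguity index: validated integer} for unanimous columns."""
    validated = {}
    for k in range(theta.shape[1]):
        column = theta[:, k]
        if np.all(column == column[0]):
            validated[k] = int(column[0])
    return validated

# Toy usage: candidates agree on ambiguities 0 and 2 but not on 1
theta = np.array([[3, -7, 11],
                  [3, -6, 11],
                  [3, -7, 11]])
print(wid_scan(theta))  # {0: 3, 2: 11}
```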
The success of the method relies on the definition of the subset Θ of integers. If the subset is too large, a unique identification of integer values will be challenging. Alternatively, considering too few integer vectors could result in an incorrect validation. The definition of Θ is based on the concept of weights, defined as:
$$w_{j} = \exp\left(-\alpha_{j}\,\Omega_{j}\right) \qquad (15)$$

where 𝛼𝑗 is a scaling factor to be defined. The subset Θ is defined by accepting all integer vectors satisfying:

$$\frac{w_{j}}{w_{1}} \geq \epsilon \qquad (16)$$

with 𝑤1 the weight associated with $\mathbf{z}_{1}$ and 𝜖 a user-defined threshold value.
Selecting the scaling factor 𝛼𝑗 as 𝛼𝑗 = 0.5 implies weights derived from the multivariate Gaussian distribution. However, since the stochastic model is often unknown and misclosures do not always follow a normal distribution, adopting an empirical scaling factor is justified. The objective of 𝛼𝑗 is, therefore, to prevent an overly optimistic ambiguity covariance from corrupting ambiguity validation. From extensive testing using hundreds of real-world datasets, a scaling factor was defined to reduce the occurrence of incorrect fixes:
17
with
18a
18b
where 𝑟 is the number of float ambiguities and Ω1 is the ambiguity residual norm associated with vector $\mathbf{z}_{1}$. When the covariance matrix is overly optimistic, Ω1 is typically large, and the exponential function in Equation (15) converges very quickly to zero, which effectively shrinks the subset of integer vectors Θ. In this case, 𝛽 acts as a safeguard by scaling down all ambiguity residual norms and minimizing the risk of incorrect validation. The proposed scaling factor is not theoretically rigorous and, therefore, not optimal. Nonetheless, its performance for practical ambiguity validation was deemed satisfactory.
By defining 𝜖, all elements are available for ambiguity validation based on the WID approach. Using Equations (15), (16) and (17), it is possible to determine a threshold value for Ω𝑗:
19
Having identified $\mathbf{z}_{1}$ and its ambiguity residual norm, the subset of relevant integer vectors Θ can be defined from Equation (19). Based on our experience, a value of 𝜖 = 0.01 provides an acceptable balance between including enough integer vectors to identify resolvable ambiguities and keeping the processing load manageable.
Since the ambiguity residual norm of Equation (14) is not affected by a reparameterization of the ambiguities, the LAMBDA method (Teunissen, 1995) is used to improve the computational efficiency of the integer vector search. In addition, the critical value of Equation (19) is used to constrain the search space. When more than 20,000 integer vectors satisfy Equation (19), the search process is aborted to minimize the computation load, and float ambiguity estimates are kept in the filter. While additional ambiguities could, at times, be fixed by applying the WID algorithm directly in the decorrelated space provided by LAMBDA, the scan for unique ambiguity candidates is nevertheless performed in the original ambiguity space to avoid the bookkeeping complexity associated with fixing linear combinations of ambiguities.
3 PRODUCTS ENABLING CSRS-PPP
Depending upon the time of data submission, there are three options for PPP correction products that can be used by CSRS-PPP: ultra-rapid, rapid and final. Their names refer to their latency, which is given in Table 2. Under normal operating conditions, users can obtain a PPP solution approximately 60 minutes after the last epoch of data collection in the field by using ultra-rapid products. This delay is defined by the time required to download GNSS data from globally distributed stations and process the data to estimate precise satellite orbits, clock corrections, and equipment delay biases. The rapid solution, available daily, generally includes more stations than the ultra-rapid solution. Both the ultra-rapid and rapid solutions are generated by NRCan. The satellite clock corrections are based on the decoupled-clock model of Collins et al. (2010), leading to what is often referred to as “integer clocks.” The combined use of these clock corrections, along with corresponding satellite phase and code biases, is what allows for ambiguity parameters to converge to integer values on the user end.
TABLE 2 Satellite orbit, clock, bias and Earth orientation parameter products used in CSRS-PPP
Prior to CSRS-PPP version 3, the final products consisted of the IGS combined orbits and clock corrections (Johnston et al., 2017). Since these products do not include GLONASS information, the NRCan final GLONASS solution was incorporated into the IGS products to provide a multi-GNSS solution. Since the IGS does not currently produce combined satellite clock corrections enabling PPP-AR, an in-house combination is performed (Banville et al., 2020). The combined products are similar to the IGS solution in terms of quality but preserve the characteristics of integer clocks provided by analysis centers such as NRCan, the Center for Orbit Determination in Europe (CODE) (Schaer et al., 2018) and the Centre National d’Études Spatiales (CNES) (Loyer et al., 2012). The strategy for including GLONASS information has been maintained in version 3. When the IGS provides operational multi-GNSS products supporting PPP-AR, CSRS-PPP will likely switch to using those products. Code biases are obtained from a daily combination of differential code biases provided by NRCan (Ghoddousi-Fard, 2014), CODE (Villiger et al., 2019), the Chinese Academy of Science (CAS) (Wang et al., 2016), and the German Space Operations Center (DLR) (Montenbruck et al., 2014).
4 EXPERIMENTAL RESULTS
To characterize the benefits of releasing the new version of CSRS-PPP for the user community, four aspects are evaluated: the WID algorithm for ambiguity validation, the accuracy and convergence time offered by PPP-AR, the performance of the various product lines, and an analysis of the day-boundary crossing.
4.1 Ambiguity validation
Even though a detailed evaluation of the WID algorithm is out of the scope of this paper, its application is presented using data collected on August 29, 2018, at station FRDN in Fredericton, New Brunswick, Canada (see Figure 4). At first, only GPS data are processed, in static mode, from 00:00:00 to 02:00:00 GPS Time. For the purpose of our demonstration, only satellites available at the first epoch are considered, since ambiguities of rising satellites would require partial ambiguity resolution to be implemented for all validation tests considered. This dataset is selected since the float solution takes a long time to reach centimeter-level accuracy. The exact cause for this slow convergence is not clear, but it constitutes an interesting case for ambiguity validation assessment.
Ambiguity validation has received much attention in the past decades, and a plethora of methods were proposed. We herein limit our comparison to three common approaches, although we recognize that better ones may exist. The first approach uses the bootstrap estimator to define a lower bound on the success rate (SR) (Teunissen, 1999), and integer ambiguities are accepted if the success rate is greater than 99.9%. The last two approaches are the ratio test (RT) (Euler & Schaffrin, 1991) and the difference test (DT) (Tiberius & de Jonge, 1995). Since there is no theoretical foundation supporting the definition of critical values for these tests, we selected a value of two for the ratio test and 15 for the difference test. The WID algorithm is also included, and its critical value is defined as per Equation (19).
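For reference, the ratio and difference tests can be expressed with the two smallest ambiguity residual norms of Equation (14); the sketch below uses a common formulation of these tests together with the critical values quoted above, and is our paraphrase rather than the service's code:

```python
# Illustrative sketch: acceptance rules for the ratio test (RT) and difference
# test (DT), based on the best (Omega_1) and second-best (Omega_2) ambiguity
# residual norms. Critical values follow the text: 2 for RT and 15 for DT.
def ratio_test(omega1: float, omega2: float, critical: float = 2.0) -> bool:
    return omega2 / omega1 >= critical

def difference_test(omega1: float, omega2: float, critical: float = 15.0) -> bool:
    return omega2 - omega1 >= critical

# Toy usage
print(ratio_test(10.0, 25.0))       # True  (25/10 = 2.5 >= 2)
print(difference_test(10.0, 25.0))  # True  (25 - 10 = 15 >= 15)
```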
Figure 1 illustrates the slow convergence of the float solution to centimeter-level horizontal accuracies. The integer least-squares solution (ILS) is computed at every epoch from the float solution and shows jumps as integer ambiguities switch from one pull-in region to another (Teunissen, 1998). The success rate rises to over 99.9% in 30 minutes, which leads to the acceptance of the ILS solution while it is in the incorrect pull-in region. The same applies to the difference test (33 minutes) and the ratio test (45 minutes). Once ambiguities are incorrectly resolved, the solution remains biased until the end of the session. When ambiguity validation is performed using the WID algorithm, the float solution is maintained until about 01:50, when integer ambiguities are finally accepted to the correct values.
FIGURE 1 Horizontal error for station FRDN on August 29, 2018 (with reduced geometry), obtained using different ambiguity validation tests: integer least-squares (ILS), the success rate (SR), the ratio test (RT), the difference test (DT) and the weighted integer decision (WID)
To gain more insights into the validation process, Figure 2 shows the two shortest ambiguity residual norms (Ω1 and Ω2) and the critical values from selected validation tests. The drop in values at 01:33:30 is associated with a satellite being removed from the solution as it is observed below the elevation angle mask. At this point, the shortest ambiguity residual norm is associated with the incorrect integer vector for a short period of time (refer to Figure 1), leading to a rapid increase in Ω1 until the float estimates lie in the pull-in region of the correct integer vector a few minutes later. From Figure 2, it is apparent that the difference test will lead to faster validation as the norms increase, since the critical value is very close to Ω1. The ratio test appears to offer a wider margin, although the critical value was below Ω2 from 00:45 to shortly before 01:00, leading to an incorrect validation of the ambiguities. The WID critical value is more conservative: It does not correspond to a constant difference or ratio, but rather adapts itself as a function of the magnitude of the shortest ambiguity residual norm.
FIGURE 2 Ambiguity residual norm and critical values for ambiguity validation
When adding GLONASS to the above example, geometry is improved and convergence time of the float solution is significantly reduced. Note that GLONASS ambiguities are not resolved and kept as float parameters in the PPP filter. Figure 3 shows the horizontal errors obtained with the various validation methods. The ratio test exceeds the critical value of two very rapidly (six minutes), fixing ambiguities to incorrect integers. All other validation tests perform well, with the difference test resolving ambiguities first (15 minutes), followed by the WID and success rate tests around the 24-minute mark. This example confirms that ambiguity validation based on the WID test is not overly conservative and allows rapid ambiguity resolution when the model allows for it.
FIGURE 3 Same as Figure 1, with the addition of GLONASS satellites
4.2 Accuracy and convergence
The performance of CSRS-PPP version 3 is evaluated from a client’s perspective by comparing results against version 2. Hence, differences in results originate from a combination of new algorithms and products. Data from 20 Canadian active control stations (CACS) are used, as shown in Figure 4. One day per week is processed over the 52-week period covering the year 2018, for a total of 1,015 datasets. This number takes into consideration rejections due to lack of observations or missing reference coordinates from the IGS combined daily solution. Only GPS and GLONASS observations are processed, since other constellations are not yet supported by the CSRS-PPP service, and ambiguity resolution is only attempted for the GPS constellation.
FIGURE 4 Canadian stations used for the CSRS-PPP evaluation
Figure 5 compares solution convergence between CSRS-PPP versions 2 and 3 in static mode. The errors displayed represent the root mean square (RMS) errors computed from all 1,015 datasets, divided into the latitude, longitude and height components. Since ambiguity resolution is only performed at the end of the session, convergence is displayed by connecting RMS values obtained from sessions of 0.25, 0.5, 0.75, 1.5, 3, 6, and 12 hours. The benefits of ambiguity resolution are especially clear for shorter sessions, before the float solution has fully converged. However, ambiguity resolution is not achieved consistently for 15-minute sessions, which explains why improvements in terms of percentage are typically lower than for sessions with a duration of 30 minutes or more. With high-quality GNSS receivers and antennas, and good tracking conditions, sub-centimeter horizontal accuracies are obtained in less than 45 minutes in version 3, thanks to ambiguity resolution, while close to three hours are necessary with version 2. For 1.5-hour sessions, an improvement of approximately 40%, 70%, and 20% is seen for the latitude, longitude, and height components, respectively. Even though ambiguity-fixed solution accuracy increases for longer sessions, the improvement over a float solution becomes less significant, as the latter also converges to millimeter accuracy. After 24 hours (not shown on this plot), the longitude component has an RMS error of 2.0 mm for version 2 and 1.2 mm for version 3. Even though the improvement is small in absolute terms, it still amounts to a 40% reduction of error. Additional details regarding the performance of daily solutions are presented in Figure 10.
FIGURE 5 Convergence of the RMS errors between CSRS-PPP versions 2 and 3
Figure 6 demonstrates positioning results in stationary kinematic mode, assuming independent position estimates from epoch to epoch. For sessions of one, three, and 24 hours, the RMS error of the latitude, longitude, and height components are computed for each dataset. The 90th percentile of these RMS errors over all sessions is then computed and reported in Figure 6. While there is a clear reduction in RMS errors for all components, the longitude again benefits the most from ambiguity resolution, with an 84% improvement for one-hour sessions. Once PPP solutions have converged, positioning errors are driven mainly by geometry and observation noise/multipath. This can explain why longer sessions do not yield significantly better results than shorter sessions with ambiguity resolution.
FIGURE 6 Comparison of 90th percentile errors between CSRS-PPP versions 2 and 3 in stationary kinematic mode
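The statistic reported in Figure 6 can be summarized by the following sketch, which assumes that per-epoch error series are available for each dataset and component:

```python
import numpy as np

# Illustrative sketch: per-dataset RMS of epoch errors, followed by the 90th
# percentile of these RMS values over all datasets (one component at a time).
def percentile90_of_rms(errors_per_dataset: list[np.ndarray]) -> float:
    """errors_per_dataset: list of 1-D arrays of epoch errors for one component."""
    rms_values = [np.sqrt(np.mean(e**2)) for e in errors_per_dataset]
    return float(np.percentile(rms_values, 90))
```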
4.3 Kinematic test
The performance of PPP-AR in kinematic mode depends largely on signal tracking quality. In several applications, obstructions cause cycle slips in carrier-phase observations, leading to a significant degradation of the solution. Therefore, generalizing the performance of CSRS-PPP in such applications is not possible.
For demonstration purposes, we present kinematic results from a vehicle obtained under good sky coverage with some low-elevation obstructions caused by mountains. The drive was conducted in Alberta, Canada, on October 26, 2020, from 15:50:00 to 19:17:30 GPS Time and covered a distance of approximately 137 km. GPS and GLONASS data were collected by five receivers installed on the roof of a car, as shown in Figure 7. The trajectory includes four short static periods of 10 to 30 minutes. The GNSS equipment used in the test is presented in Table 3. Two Leica GS16 integrated units (labeled B and R in the test) and a Trimble R10 (labeled #3) were set up, in addition to two u-blox F9P receivers (labeled WHI and RED) connected to external patch antennas. Even though data were collected at a 1 Hz sampling rate, only 30-second intervals are considered for this test.
FIGURE 7 Experimental setup for the kinematic drive test
TABLE 3 Equipment used in kinematic drive test
Data were processed using CSRS-PPP versions 2 and 3. As previously described, these versions use different satellite clock and bias products, and ambiguity resolution is only attempted in version 3. Table 3 gives the percentage of GPS ambiguities resolved for each solution. This quantity is defined as the number of carrier-phase observations with fixed ambiguities divided by the total number of phase measurements. The percentage of ambiguities resolved for the u-blox F9P receivers is typically much lower, because this receiver type does not track the C2W signal and, as a consequence, only provides single-frequency data for GPS IIR satellites. When a mix of single- and dual-frequency data are provided for a constellation, CSRS-PPP discards single-frequency satellites. Therefore, solution geometry from u-blox F9P receivers is currently degraded, which impacts ambiguity resolution capabilities. The F9P-WHI receiver performed better than the F9P-RED, since the latter rejected an additional epoch, which led to shorter data arcs and compromised ambiguity resolution.
The tracking conditions of the R10#3 solution are depicted in the ambiguity status plot of Figure 8. Float ambiguities are represented in olive, fixed ambiguities in green, datum ambiguities in cyan, and cycle slips are depicted by vertical red lines. Several ambiguity resets occurred about 30 minutes into the session, and integer ambiguities could not be identified confidently for the Trimble R10 prior to this interruption, leading to a lower percentage of ambiguities being resolved. Note that all GLONASS ambiguities are float, since ambiguity resolution is currently performed only for GPS.
FIGURE 8 Ambiguity status plot for the R10#3 solution
Since no reference trajectory is available, our focus is on the repeatability of the horizontal fixed baseline length between receivers. These baselines are computed from independent PPP-derived positions and should not be confused with differential baseline processing such as real-time kinematic (RTK). Even though the car travelled in mountainous terrains, it was verified that the slope of the terrain did not produce tangible effects on the horizontal baselines. Figure 9 presents the baseline estimates for the receiver pair GS16B and GS16R for all epochs involved. The baseline length provided by version 3 is clearly more stable, especially between 16:30 and 17:30. Overall, the standard deviation of all epochs is reduced from 11.9 mm to 6.9 mm, an improvement of 42%.
FIGURE 9 Derived baseline between receivers GS16B and GS16R using PPP solutions from CSRS-PPP versions 2 and 3
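The repeatability metric used here can be illustrated as follows, assuming the two PPP trajectories have been reduced to local east/north coordinates at common epochs (a simplification of the actual evaluation):

```python
import numpy as np

# Illustrative sketch: horizontal distance between two independently
# PPP-derived positions at each common epoch, summarized by its standard
# deviation over the session.
def baseline_std(east_a, north_a, east_b, north_b) -> float:
    lengths = np.hypot(np.asarray(east_a) - np.asarray(east_b),
                       np.asarray(north_a) - np.asarray(north_b))
    return float(np.std(lengths))
```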
Standard deviation improvements for all baselines are presented in Table 4. For all receivers where ambiguity resolution was, to a certain extent, successful, a reduction near 40% is observed. For the F9P-RED solution, where no ambiguities could be resolved, small differences between solutions can be justified by the different products used, by the different parameterizations in both versions (version 3 includes phase biases in the functional model), and by other minor changes in the software.
TABLE 4 Improvement in horizontal baseline lengths between CSRS-PPP versions 2 and 3
While these results may not be representative of all kinematic datasets, they show how, under good tracking conditions, ambiguity resolution can be beneficial for real moving receivers.
4.4 Product lines
Table 2 presents three product lines enabling PPP-AR solutions. To quantify the impact of product latency, a second test is performed using 19 Canadian stations. All stations displayed in Figure 4 are included in this evaluation, except KUUJ, which had been decommissioned. In total, 21 days of data are analyzed, from May 10–30, 2020, for a total of 399 sessions. Datasets are processed with the ultra-rapid (DCU), rapid (DCR), and final (DCF) products. GPS and GLONASS observations are processed using CSRS-PPP version 3, and ambiguity resolution is attempted in all sessions on GPS ambiguities only.
Figure 10 presents the latitude, longitude, and height RMS errors for all product lines in static mode. While there is a benefit in waiting for products with longer latencies, the improvements are typically at the sub-millimeter level, which is clearly within the reported uncertainties. In other words, users can obtain as accurate a solution with ultra-rapid products available within an hour as with final products. Still, the combined solution offered through the final products normally reduces noise, improves robustness, and offers better integration into the reference frame.
FIGURE 10 Comparison of RMS errors between the CSRS-PPP version 3 ultra-rapid (DCU), rapid (DCR), and final (DCF) product lines in static mode
Figure 11 shows results from a similar analysis in stationary kinematic mode. Again, all product types offer a similar performance, but including more stations in the rapid products (DCR) leads to more precise orbit and clock estimates, which reduces noise over the ultra-rapid (DCU) products. Similarly, the final products (DCF) can further reduce the noise by performing a weighted combination of solutions from several analysis centers. Differences in the 90th percentiles amount to only a few millimeters or less, which is smaller than the typical kinematic standard deviations, which are at the centimeter level.
FIGURE 11 Comparison of 90th percentile errors between the CSRS-PPP version 3 ultra-rapid (DCU), rapid (DCR), and final (DCF) product lines in stationary kinematic mode
4.5 Day-boundary analysis
A benefit of integer clocks is that they can be precisely aligned from one day to the next by shifting them by an integer multiple of their wavelength. As a consequence, position jumps occurring when observations span two GPS days can be minimized. As an example, data from station ALGO (see Figure 4) were processed from August 14, 2019, 23:00:00 to August 15, 2019, 01:00:00 using CSRS-PPP version 2 (Figure 12a) and version 3 (Figure 12b).
FIGURE 12 Position errors during a day boundary crossing using: a) CSRS-PPP version 2, b) CSRS-PPP version 3
With CSRS-PPP version 2, position jumps of approximately 6 cm in latitude and 7 cm in longitude can be noticed, even though the receiver maintained continuous lock on all satellites. This is a consequence of IGS clock products not being aligned between two consecutive days, and ambiguity parameters being reset. With CSRS-PPP version 3, no jump can be seen in latitude, and the discontinuity in longitude has been significantly reduced. The latter is caused by satellite orbit interpolation errors associated with the use of two orbit (SP3) files. The consistency between satellite orbits and clocks is recovered after crossing midnight, eliminating the slight divergence experienced prior to the day boundary. Still, the benefits of combining PPP-AR with products aligned for consecutive days are obvious.
5 CONCLUSION
To support increasing user needs for timely and accurate geodetic information, NRCan continues to modernize its openly available CSRS-PPP online positioning service. The latest improvement is the implementation of algorithms enabling precise point positioning with ambiguity resolution. Offering PPP-AR solutions in an online service is complex due to the vast diversity in terms of user equipment and data collection strategies. As a consequence, new features had to be developed.
The sequential normal stacking algorithm was proposed to cope with potential numerical stability issues in the Rauch-Tung-Striebel smoother. This process allows adding process noise and integer ambiguity constraints into normal equations, while providing access to the rigorous epoch covariance matrices needed for ambiguity resolution. Another key component of CSRS-PPP version 3 is the Weighted Integer Decision algorithm for ambiguity validation. The method defines empirical weights leading to the selection of relevant integer ambiguity vectors. Once these vectors are identified, an ambiguity parameter is deemed resolved if a unique ambiguity candidate is common to all vectors. Deficiencies in the stochastic model are compensated by the scaling factor involved in the weight definition.
To enable PPP-AR solutions, integer clock products with phase biases had to be generated. For this purpose, NRCan currently computes its own ultra-rapid and rapid products and performs a satellite clock and phase-bias combination based on the IGS final orbits, which preserves the integer nature of the ambiguities at the user end. It was demonstrated that all integer-clock products offer similar positioning performances. In terms of convergence, CSRS-PPP version 3 offers clear gains for shorter sessions, with RMS error reductions exceeding 20% in all components for static sessions of about an hour. For both static and kinematic one-hour sessions, improvements in the longitude component reach approximately 70–80%. For 24-hour static sessions, the benefit of ambiguity resolution is small in absolute terms, but nevertheless amounts to a 40% error reduction in the longitude component for the studied datasets.
While ambiguity resolution reduces PPP convergence times and improves the overall positioning accuracy, reaching centimeter-level accuracies still requires tens of minutes. To further improve convergence times, subsequent CSRS-PPP releases will introduce regional slant ionospheric delay corrections and process signals from additional GNSS constellations and frequencies.
HOW TO CITE THIS ARTICLE
Banville S, Hassen E, Lamothe P, Farinaccio J, Donahue B, Mireault Y, Goudarzi MA, Collins P, Ghoddousi-Fard R, Kamali O. Enabling ambiguity resolution in CSRS-PPP. NAVIGATION. 2021;68(2):433–451. https://doi.org/10.1002/navi.423
ACKNOWLEDGMENTS
The authors would like to thank Paul Gratton, M.Sc. candidate, and Professor Emeritus Gérard Lachapelle, University of Calgary, for sharing the vehicular GNSS data used in the paper. All contributors of data and products to the International GNSS Service (IGS) are gratefully acknowledged since, without them, the CSRS-PPP service would not exist. This paper is published as NRCan contribution number 20200582.
APPENDIX
This appendix presents a two-epoch example of sequential normal stacking. At the first epoch, initial constraints from Table 1 are included in matrix $\bar{\mathbf{N}}_{1}$ and a priori parameter values are contained in vector $\mathbf{x}_{0,1}$. Using Equations (6) and (7), 𝑵1 and 𝑼1 can be computed, and the epoch estimates are obtained from Equation (5):
$$\hat{\mathbf{x}}_{1} = \left(\bar{\mathbf{N}}_{1} + \mathbf{N}_{1}\right)^{-1}\mathbf{U}_{1}(\mathbf{x}_{0,1}) \qquad \text{(A1)}$$

where $\mathbf{U}_{1}(\mathbf{x}_{0,1})$ indicates that the normal vector was computed using the parameter values $\mathbf{x}_{0,1}$. This nuance is important, since it allows reconciling a priori values on the backward run.
For the next epoch, the estimated parameters are propagated such that:
$$\mathbf{x}_{0,2} = \mathbf{x}_{0,1} + \hat{\mathbf{x}}_{1} \qquad \text{(A2)}$$
While CSRS-PPP adds process noise directly into the normal equations using the method of Ge et al. (2006), we use an equivalent, but computationally more expensive, formulation for our demonstration:
$$\bar{\mathbf{N}}_{2} = \left[\left(\bar{\mathbf{N}}_{1} + \mathbf{N}_{1}\right)^{-1} + \mathbf{W}\right]^{-1} \qquad \text{(A3)}$$
where 𝑾 is a diagonal process noise matrix. The final estimates of epoch 2 can now be obtained as:
$$\hat{\mathbf{x}}_{2} = \left(\bar{\mathbf{N}}_{2} + \mathbf{N}_{2}\right)^{-1}\mathbf{U}_{2}(\mathbf{x}_{0,2}) \qquad \text{(A4)}$$

where 𝑵2 and $\bar{\mathbf{N}}_{2}$ are the epoch normal matrices based on the a priori parameters $\mathbf{x}_{0,2}$. The final covariance matrix for this epoch is:
$$\mathbf{Q}_{\hat{\mathbf{x}}_{2}} = \left(\bar{\mathbf{N}}_{2} + \mathbf{N}_{2}\right)^{-1} \qquad \text{(A5)}$$
The backward run allows recovering final parameter estimates and their covariance matrix at epoch 1. This is achieved from Equations (8) to (10):
$$\mathbf{Q}_{\hat{\mathbf{x}}_{1}} = \left(\bar{\mathbf{N}}_{1} + \mathbf{N}_{1} + \bar{\mathbf{N}}^{b}_{1}\right)^{-1} \qquad \text{(A6)}$$

$$\hat{\mathbf{x}}^{b}_{1} = \mathbf{Q}_{\hat{\mathbf{x}}_{1}}\left(\mathbf{U}_{1}(\mathbf{x}_{0,1}) + \bar{\mathbf{U}}^{b}_{1} + \Delta\mathbf{U}\right) \qquad \text{(A7)}$$
where superscript 𝑏 identifies the backward run, and
$$\bar{\mathbf{N}}^{b}_{1} = \left[\mathbf{N}_{2}^{-1} + \mathbf{W}\right]^{-1} \qquad \text{(A8)}$$

$$\bar{\mathbf{U}}^{b}_{1} = \bar{\mathbf{N}}^{b}_{1}\,\mathbf{N}_{2}^{-1}\,\mathbf{U}_{2}(\mathbf{x}_{0,2}) \qquad \text{(A9)}$$
Equations (A8) and (A9) are again equivalent expressions to the in-system operations of process noise addition from Ge et al. (2006). Vector Δ𝑼 accounts for different a priori parameters and is obtained from Equation (11):
$$\Delta\mathbf{U} = \bar{\mathbf{N}}^{b}_{1}\left(\mathbf{x}_{0,2} - \mathbf{x}_{0,1}\right) \qquad \text{(A10)}$$
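To make the two-epoch example concrete, the following numerical sketch applies the same logic to a single random-walk parameter and checks that the backward stacking reproduces a batch least-squares adjustment. The scalar formulas follow our reading of Equations (A1) to (A10) and are illustrative assumptions, not NRCan source code:

```python
import numpy as np

# Numerical sketch: one scalar parameter observed directly at two epochs,
# with process noise q between them, solved (1) as a batch adjustment and
# (2) with forward filtering plus backward normal stacking.
p0, r1, r2, q = 100.0, 1.0, 1.0, 1.0      # prior, observation variances, process noise
y1, y2 = 0.0, 2.0                         # direct observations of the parameter

# --- batch adjustment: unknowns x1, x2 with pseudo-observation x2 - x1 ~ (0, q)
N = np.array([[1/p0 + 1/r1 + 1/q, -1/q],
              [-1/q, 1/r2 + 1/q]])
U = np.array([y1/r1, y2/r2])
x_batch = np.linalg.solve(N, U)

# --- forward run (epoch 1 then epoch 2), a priori values start at zero
Nbar1, N1, U1 = 1/p0, 1/r1, y1/r1
x1_fwd = U1 / (Nbar1 + N1)                # epoch-1 filtered correction
x02 = x1_fwd                              # propagated a priori value for epoch 2
Nbar2 = 1.0 / (1.0/(Nbar1 + N1) + q)      # process noise added to predicted normals
N2, U2 = 1/r2, (y2 - x02)/r2
x2_hat = x02 + U2 / (Nbar2 + N2)          # final epoch-2 estimate

# --- backward run at epoch 1: combine forward-predicted, epoch and
#     backward-predicted normal contributions
Nbar1_b = 1.0 / (1.0/N2 + q)              # epoch-2 normals pushed back with process noise
Ubar1_b = Nbar1_b * (1.0/N2) * U2         # corresponding normal vector
dU = Nbar1_b * (x02 - 0.0)                # correction for differing a priori values
x1_smoothed = (U1 + Ubar1_b + dU) / (Nbar1 + N1 + Nbar1_b)

assert np.isclose(x1_smoothed, x_batch[0])
assert np.isclose(x2_hat, x_batch[1])
```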
- Received December 9, 2020.
- Revision received April 8, 2021.
- Accepted April 29, 2021.
- Copyright © 2021 Her Majesty the Queen in Right of Canada NAVIGATION © 2021 Institute of Navigation. Reproduced with the permission of the Minister of Natural Resources Canada.
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.