Abstract
The advent of assisted GPS/GNSS can reduce the time to first fix (TTFF) dramatically while still providing sufficient positioning accuracy. However, many applications utilize GNSS for timing, and assisted GPS/GNSS does not always provide accurate nanosecond-level synchronization. Accurate timing is challenging for assisted GNSS due to the lack of diversity in the solution over short time periods. The relatively short ambiguities of the GPS L1 C/A code signal, one millisecond and twenty milliseconds, are too coarse to resolve accurate time reliably. In this paper, the impact of different ambiguity times is derived and validated. Experimental results show that when the ambiguity time is 200 milliseconds or longer, the least squares process will reliably converge and typical nanosecond-level timing can be achieved. An example shows how the GPS L1 C/A data parity word (provided every 600 milliseconds) can be used, demonstrating nanosecond-level timing from assisted GPS.
1 INTRODUCTION
The Global Navigation Satellite System (GNSS) is widely used for positioning and timing services. However, the time to get the first positioning solution, i.e., the time to first fix (TTFF), can be relatively long due to the low-rate ephemeris data modulated on GNSS signals. To solve this problem, the assisted GNSS (A-GNSS) technique has been introduced. The basic idea of A-GNSS is that, instead of receiving 40 seconds of data from the satellites and decoding the ephemeris information, the receiver obtains the ephemeris from external sources such as an internet server, or it applies extended long-term ephemeris techniques. The A-GNSS technique then only needs to receive millisecond-level data to acquire the code phase offset and provide decent positioning services (Van Diggelen, 2009).
Several A-GNSS techniques have been proposed; the technique described in van Diggelen and Abraham (2007) and Van Diggelen (2009) is representative and widely used. As far as timing goes, most A-GNSS techniques aim only for a coarse time estimate within 10 milliseconds until the receiver is able to synchronize with the navigation bits. Muthuraman et al. (2012) review two different time-recovery algorithms that achieve 10-millisecond accuracy under nominal signal conditions but cannot achieve the same accuracy under weak signal conditions (15 dB-Hz).
In addition, other papers describe different approaches to A-GNSS. Weng et al. (2011) propose a method of self-generated assistance data for A-GNSS. Dovis et al. (2008) discuss the acquisition issue of indoor A-GPS. Huang and Akopian (2013) discuss the problem of A-GPS assistance network delay. Otaegui et al. (2006) propose a hybrid standalone/A-GNSS receiver architecture. Cheong et al. (2014) propose a snapshot receiver with collective detection using MS-based A-GPS techniques. Syrjarinne (2001) explores the assistance that a communications network can provide to a GPS system. Li (2010, 2012) explores the positioning bias and how to eliminate it. Huang et al. (2014, 2015, 2016) measure and model network delays for MS-based assistance delivery. Bissig et al. (2017) detail an efficient collective detection (CD) method to obtain a position fix using one millisecond of data. Vallina-Rodriguez et al. (2013) characterize the inefficiencies and costs of relying on network assistance for several A-GPS mobile devices. Sirola (2001) and Sirola and Syrjarinne (2002) propose a range-fitting cost function over the space of position and coarse time using an iterative approach. Akopian and Syrjarinne (2002) and Syrjarinne (2000) also formulate a time-recovery method but use optimization and extended Kalman filters. Glennon and Bryant (2005) propose two different approaches for accurate time recovery with a focus on the single-stage algorithm. However, the original and all these alternate methods are focused on a position solution and do not provide nanosecond-level time accuracy in all cases.
The motivation for this paper stems from a desire to obtain not just fine-time accuracy (less than 10 milliseconds of error), but nanosecond-level accuracy in as little time as possible. In conventional A-GNSS techniques, achieving this accuracy is assumed to be solved once the navigation data bit transitions are known. Several papers discuss algorithms to synchronize with the navigation bits. For example, Akopian and Syrjarinne (2009) use navigation bit alignment for time recovery. Agarwal et al. (2002), in their review of time-stamp recovery, discuss several different approaches including navigation-bit alignment. However, these papers highlight that without being able to synchronize with the navigation bits, the timing can be incorrect by multiples of 20 milliseconds.
There has not been a discussion of what the alternative is when the data bit transitions cannot be synchronized accurately. Unlike some A-GNSS research that focuses on implementing external data assistance or investigating acquisition algorithms, this paper mainly focuses on deriving accurate time using A-GNSS. The contributions can be summarized as follows: (1) It provides the generalized methodology to use any ambiguity time in the attempt to achieve nanosecond-level accuracy. (2) It provides results that show the relationship between ambiguity time and likelihood of nanosecond-level accuracy. (3) It provides a method of finding an ambiguity time of 600 milliseconds by leveraging the GPS L1 C/A parity bits.
This problem will be defined throughout this paper as ambiguity resolution, i.e., the ability to correctly identify the ambiguity. For the GPS L1 C/A code, there is an obvious 1 millisecond resolution through the code period which is sufficient, with a series of constraints, to obtain a GNSS position solution. Again, for the GPS L1 C/A code, another level of resolution, 20 milliseconds, can be obtained through the data bit synchronization. This level improves the ability to resolve accurate time but is not always successful. It is recognized that the assistance server or external sources may be able to provide a broader time reference but for many field operations that level is nonexistent or too coarse. In GNSS signals, not all ambiguity times exist, but this paper will simulate the results as if several nonexistent ambiguity times did exist. The algorithm given will be general such that any ambiguity time can be used.
In this paper, the A-GNSS approach as described by van Diggelen will be reviewed first. Next, the algorithm to investigate the impact of ambiguity resolution will be proposed. Then, experiments will be performed to find the ambiguity time necessary for nanosecond-level accuracy in a majority of situations. Finally, an example of using a parity check algorithm to resolve the ambiguity time for the GPS L1 C/A signal from only the provided IF data stream, sufficient for 99% of situations, will be presented.
2 REVIEW OF A-GNSS RECEIVER
Most often, the goal of A-GNSS receivers is to achieve faster time to first fix and greater availability. Conventional GNSS receivers need to receive the data from the first three subframes of the 30-second navigation frame to solve for time and position. For the GPS L1 C/A signal, it can take as long as 36 seconds to receive these critical elements of the navigation data, which can be a significant delay for time to first fix. With the A-GNSS technique, receivers can solve for position with only milliseconds of IF data, receiving the navigation data message via an alternate source. This means that the receiver can provide position without waiting to receive and decode the whole frame. The contents of the navigation data for the satellites, the precise orbital and clock parameters, also known as the assistance data, are provided via an alternate channel. For a mobile phone, this data is typically provided through the cellular network using the Secure User Plane Location (SUPL) protocol (Open Mobile Alliance, 2012). For other applications, this data can be provided by Networked Transport of RTCM via Internet Protocol (NTRIP) servers available on the internet (Weber et al., 2005). The use of this assistance data to determine coarse and accurate position has been established (Van Diggelen, 2009) and is presented next for review. Missing prior to this work is the ability to determine precise nanosecond-level time using this assistance data.
As explained above, the time of transmission (TOT) and the time of reception are needed to calculate the pseudoranges. Without either time, the pseudoranges must be estimated and improved upon, as described below.
2.1 Five-state least squares approach to coarse-time estimation
To investigate this problem, the coarse-time error shall be defined as the difference between the true receiver time and the estimated receiver time. Looking at the effect of adding a coarse-time error, first consider the pseudorange residuals as used in the least squares algorithm:
$$\delta z^{(k)} = z^{(k)} - \hat{z}^{(k)} \tag{1}$$

where $z^{(k)}$ is the actual pseudorange for satellite $k$, and $\hat{z}^{(k)}$ is the predicted pseudorange and is given by

$$\hat{z}^{(k)} = \left\| \hat{\boldsymbol{x}}^{(k)}\!\left(\hat{t}^{(k)}_{tx}\right) - \boldsymbol{x}_{xyz0} \right\| - \delta t^{(k)}\!\left(\hat{t}^{(k)}_{tx}\right) + b_0 \tag{2}$$

where $\hat{t}^{(k)}_{tx}$ is the estimated time of transmission of the satellite signal we are measuring, $\hat{\boldsymbol{x}}^{(k)}(\hat{t}^{(k)}_{tx})$ is the calculated satellite position at time $\hat{t}^{(k)}_{tx}$, $\boldsymbol{x}_{xyz0}$ is the a priori receiver position, $\delta t^{(k)}(\hat{t}^{(k)}_{tx})$ is the satellite clock error in units of length at time $\hat{t}^{(k)}_{tx}$, and $b_0$ is the a priori estimate of the common bias in units of length. However, the problem is that the time $\hat{t}^{(k)}_{tx}$ is in error, so the normal four-state least squares algorithm cannot solve for the receiver position and common bias. When $\hat{t}^{(k)}_{tx}$ is incorrect, so are $\hat{\boldsymbol{x}}^{(k)}$ and $\delta t^{(k)}$. For each one second of error, the pseudoranges would be incorrect on the order of 800 m (Van Diggelen, 2009). There is hope though, as we know the velocities of the satellites, and thus the pseudorange rate. Below in Equation (3), the difference of the estimated pseudorange at the correct and incorrect transmission time is shown. The second expression is true over short times with the assumption that the satellites move in linear tracks.

$$\hat{z}^{(k)}(t_{tx}) - \hat{z}^{(k)}(\hat{t}_{tx}) \approx \left(\boldsymbol{e}^{(k)} \cdot \boldsymbol{v}^{(k)} - \delta\dot{t}^{(k)}\right)\delta t_c = \dot{z}^{(k)}\,\delta t_c \tag{3}$$

where $\delta t_c$ is the coarse-time error, $t_{tx}$ is the actual time of transmission, $\hat{t}_{tx}$ is the coarse-time estimate of the time of transmission, $\dot{z}^{(k)}$ is the pseudorange rate, $\boldsymbol{e}^{(k)}$ is the unit vector from the receiver to satellite $k$, $\boldsymbol{v}^{(k)}$ is the satellite velocity vector, and $\delta\dot{t}^{(k)}$ is the satellite-clock error rate in units of length/time. So now there is a term that accounts for the coarse-time error in the pseudoranges. The state-update vector is now
$$\delta\boldsymbol{x} = \left[\, \delta\boldsymbol{x}_{xyz}^{T}, \; \delta b, \; \delta t_c \,\right]^{T} \tag{4}$$

and the pseudorange residuals for each satellite are

$$\delta z^{(k)} = -\boldsymbol{e}^{(k)} \cdot \delta\boldsymbol{x}_{xyz} + \delta b + \dot{z}^{(k)}\,\delta t_c + \varepsilon^{(k)} \tag{5}$$

with $\varepsilon^{(k)}$ encapsulating all atmospheric and other errors. Stacking the pseudorange residuals for all satellites leads to the following:

$$\delta\boldsymbol{z} = \boldsymbol{H}\,\delta\boldsymbol{x} + \boldsymbol{\varepsilon} \tag{6}$$

with

$$\boldsymbol{H} = \begin{bmatrix} -\boldsymbol{e}^{(1)T} & 1 & \dot{z}^{(1)} \\ -\boldsymbol{e}^{(2)T} & 1 & \dot{z}^{(2)} \\ \vdots & \vdots & \vdots \\ -\boldsymbol{e}^{(K)T} & 1 & \dot{z}^{(K)} \end{bmatrix} \tag{7}$$
where $K$ is the number of satellites. Since the H matrix now has five columns, at least five satellites are needed to solve for $\delta\boldsymbol{x}$. Thus, a closed-form five-state solution has been developed to solve for the position without precise time. However, there is one issue that has yet to be addressed: the measured pseudoranges for each satellite will have a millisecond integer ambiguity error. Explaining and fixing this error is discussed in the section below. There is also an a posteriori residual check to make sure that the computed position and time actually fit the given satellite positions; this is usually not a concern if the a priori error is not too large.
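To make the structure of Equations (4)-(7) concrete, a minimal sketch of one iteration of the five-state least squares update is given below. It assumes the pseudorange residuals, receiver-to-satellite unit vectors, and pseudorange rates have already been formed; the function and variable names are illustrative only, not taken from any particular receiver implementation.

```python
import numpy as np

def five_state_update(prange_resid, unit_vecs, prange_rates):
    """One Gauss-Newton step of the five-state coarse-time solution.

    prange_resid : (K,) measured-minus-predicted pseudoranges, meters
    unit_vecs    : (K, 3) unit vectors from the receiver to each satellite
    prange_rates : (K,) pseudorange rates, meters/second

    Returns the state update [dx, dy, dz, db, dtc]
    (meters for the first four states, seconds for coarse time).
    """
    prange_resid = np.asarray(prange_resid, dtype=float)
    unit_vecs = np.asarray(unit_vecs, dtype=float)
    prange_rates = np.asarray(prange_rates, dtype=float)

    K = len(prange_resid)
    H = np.zeros((K, 5))
    H[:, 0:3] = -unit_vecs      # position partials, Equation (5)
    H[:, 3] = 1.0               # common-bias partial
    H[:, 4] = prange_rates      # coarse-time partial from Equation (3)

    # Least squares solution of Equation (6): delta_z = H * delta_x
    delta_x, *_ = np.linalg.lstsq(H, prange_resid, rcond=None)
    return delta_x
```

With five or more satellites, iterating this step (recomputing the predicted pseudoranges and rates from the updated state each time) yields the coarse-time position, common bias, and coarse-time correction described above.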
2.2 Fixing the millisecond ambiguity
Since the full pseudorange is not available when the subframes are not decoded, there is a problem with correctly identifying the number of whole milliseconds in each pseudorange. This is because there is uncertainty in the satellite positions, the common bias, and the satellite clock errors. The real problem occurs when the common bias causes an integer rollover. In Van Diggelen (2009), van Diggelen explores the benign and malign cases in depth. Conventional receivers do not have this problem as they rely on the navigation data to identify the time of transmission, whereas in A-GNSS the data bits are not decoded. To solve for the millisecond ambiguity, the van Diggelen technique will be described.
The first step is to find the sub-millisecond pseudoranges given by the code phase measurements, either from acquisition or tracking. Next, a reference satellite (usually the strongest) is chosen. Using the ephemeris and the a priori state, the full pseudorange is estimated and the millisecond integer $N^{(0)}$ is assigned. The zero represents the reference while $k$ will represent any other satellite. Thus, the reference satellite has full pseudorange $N^{(0)} + z^{(0)}$, where $z^{(0)}$ is the sub-millisecond pseudorange. The assigned integer $N^{(0)}$ will be used to reconstruct the other satellites' integers in a way that removes the possibility of integer rollover. Keep in mind, the reconstructed full pseudorange has an implicit common bias and is related to the actual geometric range by the following:
$$N^{(0)} + z^{(0)} = r^{(0)} - \delta t^{(0)} + b + \varepsilon^{(0)} = \hat{r}^{(0)} - d^{(0)} - \delta t^{(0)} + b + \varepsilon^{(0)} \tag{8}$$

where $r^{(0)}$ is the unknown actual geometric range, $\hat{r}^{(0)}$ is the estimated geometric range from the a priori position at the a priori time of transmission, $d^{(0)}$ is the error in $\hat{r}^{(0)}$ caused by the error in the a priori position and time, $\delta t^{(0)}$ is the satellite clock error, $b$ is the common bias, and $\varepsilon^{(0)}$ is the measurement error that includes atmospheric and thermal errors. Here all terms are expressed in light-milliseconds so that $N^{(0)}$ counts whole milliseconds. The goal is to assign integers $N^{(k)}$ such that they share the same common bias. If this is possible, then for satellite $k$ the full pseudorange would be

$$N^{(k)} + z^{(k)} = \hat{r}^{(k)} - d^{(k)} - \delta t^{(k)} + b + \varepsilon^{(k)} \tag{9}$$

where $b$ is the same common bias as for the reference satellite. Subtracting the reference satellite's pseudorange from satellite $k$'s, the following is found:

$$N^{(k)} + z^{(k)} - N^{(0)} - z^{(0)} = \hat{r}^{(k)} - \hat{r}^{(0)} - \delta t^{(k)} + \delta t^{(0)} - d^{(k)} + d^{(0)} + \varepsilon^{(k)} - \varepsilon^{(0)} \tag{10}$$

and solving for $N^{(k)}$:

$$N^{(k)} = N^{(0)} + z^{(0)} - z^{(k)} + \hat{r}^{(k)} - \hat{r}^{(0)} - \delta t^{(k)} + \delta t^{(0)} + \left(-d^{(k)} + \varepsilon^{(k)} + d^{(0)} - \varepsilon^{(0)}\right) \tag{11}$$

It is clear that the common bias $b$ cancels exactly. And, if the magnitude of $(-d^{(k)} + \varepsilon^{(k)} + d^{(0)} - \varepsilon^{(0)})$ is less than 0.5 light-milliseconds (about 150 km), then the correct value of $N^{(k)}$ is given by

$$N^{(k)} = \operatorname{round}\!\left( N^{(0)} + z^{(0)} - z^{(k)} + \hat{r}^{(k)} - \hat{r}^{(0)} - \delta t^{(k)} + \delta t^{(0)} \right) \tag{12}$$
where all variables are known, and no matter the value of $N^{(0)}$, the values of $N^{(k)}$ will share the same common bias. The challenge is to find integers that ensure all measurements share the same common bias, which Equation (12) achieves. By solving for all millisecond ambiguities, the position can now be found. However, precise time has yet to be found, as the one-millisecond ambiguity is not large enough to distinguish the true time of transmission; there is simply no information from which to calculate the true time of transmission given only the one-millisecond ambiguities. For a more in-depth description and visualization of this equation development, refer to Van Diggelen (2009).
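As an illustration of Equation (12), the sketch below assigns the millisecond integers from the sub-millisecond pseudoranges, the predicted geometric ranges, and the satellite clock errors, all expressed in meters. The helper name and the way $N^{(0)}$ is seeded from the reference satellite's predicted full pseudorange are assumptions made for this example, not details from the original description.

```python
import numpy as np

C = 299_792_458.0          # speed of light, m/s
LIGHT_MS = C * 1e-3        # one light-millisecond, meters

def assign_ms_integers(z_sub, r_hat, dt_sv, ref=0):
    """Millisecond-integer assignment per Equation (12).

    z_sub : (K,) sub-millisecond pseudoranges from the code phase, meters
    r_hat : (K,) geometric ranges predicted from the a priori state, meters
    dt_sv : (K,) satellite clock errors, meters
    ref   : index of the reference satellite
    """
    z_sub, r_hat, dt_sv = map(np.asarray, (z_sub, r_hat, dt_sv))
    # Seed the reference integer from its predicted full pseudorange.
    N_ref = int(np.round((r_hat[ref] - dt_sv[ref] - z_sub[ref]) / LIGHT_MS))
    # Equation (12): every other integer is reconstructed relative to N_ref
    # so that all full pseudoranges share the same (unknown) common bias.
    N = np.round(N_ref
                 + (z_sub[ref] - z_sub) / LIGHT_MS
                 + (r_hat - r_hat[ref]) / LIGHT_MS
                 - (dt_sv - dt_sv[ref]) / LIGHT_MS).astype(int)
    full_prange = N * LIGHT_MS + z_sub
    return N, full_prange
```

Note that, as stated above, the particular value chosen for the reference integer does not matter; it is absorbed into the common bias estimated by the five-state solution.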
2.3 Extended ambiguity time
As mentioned above, one millisecond of ambiguity time is not enough for accurate time recovery, as the travel time from satellite to receiver is typically 60 to 90 milliseconds. Therefore, the ambiguity time should be extended if accurate timing service is required from A-GNSS. There are many GNSS constellations and signal structures that can be used to extend the ambiguity time. For example, GPS L2C has a 20-millisecond CM code and a 1.5-second CL code. But even with the GPS L1 C/A code, the ambiguity time can still be resolved to 20 milliseconds by applying a navigation bit synchronization algorithm. Therefore, the following discussion focuses on using the GPS L1 C/A code to provide accurate timing service.
However, using the data bit transitions is more challenging. First, there has to be enough data to ensure that a data-bit transition is present; this can require upwards of 100 milliseconds to have a strong probability of a transition. If the navigation bit transition is known, then the number of milliseconds from each satellite's bit transition is calculated. Every bit transition occurs at what the satellite regards as a multiple of 20 milliseconds when it transmits the signal, i.e., at 1.02 seconds, 1.04 seconds, 1.06 seconds, etc. Therefore, the number of milliseconds from this bit transition gives the number of milliseconds from a time with a modulus of 20 milliseconds. So, if for a satellite the number of milliseconds away from a bit transition was seven, then the possible transmission times could be 1.013 seconds, 1.033 seconds, 1.053 seconds, and likewise for every 20 milliseconds. The concern is how to pick the correct 20-millisecond epoch since it is ambiguous. Therefore, another constraint is that the five-state least squares algorithm's coarse-time error from the true time must be less than 10 milliseconds. This is not guaranteed for all geometries or situations, especially when atmospheric errors are not mitigated or the carrier-to-noise ratio is low (Muthuraman et al., 2012). Assuming the error is less than 10 milliseconds, the time of transmission for satellite k is estimated as
$$\hat{t}^{(k)}_{tx} = t_c - \frac{\left\| \boldsymbol{x}^{(k)}(t_c) - \boldsymbol{x}_{xyz0} \right\|}{c} + \delta t^{(k)} \tag{13}$$

where $t_c$ is the coarse time estimate returned by the five-state least squares algorithm, $\boldsymbol{x}^{(k)}(t_c)$ is the satellite's position at time $t_c$, $c$ is the speed of light, and $\delta t^{(k)}$ is the satellite clock correction.
If the error from the true time is less than 10 milliseconds, then the following equation will round to the correct 20-millisecond epoch:

$$\bar{t}^{(k)}_{tx} = \operatorname{round}\!\left( \frac{\hat{t}^{(k)}_{tx} - 0.001\,N^{(k)}}{0.020} \right) \times 0.020 \tag{14}$$

where $\hat{t}^{(k)}_{tx}$ is the estimated transmission time given by Equation (13), and $N^{(k)}$ is the integer number of milliseconds from satellite $k$'s bit transition. With the correctly rounded 20-millisecond transmission time $\bar{t}^{(k)}_{tx}$, the number of milliseconds $N^{(k)}$ can now be used to find the transmission time of the current millisecond (i.e., without sub-millisecond correction) as

$$t^{(k)}_{ms} = \bar{t}^{(k)}_{tx} + 0.001\,N^{(k)} \tag{15}$$
Finally, to find the transmission time with the sub-millisecond correction, the code phase is used as shown below:
$$t^{(k)}_{sub} = \frac{z^{(k)}}{1.023 \times 10^{6}} \tag{16}$$
where 𝑧(𝑘) is the code phase for the kth satellite in terms of a fraction of a chip. The calculated transmission time is now as close to the correct transmission time that can be achieved and is summarized in the equation below:
$$t^{(k)}_{tx,corrected} = \bar{t}^{(k)}_{tx} + 0.001\,N^{(k)} + \frac{z^{(k)}}{1.023 \times 10^{6}} \tag{17}$$
With this time of transmission, the error in the previously computed transmission time can be found for each satellite:
$$\delta t^{(k)}_{tx} = t^{(k)}_{tx,corrected} - \hat{t}^{(k)}_{tx} \tag{18}$$

Ideally, this value would be equal for all satellites, but because of slight errors in the code phase or other sources, they have slight variations. However, they should all be within one millisecond of each other. The average of $\delta t^{(k)}_{tx}$ over all satellites is then added to the estimate of coarse time, as shown in the equation below:

$$t_{c,corrected} = t_c + \frac{1}{K} \sum_{k=1}^{K} \delta t^{(k)}_{tx} \tag{19}$$
For even more accuracy, the normal GPS four-state least squares algorithm is then performed with the time estimate and associated satellite positions to further refine the four states (usually an improvement on the order of nanoseconds). This results in a nanosecond-level time estimate comparable to full GPS processing.
Unfortunately, there is a low likelihood that the accuracy of the five-state least squares is within 10 milliseconds (refer to Figure 2). Therefore, a different ambiguity time is required to be a benchmark for the satellite transmission times. All that needs to change for the corrected time of transmission equation is to replace the 20 millisecond modulus time with a different ambiguity time as shown below:
$$\bar{t}^{(k)}_{tx} = \operatorname{round}\!\left( \frac{\hat{t}^{(k)}_{tx} - 0.001\,N^{(k)}}{t_{mod}} \right) \times t_{mod} \tag{20}$$
where 𝑡𝑚𝑜𝑑 is the ambiguity time in seconds, and 𝑁(𝑘) is the number of milliseconds from that ambiguity. The experiments described later will investigate the ambiguity time where accurate timing can most likely be achieved.
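A compact sketch of Equations (13)-(20) follows, combining the transmission-time reconstruction, the rounding against an arbitrary ambiguity time $t_{mod}$, and the coarse-time correction. It follows the sign conventions of the equations as reconstructed above, and the function name and argument layout are illustrative.

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s

def correct_coarse_time(t_c, rx_pos, sat_pos, sat_clk, N_ms, code_phase_chips,
                        t_mod=0.020):
    """Recover transmission times and refine coarse time, Equations (13)-(20).

    t_c              : coarse receiver time from the five-state solution, s
    rx_pos           : (3,) a priori receiver position, m
    sat_pos          : (K, 3) satellite positions at time t_c, m
    sat_clk          : (K,) satellite clock corrections, s
    N_ms             : (K,) integer milliseconds from each satellite's
                       ambiguity epoch (bit transition, word start, ...)
    code_phase_chips : (K,) code phases in C/A chips
    t_mod            : ambiguity time in seconds (0.020 for the data bit,
                       0.600 for the parity word of Section 4)
    """
    sat_pos = np.asarray(sat_pos, dtype=float)
    N_ms = np.asarray(N_ms, dtype=float)
    ranges = np.linalg.norm(sat_pos - np.asarray(rx_pos, dtype=float), axis=1)
    t_tx_est = t_c - ranges / C + np.asarray(sat_clk)            # Equation (13)
    # Round to the nearest ambiguity epoch: Equation (14), or (20) in general.
    t_epoch = np.round((t_tx_est - 1e-3 * N_ms) / t_mod) * t_mod
    t_tx = (t_epoch + 1e-3 * N_ms
            + np.asarray(code_phase_chips) / 1.023e6)            # Equation (17)
    dt = t_tx - t_tx_est                                         # Equation (18)
    return t_c + np.mean(dt)                                     # Equation (19)
```

Calling this with t_mod=0.020 corresponds to the data-bit case above, while t_mod=0.600 corresponds to the parity-word example of Section 4.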
3 IMPACT OF DIFFERENT AMBIGUITY TIMES
Even though the five-state least squares converges and gives a good coarse time, the timing can still be tens of milliseconds off depending on the mitigation of error sources. The problem is that it is very difficult to ascertain if the satellites sent their signals at one millisecond or the next since the pseudorange changes on the order of meters. So the least squares will converge but because of the other errors, it is most likely not the true time.
The big question is: how much ambiguity time is required to ensure that the A-GPS algorithm can find time with nanosecond-level accuracy? In order to investigate this, we discuss how different ambiguities impact timing error accuracy in an experimental approach. As the GPS L1 CA signal has been operating stably and is widely used, the main experiment was performed where the accuracy of the timing was examined with several GPS L1 CA signal data sets.
3.1 Experiment setup
The setup for the entire code structure consists of a GPS L1 software-defined receiver (SDR) (Borre et al., 2007) and an assisted GPS (A-GPS) L1 SDR. For convenience, the traditional SDR will be called the full SDR in the following discussion. A data file is first passed to the full SDR and goes through acquisition, tracking, and position estimation. Importantly, this implies that the file should be longer than 36 seconds, as that is the minimum length required to decode the necessary ephemeris information. After the file is analyzed, the relevant navigation solutions are saved. If time accuracy were not a concern, only the decoded ephemeris parameters, the acquired satellites, the position, and the first sample time of the file would need to be saved. The number of milliseconds from the start of the file to the first subframe start for each satellite is also saved. This is explained in Section 2.
The saved parameters are then passed to the A-GPS SDR. A detailed description of this interaction is presented in Wang et al. (2019). The position and time are the a priori guesses for the A-GPS algorithm (subject to error offsets). Then, the PRNs given by the full SDR are acquired and the five-state least squares A-GPS algorithm is implemented. The signals are tracked for only three seconds and then the time-corrected five-state least squares algorithm is used.
3.2 Ambiguity time experiments
In order to answer the above-mentioned problem, we present the experimental test results in this section. Basically, it is the search for how much ambiguity time there must be to then converge the time solution within nanoseconds. As an analogy, imagine you go on a walk with signs at specific intervals and a step-counter you reset at every sign. If the signs were a mile apart, then it would be easy to know how far you have walked since you know easily whether you are on your first or second mile and also know how many steps you are after the mile marker. However, it is quite different if the signs were 200 feet apart. It would be very hard to distinguish how many signs you passed but you still know exactly how many steps you are from the last sign. There is a chance of guessing your distance walked correctly, but it is low. With a sign every foot, it would be almost impossible to guess the distance traveled. For A-GPS, the signs are the ambiguity times, and the step counter is represented by the number of milliseconds until the ambiguity time. That is to say: the longer the ambiguity time is, the more probable it is to get accurate time recovery.
In the GPS L1 C/A signal, there are three main ambiguity times built in: the one-millisecond code period, the 20-millisecond navigation bit, and the six-second navigation message subframe. For this experiment, the milliseconds to the subframe start for each satellite are passed to the A-GPS code. With that, the count can be reduced modulo any ambiguity time that is a factor of 6,000 milliseconds, so that every simulated ambiguity epoch also aligns with the six-second subframe start. Then, an analysis of time accuracy as a function of ambiguity time can be tested.
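A minimal sketch of that reduction is shown below; the function name is illustrative and simply applies the modulo operation described above.

```python
def ms_from_ambiguity(ms_to_subframe_start, t_mod_ms):
    """Reduce the milliseconds-to-subframe count to a simulated ambiguity.

    ms_to_subframe_start : milliseconds from the start of the data to the
                           next subframe start for one satellite
    t_mod_ms             : simulated ambiguity time in milliseconds; it must
                           divide 6,000 so that every ambiguity epoch also
                           falls on a six-second subframe boundary
    """
    if 6000 % t_mod_ms != 0:
        raise ValueError("ambiguity time must be a factor of 6,000 ms")
    return ms_to_subframe_start % t_mod_ms
```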
In order to look at the time accuracy with respect to ambiguity time, the A-GPS SDR results are compared with the full SDR results. It is known that the A-GPS needs external assistance data. To achieve this, a two-step approach is used: the IF data is first processed with the conventional GPS SDR to generate assistance data. Then, the same IF data and the generated assistance data are processed with the A-GPS SDR. The exact processing steps are as follows: data is first collected and processed with the full SDR. The necessary information for A-GPS is collected, as well as the full SDR's first sample time. This "first sample time" refers to the receiver's time of collection of the first sample of IF data, a quantity which can easily be time stamped within an A-GPS system. Also collected are the corrected sampling frequency and the number of milliseconds to the first subframe for each satellite. The identical IF data is then processed with the A-GPS SDR, and the algorithm described in Section 2.3 is implemented for the desired ambiguity time. This is repeated for every ambiguity time being tested. Further, this is then repeated 200 times with new data collected every 15 minutes to cover as many satellite geometric conditions as possible. An important note is that the first sample time and corrected sampling frequency calculated by the full SDR are computed using only the results from the first three seconds of data so that they are a better comparison to the A-GPS code.
In Figure 1, the first sample time errors are shown with respect to ambiguity time. The first sample time error refers to the error between the time of the first sample calculated by the full SDR and the time calculated by the A-GPS SDR. Each color represents a single trial.
FIGURE 1 First sample time error w.r.t. ambiguity time
As shown, the ambiguity time chosen makes a large difference in the accuracy of the time estimate with many errors on the order of seconds. As ambiguity time increases, more trials reach nanosecond-level accuracy with 500 milliseconds being enough time for all trials. Interestingly, the error increases in some cases as ambiguity time increases. This is likely due to the transmission time rounding to the incorrect time, which is a bigger mistake the larger the ambiguity. Though as soon as there is a large enough ambiguity time for that data, the error then converges to nanosecond-level error.
In this context, convergence refers to the iterative least squares of the A-GPS SDR reaching nanosecond-level error as compared to the full SDR. Note that in this paper, the convergence for timing is defined such that the timing error between A-GPS and full SDR is less than two nanoseconds. In Figure 2, the cumulative distribution is shown that gives a good visualization of convergence. As can be seen, at 100 milliseconds, it is 91% likely to converge while at 200 milliseconds it is 99% likely.
FIGURE 2 Cumulative distribution function of ambiguity time of convergence
The results for key tested ambiguities can be seen in Table 1. It is also important to point out that using the navigation bit ambiguity time of 20 milliseconds only results in 13 % probability of convergence.
TABLE 1 Probability of having converged given ambiguity time
These results lead to the next question: what causes the least squares to converge at different ambiguity times? One possible metric is the dilution of precision of the fifth state, coarse time, as explained by Muthuraman et al. (2012). Let $\boldsymbol{D} = (\boldsymbol{H}^T\boldsymbol{H})^{-1}$ be the dilution of precision matrix, with $\boldsymbol{H}$ being the linearized five-column transformation matrix described earlier. The coarse-time dilution of precision (CTDOP) is then
$$\mathrm{CTDOP} = \sqrt{D_{5,5}} \tag{21}$$
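In code, Equation (21) amounts to extracting the fifth diagonal element of $(\boldsymbol{H}^T\boldsymbol{H})^{-1}$; a one-function sketch, assuming the same five-column H matrix as in Section 2.1, is:

```python
import numpy as np

def coarse_time_dop(H):
    """CTDOP per Equation (21): square root of the fifth diagonal element
    of D = (H^T H)^{-1}, where H is the K-by-5 five-state geometry matrix."""
    D = np.linalg.inv(H.T @ H)
    return float(np.sqrt(D[4, 4]))
```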
Muthuraman et al. (2012) examined the relationship between CTDOP and TOW time error without using the ambiguity convergence technique shown in this paper. With this technique, however, CTDOP can give insights into how long the ambiguity time must be for convergence. Below in Figure 3 is a plot of the coarse time dilution of precision (CTDOP) with respect to the ambiguity time of convergence.
FIGURE 3 Coarse time DOP w.r.t. ambiguity time of convergence
Above, a linear best fit is plotted in red. Although the data is widely spread, the CTDOP does explain the ambiguity time of convergence: when only 10 milliseconds is required, the CTDOP is very small, and when 500 milliseconds is required, the CTDOP is very large. The CTDOP is more clearly visualized as a 2-D histogram in Figure 4 below. Most of the distribution is within 200 milliseconds and a CTDOP of 0.003.
FIGURE 4 2-D histogram of coarse time DOP and ambiguity time of convergence
Another possible metric for predicting the ambiguity time of convergence is the number of satellites. However, it might be important to see the relationship between the number of satellites and CTDOP first. Looking at Figure 5, the logarithm of the CTDOP is plotted with respect to the number of satellites with a linear best fit plotted in red:
FIGURE 5 The logarithm of CTDOP w.r.t. number of satellites
This shows that there is an exponential decay in CTDOP as the number of satellites increases. This is an interesting but reasonable result, since the geometry of the satellites directly impacts the coarse time result.
As an important note, it is necessary to see the distribution of the number of satellites that the experiment’s data contains. In Figure 6, the number of satellites for the experiment is shown.
FIGURE 6 Histogram of number of satellites
This is significant in that most of the cases had a large number of satellites, and thus, the data does not represent the cases with five or six satellites well.
Looking at the first sample time, Figure 7 shows the distribution of first sample time errors as compared to the full SDR for all trials.
FIGURE 7 Probability distribution of first sample time error
As can be seen, more than 95% of errors fall under a magnitude of one nanosecond error with none exceeding two nanoseconds. An important note is that these first sample time errors are the errors between the A-GPS SDR and the full SDR. These are the errors once converged, and they do not change as ambiguity time is increased further. Muthuraman’s method 1 in Muthuraman et al. (2012) can achieve similar accuracy some of the time with an ambiguity time of 20 milliseconds. However, in cases where the data bit knowledge is not enough, the method produces errors in multiples of 20 milliseconds.
Clearly, the choice of ambiguity time makes a large impact on the accuracy of the time estimate. These results show that nanosecond-level accuracy can be achieved reliably with an ambiguity time larger than 200 milliseconds.
4 SAMPLE SOLUTION
In the previous section, experiments showed that once the ambiguity time can be resolved to larger than 200 milliseconds, nanosecond-level time recovery can be achieved reliably. Some use cases may allow such levels of timing accuracy to be obtained via the data network. This paper has developed and presented the timing and probability levels of success. However, many field experiments rely on data links with low bandwidth and variable latencies. Thus, sufficiently large ambiguity times must be obtained from the IF data stream itself. It has been shown that the 20-millisecond navigation data bit has a low probability of resolving the desired nanosecond time resolution. Data segments longer than 6 s could be used to obtain much better ambiguity times using the decoded frame synchronization, yet 6 s of IF data could be considered excessive for an assisted solution. Therefore, we have identified an example characteristic that can be used to find a sufficient ambiguity time while minimizing data volume. This example utilizes a parity check algorithm to resolve the GPS L1 C/A code ambiguity time to 600 milliseconds. Then, we briefly discuss how all GNSS signals have similar structures that can be exploited.
The GPS NAV navigation message is transmitted in five 300-bit subframes (Kaplan, 2017), as shown in Figure 8. From Figure 8, we can see that each subframe is composed of ten words, and each word is composed of 30 bits. Note that six parity check bits are located at the end of each word. Since the location of the parity bits is known, the amount of time from the start of the word can be calculated once the parity bits are found. This characteristic leads us to consider using the parity check bits to narrow down the ambiguity time to 600 milliseconds (the duration of one word) for an assisted GPS receiver.
FIGURE 8 GPS navigation message structure
4.1 Introduction of regular parity checking algorithm
In a conventional receiver, a (32,26) Hamming code is employed for parity checking and error detection (GPS.gov, 2010). The six parity check bits are obtained from 24 bits of the current word and the last two bits of the previous word using the following equations:
$$\begin{bmatrix} D_{25} \\ D_{26} \\ D_{27} \\ D_{28} \\ D_{29} \\ D_{30} \end{bmatrix} = \boldsymbol{H} \begin{bmatrix} D^{*}_{29} \\ D^{*}_{30} \\ d_{1} \\ \vdots \\ d_{24} \end{bmatrix} \pmod 2 \tag{22}$$

where $d_1, \ldots, d_{24}$ are the 24 data bits of the current word, $D^{*}_{29}$ and $D^{*}_{30}$ are the last two bits of the previous word, $D_{25}, \ldots, D_{30}$ are the six parity bits, and the H matrix is

$$\boldsymbol{H} = \begin{bmatrix}
1 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 \\
1 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 \\
0 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 \\
0 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 \\
1 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 1
\end{bmatrix} \tag{23}$$
The elements '1' and '0' in the matrix H indicate whether or not the corresponding bit is used in the calculation of that row's parity bit. For example, the parity bit $D_{25}$ is the modulo-2 sum of all the bits with a '1' in the first row of the H matrix, and likewise for the five remaining parity bits. Therefore, by checking the consistency between the calculated parity check bits and the decoded parity check bits, the receiver can determine whether or not the parity check passed. If the parity check passes, it is flagged.
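The sketch below implements this word-level check. The index sets encode the standard IS-GPS-200 parity equations behind the H matrix above; note that the received data bits are first un-inverted with $D^{*}_{30}$, a step that is part of the standard algorithm even though it is not spelled out in the matrix form. Function and table names are illustrative.

```python
# Bits entering each parity bit D25..D30: which bit of the previous word is
# used (D29* or D30*) and the 1-based indices of the source data bits d1..d24
# (per the IS-GPS-200 parity algorithm, i.e., the rows of H in Equation (23)).
PARITY_TABLE = [
    ("D29*", (1, 2, 3, 5, 6, 10, 11, 12, 13, 14, 17, 18, 20, 23)),
    ("D30*", (2, 3, 4, 6, 7, 11, 12, 13, 14, 15, 18, 19, 21, 24)),
    ("D29*", (1, 3, 4, 5, 7, 8, 12, 13, 14, 15, 16, 19, 20, 22)),
    ("D30*", (2, 4, 5, 6, 8, 9, 13, 14, 15, 16, 17, 20, 21, 23)),
    ("D30*", (1, 3, 5, 6, 7, 9, 10, 14, 15, 16, 17, 18, 21, 22, 24)),
    ("D29*", (3, 5, 6, 8, 9, 10, 11, 13, 15, 19, 22, 23, 24)),
]

def word_parity_ok(word30, d29_star, d30_star):
    """Check one received 30-bit word (sequence of 0/1) against Eq. (22)-(23).

    word30   : the 30 received bits D1..D30 of the current word
    d29_star : bit D29 of the previous word
    d30_star : bit D30 of the previous word
    """
    # Recover the source data bits: d_i = D_i XOR D30* (IS-GPS-200).
    d = [word30[i] ^ d30_star for i in range(24)]
    for k, (star, idx) in enumerate(PARITY_TABLE):
        parity = d29_star if star == "D29*" else d30_star
        for i in idx:
            parity ^= d[i - 1]
        if parity != word30[24 + k]:      # compare with received D25..D30
            return False
    return True
```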
4.2 Proposed approach
Using the above parity check method, we introduce a new approach that can find a large enough ambiguity time by tracking a short period of the GPS L1 C/A signal. This approach works by finding flags denoting that the parity check passed. As discussed above, the parity check flag repeats every 30 bits. Therefore, by tracking a GPS L1 C/A signal for a short period of time and using this repeating pattern, we can find the exact position of every parity check bit. Figure 9 shows the schematic of the proposed algorithm.
FIGURE 9 Proposed algorithm
From Figure 9, we can see that the proposed algorithm starts with signal acquisition and tracking procedures. Next, if a phase lock loop (PLL) is used, then the prompt output of the I channel can be used to decode the navigation data bits; if a frequency lock loop (FLL) is used, differential bit decoding algorithms can be adopted (Natali, 1986). Then, the decoded bits are sent to the parity checking module, whose detailed logic is also illustrated in Figure 9. To summarize, only when two parity check flags are found spaced 30 bits apart, i.e., at i and i+30, does the algorithm claim that parity check bit i is found. Knowing which bit is the parity check bit is enough information to calculate the offset from an ambiguity time of 600 milliseconds (since every word lasts 600 milliseconds). As shown in the previous section, this is more than enough time to reach nanosecond-level accuracy reliably.
The duration of tracking needed to apply the proposed method depends on the starting point and the parity check threshold. From the previous analyses, we know that to detect one parity check passed flag, we need a data record that includes $D^{*}_{29}$ and $D^{*}_{30}$ of the previous word, as well as $[d_1\, d_2\, d_3 \ldots d_{23}\, d_{24}, D_{25}, \ldots, D_{30}]$ of the current word. Therefore, the worst case occurs when the data record starts with $D^{*}_{30}$, since the check must then wait for the following word. However, the false alarm probability of detecting one parity check passed flag could be 1.5% to 3%. This false alarm probability is relatively high when considering a robust application. A detailed analysis of the false alarm probability is presented in the next section. Therefore, to enhance the robustness of the proposed algorithm, we set the threshold of parity check passed flags to two, which means that the algorithm claims that the parity check bits are found only when two consecutive parity check passed flags are detected.
Figure 10 shows the schematic of the proposed method. As can be seen from the blue line, given a random data starting point, we need at most 61 bits to find one parity check bit. Similarly, to find two consecutive parity check flags, we need at most 91 bits of data. Considering that the starting bits of the GPS signal might be wrong due to the pull-in process of signal tracking, it is more robust to collect 92 to 94 bits of decoded navigation data for processing, which is 1,840 to 1,880 milliseconds of the GPS L1 C/A signal.
FIGURE 10 Schematic of the proposed method
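A sketch of the word-boundary search is given below. It slides a window over the decoded bit stream, records every index where the parity check passes, and declares success only when the required number of flags are found exactly 30 bits apart; it reuses the illustrative word_parity_ok helper from Section 4.1 and assumes the caller has handled the data-bit polarity.

```python
def find_word_start(bits, min_flags=2):
    """Locate a word boundary in a decoded GPS L1 C/A navigation bit stream.

    bits      : sequence of decoded navigation bits (0/1)
    min_flags : number of parity flags, spaced 30 bits apart, required
                before a detection is declared (two, per the text above)

    Returns the index of bit D1 of the first confirmed word, or None.
    """
    flags = [i for i in range(2, len(bits) - 29)
             if word_parity_ok(bits[i:i + 30], bits[i - 2], bits[i - 1])]
    flag_set = set(flags)
    for f in flags:
        if all(f + 30 * k in flag_set for k in range(1, min_flags)):
            return f
    return None
```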
4.3 Monte Carlo analyses and live data tests
Though the parity check algorithm used in a GPS C/A signal receiver is specifically designed for detecting bit errors, some other undesired bit strings can still pass the parity checking. Therefore, we used both Monte Carlo analysis and live data testing approaches to evaluate the probability of random bits or real navigation bits passing the parity check.
The probabilities of detecting parity check points within 61 bits, 91 bits, and 121 bits are shown in Figure 11. As seen, the probabilities of detecting one parity check point and two parity check points are relatively high. Note that in this simulation, the separations between two parity check points are not considered.
FIGURE 11 Detected parity check points within 61 bits, 91 bits, and 121 bits, obtained by Monte-Carlo simulation with random data
When considering the separations between parity check points, the false alarm probability decreases dramatically, as can be seen in Figure 12. The false alarm probabilities for 91 bits and 121 bits are shown in Table 2.
FIGURE 12 Detected parity check points within 91 and 121 bits with 30-bit separations, obtained by Monte-Carlo simulation with random data
TABLE 2 False alarm probability obtained by MC simulation when considering 30-bit separations
Table 3 shows the false alarm probability for parity checking obtained with a three-hour GPS L1 C/A signal data set. As can be seen, the false alarm probability for finding one parity check passed flag ranges from 2.22% to 2.91% within three hours, which is relatively high when considering a robust assisted GPS application. However, the probability of finding two consecutive flags with 30-bit spacing is very low within three hours, ranging from 0% to 0.08%.
TABLE 3 False alarm probability obtained by live data test
Ultimately, the parity check algorithm shows the ability to achieve an ambiguity time large enough without the expense of excessive data volumes. This allows for reliable nanosecond-level accuracy with just under two seconds of GPS data given an a priori estimate.
5 CONCLUSION
In this paper, a means to obtain nanosecond-level accuracy was developed and presented. In parallel, the requirement of the ambiguity time to reach nanosecond-level accuracy was investigated and presented.
Experiments were then run to test the performance of the first sample time error, a common time marker in field hardware, as a function of ambiguity time. It was found that an ambiguity time of 200 milliseconds converged to nanosecond-level accuracy 99 % of the time. This conclusion can be enlightening for future GNSS signal design. Especially when designing the navigation message structure, taking this into consideration would be beneficial for consumer electronic devices.
Interestingly, the coarse time dilution of precision had a linear relationship with the ambiguity time of convergence (the ambiguity time that achieves nanosecond-level accuracy) and decayed exponentially with an increase in the number of satellites. Most first sample time errors were within one nanosecond while none exceeded two nanoseconds.
Finally, an example using the 600-millisecond GPS L1 C/A parity check in each GPS word was described and confirmed to be a reliable and consistent ambiguity time. This allows for reliable nanosecond-level accuracy with a relatively small amount of data. The live data results show that with just under two seconds of data, the algorithm has a 99% probability of resolving the full time ambiguity.
With this technique, field applications that require real-time nanosecond-level timing can be accomplished robustly using an adapted assisted GPS/GNSS approach presented within this paper. That approach involves: (a) leveraging the expanded algorithm to resolve nanosecond-level time and (b) utilizing a few seconds of IF data as opposed to a few milliseconds (A-GNSS receiver), which is still more than an order of magnitude less time than required for decoding the required broadcast navigation parameters (conventional GNSS receiver). Future work can be focused on utilizing new characteristics of other systems/signals (such as Galileo E1 OS) to resolve the assisted GNSS time ambiguity.
HOW TO CITE THIS ARTICLE
Blay R, Wang B, Akos DM. Deriving Accurate Time from Assisted GNSS Using Extended Ambiguity Resolution. NAVIGATION. 2021;68:217–229. https://doi.org/10.1002/navi.412
- Received September 7, 2020.
- Revision received December 16, 2020.
- © 2021 Institute of Navigation
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.