Abstract
This paper proposes a strategy for improving ionospheric detection of threatening conditions, motivated by issues with ground-based augmentation systems (GBASs) at low latitudes. A methodology is developed for a real-time alerting system that monitors the ionosphere state using surrounding ground stations and sends alerts to differential global navigation satellite system services when threatening conditions are detected. The method is based on time-step gradients and data gap analysis to detect ionospheric disturbances and to declare periods during which affected satellite signals are unavailable. Validation was performed with real data using the largest observed gradients from the Brazilian ionospheric threat model. The results demonstrate that the method is effective, detecting the vast majority of previously known threatening gradients. An availability assessment was also performed to quantify the loss of availability resulting from the implementation of this technique. Applications of the developed technique include the improvement of low-latitude nighttime GBAS availability.
1 INTRODUCTION
A ground-based augmentation system (GBAS) is a safety-critical navigation aid designed to provide precision approach service in low-visibility conditions. One of the key unresolved issues for low-latitude GBAS implementation is a potential loss of integrity due to spatial decorrelation between the corrections provided by a GBAS ground station and airborne receivers. This phenomenon is caused by the influence of the ionospheric layer on global navigation satellite system (GNSS) signals.
Threatening ionospheric gradients that create an integrity risk for a GBAS cannot be detected by a ground station alone in real time. Given this limitation, the most common mitigation procedure for existing GBAS approach service type C (GAST-C) GBAS installations supporting Category I (CAT I) precision approaches is position-domain geometry screening (PDGS) (Lee et al., 2011), which assumes that the worst ionospheric condition (from the viewpoint of integrity risk) defined by a regional ionospheric threat model is always present. Although the conservative assumption of this technique reduces availability, the resulting CAT I GBAS availability is acceptable in mid-latitude regions.
However, the ionosphere in low-latitude regions, including Brazil, has intense dynamics with considerable variability (Sousasantos et al., 2021). The most critical effect found in those regions is signal scintillation due to plasma bubbles in the ionosphere. For an augmentation system such as a GBAS, plasma bubbles are the main cause of gradients in ionospheric delay and can be large enough to compromise the system’s integrity, resulting in unsafe operation.
In order to use PDGS at a GBAS facility in Brazil, the Brazilian Department of Airspace Control (DECEA) supported the development of a GBAS ionospheric threat model for Brazil (Lee et al., 2015; Yoon, Lee et al., 2017). This effort relied on a multidisciplinary group of researchers that evaluated and validated the occurrence of gradients in Brazil from multiple GNSS networks. The results showed that the presence of threatening gradients is a recurrent scenario in Brazil, as reported, for example, by Affonso et al. (2022) and Yoon et al. (2020). The most threatening gradients were found to be more than twice as large as those from the Conterminous United States (CONUS) ionospheric threat model (Datta-Barua et al., 2010). Considering such high gradients in the PDGS can significantly degrade GBAS availability at low latitudes, as assessed by Yoon et al. (2016, 2019).
A means for detecting and alerting GBAS ground facilities about the presence of threatening ionospheric events, including plasma bubbles, would be of great value for guaranteeing integrity with a minimum loss of availability. To address this problem, which prevents full operation of GAST-C GBAS in Brazil (Marini-Pereira et al., 2021), this paper proposes a methodology capable of monitoring the ionosphere state in real time by using a network of ground monitoring stations. The methodology can be implemented with existing or new monitoring stations external to the GBAS but near GBAS facilities.
The technique utilizes time-step gradients as a test statistic along with data gap analysis to detect threatening conditions and to declare periods for affected satellite signals to be unavailable. The method for translating the detection of threatening conditions into alerts to not use specific satellites depends on three different parameters, and 30 different combinations of these parameters were tested. Validation (as an assessment of ionospheric detection capability) was performed with real data using known ionospheric gradients from the Brazilian ionospheric threat model for GBASs. While GBASs are an important application, this method can also be applied to other code- and carrier-phase differential GNSS (DGNSS) systems that are vulnerable to extreme ionospheric gradients, such as real-time kinematic systems.
2 REAL-TIME IONOSPHERIC MONITORING CONCEPT
Augmentation systems for precision air navigation procedures are challenging to implement at low latitudes because of the impact of ionospheric variability (Marini-Pereira et al., 2021). For GBASs, the current architecture for GAST C is not ideally suited to a low-latitude environment and suffers a significant loss of availability (compared with mid latitudes) because of the likelihood of threatening ionospheric conditions. While the use of two signal frequencies and multiple GNSS constellations seems to be the most viable solution, this approach presents other issues, such as greater susceptibility to scintillation and increased noise levels (Felux et al., 2017; Gerbeth et al., 2016; Salles et al., 2021). Moreover, the standards for using dual-frequency augmentation systems are likely to remain unfinished for the next few years, at least for GBAS.
Aiming to detect threatening ionospheric events in real time, the new methodology proposed in this paper makes use of multiple monitoring stations within the area served by a GBAS facility (or another DGNSS) to monitor threatening ionospheric events. The current study is based on the use of existing GNSS monitoring networks. Devising a method that can work in real time with networks of lower-cost receivers faces considerable challenges, including additional complexity. Hence, demonstrating a methodology that is applicable in real time and that is capable of enhancing GBAS availability at low latitudes is one of the core contributions of this paper.
2.1 System Definition
The proposed concept for an eventual alerting system requires a network of monitoring reference stations around a supported facility (for example, a GBAS) within a certain radius, as illustrated in Figure 1. The role of the reference stations is to monitor the ionosphere based on specific parameters and thresholds aiming to detect threatening ionospheric conditions for the supported system. If used to assist a GBAS facility, this process would provide alerts to the supported GBAS facility when the GBAS should make specific satellites unavailable (i.e., stop broadcasting corrections for them). The same approach can be applied to support other DGNSS services.
The proposed methodology is similar to the idea presented by Caamano et al. (2019, 2021) regarding the test statistic and the disposition of monitoring stations around a GBAS facility to issue alerts during threatening conditions. However, the method proposed herein is considerably less complex than that approach.
The experiments presented in this work were performed by means of data from existing monitoring networks, as suggested by Pullen et al. (2010) when proposing the use of ionospheric error bounds provided by a satellite-based augmentation system (SBAS) covering the region of interest of a GBAS station. In addition, the cited authors suggested that one can apply their proposed strategy in regions outside of good SBAS coverage by using available uncertified monitoring networks. Issues and concerns regarding the use of uncertified networks for an aviation system have been discussed by Pullen et al. (2010), but they are outside the scope of this work, which aims to demonstrate a methodology concept instead of developing a certifiable system. Because the data used in this work come from available data collected by a previously deployed network, the ideal number of monitoring stations was not investigated. As suggested by the results of this work, more monitoring stations around a DGNSS facility will generally improve threat detection.
A related “ionospheric field monitor” (IFM) concept was proposed and implemented by Suzuki et al. (2011) for the certification of a CAT I GBAS in Japan. Those authors employed the station-pair method to estimate ionospheric gradients between GBAS reference stations and the IFM, which was located closer to the runway threshold. In this approach, the worst-case range error was simulated and assumed to be detected by the IFM, thereby avoiding induced differential correction errors for specific sets of parameters and narrowing the threat space. The use of the IFM, along with the less sensitive code-carrier divergence monitor, reduces the number of undetected cases in the range domain, resulting in smaller worst-case errors in geometry screening simulations (Lee, Luo et al., 2006).
While the present work is based on the time-step method to obtain a test statistic, Suzuki et al. (2011) estimated ionospheric gradients using the station-pair method by applying a Kalman filter to series of single differences for carrier-phase ambiguity solutions, as well as ionosphere-free combinations (Fujita et al., 2010). The approach developed in this paper has several advantages over the approach presented by Suzuki et al. (2011). First, the approach developed herein eliminates the need for the station-pair method, thereby avoiding the requirement to estimate hardware biases for both the satellite and receiver. Second, the proposed approach does not necessitate the use of complex algorithms such as ambiguity resolution and Kalman filtering. The complexity of these algorithms makes it more difficult to provide integrity assurance of their outputs.
Third, the method proposed by Suzuki et al. (2011) relies heavily on carrier-phase measurements, which are significantly affected by scintillation in low-latitude regions, resulting in frequent cycle slips and loss of data. In contrast, the approach in this paper maximizes the utilization of measurements by recovering cycle slips to smooth code-derived measurements and treating data absence as a criterion to alert the DGNSS station. The close relationship between the occurrence of scintillation and data gaps is demonstrated by Marini-Pereira et al. (2023) as a subsequent study using the same methodology employed in this paper, but with a more complete dataset.
Furthermore, the approach developed here has the operational advantage of not requiring additional hardware on airport property. It is difficult to find locations with good sky visibility but low multipath effects for multiple GBAS reference receiver antennas at existing airports, and finding an additional site near each runway end for the IFM compounds this problem. Instead, the approach in this paper relies on multiple reference receivers sited at convenient locations well beyond the property of any airport. This allows more siting flexibility and permits the use of already-existing ground receivers fielded for other applications. This approach also potentially supports multiple GBAS-equipped airports with a single off-airport receiver network.
2.2 Computation of the Test Statistic
The most common approach for ionospheric gradient estimation is the station-pair method (Lee, Pullen et al., 2006; Lee et al., 2007). This method is used to develop GBAS ionospheric threat models because of its strong similarity to the GBAS architecture. When a pair of receivers is used over a given baseline, the pair assumes the same configuration as a GBAS facility and an approaching aircraft (if the aircraft were not moving).
The application of the station-pair method requires reference stations distributed within a distance that depends on the variability of the ionospheric environment under study. Thus, the larger the expected spatial decorrelation, the closer the reference stations should be. As an example, in CONUS, Datta-Barua et al. (2010) selected station clusters with a baseline separation of 15–200 km.
For the Brazilian territory, the results from Lee et al. (2015) and Yoon, Lee et al. (2017) suggest that the ideal baseline for capturing gradients for a threat model (i.e., threatening to the GBAS) is approximately 10 km. However, very few existing monitoring stations meet this constraint. Additionally, the quality of the station-pair gradients depends on the estimate of hardware biases for both receivers, which affect the absolute values of the estimated gradients.
In low-latitude regions, such as Brazil, the ionosphere has enormous variability, and baselines (receiver antenna separations) greater than 50 km may not properly capture spatial-gradient behavior. Because the available reference stations in Brazil are widely separated, the time-step method is an attractive alternative. For this reason, the test statistic chosen for the proposed alerting methodology is based on time-step estimates of ionospheric gradients.
The time-step method was introduced by Datta-Barua et al. (2002), who used Wide-Area Augmentation System (WAAS) reference stations, which are hundreds of kilometers apart. This method was also applied by Yoon et al. (2015, 2020) and Yoon, Kim et al. (2017) as a means to validate gradients estimated by the station-pair method to develop a Brazilian ionospheric threat model for GBAS. The gradient estimate is computed for a single station m and a single satellite n with measurements of slant ionospheric delay separated by a time step Δt and divided by the distance Dipp between the ionospheric pierce points (IPPs) at the epochs considered (t and t – Δt) as follows:
\[
g_m^n(t) = \frac{I_m^n(t) - I_m^n(t - \Delta t)}{D_{ipp}(t,\, t - \Delta t)} \tag{1}
\]
Equation (1) has the advantage of eliminating both receiver and satellite hardware bias; thus, there is no requirement for either bias to be estimated. However, the time-step method includes the temporal gradient between the two different epochs in addition to the spatial gradient. The Δt parameter plays a key role in determining these gradients. This parameter directly affects the baseline distance between the IPPs, with longer distances for longer time intervals, and longer time intervals give more temporal decorrelation as well. The effect of varying the IPP separation distance for the vertical ionospheric delay calculation has been discussed by Chang et al. (2019, 2021) with respect to estimating the nominal σvig using the time-step method.
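As an illustrative sketch (not the authors' implementation), the time-step gradient of Equation (1) can be computed per satellite from a time series of slant ionospheric delays and IPP separations; the function name, unit conversion, and input layout are assumptions:

```python
import numpy as np

def time_step_gradient(iono_delay_m, ipp_dist_m, dt_steps):
    """Time-step gradient per Equation (1): the slant-delay difference over
    Delta-t divided by the IPP separation, returned in mm/km.

    iono_delay_m : slant ionospheric delays (m), one sample per epoch
    ipp_dist_m   : IPP separation (m) between epochs k and k - dt_steps
    dt_steps     : Delta-t expressed as a number of epochs
    """
    g = np.full(len(iono_delay_m), np.nan)
    for k in range(dt_steps, len(iono_delay_m)):
        d = ipp_dist_m[k]
        if d > 0:
            # (m / m) is dimensionless; multiplying by 1e6 converts to mm/km
            g[k] = (iono_delay_m[k] - iono_delay_m[k - dt_steps]) / d * 1e6
    return g
```

For example, with 15-s epochs and dt_steps = 2 (a 30-s step), a 0.1-m delay change over a 1-km IPP separation yields a 100-mm/km gradient. Note that both receiver and satellite hardware biases cancel in the difference, consistent with the discussion of Equation (1).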
The time-step method, which incorporates the temporal gradient, is influenced by the movement of the IPP. This method has limitations because of the interaction between the actual spatial gradient, the relative motion between the IPP and the spatial gradient, and the angle formed between the IPP motion and the spatial gradient. In briefly comparing the station-pair and time-step methods, it is important to note that the former relies on a fixed baseline and direction between reference stations, whereas the latter can vary based on the direction of the IPP and the distance it covers.
One aspect worth mentioning is that the IPP separation distances are longer for low satellite elevations and shorter for high elevations. Because the ionospheric delays involved are in the slant domain, the larger slant delays at low elevations are compensated by larger IPP distances; likewise, the smaller slant delays at high elevations pair with shorter distances. The result is a “normalization effect” for gradients computed by this method. In summary, the variation in satellite elevation angle is not directly reflected in ionospheric gradients computed by the time-step method.
Interestingly, longer time steps naturally result in longer IPP separations. For the same ionospheric measurements and conditions, shorter time steps generally yield larger gradients, and longer time steps yield smaller gradients. Figure 2 illustrates an example with a 3-h temporal plot of time-step gradients expressed in mm/km, computed with three different Δt values for the same satellite (global positioning system [GPS] pseudorandom noise [PRN] 01), along with the evolution of its elevation angle. The magnitude of the gradients indicates a possible ionospheric irregularity, and the elevation angle shows no direct effect on the gradients. As explained above, when operating on the same ionospheric delay pattern (such as the pattern shown in Figure 2), shorter time steps result in larger gradients, and longer time steps yield smaller gradients. Gradients computed from longer time steps are naturally delayed with respect to gradients based on smaller Δt values, and a “smoothing effect” is noted as time increases. This effect is more evident for larger gradients.
The choice of Δt depends on the length of the baseline to be analyzed. Yoon et al. (2015, 2020) and Yoon, Kim et al. (2017) used a 60-s step for estimating time-step gradients to validate station-pair measurements. The authors argued that the chosen time corresponds to IPP baselines of up to a few kilometers, which is useful for investigating ionospheric gradients in an irregular low-latitude ionospheric environment.
From this discussion and the example in Figure 2, it is inferred that the Δt parameter determines the sensitivity of ionospheric event detection when the time-step method is used to compute gradients. In addition to the three Δt values shown in Figure 2, the behavior of time-step gradients computed with a 15-s step was also analyzed. The 15-s estimates proved overly sensitive, producing extremely large values (time-step estimates between 2000 and 3000 mm/km for station-pair gradients of approximately 500 mm/km); because of this oversensitivity, 15-s time steps were not considered for the test statistic. It was also found that 60-s estimates overly smooth the results when compared with control data (threat model gradients), generating estimates lower than the station-pair gradients in most cases, with low sensitivity to irregularities. Based on the behavior observed for the three time steps across several plots similar to Figure 2, gradients computed with Δt = 30 s appear most suitable for detecting ionospheric irregularities; for this reason, a value of 30 s was empirically chosen as the time interval for computing the test statistic in this work.
2.3 Ionospheric Delay Smoothing
The use of code-derived ionospheric delays is more suitable for real-time applications because these delays are not ambiguous, as are carrier-derived delays. However, the main disadvantage of code-derived measurements is the higher noise level. For this reason, a smoothing filter, which is widely used in GNSS receivers and applications, was implemented to smooth the code-derived measurements using carrier-derived measurements. A Hatch smoothing filter was applied directly to raw ionospheric estimates Iρ. The filtered code-derived ionospheric delay Iρs smoothed by the carrier-derived ionospheric delays Iφ is implemented as follows:
\[
I_{\rho s}(k) = \frac{1}{v}\, I_{\rho}(k) + \frac{v-1}{v} \left[ I_{\rho s}(k-1) + I_{\varphi}(k) - I_{\varphi}(k-1) \right] \tag{2}
\]
where k is the epoch index, v = k when k < Ns, and v = Ns when k ≥ Ns. Here, Ns is the number of samples after which the filter reaches steady state and thus has constant weights for both the code- and carrier-derived terms; it can also be expressed as a function of a given time constant τ and the receiver sampling interval Ts:
\[
N_s = \frac{\tau}{T_s} \tag{3}
\]
The time window of the filter, also called the filter time constant, defines the level of smoothing of the measurements because it directly affects the maximum number of samples that influence the filtered result. Longer time constants result in more smoothing of code measurements and thus more noise reduction. However, a high degree of smoothing is less sensitive to rapid variations in the measurements. Such undetected fast changes may represent features of interest to be studied or identified. For this reason, the choice of τ is a trade-off between noise reduction and the ability to capture sudden significant variations. In this work, the chosen time constant for the smoothing filter is 300 s. Considering the 15-s sampling interval of the available data (see Section 4), the filter relies on the last 20 epochs to smooth the current ionospheric delay.
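A minimal sketch of the carrier-smoothing filter of Equations (2)–(3) follows; the function name and list-based interface are assumptions, and the epoch index is zero-based (so the paper's v = k condition becomes v = k + 1 here):

```python
def hatch_smooth(i_code, i_carrier, tau_s=300.0, ts_s=15.0):
    """Smooth code-derived ionospheric delays with carrier-derived delays,
    per Equations (2)-(3).

    i_code, i_carrier : code- and carrier-derived slant delays (m), per epoch
    tau_s             : filter time constant (s); Ns = tau / Ts (Equation (3))
    ts_s              : receiver sampling interval (s)
    """
    ns = int(tau_s / ts_s)  # e.g., 300 s / 15 s = 20 epochs
    smoothed = []
    for k in range(len(i_code)):
        if k == 0:
            smoothed.append(i_code[0])  # initialize with the raw code delay
            continue
        v = min(k + 1, ns)  # ramp up until steady state, then hold at Ns
        # Equation (2): weighted raw code term plus carrier-propagated estimate
        pred = smoothed[k - 1] + (i_carrier[k] - i_carrier[k - 1])
        smoothed.append(i_code[k] / v + (v - 1) / v * pred)
    return smoothed
```

With a constant input, the filter output remains constant, and for noisy code measurements the steady-state weight 1/Ns attenuates the code noise while the carrier differences carry the epoch-to-epoch dynamics.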
Equation (2) shows that the smoothing filter relies on the difference of carrier-derived measurements between two consecutive epochs. If a cycle slip occurs, the filter performance is clearly degraded, contaminating all subsequent measurements. Hence, a cycle slip strategy is required for the application of the filter. The implemented cycle slip detector is applied directly to the carrier-derived ionospheric delays. The algorithm developed for this task is based on Subirana et al. (2013), with additional steps to maximize data availability and prevent erroneous and noisy data due to the filter reset.
Unfortunately, cycle slips are frequent in low-latitude ionospheric environments, especially when weaker signals such as semi-codeless GPS L2 signals are used. Hence, the smoothing of code-derived measurements via carrier-derived measurements becomes more complex and requires frequent resets of the filtering process. For this reason, as detailed in Section 5, in this study, the ionospheric delay measurements are consistently smoothed without waiting for filter convergence, and the test statistic is computed only after 90 s of continuous smoothed ionospheric delay.
3 DATA PREPARATION
The preparation steps involved in the proposed methodology are depicted in Figure 3, which provides a general view of the processing flow to generate test statistics for each satellite at a single monitoring station in the vicinity of a GBAS facility or DGNSS ground station. From the data collected by the monitoring receivers, code- and carrier-derived ionospheric delays are computed. Although not required, all employed measurements are calibrated using the estimated hardware biases of the satellites and receivers. Because the method is intended to be applicable in real time, the bias estimates for a given day are computed with data from the previous day.
To achieve trustworthy results and avoid false detections, a screening process is performed on the code-derived ionospheric delays to eliminate clearly erroneous data. Using the “cleaned” output of this procedure, the code delays are smoothed by the carrier delays.
Finally, the proposed strategy for the alerting methodology takes advantage of the time-step method outputs as test statistics to detect ionospheric irregularities. The time-step gradients are analyzed against a set of parameters and logical criteria that determine when alerts are issued and how the detected channels (satellites) should be handled in a downstream GBAS or DGNSS.
At first glance, the process of estimating ionospheric delays, applying the smoothing filter, and computing spatial gradients is a straightforward task, if one is using highly stable receivers that generate good-quality measurements. However, using geodetic receivers from uncertified networks not designed for operational monitoring introduces additional complexity and requires specific issues to be solved. In particular, spurious data and receiver glitches are often found in the data measured by receivers from the networks available for this work.
Such issues are primarily concentrated at nighttime, during the occurrence of ionospheric irregularities. Data gaps associated with receiver losses of lock, spikes, large jumps, and frequent cycle slips are among the major concerns. In the analysis of post-processed data, many of these difficulties can be adequately handled by a variety of techniques, for example, by performing moving average smoothing or polynomial fits to the measurements using data with fit intervals from seconds or minutes before and after each current epoch (Jung & Lee, 2012). However, dealing with these problems becomes more complex in real-time applications, especially in the low-latitude ionospheric environment.
Hence, a strategy was developed to improve the data quality and maximize its availability when spurious measurements or cycle slips are found. These steps can be applied in real time and were designed to mitigate the impact of erroneous and noisy measurements on the smoothing filter and its subsequent results.
The processing strategy for detecting erroneous data is based on defining thresholds to eliminate outliers classified as unrealistic measurements or data spikes, the latter being defined by two consecutive jumps above a given threshold. In the real-time approach, the spike detection technique delays the detection by twice the data sample interval, as two consecutive epochs are required to analyze consecutive jumps. Thus, considering the 15-s sample interval of the available data, consecutive jump detection reduces the alerting ability to once every 30 s. Detected unrealistic measurements are replaced with the average of measurements from the last 150 s, and detected spikes are replaced with a first-order polynomial fit based on measurements from the last 150 s. A first-order polynomial fit was chosen to better represent the data trend. Figure 4 shows an example of spikes detected by consecutive jumps with an established threshold of 15 m in slant ionospheric delay. If consecutive jumps larger than 15 m are identified, then the previous epoch is classified as an outlier and is replaced with the polynomial fit. In Figure 4, it is also possible to visually identify a small spike that is not detected with the adopted threshold of 15 m. Minor issues such as this undetected spike can be mitigated by the smoothing process.
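The spike-screening step can be sketched as follows. This is a simplified illustration, not the operational implementation: the function name is an assumption, and the fit window is given in epochs (10 epochs of 15 s ≈ the paper's 150-s window):

```python
import numpy as np

def screen_spikes(delays, jump_threshold_m=15.0, fit_window=10):
    """Detect spikes defined by two consecutive opposite-sign jumps above a
    threshold and replace the spiked epoch with a first-order polynomial fit
    of the preceding measurements (window length in epochs).
    """
    out = list(delays)
    for k in range(2, len(out)):
        jump_in = out[k - 1] - out[k - 2]
        jump_out = out[k] - out[k - 1]
        # a spike: a large jump immediately followed by a large opposite jump
        if (abs(jump_in) > jump_threshold_m
                and abs(jump_out) > jump_threshold_m
                and np.sign(jump_in) != np.sign(jump_out)):
            lo = max(0, k - 1 - fit_window)
            xs = np.arange(lo, k - 1)
            if len(xs) >= 2:
                # first-order fit to the recent trend, evaluated at the spike
                coef = np.polyfit(xs, [out[i] for i in xs], 1)
                out[k - 1] = float(np.polyval(coef, k - 1))
    return out
```

Note that, as described above, the spiked epoch can only be confirmed one epoch later (once the second jump is observed), which is the source of the 30-s detection delay.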
After the screening process, the cleaned code-derived ionospheric delays Iρ are ready to be used as inputs for the smoothing filter. However, this application requires a cycle slip detection strategy, as the filter depends on carrier-derived measurements. Cycle slip detection is performed on the carrier-derived ionospheric measurements Iφ, combining the implementations used by Jung & Lee (2012) and Subirana et al. (2013).
Finally, a strategy to maximize data usage and optimize smoothing was developed to deal with smoothing filter reset when a cycle slip is detected. Instead of the traditional approach of using the raw code-derived measurement to reset the filter, this method replaces the code-derived measurement of the epoch in which the cycle slip was found with an interpolated value from raw code measurements over the last 150 s. A first-order polynomial fit is used to compute the interpolated value, followed by a residual analysis to detect possible remaining outliers. The output of the data preparation process results in smoothed code-derived ionospheric delays ready to be used as input for the test statistic (time-step gradients).
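A sketch of the filter-reset replacement described above is shown below; the function name, window length in epochs, and residual threshold are illustrative assumptions:

```python
import numpy as np

def reset_value_on_slip(raw_code, k, window=10, resid_limit_m=5.0):
    """On a detected cycle slip at epoch k, compute the replacement
    code-derived delay from a first-order polynomial fit of the raw code
    measurements over the preceding window, with a simple residual check
    for remaining outliers.

    Returns (replacement_value, residuals_ok).
    """
    lo = max(0, k - window)
    xs = np.arange(lo, k)
    ys = np.asarray(raw_code[lo:k])
    coef = np.polyfit(xs, ys, 1)          # first-order fit of the recent trend
    fitted = np.polyval(coef, xs)
    # residual analysis: large residuals suggest an undetected outlier remains
    residuals_ok = bool(np.max(np.abs(ys - fitted)) < resid_limit_m)
    return float(np.polyval(coef, k)), residuals_ok
```

The replacement value then restarts the smoothing filter in place of the raw (noisy) code measurement at the slip epoch, avoiding a noisy filter reset.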
4 DATASET
The monitoring stations used in this work are part of the Brazilian Network for Continuous Monitoring (RBMC), which is managed by the Brazilian Institute of Geography and Statistics (IBGE), and the CIGALA (Concept for Ionospheric Scintillation Mitigation for Professional GNSS in Latin America) network, operated by São Paulo State University (Unesp) in Presidente Prudente, Brazil, which is devoted to the study of ionospheric scintillation. A few reference stations deployed by the Institute of Airspace Control (ICEA) were also used. A complete description of the networks used in this experiment can be found in Paula et al. (2023). In all cases, the available data were collected by receivers with 15-s measurement intervals. From these networks, the dataset chosen for this work was based on the observations of very large spatial gradients used to develop the Brazilian GBAS ionospheric threat model (Lee et al., 2015; Mirus, 2015; Yoon, Lee et al., 2017). This dataset encompasses the period between 2011 and 2014 and includes 19 different reference stations. From the dataset, only gradients above 400 mm/km were selected to be processed and used as a validation reference for this work. This lower limit was chosen because it is near the upper limit of the CONUS ionospheric threat model. It is assumed that gradients below this limit can be managed at low latitudes by the CONUS PDGS technique and the CONUS threat model with a relatively small impact on availability. Thus, the proposed alerting methodology aims to mitigate gradients above this limit.
The most significant gradients used to develop the Brazilian ionospheric threat model were observed by pairs of stations separated by ~10 km and approximately aligned with the geomagnetic equator. At some locations, a few additional receivers were available within a 15- to 20-km radius. For this reason, the selected dataset forms clusters of stations that are used in this work as monitoring stations. The map in Figure 5 shows the locations and names (adopted in this work) of the reference station clusters. Further information about each cluster, such as the number of reference stations and the number of gradients selected from the dataset, is provided in Table 1.
The dataset is composed of RINEX observation files for the days and stations selected from the Brazilian ionospheric threat model results, which consist of a table listing each station-pair gradient's magnitude, time of occurrence, and detecting station pair, among other complementary information. Continuous measurements of the station-pair gradients over time are not available for this dataset.
5 ALERTING METHODOLOGY
The key part of the proposed strategy consists of establishing a set of criteria and thresholds for a test statistic to declare alerts based on measured data in real time. First, the test statistic chosen for the alerting algorithm is the 30-s time-step gradient computed in real time from code-derived ionospheric delays smoothed by carrier-derived ionospheric delays, as illustrated in Figure 3. Note that the IPP direction is not projected into the direction of the gradient, as the method is designed to be implemented in real time when the actual gradient direction is unknown.
With the test statistic, i.e., time-step gradients, computed accordingly, a set of rules must be established to decide when an alert must be issued. Based on the architecture depicted in Figure 1, the proposed methodology sends alerts to nearby GBAS or other supported DGNSS facilities if at least one monitoring station issues an alert. More specifically, if at least one monitoring station from the network issues an alert for a satellite, then this satellite is under alert for the supported system covered by that network. As previously mentioned, the dataset was used to simulate clusters of monitoring stations, as presented in Figure 5.
Using this test statistic, the first parameter of the alerting method is a threshold, which will represent the maximum tolerance for the test statistic before an alert is issued. This parameter is denoted as the alert threshold (AT). If the time-step gradient for a given satellite exceeds the pre-defined AT, then the facilities served by the monitoring stations will be under an alert state for that satellite.
Considering the rapid variability of time-step gradients, especially in the presence of ionospheric irregularities such as equatorial plasma bubbles, it is undesirable to use a single threshold as the only criterion for declaring or cancelling an alert. Instead, hysteresis is applied with a threshold lower than AT to determine when a particular satellite is no longer under alert. Furthermore, to provide more stability to alerts and to avoid frequent status changes over short time periods, the test statistic must remain under this lower threshold for a given period, after which the threatening condition is assumed to have passed for the satellite in question. At this point, recovery (alert cancellation) occurs. Specifically, the time-step gradients must remain below the recovery threshold (RT) for a given duration, the time to recover (TR).
Hence, the steps for declaring an alert via the proposed methodology with the 30-s time-step gradient as the test statistic are as follows:
a) If the test statistic for a satellite exceeds AT, the monitoring station issues an alert for that satellite.
b) The satellite will remain under alert until its test statistic continuously remains under RT for the period defined by TR.
Absence of data is also used as an alert criterion, as data gaps often occur during disturbed ionospheric periods. Thus, the steps above are performed only when the test statistic is available, i.e., when there is no data gap at the current epoch. Hence, data availability is a pre-condition for computing the test statistic. In the case of 30-s time-step gradients, continuous data for a duration equal to three times the value of Δt, i.e., 90 s, are required to compute the test statistic. If an absence of data is detected, meaning that there are not at least 90 s of continuous measurements of the smoothed code-derived ionospheric delay (Iρs), the satellite is immediately placed under alert until the test statistic can be computed again.
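The logic above, steps (a) and (b) plus the data-gap criterion, can be sketched as a per-satellite state machine. The class and variable names and the 30-s epoch spacing are illustrative assumptions rather than the authors' implementation:

```python
class SatelliteAlertMonitor:
    """Per-satellite alert logic: AT/RT hysteresis plus a data-gap criterion.

    Sketch only: thresholds in mm/km, TR in seconds, 30-s epochs assumed.
    """

    def __init__(self, at=400.0, rt=100.0, tr_s=600.0, epoch_s=30.0):
        self.at, self.rt, self.tr_s, self.epoch_s = at, rt, tr_s, epoch_s
        self.under_alert = False
        self.below_rt_s = 0.0  # time the statistic has stayed below RT

    def update(self, test_statistic):
        """Process one epoch; test_statistic is None when data are missing
        (i.e., no 90 s of continuous smoothed delay to form the statistic)."""
        if test_statistic is None:            # data gap -> immediate alert
            self.under_alert, self.below_rt_s = True, 0.0
            return self.under_alert
        if not self.under_alert:
            if test_statistic > self.at:      # step (a): AT exceeded
                self.under_alert = True
                self.below_rt_s = 0.0
        else:                                 # step (b): wait for recovery
            if test_statistic < self.rt:
                self.below_rt_s += self.epoch_s
                if self.below_rt_s >= self.tr_s:
                    self.under_alert, self.below_rt_s = False, 0.0
            else:
                self.below_rt_s = 0.0         # reset the recovery timer
        return self.under_alert
```

With AT = 400 mm/km, RT = 100 mm/km, and TR = 10 min, for instance, an alert is declared at the first statistic above 400 mm/km and cancelled only after 20 consecutive 30-s epochs below 100 mm/km.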
The logic of these steps for a single satellite at a single monitoring station is summarized in the flowchart shown in Figure 6. The green dashed box encompasses the steps performed in normal operation, i.e., when the satellite is being used by a DGNSS or GBAS facility if not otherwise excluded by internal monitors. When the satellite is excluded by this external warning system, i.e., when an alert is issued for that satellite, the processing flow follows the steps shown in the lower dashed box.
Note that the time-step estimates do not correspond exactly to the station-pair gradient estimates. Given the purpose of the alerting methodology, the time-step estimates are evaluated against the predefined thresholds: once AT is exceeded, the method monitors the test statistic only to check whether measurements remain stably below RT. In other words, after AT is exceeded, the magnitude of the test statistic has no further effect on the method beyond the check of whether it has fallen below RT.
For the detection methodology established and summarized in Figure 6, the parameters AT, RT, and TR for the application of the alerting method must be defined. Moreover, the performance of the methodology with this set of parameters (and possible variations) must be assessed in terms of its detection of previously known threatening gradients and its impact on the availability of a supported facility. The results from this assessment will be presented and discussed in Section 6.
The proposed method involves a set of parameters that must be assessed. For example, it is expected that lower AT values will be more sensitive in detecting ionospheric irregularities. However, high sensitivity can generate frequent alerts that may be unnecessary, excluding satellites and significantly degrading satellite availability. AT is considered the most critical parameter because it has a major impact on system performance. The RT and TR parameters of the alerting methodology will act together, directly impacting the time required for satellites to be readmitted to the augmentation solution. Higher RT values are more permissive and allow for more prominent gradient variations. The TR parameter directly affects the availability of the satellites, as it will determine the delay before possible satellite readmission. Thus, longer TRs will result in less availability in general.
Combinations of these three parameters were tested using the values listed in Table 2, which shows the values tested for each parameter in each column. Hence, this table should not be read row by row. Instead, all combinations of values for the AT (5), RT (2), and TR (3) parameters were tested, resulting in 30 different alert parameter combinations. The results of the alerting method were also analyzed, considering the periods of time in which each satellite is under an “alert” condition, if any.
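The 30-combination grid described above can be generated mechanically. The sketch below uses placeholder value lists rather than the actual Table 2 entries, which are not reproduced here:

```python
from itertools import product

def alert_parameter_grid(at_values, rt_values, tr_values):
    """Enumerate every AT-RT-TR combination to test.

    With five AT, two RT, and three TR candidates (as in Table 2),
    this yields 5 * 2 * 3 = 30 parameter combinations.
    """
    return [{"AT": at, "RT": rt, "TR": tr}
            for at, rt, tr in product(at_values, rt_values, tr_values)]
```

Each dictionary in the returned list then parameterizes one full run of the alerting method over the dataset.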
An example for a satellite arc in the Sao Jose dos Campos cluster (SJ) is presented in Figure 7, with the alert periods shown in different colors to indicate whether they were detected by the test statistic (pink) or data gap (orange) criterion. For this performance evaluation, all periods identified by the method are counted, regardless of the criterion by which they were detected. Figure 7 includes all analyzed elements, including a validated gradient from the Brazilian threat model at the time it was detected.
The results were analyzed by assessing whether known station-pair gradients could be detected by the method with the evaluated set of parameters. In the example of Figure 7, a gradient of 491 mm/km validated as contributing to the Brazilian ionospheric threat model was detected by the alerting parameters (described in the figure caption), as the periods for which the alerts are shown contain this gradient.
Additionally, for a GBAS facility sited at a reasonable distance from the alerting stations, the loss of availability was assessed for each set of parameters. Assuming that the affected satellite is excluded when an alert is issued, a loss of availability was considered as the exclusion of three or more simultaneous satellites. Hence, the next section will present all results, expressed in terms of detection performance and availability loss.
6 RESULTS AND DISCUSSION
6.1 Performance Metric
In a first approach, the performance was evaluated by rigorously considering the alert periods detected by the proposed method using data gaps and the test statistic with the combinations of parameters described above. However, in some cases, the proposed method triggered alerts a few minutes after the time of the threat model gradient. Such behavior suggests that the alerting method can detect gradients from this dataset, although a few minutes late (typically 5 or 10 min when the gradient does not fall within the original alert period) compared with the time of the threat model gradient observation. This delay occurs because the station-pair method estimates spatial gradients at an instant of time, whereas the time-step method estimates temporal gradients that require a certain time interval, based on the spatial displacement of the IPP. Therefore, IPP movement may require a longer period to generate more significant estimates than the instantaneous relative delay differences between pairs of stations, resulting in time-step alerts that arise after the occurrence of the validated station-pair gradient. This is another reason to consider tolerance times in the assessment of the method’s performance. The results presented in the next subsection verify that such late detection occurred in a minority of cases for this dataset.
The results without such tolerance periods are presented in Section 6.2. Based on this observation, subsequent detection performance evaluation assumed periods of tolerance for the alerts. Tolerance periods represent allowed delays between the occurrence of a significant ionospheric event identified in the threat model dataset and a corresponding alert. In operational terms, taking 5 or 10 min to detect and alert the user is most likely unacceptable. However, the actual response time is associated with the distance and distribution of the monitoring stations relative to the supported facility location. These aspects are not discussed in this paper because the available data are historical and were collected at limited times within the 11-year solar cycle. Investigations about the distance and distribution of the monitoring stations would require a specific campaign during the solar maximum, which is not feasible with the available data. Additionally, the reference dataset has limitations in terms of data quality and completeness because only a single gradient is identified and validated during a longer ionospheric event. Considering variable tolerance times in the performance analysis (as a sensitivity parameter) is an option to mitigate such limitations.
Hence, the periods of tolerance considered are within the interval of 5–20 min in steps of 5 min. Then, each set of parameters is evaluated by a score varying from 0 (gradient not detected) to 100 based on the classification presented in Table 3. The assessment algorithm expands the beginning of the alert periods by the tolerance time specified in Table 3 and determines whether each threat model gradient lies within a particular detection zone. This process is illustrated in Figure 8, where tas is the time when the alert starts. For example, if a given alert period starts 8 min after a known validated gradient from the dataset, then it lies within detection zone 2 (representing a tolerance period of 5–10 min). Hence, its score is 50. In a different example, if a given alert period starts 2 or 3 min after the validated gradient, it is classified in detection zone 1 and receives a score of 100.
The results are the same if the alert period starts before the gradient, as long as the alert is maintained until the gradient occurs. In other words, the scores are attributed by determining whether the gradient from the threat model database is within the alerted period, with the alert periods expanded (only toward earlier start times) by each tolerance time.
An example to illustrate the tolerance time, which is an expansion of the alert periods for the purposes of analysis, is provided in Figure 9. This figure shows the test statistic for two stations in the upper panel, the validated gradient from the threat model, and the smoothed ionospheric delay in the lower panel. The red shaded area is the alert period determined by the test statistic with the following parameters: AT: 400 mm/km, RT: 100 mm/km, TR: 10 min. Three candidate tolerance times are represented by black dashed lines, showing the time before the alert period starts.
First, the alert period is extended to start 5 min earlier, and it is determined whether this alert period contains the threat model gradient. If the gradient is not found, the alert period is extended to start 10 min earlier than the original period. If the gradient is not contained within this new period, then the period is extended again in 5-min steps up to a maximum of 20 min.
For the specific case illustrated in Figure 9, the threat model gradient is not within the 5-min tolerance, but within detection zone 2 indicated in Figure 8. Because the threat model gradient is within the 10-min tolerance period, the score for the indicated set of parameters is 50 for this gradient, according to the scoring system shown in Table 3.
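The zone-based scoring just described can be sketched as follows. Only the zone-1 (100) and zone-2 (50) scores are stated in the text, so the remaining zone scores below are illustrative placeholders for the actual Table 3 values:

```python
def detection_score(gradient_t_s, alert_start_s, alert_end_s,
                    zone_scores=(100, 50, 25, 10)):
    """Score one threat-model gradient against one alert period (seconds).

    A gradient inside the unexpanded alert period scores 100; otherwise the
    alert start is moved earlier in 5-min steps (detection zones 1-4, up to
    the 20-min maximum tolerance). Zone scores beyond the 5-10 min zone are
    placeholder assumptions, not the published Table 3 mapping.
    """
    if alert_start_s <= gradient_t_s <= alert_end_s:
        return zone_scores[0]  # contained in the original alert period
    for zone, score in enumerate(zone_scores, start=1):
        expanded_start = alert_start_s - zone * 300.0  # expand by 5-min steps
        if expanded_start <= gradient_t_s <= alert_end_s:
            return score
    return 0  # not detected within the 20-min maximum tolerance
```

For example, an alert starting 8 min (480 s) after the gradient falls in detection zone 2 and scores 50, while one starting 2 to 3 min after falls in zone 1 and scores 100, matching the two examples above.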
Thus, the final score $S_p$ for each set $p$ of parameters is the average of the scores $s_{p,g}$ of each gradient $g$ from the dataset containing $N_g$ threatening gradients:

$$S_p = \frac{1}{N_g} \sum_{g=1}^{N_g} s_{p,g} \tag{4}$$
This technique of computing scores was used to generate a single metric to evaluate the overall performance for each of the 30 combinations of the three parameters (AT, RT, and TR). With the same set of parameters applied to all clusters in the entire dataset, 72 gradients from the threat model were evaluated for each combination, resulting in 30 different metrics to be compared.
Note that the scores used to measure the performance of each parameter combination do not necessarily represent the percentage of gradients detected but instead provide a measure for the entire dataset that considers not only whether the gradient was successfully detected but also whether it was detected within a given tolerance time. In other words, the scores also consider how late the alert was issued after a known threatening event occurred (if an alert was not issued before the event). Here, higher scores are given for shorter tolerance periods because the sooner the threatening gradient is detected, the better the performance in terms of protecting integrity. Importantly, tolerance times are only used to assess the methodology on this specific dataset; these times are not part of the method itself.
6.2 Results
Station-pair and time-step gradient estimates have different characteristics. Additionally, the available dataset has limitations in terms of completeness of results and receiver robustness, such as the use of a single point in time to represent an ionospheric event. To mitigate such limitations, tolerance periods were considered with the scoring system presented in Table 3. However, before the scores were computed, the detection results were analyzed for each tolerance time to assess their effect for each combination of parameters.
In Figure 10(a), the performance is presented, showing the percentage of detected gradients for each tolerance time. Bins of the same color represent the same parameter combination, as shown in the figure legend. As indicated in the top left corner of the plot (upper panel), each combination has five bins representing the five tolerance periods, which vary from 0 to 20 min in 5-min steps.
The first aspect to note from Figure 10(a) is that all parameter combinations can achieve the same maximum percentage of detected gradients (97%) within a tolerance time varying from 5 to 20 min, with the required tolerance increasing from the more sensitive to the less sensitive parameter combinations (as explained in Section 6.1). The 3% of gradients not detected by any combination with any tolerance time correspond to two different events that will be discussed in Section 6.3.
The results from Figure 10(a) show that considering a tolerance time of at least 5 min encompasses more gradients, significantly improving the final performance. The clearest example of such behavior is seen in the last six groups of Figure 10(a), corresponding to the combinations with an AT of 400 mm/km. Not allowing for the tolerance time results in 76% of gradients being detected. With a tolerance of 5 min for the same parameters, 89% of gradients are detected, corresponding to an additional 9 gradients detected. When the tolerance time is increased to 10 min, the detection of gradients grows to 94%, and a detection of 97% is reached for a tolerance time of 20 min.
It is possible that an ionospheric event identified in the dataset may not be the same event that triggered the alerting method 10 or 20 min later. Nevertheless, notwithstanding the previously discussed limitations of this dataset, and setting aside the two undetected gradients discussed further in Section 6.3, in 100% of the cases in which the alerting method did not contain the threatening gradient within the resulting alert period, an alert was nonetheless triggered within a maximum of 20 min, depending on the alerting parameters used.
This behavior shows that the combination of data gaps and time-step gradients has sufficient sensitivity to detect threatening station-pair gradients, although with some delay. This delay, if considered significant, can be mitigated by distributing monitoring stations at longer distances from the supported facility and/or with a more favorable configuration. As previously stated, the distances and orientations of the monitoring stations relative to the supported DGNSS facilities could not be investigated in this paper; such investigations are left for future work.
The results obtained via the scoring system based on the tolerance times from Table 3 are presented in Figure 10(b), showing that the overall performance exceeded 90% for all parameter combinations. The performance of all parameter sets is quite similar, with a score of 97 for the best performance and a score of approximately 93 for the worst performance. In both panels of Figure 10, it is noted that the RT parameter showed no difference when varied by itself (with the other two parameters held constant).
Taking advantage of the results shown in Figure 10, an additional clarification can be provided regarding how the scoring system serves as a performance metric. The results in Figure 10(b) can also be obtained by averaging the data in Figure 10(a): each group of five bars of the same color in Figure 10(a) becomes a single bar of the same color in Figure 10(b). The best detection result of 97 in the scoring system indicates that the vast majority of gradients were detected within the alert period or with only some delay.
Choosing the optimal set of parameters for detection is not straightforward because the loss of availability resulting from a specific combination of parameters must also be taken into account. The availability assessment will be discussed in Section 6.4. Additionally, the available dataset is noticeably heterogeneous, with monitoring stations in significantly different regions with distinct ionospheric characteristics. Considering only performance as a criterion, based on Figure 10(b), the set of parameters with the best performance for the dataset as a whole that is also likely to result in higher availability (i.e., high AT and RT with short TR) is the combination 250-150-10, respectively, for AT-RT-TR.
Because of the heterogeneity of the dataset, the two clusters with the largest number of gradients were investigated. The Salvador cluster (SV) has 31 identified gradients distributed over 26 different days, and the Sao Jose dos Campos cluster (SJ) has 24 threat model gradients distributed over 10 days. The same scoring system was applied to the events of each cluster alone. Thus, the performance for each cluster also took into account the tolerance times to classify the detections and compute the performance scores as proposed. In this evaluation, more than one third of the parameter sets achieved scores of 100 for the SV cluster, with a worst score of 95. For this case, the least sensitive parameter set with the best performance is the combination of 350-150-10, respectively, for AT-RT-TR. In the SJ cluster, however, no combination achieved a score of 100, and the least sensitive combination with the best results is the set of 250-150-10 for AT-RT-TR, with a score of 96, agreeing with the results for the entire dataset. The SJ cluster could not achieve a score of 100 because one gradient was not detected by any set of parameters. This specific gradient will be briefly discussed in Section 6.3.
The different optimal thresholds obtained from the two presented clusters located far apart in regions with distinct ionospheric characteristics suggest that the ideal set of parameters must be analyzed and adapted for each region. However, if the same values must be used for different regions, a more conservative approach should be adopted with the best overall integrity results, even if the method is too sensitive (sacrificing availability) for some regions.
The performance results in Figure 10 show that the best detection results are obtained for more sensitive parameter sets. In particular, the best detection results with the largest AT are achieved for the combination of 250-150-15, respectively, for AT-RT-TR, with a final score of approximately 97. The performance slowly degrades until the least sensitive (higher AT) combination of 400-150-5 is reached, with a final score of approximately 93. However, these results do not determine the final selection. To choose the best set of parameters, one should not only consider the performance results but also consider the loss of availability caused by each combination, which will tend to favor less sensitive choices.
6.3 Undetected Gradients
For the best results shown in Figure 10, 70 out of 72 gradients were detected from the selected ionospheric threat model gradients within a maximum tolerance time of 10 min, indicating that two gradients were not detected by the proposed method for any combination of parameters. One of the undetected gradients was caused by a receiver fault not detected by the screening process. The fault occurred in the MC cluster when GPS PRN 27 was tracked around 2 am (UTC) on February 11, 2014. The second threat model gradient not detected by any combination of parameters or tolerance time was measured by stations SJCU and SJCE for GPS PRN 03 on February 15, 2014. In this case, the detection analysis was performed by using only these two stations, which originally measured a gradient of 444 mm/km. Figure 11(a) presents the test statistic and the slant ionospheric delay, indicating the occurrence time of this undetected gradient. The plots show that neither station from the pair would detect this gradient, as the test statistic did not reach the minimum AT of 200 mm/km over all considered combinations of alerting parameters.
Although the proposed method did not issue an alert for the threat model gradient of 444 mm/km measured by stations SJCU and SJCE, a different station pair in the threat model dataset detected the same event from a nearby location within the same SJ cluster, only 210 m to the northwest of SJCE. As a result, the threat model dataset includes a gradient of 423 mm/km detected by stations SJSP and SJCU, as depicted in Figure 11(b), whereas the previously considered gradient was not captured when only the SJCE and SJCU stations were used.
The alert period in Figure 11(b) was generated by considering only the SJSP and SJCU pair, which are the stations that measured the 423-mm/km gradient. However, data from the three stations of the cluster are presented, indicating the two different gradients from the threat model dataset. As clearly shown by the upper panel in Figure 11(b), an alert was issued by the SJSP station, which computed a test statistic of approximately 260 mm/km, exceeding AT for the defined threshold of 200 mm/km and triggering an alert for PRN 03.
This second undetected gradient analysis suggests that using only two monitoring stations around a GBAS facility, for example, is likely not sufficient to detect threatening gradients, but that at least three monitoring stations are required. Yet, 70 out of 72 gradients from the adopted dataset were detected with a combination of parameters when only two stations in a cluster were used. Increasing the number of monitoring stations around a facility can reduce the probability of missed detection. However, this also increases the likelihood of false alerts. In this context, false alerts refer to actual irregularities that do not pose a threat to a DGNSS or GBAS station. While false alerts can occur because of local non-ionospheric disturbances such as multipath effects, it is worth highlighting that 100% of the alerts were triggered exclusively during local nighttime when the 24-h dataset was processed. This result serves as evidence that false alerts originating from factors other than ionospheric irregularities are relatively rare when the proposed alerting methodology is applied.
The performance results from the dataset allow one to conclude that using the time-step method to obtain a test statistic along with data gaps as criteria to detect threatening gradients enables one to detect the vast majority of gradients from the Brazilian ionospheric threat model when using at least one set of alerting parameters and allowing for tolerance times up to 10 min with the most sensitive set of parameters.
6.4 Impact on Availability
Excluding satellites from use by a DGNSS service such as a GBAS because of alerts may cause loss of availability. It is expected that more sensitive parameters will result in the detection of more gradient events, threatening or not, which means that more satellites will be excluded for longer periods and availability will be degraded. Hence, the choice of the optimal set of alerting parameters is a trade-off between detection performance and availability loss.
This availability assessment was conducted considering all stations that form each cluster depicted in Figure 5, following the single alert station rule. In other words, the alerts from all stations of each cluster were merged to generate the alert periods for each satellite. Assuming that these satellites are excluded from broadcast correction messages during their alert periods, the availability analysis checked for periods of simultaneous alerts for multiple satellites.
For this analysis, it was established that the exclusion of three or more satellites simultaneously results in a total outage of the system. Although such an assumption might not match the results if an availability assessment were performed on an actual GBAS facility, this criterion was applied in this work considering the available threat model data. Thus, to analyze the outages caused by each set of alerting parameters, a useful metric can be obtained by summing the total time that three or more satellites were simultaneously excluded for each day in each cluster. Due to the nature of this metric, this kind of analysis only makes sense if applied to each cluster individually. This approach is different from the detection performance evaluation reported previously, which used a single metric for the entire dataset.
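Following the single-alert-station rule, alerts are first merged (logical OR) across the cluster's stations for each satellite; the outage metric then sums the epochs in which at least three merged satellite alerts overlap. The sketch below illustrates this under assumed data layouts and the 30-s epoch spacing:

```python
def daily_outage_seconds(alerts_by_station, epoch_s=30.0, min_excluded=3):
    """Total time with >= `min_excluded` satellites simultaneously under alert.

    alerts_by_station: dict station -> dict satellite -> list of per-epoch
    booleans (True = under alert), all lists aligned and of equal length.
    Illustrative sketch; the data layout and epoch spacing are assumptions.
    """
    # Single-alert-station rule: a satellite is under alert for the cluster
    # if ANY station in the cluster alerts it (logical OR across stations).
    merged = {}
    for station in alerts_by_station.values():
        for sat, flags in station.items():
            if sat not in merged:
                merged[sat] = list(flags)
            else:
                merged[sat] = [a or b for a, b in zip(merged[sat], flags)]

    n_epochs = len(next(iter(merged.values())))
    outage = 0.0
    for k in range(n_epochs):
        excluded = sum(flags[k] for flags in merged.values())
        if excluded >= min_excluded:  # >= 3 excluded satellites -> outage
            outage += epoch_s
    return outage
```

Summing this metric per day and per cluster reproduces the kind of daily outage totals analyzed below.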
Analyzing the outage time with the proposed metric is relatively straightforward when studying a single day. However, the ionospheric conditions along the days and clusters vary considerably. Hence, it is implausible that a single metric could adequately express the overall impact on availability. For this reason, outage analysis was performed for each day individually, regardless of how many ionospheric events were present in the day. Due to the large number of plots generated for this investigation, only the most relevant information is presented.
Figure 12 presents the days with the longest and shortest outage periods, respectively, in the dataset. In Figure 12(a), the system is totally unavailable for approximately 5.5 h in the worst case and approximately 3 h in the best case. In Figure 12(b), the outage periods vary across parameter sets from just over 2 h to zero (meaning 100% availability according to the adopted metric).
These results regarding the overall behavior of outages with respect to the parameter combinations are as expected, as discussed in Section 5. Shorter outage periods are obtained for higher AT values. A few cases of longer outages occurred for a higher AT with RT and TR held constant, but these were due to specific conditions. TR is the parameter with the most consistent behavior among the combinations because higher values naturally extend the alert periods. Figure 12 shows the worst (longest outages) and best (shortest outages) cases in the validation dataset; all remaining cases resulted in outages with significant variability depending on the day and location but always fell between these two limits in terms of overall behavior.
Because of the significant variability of outage times over the days and clusters of the dataset, the distribution of outages for each parameter combination was analyzed. The most noticeable conclusion from such an analysis is the large dispersion of outage durations. In general, the shortest outages were obtained from parameter sets with an AT of 400 mm/km, reaching minimum outage durations of less than half an hour. The combinations with an AT of 200 mm/km yielded minimum outages with a duration at least twice as long, but still with considerable variability depending on the day and location with respect to the geomagnetic latitude.
7 CONCLUSIONS
This paper has developed the concept of a system with a methodology for issuing alerts regarding threatening ionospheric gradients in advance to support differential augmentation systems in low-latitude regions. The main motivation for the work is the ionospheric issues with GBAS in Brazil that currently prevent its full operation. From the preceding analysis of the parameter combinations considered in this study, it was found that the proposed method detected 70 out of 72 threatening gradients from the Brazilian ionospheric threat model for GBAS for many reasonable values of these parameters. Furthermore, although the technique requires a non-obvious choice of parameters as thresholds and recovery times for the test statistic, the performance results proved that the vast majority of gradients were detected within a reasonable tolerance time.
The dataset used in this work has limitations in terms of quality and characteristics, e.g., non-continuous data and spurious measurements. In this paper, the concept of the proposed methodology is validated to the extent possible with this dataset. A follow-up to this work can be found in Marini-Pereira et al. (2023), in which the same method is applied with better results and a cross validation is performed using a more complete dataset.
Considering the adopted dataset, the best performance score was 97, with the great majority of threatening gradients detected and alerts issued within a tolerance time of 5 min, and all detected gradients captured within a maximum tolerance time of 10 min. Only two gradients from the dataset were not detected by the proposed technique. One of these cases was due to a receiver error not captured by the data screening process and the rules established to compute the test statistic. The other missed gradient was declared undetected when the criteria of the proposed method were applied; however, this gradient was detected by a neighboring station in the same cluster using a more sensitive combination of parameters.
An analysis of the two gradients that were not detected by the proposed method suggests three different actions for improving the alerting methodology. First, the data screening process could be reviewed and updated to make it more robust to the receiver errors apparent in these two cases. Second, more robust monitoring station receivers could be used that are less susceptible to faulty or inconsistent measurements, particularly during times of ionospheric disturbances. Third, three or more monitoring stations should be used in each cluster, as two of the three stations were not able to detect a threatening gradient in the SJ cluster. Hence, missed-detection issues can be mitigated by using additional and more widely distributed monitoring stations within each cluster.
Regarding the loss of availability imposed by the alerting method, the results varied significantly for different clusters and days. Consequently, determining the optimal trade-off between detection performance and availability is not a straightforward task. For a safety-critical system such as a GBAS, the detection of threatening gradients must be the priority, even if this results in conservatively issuing more alerts than strictly necessary. Although the worst outages exceeded 5 h during nighttime, this loss of availability under threatening ionospheric conditions is far better than declaring the system totally unavailable every night. The possibility of improvement in low-latitude nighttime GBAS availability is a major contribution of this paper.
HOW TO CITE THIS ARTICLE
Marini-Pereira, L., de Oliveira Moraes, A., Pullen, S. (2024). A simple and effective approach to real-time ionospheric monitoring at low latitudes and its applicability to GBAS. NAVIGATION, 71(1). https://doi.org/10.33012/navi.619
ACKNOWLEDGMENTS
This work was sponsored by the DECEA of the Brazilian Air Force, whom we gratefully acknowledge. A. O. Moraes is supported by CNPq award 309389/2021-6 and is grateful to PROAP for supporting this publication through CAPES (Finance Code 001). We thank IBGE for maintaining the RBMC network, which made this research possible. We also acknowledge INCT GNSS-NavAer grants CNPq 465648/2014-2 and FAPESP 2017/50115-0, which provided funding for maintaining some of the monitoring stations used in this work. The authors are grateful to Prof. J. F. Galera Monico for his comments and suggestions.
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.