Abstract
This paper presents estimates of surface reflectivity (SR) over time of global navigation satellite system (GNSS) signals scattered from a partially frozen lake surface. A portable ground-based GNSS reflectometry sensor system that collects both scattered global positioning system L1 signals and independent validation data (lidar and camera) was deployed on the Lake Michigan waterfront in Chicago at a time when the lake surface was a mixture of ice and water. Lidar surface scans were merged with camera images and mapped, along with estimated reflection zones. For three satellites whose reflection points scan across ice and water over time, the relative SR and mean red intensity (differentiating ice from water) of camera pixels inside the first Fresnel zone were computed and shown to be correlated. This system concept will be used in the future for more complete mapping of phase changes of snow and ice in the cryosphere.
Keywords: global navigation satellite system, global navigation satellite system reflectometry, signal-to-noise ratio, surface reflectivity
1 INTRODUCTION
Global navigation satellite system (GNSS) signals are designed for navigation; a reflected GNSS signal is often eliminated as a nuisance term known as multipath. Yet the global coverage and widespread availability of GNSS signals make these signals a potentially exceptional source for remotely sensing Earth’s environment. GNSS reflectometry (GNSS-R) focuses on studying the characteristics of a reflected signal to infer properties of the surface from which it reflects.
GNSS-R is a form of bistatic radar whose concept dates back to Martin-Neira (1993); GNSS-R uses scatterometry, i.e., measurements of scattered signals, to determine the properties of the reflecting surface. For a review of the overall field, see Larson (2019). GNSS-R has been explored with ground (Fabra et al., 2010; Larson et al., 2013), airborne (Belmonte Rivas et al., 2010), and space-borne platforms such as TechDemoSat-1 and Cyclone GNSS (CYGNSS) (Carreno-Luengo et al., 2020; Foti et al., 2015; Gleason & Ruf, 2015; Unwin et al., 2017; Zavorotny et al., 2014). Recently, space-based detection of surface freeze/thaw, based on changes in the surface reflectivity (SR) of soil, has been demonstrated and mapped with CYGNSS (Carreno-Luengo & Ruf, 2022a, 2022b) to monitor year-to-year seasonal changes at low-latitude, high-altitude sites, a coverage dictated by CYGNSS's low-inclination orbit.
Several existing ground-based reflectometry technologies for cryospheric sensing are based on using a single upward-facing antenna for GNSS interferometric reflectometry (GNSS-IR). GNSS-IR methods leverage existing geodetic sites opportunistically, using variations in the signal-to-noise ratio (SNR) of low-elevation satellites that arise as the direct and reflected signals constructively and destructively interfere when both are received at the antenna (Larson & Small, 2016). Single-receiver GNSS-IR can provide altimetry to the surface level and measure snow accumulation or ablation (Siegfried et al., 2017). These techniques are useful for multi-season or multi-annual records of changes to the snow and ice surface. However, because surface changes may occur on daily or even hourly timescales, and for more challenging terrain, an antenna dedicated to collecting the reflected signal may provide the needed spatial and temporal resolution.
Two-antenna reflectometry systems such as the GPS open loop differential real-time receiver (GOLD-RTR) have also been developed and deployed for sea ice (Yun et al., 2015) and dry snow altimetry based on polarization measurements (Fabra et al., 2010, 2012). For a review of methods pertaining to sea ice, see Yan & Huang (2019). A two-antenna system for sensing wet vs. dry snow has recently been tested in a high-alpine environment using commercially available receivers (Koch et al., 2019). Some receiver designs are software-based for open-source use (Junered et al., 2016), again using interferometry for altimetry (Ribo et al., 2017).
In addition to detecting wet vs. dry snow, it is also important to be able to monitor meltwater on the surface of snow/ice, as well as detect phase transitions between snow, ice, and water. MacAyeal (2018) reviewed seismologic signatures of snow and ice layers and the importance of meltwater-saturated snow in seismological responses.
Existing ice and water monitoring systems include optical and thermal band imaging such as that of the moderate-resolution imaging spectroradiometer (MODIS) on the Terra and Aqua satellites, which has a temporal resolution of one to two days but a pixel size of 250 m. Other higher-resolution optical satellite sensors are available for imaging, e.g., Landsat 8 (30 m), Sentinel-2 (10 m), and WorldView (2 m), but their temporal resolutions are much lower than that of MODIS. These optical satellite sensors share the further limitation that they can only be used in daytime and in cloud-free conditions. Some satellite missions operate in bands unaffected by the presence of clouds, e.g., Sentinel-1 at C-band (5.5-cm wavelength), with a resolution of approximately 5 m, but the revisit interval is greater than one day, which may be too infrequent for snow/ice/water phase change monitoring (Miles et al., 2017).
A dedicated GNSS-R antenna and receiver have the potential to be a data source complementary to satellite monitoring, with the advantages of lower cost, higher spatial and temporal resolution, and continuous operation over a region day and night, unimpeded by cloud cover or precipitation. In this work, we are motivated to examine surface intensity variations in a reflected signal not due to interferometric effects, but due to the surface itself. Chaput et al. (2018) and Komjathy et al. (2000) proposed that dielectric variations would result in scattered intensity variations. Wiehl & Legresy (2003) developed theory and simulations of different snow surface and subsurface dielectric scattering for air- and space-borne antennas.
We describe and demonstrate a ground-based downward-facing GNSS-R sensor system including lidar and camera and compare global positioning system (GPS) reflected power with collocated camera images in field experiments sampling an ice and water surface. The technique described in this paper is distinctive in that it (1) demonstrates the possibility of monitoring surface phase variations from ice to freshwater, which, unlike sea ice and seawater, have similar salinities and thus a smaller contrast in dielectric constant, and (2) uses a reflected signal from a single dedicated reflection-receiving (i.e., downward-facing) ground-based antenna. This application may become more important as the world's glacier ice starts to melt more rapidly under a warming climate.
Section 2 reviews the concept of SR as it pertains to measuring the SNR. Section 3 describes the sensor system and data collection and provides an overview of the GNSS signal processing used in this work. Details of the data processing are given in the Appendices. Section 4 presents our results, Section 5 discusses these results, and Section 6 summarizes the paper.
2 SR BACKGROUND
A GNSS signal reflecting from a surface of ice or water is dominated by specular reflection. The specular point is the point at which the angle of incidence equals the angle of reflection, according to the principle of least time. Most of the energy is scattered from an area around the specular point known as the first Fresnel zone (FZ) (Beckmann & Spizzichino, 1987).
Figure 1 illustrates the specular point and FZ geometry. The position x0 in the propagation plane and the semi-major and semi-minor axes a, b of the first FZ are functions of the height h1 of the receiving antenna at R0 above the scatterer, the elevation angle el, and the wavelength λ of the signal (Larson & Nievinski, 2013):

$$x_0 = \frac{h_1}{\tan(el)}, \qquad b = \sqrt{\frac{\lambda h_1}{\sin(el)} + \left(\frac{\lambda}{2\sin(el)}\right)^{2}}, \qquad a = \frac{b}{\sin(el)} \tag{1}$$
These equations assume a flat Earth, which is valid for a ground-based antenna system.
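For concreteness, Equation (1) can be evaluated as in the following minimal Python sketch (illustrative only; the function and variable names are ours, not those of the processing software used in this work):

```python
import numpy as np

def fresnel_zone(h1, el_deg, lam=0.1903):
    """First Fresnel zone for a flat surface (Equation (1)).

    h1     : antenna height above the scatterer (m)
    el_deg : satellite elevation angle (degrees)
    lam    : carrier wavelength (m); 0.1903 m for GPS L1

    Returns (x0, a, b): the specular-point offset in the propagation
    plane and the semi-major/semi-minor axes of the first Fresnel zone.
    """
    el = np.radians(el_deg)
    x0 = h1 / np.tan(el)                      # horizontal distance to specular point
    b = np.sqrt(lam * h1 / np.sin(el) + (lam / (2 * np.sin(el)))**2)
    a = b / np.sin(el)                        # ellipse elongates at low elevation
    return x0, a, b

# Example: a 2-m antenna height and a 45-degree elevation satellite
x0, a, b = fresnel_zone(2.0, 45.0)
print(x0, a, b, np.pi * a * b)  # FZ area ~2.5 m^2, consistent with Section 4
```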
The final signal received by an antenna directed downward is the superposition of scattered GNSS reflections from the FZ. For a downward-facing reflection antenna, the scattered power can be computed for all pairs of time delay m and frequency f of the replica signal and is represented as a function of two variables. The correlation power Ŝ is given as follows (Gleason, 2006):
$$\hat{S}(m, f) \approx P_s\, \Lambda^{2}\!\big((m - m')\,T_s\big)\, \mathrm{sinc}^{2}\!\big((f - f_D)\,T_{coh}\big) \tag{2}$$
where Ps is the digitized received signal power, f is the frequency of the incoming signal, m′ is the unknown code delay integer in units of sampling period Ts, fD is the unknown Doppler frequency shift due to relative motion between the satellite and the receiver, and Tcoh is the coherent correlation interval. In this work, the GPS L1 signal is used, for which Tcoh = 1 ms. The function Λ for the GPS code is called the correlation triangle function because it has the shape of a triangular peak in delay space:
$$\Lambda(m - m') = \mathrm{E}\!\left[\frac{1}{N}\sum_{n=0}^{N-1} x[n - m']\,\bar{x}[n - m]\right] \approx \begin{cases} 1 - \dfrac{|m - m'|\,T_s}{T_c}, & |m - m'|\,T_s \le T_c \\[4pt] 0, & \text{otherwise} \end{cases} \tag{3}$$
In this equation, x is the coarse/acquisition (C/A) code for one satellite, x̄ is the C/A code replica, and N = Tcoh/Ts is the number of samples (Garrison et al., 1998; Gleason et al., 2005). The triangle function is the expected value of the correlation of the C/A code with the C/A replica.
The SNR of a received GNSS signal is as follows:

$$\mathrm{SNR} = \frac{P_{max}}{P_n} \tag{4}$$

$$P_{max} = \max_{m,\, f_D}\ \hat{S}(m, f_D) \tag{5}$$

$$P_n = \frac{1}{|\mathcal{M}|} \sum_{m \in \mathcal{M}} \hat{S}(m, \hat{f}_D), \qquad \mathcal{M} = \left\{ m : |m - \hat{m}|\, T_s > T_c / 2 \right\} \tag{6}$$

where Pmax is the correlation peak value over all time and Doppler shifts m, fD and occurs at the values (m̂, f̂D). The noise power Pn is the mean value over time shifts m of the correlation curve at f̂D, excluding the points within one-half chip of the peak, where one chip has duration Tc.
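As a minimal illustrative sketch (not the SDR implementation used in this work), the peak search and noise-floor estimate of Equations (4)–(6) can be computed from a correlation power matrix as follows:

```python
import numpy as np

def snr_from_correlation(S, Ts, Tc=1 / 1.023e6):
    """SNR per Equations (4)-(6): peak power over mean noise power.

    S  : 2-D correlation power, rows = Doppler bins, cols = code delays
    Ts : sampling period (s)
    Tc : C/A chip duration (s), approximately 977.5 ns
    """
    i_fd, i_m = np.unravel_index(np.argmax(S), S.shape)  # peak at (f_D hat, m hat)
    Pmax = S[i_fd, i_m]
    row = S[i_fd, :]                                     # correlation curve at f_D hat
    m = np.arange(row.size)
    mask = np.abs(m - i_m) * Ts > Tc / 2                 # exclude +/- half chip of peak
    Pn = row[mask].mean()
    return Pmax / Pn
```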
SR is related to the received SNR (Chew et al., 2018) by the bistatic radar equation. In decibels, this relationship can be expressed as follows:
$$SR = \mathrm{SNR} - (P_t + G_t) - G_r - 20\log_{10}\lambda + 20\log_{10}(R_{ts} + R_{rs}) + 20\log_{10}(4\pi) \tag{7}$$
where Pt, Gt are the power and gain of the transmitter, respectively, Gr is the gain at the receiving end of the signal, λ is the wavelength of the transmitted signal, and Rts, Rrs are the distances between the transmitter and scatterer and the receiver and scatterer, respectively (see Figure 1). For a given GNSS satellite, the SNR can be computed according to Equation (4). The transmitter power Pt and gain Gt and carrier wavelength λ are known, the distances Rts, Rrs can be estimated given the positions of the antenna and satellite relative to the scatterer, and the receiver-end gain Gr can be estimated from the antenna gain pattern. SR is computed in this manner in this work.
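For illustration, Equation (7) can be evaluated as in the following sketch; it assumes the coherent specular form of the bistatic radar equation with the (Rts + Rrs) path-loss term, and all names are ours:

```python
import numpy as np

def surface_reflectivity_db(snr_db, pt_db, gt_db, gr_db, lam, Rts, Rrs):
    """Surface reflectivity in dB solved from Equation (7).

    snr_db        : measured SNR (dB), per Equation (4)
    pt_db, gt_db  : transmitter power and gain (dB), assumed known
    gr_db         : receiver-side gain toward the specular point (dB)
    lam           : carrier wavelength (m)
    Rts, Rrs      : transmitter-scatterer and receiver-scatterer ranges (m)
    """
    return (snr_db - pt_db - gt_db - gr_db
            - 20 * np.log10(lam)
            + 20 * np.log10(Rts + Rrs)
            + 20 * np.log10(4 * np.pi))
```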
3 METHODS
The objectives of this investigation are to compute the SR based on GPS L1 signals and to assess the correlation between the SR computed from Equation (7) and the surface type. The “true” surface type will be determined by collocated optical cameras that image the surface as a satellite’s specular point and FZ shift over time, ideally moving from scattering off water to scattering off ice. A camera pixel’s red-green-blue (RGB) normalized intensity triplet will tend to be close to (1,1,1) for ice but (0,1,1) for water. Therefore, we use the camera pixel red value as an indicator of the presence of ice versus water on the surface. By averaging the red pixel value over the GNSS signal’s first FZ, we correlate the relative GNSS SR with this mean red value (MRV).
To estimate the SR of ice and water using GNSS-R in this way and to compare this reflectivity with optical mean red intensities, we developed a portable sensor system for monitoring lake surface water phase changes (Parvizi, 2020). The system has been used for 11 data campaigns from 2017 to 2020 (Parvizi et al., 2018), with most occurring in the winter season (Parvizi, 2020). Fieldwork was conducted at Lake Michigan in Chicago, Illinois. In campaigns 1–9, the lake surface conditions were either entirely water or entirely ice. In campaigns 10 and 11, the lake surface had a heterogeneous ice and water surface. In this study, we will show the results of data campaign 11. In the next subsections, we describe the sensor system, data collection conditions, and post-processing by which the GNSS-R SR and optical MRV are computed.
3.1 Sensor System and Data Collection
Figure 2 presents a connection schematic of the sensors used in the system. A lidar and three cameras, each spanning different fields of view (FOVs), sense and image the surface from which the GNSS signal is reflected. The lidar produces a point cloud of ranges and intensities at which the transmitted light is backscattered, providing an estimate of h1. The lidar wavelength used reflects off ice but does not reflect from water. The cameras are internet protocol (IP) security cameras that produce RGB color images of their FOVs, from which the MRV within the GPS FZ is computed for comparison with ŜR. These cameras and lidar are connected to a network switch, which sends their data packets to a laptop to be stored on an external storage drive. A weather station displays air temperature and wind speed and direction, but this information is not used in this work. Because of the camera FOV overlap, the center camera data are also not used here.
Collocated with these sensors are two GNSS antennas. A right-hand circularly polarized (RHCP) antenna is used to collect the direct signal. A left-hand circularly polarized (LHCP) antenna oriented downwards obtains the reflected GNSS signals from the surface. Two universal software radio peripherals (USRPs) act as front ends, each of which uses a GPS-disciplined oscillator (GPSDO) connected to the RHCP antenna as a timing reference. One USRP receives the direct GNSS signals from the RHCP antenna for sensor system positioning. The other USRP collects front-end samples from the LHCP reflection antenna. The SNR and SR are computed from the reflection antenna signal. The direct signal is not used in this work, as the multipath aspect of the environment requires further development of a software-defined receiver (SDR) for positioning, which is ongoing but beyond the scope of this work.
The GNSS antennas are mounted on a tripod approximately 2 m above the ground next to the lake surface, on the end of a boom that extends horizontally in a direction b̂ about three-fourths of a meter from the tripod mast. The lidar and cameras are mounted on a shorter platform below the GNSS antenna boom, extending in the same direction b̂. Figure A1 provides sensor system diagrams defining the relevant geometry used in post-processing.
Figure 3(a) shows a photograph of the sensor system on-site and the lake surface conditions during the data campaign on 21 February 2020. The tripod-mounted sensors were placed at the shore of a harbor with empty docks because this was the only part of the lake with any ice accumulation, likely due to the stillness of the water; the harbor is largely protected from the wind. North of the site (to the left of the photograph), there were underwater spouts that were likely outputting liquid water, as a bubbling source was visible and no ice formed for several meters around the source. There is a clear ridge in the left part of the photograph, beyond which the subsurface heated water did not prevent surface ice formation. The surface was entirely water on the left side of Figure 3(a), with a layer of ice to the center and right. There are visible breaks in the ice and thinner parts of ice in which blue water is visible between sections of white ice. The tripod was mounted at the edge of the water, with the boom extending out over the lake surface.
Figure 3(b) shows a Google Map satellite image (not from the date of data collection) of the field site in Chicago on the shore of Lake Michigan, annotated to show the heading angle μ of the boom direction b̂. The photograph was taken standing to the west of the tripod and looking east, approximately in the direction of b̂. Because of the presence of the docks, we limited our analysis in the post-processing to satellites that are to the east of the site and at a sufficiently high elevation that their direct path is below the antenna's horizon and their first FZs are imaged. We expect these criteria to have the added benefit of limiting multipath effects by admitting only satellites whose reflections are well clear of the docks.
The system configuration and orientation details for data campaign 11 are provided in Table 1. The lidar elevation ψ1, reflection antenna elevation angle κ, and relative heading angles μ2, μ3 and elevation angles ψ2, ψ3 of the left and right cameras were measured with a digital angle measuring tool on site after the sensor suite was set up. The boom heading angle μ was manually recorded upon site setup as well. The angles listed in the table are defined in Appendix A. These angles are used in post-processing for rotations and projections.
When the GPS signal from a given satellite, identified by its pseudorandom number (PRN), arrives at an antenna, it is amplified, filtered, down-converted, and digitized in the GPS front end. These processes are implemented with USRPs. The USRP Ettus N210 output data format is 16-bit for in-phase and 16-bit for quadrature-phase components of the complex signal. The specific configurations for the field work, including the in-line gain between the antennas and USRPs and the sampling frequency fs = 2fIF chosen, are shown in Table 1. The filtered, down-converted, digitized samples are stored during each field campaign and returned to the lab.
The sensors are controlled by a laptop, and data are collected simultaneously by script and sent to external storage devices. To prevent buffer overruns from the USRPs, we collected data in eight 20-min parts. This work uses Parts 1 and 3–6 collected on the test day. The Part 2 camera data were corrupted and are thus unavailable as a truth reference.
3.2 Post-Processing in the Lab
Figure 4 illustrates the processing workflow for computing and comparing the GPS SR and the optical MRV in the lab after data have been collected in the field. From the USRP connected to the reflection antenna, GPS L1 signal power is accumulated to give Ŝ by coherent integration for 1 ms, summed incoherently for 1 s, and then averaged over 1 min, as described in Appendix B. Then, the SNR is estimated based on Equations (4)–(6). To obtain ŜR using Equation (7), we need the satellite and specular point positions relative to the reflection antenna so that Rts, Rrs, and Gr can be computed.
While the direct antenna signal was intended for computing the receiver position, the data from the direct antenna had less gain than the reflection antenna; thus, the standard SDR acquisition did not acquire enough satellites for positioning and is not used in this study. Modifying the SDR to successfully use direct GPS signals for positioning is ongoing but not essential to this investigation. Instead, we use an approximate receiver position and a GPS almanac to estimate each satellite's position in the sky. An approximate sensor position for the reflection antenna origin R0, as listed in Table 1, is used based on knowledge of the site and referenced to a Google Map (Parvizi, 2020). A Yuma almanac from the U.S. Coast Guard archives is then used to obtain the satellite positions relative to the reflection antenna origin over time:
$$\vec{r}_{sv} = \begin{bmatrix} E & N & U \end{bmatrix}^{T} \tag{8}$$

$$az = \arctan\!\left(\frac{E}{N}\right) \tag{9}$$

$$el = \arctan\!\left(\frac{U}{\sqrt{E^2 + N^2}}\right) \tag{10}$$
where E, N, U are the component distances of the satellite position relative to the reflection antenna’s position in east-north-up (ENU) coordinates.
We assume the elevation angle el in Equation (10) to be the same as the elevation angle with respect to the lake surface, as drawn in Figure 1. Given el and an estimate of the surface height h1 relative to the reflection antenna position R0, the specular point position can be represented in the propagation plane coordinate system as follows:
$$\vec{r}_{sp} = \begin{bmatrix} x_0 \\ -h_1 \end{bmatrix} = \begin{bmatrix} h_1/\tan(el) \\ -h_1 \end{bmatrix} \tag{11}$$
where h1 is defined in Equation (1) as positive when the antenna is above the scatterer. With the lidar point cloud, the average height h1 of the lake surface from the reflection antenna is computed as described in Appendix C.
Once the specular point position and satellite position are found relative to the reflection antenna, one can calculate Rrs and Rts as follows:
$$R_{rs} = \left\| \vec{r}_{sp} \right\| = \sqrt{x_0^{2} + h_1^{2}} \tag{12}$$

$$R_{ts} = \left\| \vec{r}_{sv} - \vec{r}_{sp} \right\| \tag{13}$$
after performing a rotation by the azimuth angle az to convert to ENU coordinates.
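The chain from almanac-derived ENU components to the ranges Rts and Rrs (Equations (8)–(13)) is short enough to sketch directly; this is illustrative code with our own names, under the paper's flat-surface assumption:

```python
import numpy as np

def reflection_geometry(enu_sv, h1):
    """Specular geometry from a satellite's ENU position (Equations (8)-(13)).

    enu_sv : [E, N, U] of the satellite relative to the reflection antenna (m)
    h1     : antenna height above the lake surface (m)
    """
    enu_sv = np.asarray(enu_sv, dtype=float)
    E, N, U = enu_sv
    az = np.arctan2(E, N)                      # Equation (9), east of north
    el = np.arctan2(U, np.hypot(E, N))         # Equation (10)
    x0 = h1 / np.tan(el)                       # specular offset, Equation (1)
    sp = np.array([x0 * np.sin(az),            # Equation (11) rotated to ENU
                   x0 * np.cos(az), -h1])
    Rrs = np.linalg.norm(sp)                   # Equation (12)
    Rts = np.linalg.norm(enu_sv - sp)          # Equation (13)
    return az, el, sp, Rrs, Rts
```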
To compute the antenna gain component of Gr, the angle ϕsp between the specular point viewing direction and the body-zenith direction ŷR of the reflection antenna is defined in the upper right of Figure A1:
$$\phi_{sp} = \arccos\!\left( \frac{\vec{r}_{sp} \cdot \hat{y}_R}{\left\| \vec{r}_{sp} \right\|} \right) \tag{14}$$
The antenna-zenith angle ϕsp is used to estimate the gain Gr by interpolating, relative to the 42-dB peak gain, between the angles listed in Table A1. Having found Rrs, Rts, and Gr, we compute the SR using Equation (7). The data are collected in the field in 20-min segments, each of which resets the automatic gain control of the USRP to an unknown value. Thus, we will show ΔSR, the SR relative to the SR value at the first timestamp of the 20-min data segment during which that SR was collected.
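The gain lookup might be implemented as in the following sketch; the table entries below are placeholders, not the actual values of Table A1:

```python
import numpy as np

# Hypothetical antenna-pattern table in the spirit of Table A1
# (angle off antenna zenith in degrees vs. gain relative to peak, dB).
phi_table  = np.array([0.0, 30.0, 60.0, 90.0])   # placeholder angles
gain_table = np.array([0.0, -1.0, -4.5, -10.0])  # placeholder relative gains

def receiver_gain_db(phi_sp_deg, peak_gain_db=42.0):
    """Linearly interpolate the reflection-antenna gain toward the
    specular direction, referenced to the nominal peak gain."""
    return peak_gain_db + np.interp(phi_sp_deg, phi_table, gain_table)

# Relative SR within a 20-min segment (differencing removes the
# unknown automatic-gain-control offset): dSR = SR - SR[0]
```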
We also use the FZ parameters x0, a, b from Equation (1) to determine the camera pixels lying within the FZ and compute their MRV as a measure of the true surface type. The locus of points on the perimeter of the FZ can be written in parametric form in the propagation plane coordinate system as follows:
$$\begin{bmatrix} x(\theta) \\ y(\theta) \end{bmatrix} = \begin{bmatrix} a\cos\theta \\ b\sin\theta \end{bmatrix} \tag{15}$$
where θ varies from 0 to 2π and a, b are given by Equation (1). This set of points can be translated relative to R0 as follows:
$$\vec{r}_{FZ}(\theta) = \begin{bmatrix} x_0 + a\cos\theta \\ b\sin\theta \\ -h_1 \end{bmatrix} \tag{16}$$
which can then be expressed in the ENU system using a rotation matrix by the azimuth angle az.
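A sketch of Equations (15)–(16) and the final azimuth rotation, under our assumed frame conventions (the propagation-plane x-axis points toward the satellite azimuth):

```python
import numpy as np

def fz_perimeter_enu(x0, a, b, h1, az, n_pts=360):
    """Perimeter of the first Fresnel zone (Equations (15)-(16))
    expressed in ENU coordinates about the reflection antenna."""
    theta = np.linspace(0, 2 * np.pi, n_pts, endpoint=False)
    x = x0 + a * np.cos(theta)            # along the propagation plane
    y = b * np.sin(theta)                 # across the propagation plane
    e = x * np.sin(az) + y * np.cos(az)   # rotate by azimuth to ENU
    n = x * np.cos(az) - y * np.sin(az)
    u = np.full_like(theta, -h1)
    return np.column_stack([e, n, u])
```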
Sensor fusion is performed between the camera and lidar (lower left of Figure 4) to map the camera images in three dimensions and determine which pixel positions lie within the FZ perimeter defined by Equation (16). Using the lidar point cloud measurements of the surface, the i-th camera's pixels are backward-projected onto a three-dimensional (3D) map of the surface. Backward projection uses camera-intrinsic properties, such as focal length, along with range information obtained from the lidar regarding the features captured by the camera. The position of each pixel of the camera image is then transformed to its 3D position in space relative to the reflection antenna origin R0. Details on the lidar–camera surface reconstruction are given in Appendix D.
Simultaneous images from multiple cameras are combined together by identifying overlapping regions in the FOV, normalizing the RGB histograms based on the moments of the histograms in the overlapping region, and cropping redundant pixels within the overlapping region as described in Appendix D. For a given satellite, the MRV of the pixels whose positions are inside the FZ is calculated.
Finally, ΔSR and MRV are compared if 1) the satellite itself is below the horizon of the reflection (i.e., downward-tilted) antenna and 2) the entire FZ area lies within the FOV of the cameras. These criteria reduce the likelihood of the GNSS-R SNR including any energy from the direct line of sight (LOS) and also ensure complete sampling of the FZ via camera imaging. The first requirement means that the antenna-zenith angle ϕsv of the direct satellite LOS to the reflection antenna must satisfy ϕsv > 90°, reducing possible direct LOS received power. The angle ϕsv can be computed by substituting the satellite position vector for the specular point position vector in Equation (14). The second requirement is met if the horizontal area spanned by the imaged pixels covers at least 99% of the FZ area, in which case the FZ imaging is considered complete. The MRVs at times meeting both of these criteria are compared with the relative SR, ΔSR, for the associated PRN to assess the effectiveness of the GPS SR in varying with surface conditions.
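These two eligibility checks and the MRV average can be sketched as follows (illustrative only; the point-in-polygon test via matplotlib is our choice, not necessarily that of the original software):

```python
import numpy as np
from matplotlib.path import Path

def eligible(phi_sv_deg, imaged_fz_area, fz_area):
    """Criteria from the text: direct LOS below the tilted antenna's
    horizon, and at least 99% of the FZ area imaged by the cameras."""
    return (phi_sv_deg > 90.0) and (imaged_fz_area >= 0.99 * fz_area)

def mean_red_value(pixels_en, pixels_rgb, fz_perimeter_en):
    """MRV of the pixels whose east-north positions fall inside the FZ
    perimeter; pixels_rgb is assumed normalized to [0, 1]."""
    inside = Path(fz_perimeter_en).contains_points(pixels_en)
    return pixels_rgb[inside, 0].mean()   # channel 0 = red
```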
4 RESULTS
The surface of Lake Michigan was a heterogeneous mix of ice and water during the data campaign, as visible at the left of Figure 3. The satellites in the sky at the start and end of the data set shown are presented in the sky plots of Figure 5. Blue circles indicate satellites in the sky. Azimuth angles are indicated around the perimeter, and at the center of the circle is the zenith of the direct antenna. Because the boom is oriented to the east (Figure 3), reflected signals of the satellites in the eastern part of the sky are anticipated to be on the lake surface. In particular, PRN 16 begins at a high elevation in the sky, and the elevation for PRN 26 begins at a high enough level that the direct signal is below the antenna’s horizon. During the course of data collection, PRN 27 will rise into the FOV of the cameras and thus is also considered. PRN 8 also rises but, as we will show, does not fully lie within the cameras’ FOVs before the end of data collection; hence, PRN 8 is not used.
Figure 6 shows the combined images of cameras 2 (imaging the northern section) and 3 (southern) projected to east–north coordinates relative to the reflection antenna at 17:58, 18:43, 19:25, and 20:10 universal time (UT) (approximate start or end times of 20-min data collection segments). All pixels from camera 2 are plotted (arbitrarily chosen as the primary camera), as well as the pixels from camera 3 that do not overlap with camera 2. The RGB values of camera 3 have been color-normalized based on the overlapping region to better match camera 2. Although the docks are in the cameras' FOVs, they are well beyond the plotted region in which the first FZs lie. For this reason, we expect possible multipath effects to be reduced.
In Figure 6, the green cross indicates the reflection antenna origin R0. Orange dots mark the specular points. The FZs of PRNs 8, 16, 26, and 27 are shown in Figure 6, shaded in light transparent red. For the SRs and MRVs that are eligible for comparison (ϕsv > 90°, full FZ in the FOV of the cameras), the FZ area is typically 2–3 m2. For example, the geometry of PRN 26 in Figure 6(a) meets the ϕsv and fully imaged FZ criteria, but the geometry of PRN 26 in Figure 6(b) does not meet these criteria.
The surface conditions are largely static over this time period, but slight variations are visible, particularly at the boundary between the ice and water to the north. There are also solar illumination variations, most easily seen with the shadow of the tripod in the north over the water shifting eastward with time.
By tracking one PRN at a time over these plots, we can see that the FZ of PRN 16, even though it reaches the highest elevation at 17:58 UT, is partly out of the camera’s FOV in Figure 6(a) but has shifted fully into the FOV by 18:43 UT, following along the ice–water boundary. The specular point of PRN 26 begins fully in the FOV at 17:58 UT and shifts eastward, grazing the ice–water boundary. The FZ of PRN 27 overlaps the camera’s FOV from 18:43 UT until the end at 20:11 UT, remaining primarily over the icy part of the surface. PRN 8 rises but is never fully in the FOV of the cameras.
PRNs 16, 26, and 27 meet our criteria for a valid comparison of camera MRV to the GNSS-derived SR. Unfortunately, none of the specular points or FZs land fully on the water. If they had, we would have a very clear MRV signal, spanning nearly the entire range of 0 to 1, against which to test the correlation of GPS SR. We may expect, and will demonstrate, that our investigation covers a narrower red value range, which is a more stringent test of the GNSS-R SR as a detector of surface ice or water than intended.
The relative SR and MRV for PRN 16 are displayed over time in Figure 7(a). The UT hour and minute are shown on the horizontal axis. The left vertical axis in blue is the SR in dB relative to the SR at the initial time of each 20-min data segment (i.e., at 18:43 UT and 19:06 UT). The vertical axis on the right, in red, is the MRV during this same time period. The start of this time interval corresponds to the image map in Figure 6(b), and the end of the time interval corresponds to Figure 6(c). At 18:43 UT, the specular point of PRN 16 is on the ice, and the first FZ spans the ice–water boundary over this time interval, with more than one third of the ellipse spanning water. By 19:25 UT, the ellipse has moved along the ice–water boundary and has grown, such that a smaller proportion of the FZ samples the water. For this reason, the MRV increases over time, as the FZ covers proportionately more ice than water. The relative SR varies within 1 dB, showing a less clear trend than the MRV in the first 20-min segment from 18:43 to 19:02 UT, with a possibly increasing trend from 19:06 to 19:25 UT. Figure 7(b) plots the relative SR versus MRV for PRN 16. The correlation coefficient between the two quantities is ρ = 0.74, indicating a moderately strong correlation over approximately 40 min.
Figure 8 presents the SR (left axis, in blue) and MRV (right axis, in red) for PRN 26 over time. The start of this 20-min time interval corresponds to Figure 6(a). The specular point is on the ice near the ice–water boundary, such that the FZ area lies on approximately 3/4 ice and 1/4 water. The end time is not shown on a map; by the time imaged in Figure 6(b), even though the entire FZ is imaged, the antenna-zenith angle of the direct LOS is less than 90°. However, between the times corresponding to Figures 6(a) and (b), the specular point has moved approximately 1.5 m eastward and approximately 1 m northward, with the FZ area covering a larger proportion of ice than water. The FZ area grows larger over time. We anticipate that the MRV will increase as the FZ covers proportionally more ice. There are no camera data for part 2 of the data collection from approximately 18:20 to 18:40 UT; thus, comparison results cannot be shown. The relative SR for PRN 26 increases with time in this case, as well as the MRV for pixels inside the FZ for PRN 26. Figure 8(b) plots the SR versus MRV. For PRN 26, the correlation coefficient is ρ = 0.84, i.e., the SR of PRN 26 and the MRV are highly correlated over 20 min of data collection.
Figure 9(a) shows the relative SR and MRV over time for four 20-min segments of data collection for PRN 27. The SR at the start of each segment is set as the baseline, at 18:43, 19:07, 19:29, and 19:52 UT. The region of the surface scanned by PRN 27 is imaged in Figures 6(b)–(d). During this time, we can see that the specular point of PRN 27 scans across the icy portion of the surface, but that the ice is divided up by numerous cracks of meltwater within the FZ area. Consequently, we do not expect any obvious visual trend in MRV. Accordingly, there is little obvious trend seen in the MRV in Figure 9(a). The relative SR plotted over time shows a slight decreasing trend over each 20-min segment. Figure 9(b) plots ΔSR versus MRV for PRN 27. In this case, the correlation coefficient is low at ρ = 0.20, and the GNSS reflection is sampling the surface, which is a random mix of ice and water over time. This random scattering appears to have an effect on the correlation between the relative SR and MRV.
5 DISCUSSION
In Figures 7 and 8, consecutive changes in the relative SR are on the order of 0.1 dB, corresponding to a 2% increase in the SR from one time point to the next. A change of only a couple percent in relative SR may seem small, but the change is of similar order to those of the MRV, which can be seen in Figure 8 to be on the order of 0.005, or approximately 1% of the typical MRV in these plots of 0.5. Thus, changes in SR are comparable in magnitude to the changes in MRV. We expect that these small variations are due to the large overlap minute-to-minute of the sampled region of the surface. For this data set, we are hampered in testing the dynamic range of our GNSS-R SR calculation by the fact that our geometry did not yield a clean scan from the fully ice region over to full water. Even so, there is a moderately strong correlation between the measurements, as shown in Figures 7 and 8.
In the PRN 27 results, the scattering may be impacted by surface roughness due to cracks between the ice and water zones. The correlation of SR with MRV for PRN 27 may be poor because of ~1-cm height variations on the surface as ice forms, floats, and scrapes against other pieces of surface ice, all within the FZ.
In all results, it is possible that multipath, i.e., scattering from places other than the FZ, has some effect. However, the majority of the built environment is to the west of the antenna, which has a ground plane and is tilted downward with its zenith direction toward the east. Hence, we expect multipath from scatterers to the west to be greatly reduced. To the east, there are docks, as visible in the photograph in Figure 3(a). Multipath due to low-elevation satellites is not a concern because such satellites have been eliminated by the requirement that the direct LOS be more than 90° from the antenna-body zenith (ϕsv > 90°). Given the antenna elevation angle of κ = −45°, this requirement roughly eliminates satellites below 45° elevation. Multipath from the high-elevation PRNs 16, 26, and 27 due to the docks might be a contributing factor in reducing their correlation, particularly for PRN 27, which is at an azimuth of approximately 150° at the start of data collection.
We believe that positioning uncertainties have a minimal effect on our findings for the following reasons. First, uncertainties in satellite position would produce a negligible change in the azimuth and elevation angles because of their distance. Uncertainties in receiver positioning would affect the distances Rts, Rrs. However, these uncertainties would bias the SR and are effectively eliminated by differencing the first SR to produce a relative reflectivity. Uncertainties in receiver positioning also affect the position of the specular point. In this case, we can observe from the contextual photographs in Figure 3 that the tripod is sited between Docks B and C labeled in the Google Map. The lidar scans show returns from the docks that correspond to the location pinpointed on the Google Map.
The geophysical interpretation of our study is that surface ice has a different reflectivity than surface water at non-optical bands such as GPS L1. This difference may be due to material reflectivity or surface roughness. The reflectivity could be a stand-in, at resolutions lower than optical wavelengths (but also not subject to weather or cloud cover), for regions of ice on the surface. More in-depth geophysical modeling is beyond the scope of this work, which has focused on sensor integration and comparison. To our knowledge, other researchers have not examined the relationship between GNSS reflection and optical imagery at meter resolution before.
Because we have used the relative SR, there are no absolute values of SR that we can directly map to surface type, because the SNR generally depends on the hardware configuration and even, as in our case, the presence of automatic gain control. Yet, it could be possible to develop or calibrate a baseline or to shut off automatic gain control for future systems. Because of the averaging process over the FZ, there is no unique mapping of pixels that will yield a specific MRV. However, for a scatterer in a phase of matter that is relatively constant over time, it might be possible to combine successive overlapping FZs rather than using single snapshots, to better distinguish the ice versus water surface regions. While it would have been ideal if one or more of these specular points had been definitively on the water for the purposes of showing a clear correlation, the sensitivity of our SR calculation underwent a more stringent test, by being analyzed over a narrower range of surface conditions.
Because of the geometry of our test, the PRN 16 and 26 satellites are at high elevation and begin setting to the east-northeast during the times they are considered. Consequently, their FZ areas expand over time. We believe that this increase has little effect on the MRV, which is also increasing for PRNs 16 and 26 over time. The MRV is an average over pixels, and pixels are spread over larger distances farther from the cameras. High-elevation satellites correspond to pixels that are closer to the camera, resulting in an averaging that is more spatially uniform. We also suspect that for PRNs 16 and 26, it is largely coincidental that the FZ area increases as the SR increases. In fact, for PRN 16 from 18:43 to 19:02 UT, the SR is only weakly correlated with the FZ area.
6 CONCLUSION
In this work, we correlated the GNSS-R SR with camera images to assess whether GNSS-derived SR can distinguish ice from water, as measured by the MRV of optical images. Results from a Lake Michigan data campaign held in Chicago, Illinois, in February 2020 are presented in this work. During this field work, the lake surface consisted of mixed ice and water.
To compute the SR, 1-ms coherent integration of the correlation power was incoherently summed over each second and then averaged for 1 min. The first FZ for PRNs whose specular points scan across the lake surface in the FOV of the cameras was estimated. Pixels inside the FZs of three PRNs were evaluated and selected to estimate the MRV.
The SR results of PRNs 16 and 26 are well correlated with the camera RGB red value in a FZ that spans a clear water/ice boundary. The SR of PRN 27 shows poor correlation with the camera RGB red values over 40 min. We attribute this low correlation to the surface scanned by the FZ of PRN 27 being heterogeneously ice and water within the FZ. Based on this study, we conclude that the SR of GNSS reflected signals can likely be used to distinguish ice from water.
Our ultimate aim is to expand this work to be able to test the detection of surface melt conditions on glacier ice and snow. In the future, we will study additional data sets that include conditions of both surface ice and water, including those in a glaciated terrain in Antarctica, and additional correlations will be computed. It is possible that the use of the more generalized bistatic radar equation derived by Voronovich & Zavorotny (2018) could help improve our model and the use of SNR, possibly improving the poor correlation observed for PRN 27 and also being applicable for a wider variety of surface roughness. Additionally, the SR and MRV can be evaluated for each second rather than over 1 min to test for increased sensitivity to spatial variations or to provide uncertainty estimates. The GNSS direct signal will be used to improve the estimate of the FZ and sensor system location.
AUTHOR CONTRIBUTIONS
Parvizi was responsible for processing the GNSS-R signal, coordinating the fieldwork, and leading the initial drafts of the manuscript. Khan was responsible for the initial lidar and camera processing and associated manuscript sections. Banwell was responsible for review and the ice detection optical methodology. As PI of the investigation, Datta-Barua was responsible for overseeing and coordinating all aspects of the effort, result analysis, and paper revisions. All authors helped with the field work.
CONFLICT OF INTEREST
The authors declare no potential conflicts of interest.
HOW TO CITE THIS ARTICLE
Parvizi, R., Khan, S., Banwell, A., & Datta-Barua, S. (2024). Surface reflectivity variations of global navigation satellite system signals from a mixed ice and water surface. NAVIGATION, 71(1). https://doi.org/10.33012/navi.614
Acknowledgments
The authors thank Li Pan, David Stuart, and Aurora López Rubio for on-site help during the data campaigns and Kristine Larson and Douglas MacAyeal for their advice and support. They also thank the Chicago Park District for providing the Research Access Permit for all field tests. This work was funded by the National Aeronautics and Space Administration award NNX15AV01G, National Science Foundation (NSF) award 1940483 to the Illinois Institute of Technology, and NSF award 1940473 to the University of Colorado Boulder.
APPENDIX A: SENSOR SYSTEM DIAGRAMS
Figure A1 shows the geometry of the mounting system for the sensors. A boom with direction b̂ is mounted horizontally on a tripod. The mounting system is oriented relative to the local ENU coordinate axes, with the boom rotated about the up axis by heading angle μ to give a boom-aligned coordinate system (see Figure A1(a)). Figure A1(b) shows a side view of the tripod and the boom on which the GNSS antennas D and R are mounted. Below these, the lidar L and cameras C1, C2, C3 are mounted. Rotating by the lidar elevation angle ψ1 gives a lidar body-fixed coordinate system, one axis of which is the axis of symmetry of the lidar. The central camera axis has the same elevation angle ψ1. The elevation angles of the left and right cameras are ψ2, ψ3 (not shown). Figure A1(c) shows that the relative headings of the cameras C2, C3 with respect to b̂ are μ2, μ3, respectively.
The reflection antenna R has its axis of symmetry ŷR at an elevation angle κ from the boom direction b̂, defining an antenna-fixed coordinate system in which ŷR is the antenna's zenith (Figure A1(d)). The angle between the antenna zenith ŷR and the position vector of the specular point of the i-th PRN relative to the antenna origin R0 is ϕsp.
The position vectors between the relevant origins are measured in meters: Equation (A1) gives the position of the lidar origin L0 relative to the reflection antenna origin R0, and Equations (A2)–(A3) give the positions of the lidar origin relative to the camera origins.
The data shown in Table A1 were used to linearly interpolate an estimate of Gr, the gain of the Antcom antenna used, for the signal arriving from angle ϕsp.
APPENDIX B: GNSS-R SIGNAL PROCESSING
In this section, we introduce the SDR-based GNSS signal processing to compute the correlation power Ŝ(m, fD) for use in Equations (4)–(6). The USRP Ettus N210 output data format is 16-bit for in-phase and 16-bit for quadrature-phase components of the complex signal. The first 2 s of the data are truncated to remove the USRP automatic gain control overshoot and settling (Datta-Barua et al., 2016; Parvizi, 2020; Parvizi et al., 2017). A Chebyshev type II low-pass filter with a stop-band frequency of fC = 0.45fs/2, where fs is the user-selected sampling frequency, is applied. The signal is converted from the USRP complex form to a real signal. The down-converted signal has the following form:
$$s_{IF}[n] = \sqrt{P_s}\, D[n - m']\, x[n - m'] \cos\!\big(2\pi (f_{IF} + f_D)\, n T_s + \theta\big) + w_{IF}[n] \tag{B1}$$
where n is the sample number, Ts = 1/fs is the sampling period corresponding to the sampling frequency fs, Ps is the digitized received power, D is the navigation data bit, m′Ts is the unknown code delay, x is the C/A code for one satellite, fD is the unknown Doppler frequency shift due to relative motion between the satellite and the receiver, θ is the unknown phase offset, and wIF represents multipath and thermal noise. Each PRN code consists of 1023 rectangular pulse chips of duration Tc, and the full sequence of chips repeats every TC/A = 1 ms.
The down-converted signal sIF is multiplied by a cosine and a sine wave to produce the in-phase and quadrature-phase samples, represented in complex form as follows:
$$s[n] = s_{IF}[n] \cos(2\pi f_{IF}\, n T_s) - j\, s_{IF}[n] \sin(2\pi f_{IF}\, n T_s) \tag{B2}$$
To detect satellites, the USRP output is correlated with a replica PRN code associated with a given satellite:
$$Y(m, f) = \frac{1}{N} \sum_{n=0}^{N-1} s[n]\, \bar{x}[n - m]\, e^{-j 2\pi f\, n T_s} \tag{B3}$$
The number N of samples integrated is related to the coherent integration time Tcoh, which, in this work, is one repetition of the full PRN code duration TC/A:
$$N = \frac{T_{coh}}{T_s} = \frac{T_{C/A}}{T_s} \tag{B4}$$
With incoherent integration over a number Ninc > 1 of segments, each of which is coherently accumulated, the square magnitude of the correlation power is accumulated. Incoherent integration permits a longer integration time, but the absolute value and squaring operations also increase the noise:
$$\hat{S}(m, f) = \sum_{k=1}^{N_{inc}} \left| Y_k(m, f) \right|^{2} \tag{B5}$$
In this work, we incoherently sum for 1 s, i.e., Ninc = 1000.
In practice, a fast Fourier transform is used to compute the correlation in a time-efficient manner. Because the relative motion between the satellite and receiver (and, when reflected, the scatterer as well) introduces a Doppler shift, the correlation process in Equation (B3) is repeated for frequency bins that are ±10 kHz about the intermediate frequency fIF = fs/2.
The result of correlation is a matrix in which the rows are the search frequency fD and the columns are time delays m. This correlation output Ŝ is used in Equations (5)–(6).
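A minimal sketch of this FFT-based search over delay and Doppler with incoherent accumulation (Equations (B3)–(B5)); this is illustrative, not the SDR code used in this work:

```python
import numpy as np

def acquire(s, code, fs, f_bins, Ninc=1000):
    """Correlation power S(m, f) via FFT-based circular correlation.

    s      : complex samples after mixing, length >= Ninc * N
    code   : C/A code replica resampled to N samples (one 1-ms period)
    fs     : sampling frequency (Hz)
    f_bins : trial frequencies (Hz) spanning +/- 10 kHz about the IF
    Ninc   : number of 1-ms segments summed incoherently (Equation (B5))
    """
    N = code.size
    code_fft = np.conj(np.fft.fft(code))
    n = np.arange(N)
    S = np.zeros((f_bins.size, N))
    for k in range(Ninc):                                 # incoherent accumulation
        seg = s[k * N:(k + 1) * N]
        for i, f in enumerate(f_bins):
            x = seg * np.exp(-2j * np.pi * f * n / fs)    # frequency wipe-off
            Y = np.fft.ifft(np.fft.fft(x) * code_fft) / N # Equation (B3)
            S[i, :] += np.abs(Y)**2                       # Equation (B5)
    return S
```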
APPENDIX C: POINT CLOUD PROCESSING
In this section, the point cloud, a set of points from which lidar reflections are received, is transformed from lidar body coordinates to a local ENU system whose origin is at the reflection antenna point R0 (see Figure A1) in order to estimate the height h1.
The Velodyne VLP-16 is a 16-channel lidar with a measurement range of 100 m accurate to ±3 cm, a vertical FOV of ±15° about the lidar body axis, and a horizontal FOV of 360° about the axis. The VLP-16 can measure the range and intensity of approximately 300,000 points/s in single-return mode, corresponding to 600 frames per minute. In this work, we use the range data provided as positions of points P relative to the lidar L in lidar body coordinates.
A Python script starts and ends data collection from all instruments simultaneously. The open source packet analyzer Wireshark records packets of data transferred from the ethernet switch of the auxiliary instruments (i.e., lidar) in packet capture (PCAP) format. Using Wireshark, we filter and export the data coming from only the lidar IP address.
We then rotate each point P of the point cloud from the lidar L body coordinates to the boom frame, translate the results to place the origin at the reflection antenna, and crop the data to the desired FOV. For each position of point P relative to the lidar origin L0, we rotate to the boom frame B, as shown in Figure A1, using the rotation matrix ${}^{B}R_{L}$, parameterized by ψ1:
$${}^{B}R_{L} = \begin{bmatrix} \cos\psi_1 & 0 & \sin\psi_1 \\ 0 & 1 & 0 \\ -\sin\psi_1 & 0 & \cos\psi_1 \end{bmatrix} \tag{C1}$$

$$\vec{r}_P^{\,B} = {}^{B}R_{L}\, \vec{r}_P^{\,L} \tag{C2}$$
After the data are rotated, we translate the origin to the reflection antenna to give the point cloud positions of points P relative to the reflection antenna R0 as follows:
$$\vec{r}_P^{\,R_0} = \vec{r}_P^{\,B} + \vec{r}_{L_0}^{\,R_0} \tag{C3}$$
where $\vec{r}_{L_0}^{\,R_0}$ is defined in Equation (A1). One more rotation ${}^{enu}R_{B}$ by the heading angle μ, which is measured with respect to north and defined as positive for angles east of north, gives the following:
$${}^{enu}R_{B} = \begin{bmatrix} \sin\mu & -\cos\mu & 0 \\ \cos\mu & \sin\mu & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{C4}$$

$$\vec{r}_P^{\,enu} = {}^{enu}R_{B}\, \vec{r}_P^{\,R_0} \tag{C5}$$
Because the lake water level is below the dock (Figure 3), we crop out all reflection points that are above ground level (i.e., we keep points that are more than 2 m below the reflection antenna). Moreover, the heading angle μ = 70°, as listed in Table 1, is oriented primarily eastward; thus, we crop the point cloud to keep points within 10 m south or north of R0 and within 10 m to the east:
$$PC = \left\{ P : U_P < -2\ \mathrm{m},\ \left|N_P\right| \le 10\ \mathrm{m},\ 0 \le E_P \le 10\ \mathrm{m} \right\} \tag{C6}$$
To compute the FZ positions on the lake, the height h1 of the reflection antenna from the water surface is averaged over the cropped point cloud PC:
$$h_1 = -\frac{1}{\left|PC\right|} \sum_{P \in PC} U_P \tag{C7}$$
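The crop and average of Equations (C6)–(C7) reduce to a few lines, sketched here with our own names:

```python
import numpy as np

def lake_height(points_enu):
    """Average antenna height above the lake surface (Equations (C6)-(C7)).

    points_enu : (K, 3) lidar point cloud in ENU coordinates about the
                 reflection antenna origin R0; crop thresholds follow the text.
    """
    E, N, U = points_enu.T
    keep = (U < -2.0) & (np.abs(N) <= 10.0) & (E >= 0.0) & (E <= 10.0)
    return -U[keep].mean()   # h1 > 0 when the antenna is above the surface
```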
APPENDIX D: FORWARD PROJECTION AND MERGING OF CAMERA IMAGES
A camera image is a forward projection of a point P, at some position relative to the optical center C0i of the i-th camera in a 3D environment, onto a 2D pixel plane with directions (u, v). Assuming that the camera has zero skew in this work (Khan, 2020), the backward projection to reconstruct the positions of points P in 3D space is given as follows:
$$\vec{r}_P^{\,C_i} = \begin{bmatrix} x_C \\ y_C \\ z_C \end{bmatrix} = z_C \begin{bmatrix} f k_u & 0 & x_0 \\ 0 & f k_v & y_0 \\ 0 & 0 & 1 \end{bmatrix}^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \tag{D1}$$
where f is the focal length of the camera, ku and kv are the pixel densities in the u and v directions of the image plane, respectively, and x0, y0 give the position of the principal point (the point of intersection of the line passing through the optical center of the camera along the optical axis onto the sensor plane) on the image plane, relative to the top left corner of the image plane.
Before use in the field, the Camera Calibrator App of the MATLAB Image Processing Toolbox was used to measure f, ku, kv of the calibration matrix and image distortion coefficients of each camera based on measurements of a checkerboard pattern in the lab.
When the field data are post-processed, the distortion is first removed, and the camera image is cropped to the original resolution to provide 2D coordinates (u, v) of each pixel on the image plane used in Equation (D1).
The lidar point cloud is used to provide depth information zC associated with the corresponding camera pixels. To estimate zC, a lidar point P is translated to the camera origin as follows:
$$\vec{r}_P^{\,C_{0i}} = \vec{r}_P^{\,L_0} + \vec{r}_{L_0}^{\,C_{0i}} \tag{D2}$$
where $\vec{r}_P^{\,C_{0i}}$ is the position of a point cloud point P with respect to the i-th camera origin, $\vec{r}_P^{\,L_0}$ is the position vector of the point with respect to the lidar origin, and $\vec{r}_{L_0}^{\,C_{0i}}$ is the position vector of the lidar origin with respect to that camera origin, given in Equations (A2)–(A3).
The position of each point in the point cloud is rotated to the camera coordinate system based on the angles μi and ψi from Table 1. The lidar points are cropped to include only those in the FOV of the camera.
These lidar points are forward-projected onto the image plane as (uL, vL) based on the inverse of Equation (D1). The zC component of each lidar point is linearly interpolated to obtain a depth zC associated with each pixel in the camera image. The interpolated zC is used on the right-hand side of Equation (D1) for backward-projecting the camera image onto 3D space. In the case of camera pixels without nearby lidar points (e.g., from water), a pseudolidar point is constructed using the mean height of the true point cloud lying within the camera FOV. These pseudolidar points are used for zC if there is no lidar return within 90 pixels of a given camera pixel.
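The depth interpolation and backward projection of Equation (D1) can be sketched as follows (illustrative; the SciPy interpolator is our choice, and the pseudolidar fallback mirrors the text):

```python
import numpy as np
from scipy.interpolate import griddata

def backproject_pixels(u, v, K, lidar_uv, lidar_zc, z_fallback):
    """Backward-project camera pixels to 3-D (Equation (D1)) using lidar
    depths interpolated on the image plane.

    u, v       : pixel coordinates (1-D arrays, distortion removed)
    K          : 3x3 camera calibration matrix
    lidar_uv   : (M, 2) forward-projected lidar points on the image plane
    lidar_zc   : (M,) depth component of those lidar points
    z_fallback : pseudolidar depth for pixels with no nearby lidar return
    """
    zc = griddata(lidar_uv, lidar_zc, np.column_stack([u, v]), method='linear')
    zc = np.where(np.isnan(zc), z_fallback, zc)       # e.g., over open water
    pix_h = np.column_stack([u, v, np.ones_like(u)])  # homogeneous pixels
    rays = np.linalg.solve(K, pix_h.T)                # K^{-1} [u v 1]^T
    return (rays * zc).T                              # scale each ray by its depth
```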
This step yields pixel positions expressed in the camera Ci coordinate system. To rotate these pixel positions to the ENU system, each point is first rotated from the camera frame C to the boom frame B through a sequence of rotations with an intermediate frame A:
$${}^{A}R_{C_i} = \begin{bmatrix} \cos\psi_i & 0 & \sin\psi_i \\ 0 & 1 & 0 \\ -\sin\psi_i & 0 & \cos\psi_i \end{bmatrix} \tag{D3}$$

$${}^{B}R_{A} = \begin{bmatrix} \cos\mu_i & -\sin\mu_i & 0 \\ \sin\mu_i & \cos\mu_i & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{D4}$$

$$\vec{r}_P^{\,B} = {}^{B}R_{A}\, {}^{A}R_{C_i}\, \vec{r}_P^{\,C_i} \tag{D5}$$
Rotating from the boom coordinate frame B to the ENU coordinate frame requires rotation by the boom heading angle μ according to Equation (C5). The pixel point cloud is then translated to the reflected signal antenna origin R0, which corresponds to the antenna that is collecting the reflected GPS signal off the lake surface:
$$\vec{r}_P^{\,R_0} = \vec{r}_P^{\,C_{0i}} + \vec{r}_{C_{0i}}^{\,R_0} \tag{D6}$$
where $\vec{r}_{C_{0i}}^{\,R_0}$ is obtained by summing the relevant position vectors in Equations (A1)–(A3).
To combine optical RGB values from multiple cameras, we use their region of overlap, determined as the pixel positions of camera i that lie within the convex hull of the pixel positions of camera j. If there is no overlap in the FOV, a method such as that described by Reinhard et al. (2001) may be used. For each time point, we compute the sample means and standard deviations of the R, G, and B histograms of each camera within the region of overlap. We designate one camera as the primary (camera 2 in this work). We normalize the entirety of each secondary camera's histograms to more closely match those of the primary camera by scaling each channel (R, G, and B) with the mean and standard deviation of the primary camera's histograms from the overlapping region. After the entire image from the secondary camera has been color-matched, the secondary camera's pixels within the overlapping region are cropped, and only the primary camera's pixel RGB values are retained there. The east and north coordinates of the cropped and color-normalized pixels may be plotted on an east–north map with their associated RGB values (as in Figure 6) and compared with the perimeter of the first FZ from Equation (16).
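A sketch of the per-channel moment matching described above (illustrative, with our own names; images are float arrays in [0, 1]):

```python
import numpy as np

def match_colors(secondary_img, sec_overlap, pri_overlap):
    """Normalize a secondary camera's RGB channels to the primary camera
    using the means and standard deviations of the overlapping region."""
    out = secondary_img.astype(float).copy()
    for c in range(3):                          # R, G, B channels
        mu_s, sd_s = sec_overlap[..., c].mean(), sec_overlap[..., c].std()
        mu_p, sd_p = pri_overlap[..., c].mean(), pri_overlap[..., c].std()
        out[..., c] = (out[..., c] - mu_s) * (sd_p / sd_s) + mu_p
    return np.clip(out, 0.0, 1.0)
```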
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.