Abstract
This paper describes the design, analysis, and experimental evaluation of a spherical grid-based localization algorithm that leverages quantization theory to bound navigation uncertainty. This algorithm integrates data from light detection and ranging (lidar) and inertial measurement units in an iterative extended Kalman filter to estimate the position and orientation of a moving vehicle. An analytical bound on the vehicle’s state estimation error is derived, which accounts for both random measurement noise and the loss of localization information caused by gridding. The performance of the proposed approach is analyzed and compared with that of a brute-force spherical grid-based method and a landmark-based method in an indoor environment, whereas an outdoor experiment verifies the practicality of the method in a realistic driving scenario.
1 INTRODUCTION
In this paper, we develop, implement, and evaluate a new point cloud (PC)-based localization algorithm integrating light detection and ranging (lidar) and inertial measurement units (IMUs) using a signal quantizer and an iterative extended Kalman filter (IEKF). This paper builds upon a prior conference paper (Hassani & Joerger, 2021) in which we introduced a spherical grid-based localization (SGL) method to estimate position and orientation (or pose) corrections between a measured three-dimensional (3D) PC and a map-based predicted PC. In this prior paper, corrections were obtained by brute-force (BF) searching over candidate vehicle poses. In contrast, in this research, we achieve SGL using an IEKF, which enables tight integration of the lidar PC with IMU measurements and prediction of pose estimation uncertainty. We develop an automated approach that locally adjusts the size of the grid cells to minimize the information loss due to gridding. In addition, we derive an analytical bound on the spherical gridding error and evaluate its impact on pose estimation error. These methods are tested using experimental data collected in both indoor and outdoor environments.
This research is intended for autonomous ground, air, and space vehicle navigation applications in which localization uncertainty quantification is critical. In our prior work (Hassani et al., 2018, 2019), we developed and evaluated a landmark-based localization (LL) method that required two intermediary pre-processing steps: feature extraction (FE) and data association (DA). FE finds recognizable landmarks in the lidar PC, and DA assigns these extracted landmarks to mapped ones. FE and DA are computationally expensive and prone to faults in cluttered environments (Hassani et al., 2019; Joerger & Hassani, 2020).
Lidar localization methods that do not require FE and DA include PC-matching algorithms, such as the widely implemented iterative closest point (ICP) algorithm (Censi, 2008; Diosi & Kleeman, 2007; Rusinkiewicz & Levoy, 2001). ICP can be computationally expensive and does not enable reliable uncertainty prediction. Lidar PCs can be efficiently processed using grid or voxel representations (voxels are 3D representations of pixels, i.e., Cartesian-coordinate cubic grid cells). Biber and Strasser (2003) derived the normal distribution transform for position estimation and simultaneous localization and mapping (SLAM). Their method partitions the two-dimensional (2D) scan into regular square cells, forms a probability density map, and uses overlapping cells to minimize discretization errors. Shen et al. (2022) introduced voxel-based SLAM for multirobot systems in Global Positioning System (GPS)-denied environments. This approach uses point-to-voxel and voxel-to-voxel matching for lidar-inertial odometry and state estimation. Park et al. (2019) designed a curved-voxel clustering (CVC) to partition 3D lidar PCs using spherical grids. CVC outperforms the radius-based nearest neighbor approach in multiple scenarios. Li et al. (2024) developed a feature- and voxel-based lidar SLAM framework using voxel matching for state estimation. However, none of these four grid-based approaches provide the means to predict localization errors. Recently, McDermott and Rife (2024b) derived an iterative closest ellipsoidal transform for voxel-based lidar scan matching, which addresses navigation solution errors and offers covariance prediction for state error estimation. The voxel design relies on spherical grids from shadow-mitigation algorithms with the voxel size configured to detect large physical structures.
Our own prior work (Hassani & Joerger, 2021) focused on developing an SGL method. The spherical grid fits the spherical-coordinate mode of measurement used by lidars. The number of grid cells can be selected to reduce the computational burden of thousands of high-update-rate data samples in the lidar PC. The grid in our prior work (Hassani & Joerger, 2021) used azimuth–elevation bins at regular angular intervals. In each bin, a point feature was selected and its distance to the spherical grid’s center point served as a range measurement. This SGL approach provided a measure of navigation uncertainty. However, regular data gridding was suboptimal, and rigorous error bounding, which is required for safe localization, has yet to be achieved.
In this paper, we develop a new PC spherical gridding approach that leverages a signal quantization technique used in data compression (Sayood, 2017). This approach locally modifies grid spacing to reduce information loss and improve pose estimation without increasing computational cost. This method also provides a deterministic bound on the range quantization error in each azimuth–elevation bin. In addition, we design an IEKF that tightly integrates IMUs with lidar SGL. The IMU facilitates the matching of measured versus computed PCs whereas the SGL helps correct accelerometer and gyroscope biases. We validate these methods using experimental data collected on a sensor platform moving in both indoor and outdoor environments.
Section 2 of this paper aims at developing an advanced lidar PC spherical gridding process. We leverage quantization theory, which is a well-studied topic in data transmission, compression, and classification (Max, 1960; Sayood, 2017), information theory (Sabin & Gray, 1986), and finite-element methods for mesh generation (Du & Gunzburger, 2002). Here, we use quantization theory to rearrange a spherical grid’s discretized representation of the environment (Gallager & Zheng, 2006; Madisetti, 2018). The algorithm assigns a greater number of azimuth bins to parts of the environment that show larger geometric variations.
In Section 3, we design an IEKF that incorporates the spherically gridded lidar PC, IMUs, and their error models for estimation and uncertainty quantification of a moving platform’s pose and of the IMU’s accelerometer and gyroscope biases. We also derive an upper bound on the impact of quantization errors on pose estimation uncertainty. In Section 4, we implement the IEKF-based IMU/lidar SGL algorithm by post-processing experimental data. An indoor performance evaluation of the LL, BF SGL, and IEKF-based SGL demonstrates the robustness of the new method in quantifying positioning uncertainty over multiple repeated trajectories. An outdoor experiment illustrates pose estimation and uncertainty quantification of the IEKF SGL in a realistic driving scenario.
2 SPHERICAL GRIDDING USING SIGNAL QUANTIZATION
2.1 Quantization Theory
Quantization is a process in which a large number of values is represented by a much smaller set of values (Sayood, 2017). We focus on scalar quantization, where the values to be quantized are scalars. The quantizer design aims at minimizing information loss. Figure 1 shows the quantization of range measurements in a 3D lidar PC. The parameters of a scalar quantizer are shown in Figure 1 and are conceptually represented in Figure 2 on a one-dimensional input axis. These parameters are defined as follows:
Spherical gridding of a 3D lidar PC
Scalar quantizer parameters
Input: The quantizer input is a set of values r with an associated probability density function f(r).
Interval: Intervals partition the input into separate ranges of values J = {J_i, for i = 1, …, m}. Intervals are also known as Voronoi regions or encoders.
Representation value: The representation value is a single value selected for each interval, corresponding to the output of the quantizer. Representation values are also known as decoders.
Decision boundary: The limit value of an interval is a decision boundary. The number of decision boundaries is m + 1, where m is the number of intervals.
The quantization process Q(r) for an input r can be described as follows:
Q(r) = a_i  if r ∈ J_i, i.e., if d_i ≤ r < d_{i+1}, for i = 1, …, m  (1)
In this research, the performance measure we use to reduce information loss between quantizer input and output is the quantization distortion D, which is defined as follows:
D = (1/M) Σ_{i=1}^{m} Σ_{r ∈ J_i} (r − a_i)²  (2)
The right-hand side of Equation (2) is the sum of the squared errors in each interval over all m intervals, divided by the total number of range measurements M. The number M can be expressed as M = Σ_{i=1}^{m} M_i, where M_i is the number of range measurements per interval J_i, for i = 1, …, m.
The quantization process can be described as follows. Given an input r and a number of intervals m, the quantizer finds the values of d_i and a_i in each interval J_i, for i = 1, …, m, that minimize the distortion D. There are two main categories of quantizers: uniform quantizers use fixed-size intervals, whereas non-uniform quantizers use intervals of varying size.
In this paper, we implement a Lloyd–Max quantizer, which is a practical non-uniform scalar quantizer. Lidar range measurements serve as inputs r. The Lloyd–Max quantization algorithm finds the values of ai and di that minimize the distortion by taking the derivatives of D with respect to these parameters and setting them equal to zero. The values of ai and di for a set of inputs r can be found using the following equations (Madisetti, 2018):
a_i = (1/M_i) Σ_{r ∈ J_i} r  (3)
d_i = (a_{i−1} + a_i)/2,  for i = 2, …, m  (4)
For a Lloyd–Max quantizer, the representation values a_i are equal to the mean of the input values in interval J_i, for i = 1, …, m, as expressed in Equation (3), and d_i in Equation (4) is the mid-point of two neighboring representation values. Starting from initial values of d_i and a_i, the minimization of D can be achieved iteratively (Madisetti, 2018; Sayood, 2017).
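As a concrete illustration, the iteration described above alternates between Equations (3) and (4) until the boundaries and representation values stabilize. The following is a minimal sketch in Python; the function name, the quantile-based initialization, and the fixed iteration count are our own choices, not taken from the paper:

```python
import numpy as np

def lloyd_max(r, m, iters=50):
    """Fit a Lloyd-Max scalar quantizer to samples r.

    Returns decision boundaries d (length m + 1), representation
    values a (length m), and the resulting distortion D."""
    r = np.sort(np.asarray(r, dtype=float))
    # Initialize representation values from uniform quantiles of the data.
    a = np.quantile(r, (np.arange(m) + 0.5) / m)
    for _ in range(iters):
        # Eq. (4): boundaries are midpoints of neighboring representation values.
        d = np.concatenate(([r[0]], 0.5 * (a[:-1] + a[1:]), [r[-1]]))
        # Assign each sample to its interval J_i.
        idx = np.clip(np.searchsorted(d, r, side="right") - 1, 0, m - 1)
        # Eq. (3): each representation value is the mean of its interval.
        for i in range(m):
            pts = r[idx == i]
            if pts.size:
                a[i] = pts.mean()
        a = np.sort(a)
    distortion = np.mean((r - a[idx]) ** 2)  # Eq. (2)
    return d, a, distortion
```

On strongly clustered range data (e.g., two surfaces at different depths), the fitted representation values converge to the cluster means, which is the behavior exploited by the spherical grid design in the next section.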
2.2 Spherical Grid Design
In this section, we develop a novel approach that aims to apply quantization techniques borrowed from the signal processing field to lidar PCs. Data points are processed one elevation cone at a time, instead of following the traditional Cartesian-frame representation of a lidar PC. Therefore, the number of azimuth bins can differ from one lidar-frame elevation angle to the next. This gives flexibility for the resulting spherical grid to adjust to the surrounding environment. Figure 3 shows the range versus bearing angle for a single elevation angle (1° elevation in this example) and for a 360° azimuth angle. The raw lidar scan is represented by a thin blue line.
Lidar scanned and quantized range measurements
Then, we use quantization theory to find the optimal decision boundaries di and representation values ai for i = 1, …, m, that minimize the distortion D. These values are represented as horizontal gray dashed lines and solid red lines, respectively, in Figure 3. The vertical solid gray lines show the boundaries of the azimuth bins, which define the varying bearing angle intervals. Original and quantized signal discontinuities (interruptions in the curves) occur in the absence of lidar return measurements. Each interval has a constant ai value. A change in ai defines a new azimuth bin, enabling the spherical grid algorithm to (a) dynamically adapt to the environment and (b) increase the number of azimuth cells when variations in range measurements are more frequent. Conversely, the algorithm reduces the number of azimuth bins when fewer variations occur. Multiple non-adjacent bins can be assigned the same ai value. Figure 4 shows the quantizer parameters and the azimuth bins in a zoomed-in region of the lidar range-versus-bearing-angle curve.
Quantizer parameter definition for lidar range versus bearing angle measurements
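The rule that a change in a_i defines a new azimuth bin can be sketched directly. In the snippet below, `quantized` holds the per-beam representation value along one elevation cone; the function name and sample values are illustrative only:

```python
import numpy as np

def azimuth_bins(quantized):
    """Return start indices of azimuth bins: a new bin begins
    wherever the quantized representation value changes."""
    q = np.asarray(quantized)
    change = np.flatnonzero(np.diff(q) != 0) + 1
    return np.concatenate(([0], change))

# e.g., a flat wall (constant range) interrupted by a closer object
bins = azimuth_bins([5.0, 5.0, 5.0, 3.2, 3.2, 5.0])
# bins -> array([0, 3, 5])
```

Regions with frequent range variations thus produce many short bins, whereas smooth surfaces produce a few long ones, as described above.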
Figure 5 shows the output measurements in a Cartesian coordinate system. This output is obtained after minor data trimming to exclude azimuth bins containing fewer than a minimum number of data points, which only increase the computational cost. The solid gray lines intersecting at the origin correspond to the vertical gray lines in Figure 3. In Figure 5, the lidar scan is represented by a blue curve, and the selected points after quantization are shown by red crosses. This representation is consistent with that used for the 3D results shown in Figure 1. The zoomed-in window shows raw azimuth–range measurements in each bin and their corresponding quantized values. In addition, a few elements of the testing environment, including segments of walls, doors, cardboard cylinders, and furniture, are labeled and shown as colored boxes and circles. A photograph of the testbed is displayed later in Figure 9. In Figures 1 and 5, the quantization azimuth bins are large for surfaces on which range measurement variations are small (e.g., walls with no irregularities); these bins are smaller, for example, near the edges of cylindrical objects, where ranging variations are larger compared with their center.
Lidar scan and quantized signals
2.3 Quantization Error
The quantization error is defined as the difference between quantizer input r and output ai. The quantization error bound bi for i = 1, …, m is defined as follows (Sayood, 2017):
b_i = max(a_i − d_i, d_{i+1} − a_i)  (5)
The decision boundaries di are given in Equation (4). Quantization theory ensures that the following inequality is always satisfied:
|r − Q(r)| ≤ b_i,  for r ∈ J_i  (6)
Figure 6 shows quantization errors (blue) and error bounds (red) for all intervals in a frame. The red curve bounds the blue curve and is known.
Quantization error and error bound for 360° measurements
The range measurement quantization error bounding bias vector is defined as follows:
b_k = [b_1 ⋯ b_m  0_{1×m}]^T  (7)
where k is the time epoch, b_i is defined in Equation (5), and 0_{1×m} is a 1 × m vector of zeros (an m × 1 vector after transposition). The zeros correspond to bearing angle measurements, which are not quantized but will be included in a lidar measurement vector.
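A short sketch of Equations (5)–(7), assuming the per-interval bound is the larger distance from the representation value to either decision boundary of its interval; the helper names are ours:

```python
import numpy as np

def quantization_error_bounds(d, a):
    """Bound on |r - a_i| for r in [d_i, d_{i+1}): the larger distance
    from a_i to either decision boundary of its interval."""
    d = np.asarray(d, dtype=float)
    a = np.asarray(a, dtype=float)
    return np.maximum(a - d[:-1], d[1:] - a)

def bias_bound_vector(b):
    """Range quantization bounds stacked with zeros for the
    (unquantized) bearing-angle measurements."""
    b = np.asarray(b, dtype=float)
    return np.concatenate([b, np.zeros_like(b)])

b = quantization_error_bounds(d=[0.0, 2.0, 5.0], a=[1.0, 3.0])
# b[0] = 1.0 (interval [0, 2)), b[1] = 2.0 (interval [2, 5))
```

Because the bounds depend only on the known d_i and a_i, they can be evaluated deterministically for every frame, as Figure 6 illustrates.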
2.4 Applying a Spherical Grid to the Map
We apply the same spherical grid to the mapped PC. We first convert the PC map from the navigation frame to the sensor frame using the predicted state vector provided by the pose estimation algorithm. Then, in each azimuth–elevation bin, we select the mapped point closest to the sensor. This approach automatically addresses occlusions of objects that may not be visible from the lidar’s current viewpoint. Figure 7 shows the lidar and map PCs after quantization, with the azimuth bins shown in gray.
Applying a spherical grid to the map and selecting points
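The bin-wise selection described above can be sketched as follows, assuming map points already expressed in the sensor frame and pre-computed bin edges; the function name and the dictionary-based bookkeeping are illustrative only:

```python
import numpy as np

def grid_map_points(pts_sensor, az_edges, el_edges):
    """Keep, in each azimuth-elevation bin, the mapped point closest to
    the sensor; farther (occluded) map points in the same bin are dropped."""
    x, y, z = pts_sensor.T
    rng = np.sqrt(x**2 + y**2 + z**2)
    az = np.arctan2(y, x)                  # bearing angle
    el = np.arcsin(z / rng)                # elevation angle
    ai = np.digitize(az, az_edges) - 1     # azimuth bin index
    ei = np.digitize(el, el_edges) - 1     # elevation bin index
    best = {}
    for j in range(len(rng)):
        key = (ai[j], ei[j])
        if key not in best or rng[j] < rng[best[key]]:
            best[key] = j                  # closest point wins the bin
    return np.array(sorted(best.values()))
```

Selecting the minimum range per bin is what makes the occlusion handling automatic: a mapped point hidden behind a closer surface can never be selected.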
3 LOCALIZATION AND ANALYTICAL UNCERTAINTY QUANTIFICATION
In this section, we design an IEKF that uses the mapped and sensed PCs output by the spherical gridding process. In parallel, we design a batch algorithm to bound the impact of range quantizer errors on pose estimation.
3.1 Vehicle Linearized State Propagation Model
The localization algorithm uses the IMU measurements and error model to propagate vehicle pose between two lidar updates. The IMU states consist of vehicle position, velocity, orientation, and IMU biases. This section describes the continuous-time linearized state propagation model. The complete nonlinear continuous and discrete-time equations and sensor error models can be found in our prior work (Hassani et al., 2019) based on the work by Titterton et al. (2004). In the following equations, the notation “δ” indicates deviations of the state parameters relative to reference values about which linearization is performed. We consider a vehicle body frame “B,” an east–north–up (ENU) navigation frame “N,” and an inertial frame “I.” The state propagation model is expressed as follows:
8
9
10
where:
- δx_V is the vehicle position expressed in the navigation frame N,
- δv_V is the vehicle velocity with respect to earth expressed in frame N,
- δe_V is the attitude of the vehicle with respect to earth expressed in the body frame B,
- δb_gy, δb_ac are the gyroscope’s and accelerometer’s time-varying bias vectors in frame B, respectively,
- F_V2T, F_H2V are 3×3 matrices defined in Appendix A,
- ω_IN^N is the angular velocity vector of the inertial frame I with respect to frame N expressed in frame N (Hassani et al., 2018),
- f̂^N is the estimated specific force expressed in frame N,
- C_B^N is the 3×3 rotation matrix from frame B to frame N (Titterton et al., 2004),
- f̃_IB^B is the measured specific force vector at the IMU axis center with respect to frame I, expressed in frame B (Titterton et al., 2004),
- ω̃_IB^B is the measured angular velocity vector of frame B with respect to frame I, expressed in frame B,
- τ_gy, τ_ac are the gyroscope’s and accelerometer’s Gauss–Markov process (GMP) time constants,
- S_gy, S_ac are the estimated gyroscope and accelerometer scale factors in frame B,
- M_gy, M_ac are the estimated gyroscope and accelerometer misalignment matrices in frame B,
- v_gy, v_ac are the gyroscope and accelerometer measurement white noise error components expressed in frame B,
- n_gy, n_ac are the gyroscope and accelerometer GMP time-uncorrelated driving noise vectors.
Considering the discrete-time expressions of the terms in Equation (10) given in Appendix B, the discrete-time realization of Equation (8) can be written as follows:
δx_{k+1} = Φ_k δx_k + w_k,  w_k ~ N(0, W_k)  (11)
where Φk is the state transition matrix between time steps k and k + 1 (Brown & Hwang, 1992).
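In a Kalman-filter implementation, Equation (11) corresponds to the standard prediction step, with the covariance propagated alongside the state deviation. The following is a generic sketch; W stands for the process noise covariance, an assumption of this illustration:

```python
import numpy as np

def propagate(x, P, Phi, W):
    """Discrete-time prediction step: the state deviation is mapped by
    the state transition matrix Phi_k, and the covariance picks up the
    process noise covariance W_k."""
    x_pred = Phi @ x
    P_pred = Phi @ P @ Phi.T + W
    return x_pred, P_pred
```

In the IEKF of Section 3.3, this step runs at the high IMU rate between consecutive lidar updates.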
3.2 Measurement Models
In this section, we derive nonlinear equations for the sensed and mapped PCs designed in Sections 2.2 and 2.4. Let N_m be the total number of mapped points. Their coordinate vectors in navigation frame N are expressed as x_j^N, for j = 1, …, N_m. These point coordinates are converted into the sensor frame using the following equations:
12
where deviations in vehicle position x_V and orientation e_V appear in the state error equation (Equation (9)), x_j^S is the mapped data point in sensor frame “S,” and C_N^S is the rotation matrix from frame N to frame S, derived using the predicted state vector.
After applying the spherical grid to the mapped points x_j^S, for each elevation range, we define the ranges r̄_i and bearing angles θ̄_i of the gridded points, with sensor-frame coordinates (x̄_i, ȳ_i, z̄_i), for i ranging from 1 to m, as given below:
r̄_i = √(x̄_i² + ȳ_i² + z̄_i²)  (13)
θ̄_i = atan2(ȳ_i, x̄_i)  (14)
We then stack the computed range and bearing angle measurements and define the mapped measurements at time step k as follows:
z̄_k = [r̄_1 ⋯ r̄_m  θ̄_1 ⋯ θ̄_m]^T  (15)
The lidar range ri and bearing angle θi are provided in the sensor frame. The 2m × 1 sensed measurement vector can be written as shown below:
16
17
18
where:
- x_k is the state vector whose elements are defined in Equation (9) for the error states,
- v_k is the 2m × 1 measurement error vector modeled as a vector of zero-mean normally distributed random variables with covariance matrix V_k; we use the notation v_k ~ N(0, V_k). v_r and v_θ are random range and bearing angle measurement errors, respectively.
We linearize Equation (16) about our best pose prediction of the vehicle. The linearized range and angular measurement and measurement error vectors are designated by δr, δθ and vr, vθ, respectively. The linearized lidar measurement equation can be written as follows:
19
where the coefficient matrices Fr,x, Fθ,x, and Fθ,e are determined via the state prediction vector as described in Appendix A.
3.3 Model-Based Estimator Design
We develop an IEKF to tightly integrate lidar and IMU measurements and estimate vehicle pose. A diagram of the IEKF SGL is presented in Figure 8, which includes the spherical gridding and pose estimation processes. In the IEKF pose estimation block in Figure 8, the last term in the state vector estimation equation, together with the integration of high-update-rate IMU data, aims at improving the convergence of the iterative solution. The following variables are used in Figure 8:
Diagram of the IEKF SGL
- x̂_k^− is the predicted state vector at time step k,
- P_k^− is the predicted covariance matrix at time step k,
- x̂_k is the estimated state vector at time step k,
- P_k is the estimated covariance matrix at time step k,
- H_k is the observation matrix defined in Equation (19) at time step k,
- K_k is the Kalman gain at time step k,
- Φ_k is the discrete-time state transition matrix at time step k,
- c_k is the expected error due to range quantization at time step k (explained in Section 3.4),
- L_{q,k} is the estimation error bound for quantile q at time step k (explained in Section 3.4).
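A generic sketch of the iterated measurement update described above: the filter relinearizes the measurement function about the latest iterate, and the term involving the difference between the prediction and the current iterate keeps each iteration consistent with the prior. Here, h_fun and H_fun stand in for the gridded-lidar measurement model and its Jacobian; all names are our assumptions:

```python
import numpy as np

def iekf_update(x_pred, P_pred, z, h_fun, H_fun, V, n_iter=5):
    """One IEKF measurement update with n_iter relinearizations."""
    x_i = x_pred.copy()
    for _ in range(n_iter):
        H = H_fun(x_i)  # Jacobian evaluated at the current iterate
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + V)
        # The last term re-centers the update on the prediction,
        # improving convergence of the iterative solution.
        x_i = x_pred + K @ (z - h_fun(x_i) - H @ (x_pred - x_i))
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_i, P
```

For a linear measurement model, a single iteration reproduces the standard Kalman update; the iterations matter when the gridded range/bearing model is significantly nonlinear over the prediction uncertainty.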
3.4 Pose Error Due to Quantization
In this section, we determine the impact of the quantization bias vector on the state estimation error. The expected value of the Kalman filter error in the presence of a bias bk can be written as follows (Garcia Crespillo, 2022):
E[ε_k] = (I − K_k H_k) Φ_{k−1} E[ε_{k−1}] + K_k b_k,  where ε_k is the state estimation error at time step k  (20)
The impact of the quantization bias on the a posteriori estimation of the state vector at time step k is the following (Tanil et al., 2018):
E[ε_k] = A_k b_K  (21)
where:
22
In Equation (21), bias vectors bi, for i = 1, …, k, are stacked from time step 1 to k in bK, where K designates time steps 1, …, k. We can express a bound on the impact of the range quantization bias on a state of interest, or a linear combination of states such as the cross-track positioning coordinate for ground vehicle applications, as follows:
c_k = |α_k^T A_k| b_K  (23)
where α_k is a vector used to evaluate errors on the state combination of interest. For example, the cross-track error is obtained by projecting the errors in state vector x_k onto a unit vector defining the vehicle’s horizontal cross-track direction at time epoch k. Calculating c_k requires that we stack biases and estimator coefficients from time steps 1 to k. Appendix C shows the individual and total impacts of the biases b_m, for m = 1, …, k, on c_k, which demonstrates that current and recent bias contributions are significantly larger than those in the more distant past. This feature can be exploited in future work to design a sliding time window or a recursive approach to derive c_k in a computationally efficient manner. In addition, the state estimation error variance for the state of interest is given by the following:
σ_{α,k}² = α_k^T P_k α_k  (24)
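Under the recursion of Equation (20), the stacked computation of c_k can be sketched by tracking how each epoch’s bias bound enters through the Kalman gain and is then propagated by (I − KH)Φ; taking element-wise absolute values before projecting onto α_k yields a worst-case bound. This recursion form and all names below are our assumptions:

```python
import numpy as np

def quantization_impact_bound(alpha, Phis, Ks, Hs, bs):
    """Worst-case impact c_k of the per-epoch bias bounds b_1..b_k on
    the state combination alpha' x_k (cf. Equations (20)-(23))."""
    n = len(alpha)
    sens = []  # sensitivity of the current estimate to each past bias
    for Phi, K, H in zip(Phis, Ks, Hs):
        F = (np.eye(n) - K @ H) @ Phi
        sens = [F @ S for S in sens]   # propagate older contributions
        sens.append(K)                 # current bias enters via K_k
    return sum(np.abs(alpha @ S) @ b for S, b in zip(sens, bs))
```

Because the oldest sensitivities are repeatedly multiplied by (I − KH)Φ, their contributions shrink over time, consistent with the dominance of recent bias contributions noted above.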
The two terms in Equations (23) and (24) contribute to defining estimation error bounds for a desired quantile “q.” The estimation error bound on a state of interest is defined as follows:
L_{q,k} = κ_q σ_{α,k} + sgn(κ_q) c_k  (25)
where κ_q is a confidence multiplier for a given quantile q: κ_q = Φ⁻¹(q), where Φ⁻¹(·) is the inverse cumulative distribution function of the standard normal distribution. For example, we can determine the 84% and 16% quantiles of the estimation error bounds as follows:
L_{84%,k} = κ_{84%} σ_{α,k} + c_k  (26)
L_{16%,k} = κ_{16%} σ_{α,k} − c_k  (27)
where the impact of the quantization error ck is accounted for in a worst-contributing manner to guarantee a bound on the actual error quantiles. If ck = 0, then the range L16%,k to L84%,k defines a 1σ (68%) error envelope on the state of interest.
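Equations (26) and (27) can be evaluated with the standard normal inverse CDF; a small sketch follows (the exact κ values are roughly ±0.994, so with c_k = 0 the bounds form approximately a 1σ envelope, matching the remark above):

```python
from statistics import NormalDist

def error_bounds(sigma, c):
    """Upper/lower estimation-error bounds for the 84% / 16% quantiles,
    with the quantization impact c applied in the worst-contributing
    direction."""
    k84 = NormalDist().inv_cdf(0.84)   # about +0.994
    k16 = NormalDist().inv_cdf(0.16)   # about -0.994
    return k84 * sigma + c, k16 * sigma - c

hi, lo = error_bounds(sigma=0.10, c=0.05)   # hi ~ 0.149, lo ~ -0.149
```

The quantization term c widens both bounds symmetrically, which is what makes them conservative with respect to the covariance-only envelope.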
4 EXPERIMENTAL EVALUATION OF SGL
In this section, we carry out two sets of experiments: one set in a structured indoor environment with recognizable landmarks to facilitate performance comparisons between SGL and LL, followed by a set of experiments in a realistic outdoor automotive navigation environment.
4.1 Indoor Testing for Experimental Validation in a Structured Environment
We first process the experimental data collected using the testbed shown in Figure 9 (Hassani et al., 2019). This setup includes a sensor platform moving on a figure-eight track and equipped with a Velodyne VLP-16 Puck LTE lidar and a NovAtel IMU-IGM-A1. Sixteen Optitrack Prime 13W infrared (IR) motion capture cameras provide sub-centimeter-level positioning by tracking retro-reflective markers fixed on the rover. Vertical cardboard cylinders surround the figure-eight track and occlude each other as the rover passes by. The features of these cylinders are extracted to serve as landmarks for the LL method. In contrast, the IEKF SGL algorithm leverages most of the lidar PC, excluding returns from the lab floor but including returns from walls and furniture as well as cylinders. Testbed settings and parameter values are listed in Table 1.
Overview of the indoor testbed
Indoor Experiment Settings
Figure 10 shows a top view of the true-versus-estimated rover trajectories. The SGL trajectory is shown by a red line, and the LL trajectory is shown by a dashed green line. The trajectories match most of the time. The map used for LL comprises the center-point coordinates of the six cylindrical landmarks shown by black circles. In contrast, the map employed for SGL uses the PC represented by blue crosses. This map is built via lidar measurements transformed from the lidar frame to the navigation frame using “ground truth” pose estimates provided by the IR camera-based motion capture system. A color-coded background is used to facilitate the interpretation of subsequent figures over time: the upper loop’s background is shown in white, the lower loop in light gray, and the straight segments in dark gray. The initial position of the rover is shown by a black cross.
Performance of the IEKF SGL for a single-lap estimation of rover position
Figure 11(a) shows cross-track SGL errors over time for experimental data collected over 80 laps. The sample cross-track positioning error is color-coded in shades of gray, from white to black as the rover travels from the first to the last lap. Slight differences in performance are observed in the early laps as compared with later laps because of the sensor system’s warm-up period. The blue envelope represents the error bounds in Equations (26) and (27). These bounds are represented by an area whose boundaries are the minimum and maximum L84%,k and L16%,k values over the 80 laps at trajectory time k. The solid red line indicates the empirical 68% error envelope (i.e., the 84% and 16% sample quantiles), which is bounded by the analytical blue envelope for most of the 22-s-long trajectory. The looseness of the blue bound with respect to the red bound indicates our limited knowledge of the spherical grid quantization error, which we are only able to bound.
IEKF SGL errors and error bounds for the rover’s (a) cross-track and (b) heading angle estimation performance
Figure 11(b) shows error bounds for the rover’s heading angle estimation. In this case, again, the analytical envelope bounds the sample error envelope for most of the trajectory. In addition, the difference between the blue area and red line for the heading error is smaller than that of the cross-track error because the range quantization primarily impacts the rover’s position states whereas the impact on the heading error is due to coupling of the rover’s states. The lidar’s bearing angle resolution error and the rover’s heading prediction error provide the main contributions to the analytical error envelope of the heading angle.
In Table 2, we compare the performance of LL, BF SGL (Hassani & Joerger, 2021), and IEKF SGL. For each approach, the table presents the cross-track error bounds derived from the maximum-over-the-trajectory of the 16%–84% quantile deviation. LL and SGL involve different sources of uncertainty and therefore have different error bounds. For LL, which is not affected by gridding errors, the error bound LLAT accounts for localization errors and for the risk of incorrect associations, as derived in Appendix D. For BF SGL, the error bound is optimistic because we have no means to account for gridding errors. For IEKF SGL, the error bounds L84%,k and L16%,k are given in Equations (26) and (27).
Comparison of the Cross-Track 16%–84% Quantile Deviation Bounds, Computation Times Over a Single Lap, and RMSE for Three Different Localization Algorithms
The maximum error bound for LL is 5 cm: LL requires FE and DA, which introduce additional risks (Hassani & Joerger, 2021; Hassani et al., 2019). These risks grow if landmarks are not well separated or are not easily recognizable (LL is challenging outdoors). In this indoor test, the experimental setup favors LL over SGL. The BF SGL error bound is also 5 cm. BF SGL does not require FE or DA, but it only provides a 1-sigma bound, with no means to account for gridding errors, resulting in an underestimated localization error. Finally, the IEKF SGL provides a maximum 84% quantile bound of 26 cm: in this case, spherical gridding errors are accounted for.
Table 2 also presents the single-lap computation time, which was evaluated using a 4-GHz processor with 32-GB RAM on a 64-bit Windows operating system. The longest computation time was observed for BF SGL. The computation time is determined by the size and resolution of the vehicle pose candidate search space and of the fixed-size azimuth–elevation grid. The automated grid size and resolution selection of IEKF SGL reduces the computation load to half that of BF SGL. LL has the lowest computation time because it only processes a small subset of data points corresponding to extracted landmarks.
In the last column of Table 2, the root mean square error (RMSE) at the final time step of a single lap is shown for each algorithm. The RMSE values for BF SGL and IEKF SGL are similar, whereas LL yields a smaller error. Note that the RMSE may vary for different laps. In this work, we have modeled the spherical gridding error analytically and considered its contribution to the localization error bound. However, it is important to recognize that the RMSE is not an ideal metric for comparison in this context, as our experimental process is not ergodic. For an ergodic process, time-domain (such as RMSE) and ensemble characteristics are equivalent, typically requiring constant or periodic dynamics. This condition does not apply to our experiments, where changes in the rover’s linear velocity and sensor behavior, such as the sensor warm-up phase discussed earlier, result in non-ergodic characteristics.
4.2 Outdoor Testing in a Realistic Automotive Environment
Figure 12 displays an outdoor sensor platform that includes a NovAtel IMU-IGM-A1, a NovAtel ProPack6 global navigation satellite system (GNSS) receiver and antenna, and an Ouster OS1-64 lidar. The sensor platform is mounted on the roof rack of a car. A differential code and carrier phase GNSS reference station is installed at a nearby pre-surveyed location. Sensor data are collected on a laptop computer using the Robot Operating System (ROS). We use ROS to time-tag and record sensor data for post-processing. The “truth” vehicle position and orientation are determined from NovAtel’s synchronous position, attitude, and navigation solution, which tightly integrates IMUs with a carrier phase differential GNSS. This section evaluates the performance of the IEKF SGL algorithm using lidar/IMU; the test settings and lidar/IMU parameter values are listed in Table 3.
Overview of outdoor test equipment
Outdoor Experiment Settings
Figure 13(a) displays a 3D PC map of the test site, where the height is color-coded: measurements between 2-m and 18-m heights were included. Figure 13(b) shows the test site, which corresponds to a parking lot on Virginia Tech’s Blacksburg campus. The a priori PC map is rough, which will accentuate localization errors and the need for realistic error bounds while achieving submeter-level accuracy. The map uses a limited subset of lidar measurements converted from the moving sensor frame to the ENU navigation frame using the truth solution.
3D PC map used in the outdoor experiment
Mapping errors include deviations in the reference “truth” solution, imperfect GNSS/IMU/lidar synchronization, errors caused by non-stationary objects (including vegetation), and poor, noisy, and inconsistent returns from surfaces with varying retro-reflectivity properties, particularly those we observed through glass windows. In addition, in this first outdoor experiment, we did not implement motion compensation: we did not account for vehicle motion over the 0.1-s-long 360° lidar scans. We will consider improving the map in future work; such improvements are not essential in this paper because the emphasis is placed on error bounding rather than error reduction. Further information on PC motion compensation can be found in the work by McDermott and Rife (2024a). The standard deviations of the range and bearing angle measurements in Table 3 were selected to incorporate mapping errors.
Figure 14 shows a top-view of the rover trajectory for a single lap in the parking lot. Vehicle pose is estimated using lidar and IMU data processed via the IEKF SGL algorithm. The true and estimated trajectories overlap. The initial position of the vehicle is indicated by a black cross. Figure 14 also shows magenta covariance ellipses that roughly represent the 2D positioning uncertainty for vehicle locations taken at regular 0.5-s intervals. The covariance ellipses are inflated by a factor of thirty to facilitate visualization. These ellipses underestimate localization errors because they do not account for gridding errors.
Performance of lidar/IMU-based IEKF SGL for a vehicle during testing in an outdoor environment
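The inflated covariance ellipses of Figure 14 can be drawn from the 2×2 horizontal position covariance by eigendecomposition; the sketch below illustrates this standard construction (the inflation factor of 30 matches the figure; the code is ours, not the paper's):

```python
import numpy as np

def covariance_ellipse(P, scale=30.0, n_pts=64):
    """Return boundary points of a 1-sigma covariance ellipse for a 2x2
    position covariance P, inflated by `scale` for visualization."""
    vals, vecs = np.linalg.eigh(P)            # eigenvalues in ascending order
    axes = scale * np.sqrt(vals)              # semi-axis lengths
    t = np.linspace(0.0, 2.0 * np.pi, n_pts)
    circle = np.stack([axes[0] * np.cos(t), axes[1] * np.sin(t)])
    return (vecs @ circle).T                  # rotate into the EN axes
```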
Figure 15(a) shows the cross-track error over time. The sample cross-track positioning error is shown by black cross markers. The blue curves represent the upper and lower IEKF SGL error bounds L84%,k and L16%,k in Equations (26) and (27), respectively. The black sample error curve is bounded by the blue sub-meter analytical bounds over the 41-s-long trajectory. The magenta line represents the 1σ (68%) error covariance envelope, which grossly underestimates localization errors by not considering the uncertainty caused by gridding. Figure 15(b) shows the error, error covariance envelope, and error bounds for the vehicle heading angle. In this case, again, the blue analytical IEKF SGL error bounds better represent the black sample errors than the magenta covariance envelope.
IEKF SGL error bounds for the vehicle’s (a) cross-track positioning error and (b) heading angle estimation error
In Figure 15, the increase in cross-track position and heading angle estimation errors occurs as the vehicle turns, as highlighted by a dashed white ellipse in Figure 14. During this time period, the errors due to a lack of motion compensation are accentuated by the vehicle’s rotation and are significantly higher compared with other segments of the trajectory. This effect will be mitigated in future work by implementing motion compensation (Li et al., 2016; McDermott & Rife, 2024a; Meng et al., 2021; Ollero et al., 2004). Furthermore, it is important to note that the error curves represent a single sample of a random process. A larger data set, similar to what was used in the indoor experiment, would enable a more statistically meaningful comparison. However, collecting such statistically significant data outdoors is often impractical. Therefore, we performed this comparison using our indoor testbed.
In addition, a smaller difference between the blue error bounds and the magenta covariance envelope is observed for the heading angle than for the cross-track deviation. This result occurs because range quantization primarily impacts the vehicle’s position states. Its impact on heading estimation is indirect, through the coupling of the vehicle position and heading angle states in the matrix in Equation (23). Additionally, the RMSE values for the cross-track and heading errors are 0.28 m and 1.6°, respectively. However, this experiment is also non-ergodic, as explained for the indoor test, owing to variations in the vehicle’s linear and angular velocities (the vehicle was driven manually) and changes in sensor behavior.
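The cross-track error samples and RMSE values reported above can be computed as follows; this is a generic sketch whose conventions (e.g., heading measured from the east axis, left-normal sign) are our assumptions rather than the paper's:

```python
import numpy as np

def cross_track_error(p_est, p_true, heading):
    """Signed cross-track error: the horizontal position error projected
    onto the unit vector normal (to the left) of the true heading.
    Heading is assumed measured from the east axis, in radians."""
    n = np.array([-np.sin(heading), np.cos(heading)])  # left-normal in EN plane
    return float(n @ (np.asarray(p_est) - np.asarray(p_true)))

def rmse(errors):
    """Root-mean-square error of a sequence of sample errors."""
    e = np.asarray(errors, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))
```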
5 CONCLUSION
In this paper, we developed a tightly integrated SGL algorithm that provides an analytical bound on the positioning error. The algorithm leverages signal quantization theory to limit the extent of lidar data processing while minimizing the loss of information. In addition, we designed an IEKF to estimate vehicle pose and predict pose uncertainty. We derived an analytical bound on pose estimation errors that accounts for both the IEKF covariance and gridding errors.
Experimental evaluation was performed (a) in an indoor environment using data collected by a rover traveling 80 times along the same trajectory and (b) in an outdoor parking lot using data collected by sensors installed on a car. Indoor testing showed that the IEKF SGL provides a more realistic positioning error bound than BF SGL while also reducing computational cost. Outdoor testing validated the IEKF-based SGL in a realistic driving environment.
CONFLICT OF INTEREST
The authors declare no potential conflicts of interest.
HOW TO CITE THIS ARTICLE
Hassani, A., & Joerger, M. (2025). Spherical grid-based IMU/lidar localization and uncertainty evaluation using signal quantization. NAVIGATION, 72(3). https://doi.org/10.33012/navi.709
APPENDIX
A IMU AND LIDAR MEASUREMENTS AND COEFFICIENTS
The IMU measurement coefficient matrices in Equation (10) are defined as follows (Titterton et al., 2004):
A.1
A.2
where:
- R is the earth’s radius,
- h is the vehicle’s altitude,
- λ is the vehicle’s latitude,
- g0 is the acceleration due to gravity at zero altitude.
The lidar measurement coefficient matrices in Equation (19) are as follows (Joerger, 2009):
A.3
A.4
A.5
where additional shorthand notation is defined in Joerger (2009).
B DISCRETE-TIME EQUATIONS FOR THE IMU
The specific force of the rover is measured with respect to the inertial frame “I” and expressed in the body frame “B”. The specific force measurement is imperfect and can be modeled in the continuous-time domain as follows:
$$\tilde{\mathbf{f}}_{IB}^{B} = (\mathbf{I} + \mathbf{S}_{ac} + \mathbf{M}_{ac})\,\mathbf{f}_{IB}^{B} + \mathbf{b}_{ac} + \mathbf{v}_{ac} \tag{B.1}$$
where:
- $\mathbf{f}_{IB}^{B}$ is the 3×1 true specific force vector of body B with respect to frame I, expressed in body frame B,
- $\tilde{\mathbf{f}}_{IB}^{B}$ is the measured specific force vector of body B with respect to frame I, expressed in frame B,
- Sac, Mac are the true accelerometer calibration scale factor and misalignment matrices in frame B,
- bac is the accelerometer time-varying bias vector in frame B,
- vac is the accelerometer measurement white noise error component expressed in frame B.
In Equation (B.1), the measured specific force is expressed in terms of the scale factor and misalignment matrices, for which manufacturers provide the estimates Ŝac and M̂ac, respectively. The symbol (^) in Ŝac designates the estimate of parameter Sac.
The time-varying bias of the accelerometer is modeled as a first-order GMP (Brown & Hwang, 1992). We can write the corrected specific force and the continuous-time dynamics of the time-varying bias as follows:
$$\hat{\mathbf{f}}_{IB}^{B} = (\mathbf{I} + \hat{\mathbf{S}}_{ac} + \hat{\mathbf{M}}_{ac})^{-1}\,(\tilde{\mathbf{f}}_{IB}^{B} - \hat{\mathbf{b}}_{ac}) \tag{B.2}$$
$$\dot{\mathbf{b}}_{ac} = -\frac{1}{\tau_{ac}}\,\mathbf{b}_{ac} + \mathbf{n}_{ac} \tag{B.3}$$
where:
- τac is the GMP time constant,
- nac is a 3×1 vector of GMP time-uncorrelated driving noise.
The discrete-time forms of Equations (B.1)–(B.3) can be written as follows:
$$\tilde{\mathbf{f}}_{IB,k}^{B} = (\mathbf{I} + \mathbf{S}_{ac} + \mathbf{M}_{ac})\,\mathbf{f}_{IB,k}^{B} + \mathbf{b}_{ac,k} + \mathbf{v}_{ac,k} \tag{B.4}$$
$$\hat{\mathbf{f}}_{IB,k}^{B} = (\mathbf{I} + \hat{\mathbf{S}}_{ac} + \hat{\mathbf{M}}_{ac})^{-1}\,(\tilde{\mathbf{f}}_{IB,k}^{B} - \hat{\mathbf{b}}_{ac,k}) \tag{B.5}$$
$$\mathbf{b}_{ac,k} = e^{-t_s/\tau_{ac}}\,\mathbf{b}_{ac,k-1} + \mathbf{n}_{ac,k-1} \tag{B.6}$$
where ts is the IMU sampling interval.
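The discrete-time first-order GMP bias can be simulated directly. In the standard discretization (Brown & Hwang, 1992), choosing the driving-noise variance as σ²(1 − e^(−2ts/τ)) keeps the bias at its steady-state standard deviation σ. A sketch (names are ours):

```python
import numpy as np

def simulate_gmp_bias(sigma, tau, ts, n_steps, seed=0):
    """Simulate a scalar first-order Gauss-Markov bias:
    b_k = exp(-ts/tau) * b_{k-1} + w_k, where the driving-noise variance
    sigma**2 * (1 - exp(-2*ts/tau)) maintains a steady-state std of sigma."""
    rng = np.random.default_rng(seed)
    phi = np.exp(-ts / tau)
    q = sigma ** 2 * (1.0 - phi ** 2)        # driving-noise variance
    b = np.empty(n_steps)
    b[0] = rng.normal(0.0, sigma)            # draw from the steady state
    for k in range(1, n_steps):
        b[k] = phi * b[k - 1] + rng.normal(0.0, np.sqrt(q))
    return b
```

The filter's process noise model for the bias states uses the same φ and q.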
The gyroscope measures the angular velocity of the body frame with respect to the inertial frame, expressed in the body frame (Titterton et al., 2004). We can derive equations similar to Equations (B.2)–(B.6) for gyroscope measurements. These are given in our prior work (Hassani et al., 2019).
Assuming that the IMU-corrected angular velocity remains constant over the short IMU sampling interval ts, between time steps k – 1 and k, we can write the discrete-time form of the attitude equations as follows:
B.7
B.8
where the subscript notation in Equation (B.8) designates the (i, j)-th scalar component of a matrix, i.e., the component in the i-th row and j-th column.
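Under the constant-angular-velocity assumption, one common way to realize the discrete-time attitude update is to apply the matrix exponential of the skew-symmetric matrix of ω·ts via the Rodrigues formula. The sketch below illustrates this generic construction; it is not necessarily the exact form of Equations (B.7)–(B.8):

```python
import numpy as np

def skew(w):
    """Skew-symmetric cross-product matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def propagate_attitude(R_prev, omega, ts):
    """Propagate a body-to-navigation rotation matrix over one IMU interval,
    assuming the corrected angular velocity `omega` (rad/s, body frame) is
    constant over ts. Uses the Rodrigues formula for expm(skew(omega*ts))."""
    phi = np.asarray(omega) * ts
    angle = np.linalg.norm(phi)
    if angle < 1e-12:
        return R_prev                        # negligible rotation
    K = skew(phi / angle)
    dR = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    return R_prev @ dR
```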
Finally, we use the Van Loan algorithm to determine the discrete-time state propagation Φk−1 and process noise covariance matrices Wk−1 based on the continuous-time matrices F and spectral density function of δw, defined as Q (Brown & Hwang, 1992).
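The Van Loan construction referenced above can be sketched as follows. The helper `_expm` is a plain Taylor-series matrix exponential, adequate for the small, well-scaled matrices used here; a robust implementation would use `scipy.linalg.expm`. All names are ours:

```python
import numpy as np

def _expm(A, n_terms=40):
    """Matrix exponential via a plain Taylor series (small matrices only)."""
    E = np.eye(A.shape[0])
    T = np.eye(A.shape[0])
    for k in range(1, n_terms):
        T = T @ A / k
        E = E + T
    return E

def van_loan(F, G, Q, ts):
    """Van Loan's method (Brown & Hwang, 1992): exponentiate
    M = [[-F, G Q G^T], [0, F^T]] * ts and read off the discrete-time
    transition matrix Phi and process noise covariance W."""
    n = F.shape[0]
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n] = -F
    M[:n, n:] = G @ Q @ G.T
    M[n:, n:] = F.T
    E = _expm(M * ts)
    Phi = E[n:, n:].T                # lower-right block is Phi^T
    W = Phi @ E[:n, n:]              # upper-right block is Phi^{-1} W
    return Phi, W
```

For the constant-velocity model F = [[0, 1], [0, 0]] with scalar spectral density q, this reproduces the familiar closed form W = q·[[ts³/3, ts²/2], [ts²/2, ts]].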
C TIME EVOLUTION OF BIASES AND ESTIMATOR COEFFICIENTS
In this appendix, we study the time-dependent impact of the quantization biases on the a posteriori estimation of the state vector at each time step. This impact is governed by Equations (21) and (22), which are rewritten below for clarification:
C.1
where:
C.2
Figure C1 shows the contribution of the past and current quantization biases to the cross-track error bound estimation for the current time step (i.e., k = 5 and k = 6) as blue and red crosses. For example, the first and second blue crosses are the contribution of b1 and b2 to the cross-track error bound estimation at time step k = 5. We can observe that the greatest contribution at each time step belongs to the quantization bias at the same time step. Additionally, the summation of the contribution of all biases on the current cross-track error estimation is shown by blue and red circles. The gray curly bracket at time step k = 5 demonstrates the reduction in the contribution of b5 to the cross-track error bound estimation between time steps k = 5 and k = 6.
Impact of previous and current quantization biases on the IEKF SGL analytical cross-track error bound
Figure C2 shows the same curves as Figure C1 for three well-separated time steps: k = 10, k = 100, and k = 200. This figure shows that the weight of previous quantization biases decreases as time passes, with only the last few biases remaining dominant in the error bound estimation. It is noteworthy that the contribution of the biases is geometry- and application-dependent. In other words, a later time step does not necessarily imply a greater accumulated bias contribution, as can be seen by comparing the blue and green circles.
Comparison of the impact of quantization biases on the IEKF SGL analytical cross-track error bound
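The decay seen in Figures C1 and C2 can be illustrated with a scalar analogue: in a constant-gain scalar filter, each update multiplies the weight of every past measurement bias by (1 − K)·φ, so older biases fade geometrically. This toy model is ours, for intuition only; it is not Equations (C.1)–(C.2):

```python
import numpy as np

def bias_weights(n_steps, kalman_gain=0.4, phi=1.0):
    """Weight of measurement bias b_j (j = 1..k) on the a posteriori estimate
    at step k, for a scalar filter with constant gain K and transition phi:
    b_k enters with weight K, and each subsequent update multiplies past
    contributions by (1 - K) * phi, so older biases decay geometrically."""
    K = kalman_gain
    k = n_steps
    return np.array([K * ((1.0 - K) * phi) ** (k - j) for j in range(1, k + 1)])
```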
D PROBABILISTIC LL LATERAL ERROR BOUND ACCOUNTING FOR THE RISK OF INCORRECT ASSOCIATIONS
This appendix outlines the derivation of a probabilistic bound on the lateral positioning error for LL. LL is impacted by DA risks experienced when matching an observed, extracted landmark with a mapped landmark. The risk that the algorithm will confuse one landmark for another is low when landmarks are clearly distinguishable, as is the case in this experiment. This risk is quantified in our prior work (Hassani & Joerger, 2023) as a function of the distance between landmarks and the variance of the extracted features.
A probabilistic bound on the positioning error can be derived for a predefined risk requirement IREQ,k. The positioning error bound is directly derived from Equations (23)–(25) in our prior work (Hassani & Joerger, 2023). The lateral error bound LLAT can be expressed as follows:
$$2\,Q\!\left(\frac{L_{LAT}}{\sigma_k}\right) P(CA_K) + \big(1 - P(CA_K)\big) \le I_{REQ,k} \tag{D.1}$$
where:
- K designates a range of time indices: K = {1, …, k},
- CAK is the correct association hypothesis for all landmarks, at all times 0, …, k,
- Q is the tail probability function of the standard normal distribution,
- σk is the standard deviation of the estimation error for the vehicle state of interest,
- IREQ,k is a predefined integrity risk allocation for FE, chosen to be a fraction of the overall integrity risk requirement.
Further information for computing P(CAK) can be found in our prior works (Hassani & Joerger, 2023; Joerger & Pervan, 2009).
At equality, Equation (D.1) is equivalent to the following:
$$Q\!\left(\frac{L_{LAT}}{\sigma_k}\right) = \frac{I_{REQ,k} - \big(1 - P(CA_K)\big)}{2\,P(CA_K)} \tag{D.2}$$
This positioning error bound can be expressed as follows:
$$L_{LAT} = \sigma_k\, Q^{-1}\!\left(\frac{I_{REQ,k} - \big(1 - P(CA_K)\big)}{2\,P(CA_K)}\right) \tag{D.3}$$
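A numerical sketch in the spirit of this appendix: allocate the incorrect-association risk 1 − P(CAK) out of the requirement and size a two-sided Gaussian bound on the remainder. The exact allocation and the function below are our reading, not a verbatim transcription of the paper's equations:

```python
from statistics import NormalDist

def lateral_error_bound(sigma_k, p_ca, i_req):
    """Lateral error bound sketch: subtract the no-correct-association risk
    (1 - p_ca) from the requirement i_req, then size a two-sided Gaussian
    bound on the residual risk. Q^{-1}(p) = Phi^{-1}(1 - p), the inverse
    tail probability of the standard normal distribution."""
    residual_risk = i_req - (1.0 - p_ca)
    if residual_risk <= 0.0:
        raise ValueError("association risk alone exceeds the requirement")
    p_tail = residual_risk / (2.0 * p_ca)    # per-side tail probability
    return sigma_k * NormalDist().inv_cdf(1.0 - p_tail)
```

With p_ca = 1 (no association risk), the bound reduces to the familiar two-sided Gaussian quantile, e.g., about 1.96σ for a 5% risk.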
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
REFERENCES
- Biber, P., & Strasser, W. (2003). The normal distributions transform: A new approach to laser scan matching. Proc. of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No.03CH37453), Las Vegas, NV, 2743–2748. https://doi.org/10.1109/IROS.2003.1249285
- Brown, R. G., & Hwang, P. Y. C. (1992). Introduction to random signals and applied Kalman filtering (Vol. 2). John Wiley. https://doi.org/10.1002/acs.4480060509
- Censi, A. (2008). An ICP variant using a point-to-line metric. Proc. of the 2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, 19–25. https://doi.org/10.1109/ROBOT.2008.4543181
- Diosi, A., & Kleeman, L. (2007). Fast laser scan matching using polar coordinates. The International Journal of Robotics Research, 26(10), 1125–1153. https://doi.org/10.1177/0278364907082042
- Du, Q., & Gunzburger, M. (2002). Grid generation and optimization based on centroidal Voronoi tessellations. Applied Mathematics and Computation, 133(2), 591–607. https://doi.org/10.1016/S0096-3003(01)00260-0
- Gallager, R., & Zheng, L. (2006). Principles of digital communications I. Massachusetts Institute of Technology: MIT OpenCourseWare. https://ocw.mit.edu/courses/6-450-principles-of-digital-communications-i-fall-2006/
- Garcia Crespillo, O. (2022). GNSS/INS Kalman filter integrity monitoring with uncertain time correlated error processes [Doctoral dissertation, EPFL]. https://doi.org/10.5075/epfl-thesis-7634
- Hassani, A., & Joerger, M. (2021). A new point-cloud-based lidar/IMU localization method with uncertainty evaluation. Proc. of the 34th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2021), St. Louis, MO, 636–651. https://doi.org/10.33012/2021.17905
- Hassani, A., & Joerger, M. (2023). Analytical and empirical navigation safety evaluation of a tightly integrated lidar/IMU using return-light intensity. NAVIGATION, 70(4). https://doi.org/10.33012/navi.623
- Hassani, A., Joerger, M., Arana, G. D., & Spenko, M. (2018). Lidar data association risk reduction using tight integration with INS. Proc. of the 31st International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2018), Miami, FL, 2467–2483. https://doi.org/10.33012/2018.15976
- Hassani, A., Morris, N., Spenko, M., & Joerger, M. (2019). Experimental integrity evaluation of tightly-integrated IMU/lidar including return-light intensity data. Proc. of the 32nd International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2019), Miami, FL, 2637–2658. https://doi.org/10.33012/2019.17095
- Joerger, M. (2009). Carrier phase GPS augmentation using laser scanners and using low earth orbiting satellites [Doctoral dissertation, Illinois Institute of Technology]. ProQuest Dissertations & Theses Global. https://www.proquest.com/docview/304898652
- Joerger, M., & Hassani, A. (2020). A new data association method using Kalman filter innovation vector projections. Proc. of the 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS), Portland, OR, 318–327. https://doi.org/10.1109/PLANS46316.2020.9110229
- Joerger, M., & Pervan, B. (2009). Measurement-level integration of carrier-phase GPS and laserscanner for outdoor ground vehicle navigation. Journal of Dynamic Systems, Measurement, and Control, 131(2). https://doi.org/10.1115/1.3072122
- Li, N., Yao, Y., Xu, X., Peng, Y., Wang, Z., & Wei, H. (2024). An efficient lidar SLAM with angle-based feature extraction and voxel-based fixed-lag smoothing. IEEE Transactions on Instrumentation and Measurement, 73, 1–13. https://doi.org/10.1109/TIM.2024.3436055
- Li, Z., Dosso, S. E., & Sun, D. (2016). Motion-compensated acoustic localization for underwater vehicles. IEEE Journal of Oceanic Engineering, 41(4), 840–851. https://doi.org/10.1109/JOE.2015.2503518
- Madisetti, V. K. (2018). The digital signal processing handbook (3-volume set). CRC Press. https://doi.org/10.1201/9781315216065
- Max, J. (1960). Quantizing for minimum distortion. IRE Transactions on Information Theory, 6(1), 7–12. https://doi.org/10.1109/TIT.1960.1057548
- McDermott, M., & Rife, J. (2024a). Correcting motion distortion for lidar scan-to-map registration. IEEE Robotics and Automation Letters, 9(2), 1516–1523. https://doi.org/10.1109/LRA.2023.3346757
- McDermott, M., & Rife, J. (2024b). ICET online accuracy characterization for geometry-based laser scan matching. NAVIGATION, 71(2). https://doi.org/10.33012/navi.647
- Meng, Q., Guo, H., Zhao, X., Cao, D., & Chen, H. (2021). Loop-closure detection with a multiresolution point cloud histogram mode in lidar odometry and mapping for intelligent vehicles. IEEE/ASME Transactions on Mechatronics, 26(3), 1307–1317. https://doi.org/10.1109/TMECH.2021.3062647
- Ollero, A., Ferruz, J., Caballero, F., Hurtado, S., & Merino, L. (2004). Motion compensation and object detection for autonomous helicopter visual navigation in the COMETS system. Proc. of the IEEE International Conference on Robotics and Automation (ICRA ’04), New Orleans, LA. https://doi.org/10.1109/robot.2004.1307123
- Park, S., Wang, S., Lim, H., & Kang, U. (2019). Curved-voxel clustering for accurate segmentation of 3D lidar point clouds with real-time performance. Proc. of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China. https://doi.org/10.1109/IROS40897.2019.8968026
- Rusinkiewicz, S. M., & Levoy, M. (2001). Efficient variants of the ICP algorithm. Proc. of the Third International Conference on 3-D Digital Imaging and Modeling, Quebec City, QC, 145–152. https://doi.org/10.1109/IM.2001.924423
- Sabin, M., & Gray, R. (1986). Global convergence and empirical consistency of the generalized Lloyd algorithm. IEEE Transactions on Information Theory, 32(2), 148–155. https://doi.org/10.1109/TIT.1986.1057168
- Sayood, K. (2017). Introduction to data compression. Morgan Kaufmann. https://doi.org/10.1016/C2015-0-06248-7
- Shen, H., Zong, Q., Tian, B., & Lu, H. (2022). Voxel-based localization and mapping for multirobot system in GPS-denied environments. IEEE Transactions on Industrial Electronics, 69(10), 10333–10342. https://doi.org/10.1109/TIE.2022.3153822
- Tanil, Ç., Khanafseh, S., Joerger, M., & Pervan, B. (2018). An INS monitor to detect GNSS spoofers capable of tracking vehicle position. IEEE Transactions on Aerospace and Electronic Systems, 54(1), 131–143. http://doi.org/10.1109/TAES.2017.2739924
- Titterton, D., Weston, J. L., & Weston, J. (2004). Strapdown inertial navigation technology (Vol. 17). The Institute of Engineering and Technology (IET). https://doi.org/10.1049/PBRA017E