Analytical and Empirical Navigation Safety Evaluation of a Tightly Integrated Lidar/IMU Using Return-Light Intensity

  • NAVIGATION: Journal of the Institute of Navigation
  • December 2023,
  • 70
  • (4)
  • navi.623;
  • DOI: https://doi.org/10.33012/navi.623

Summary

This paper describes the design, analysis, and experimental evaluation of a new landmark-based localization method that integrates light detection and ranging (lidar) with an inertial measurement unit (IMU). We develop a tight IMU/lidar integration scheme that exploits the complementary properties of the two sensors to facilitate safety risk evaluation. Lidar localization updates limit the IMU error drift over time while IMU data improve lidar position and orientation (or pose) prediction, thereby reducing the risk of incorrectly associating perceived features with mapped landmarks. In addition, lidar return-light intensity measurements are incorporated to better distinguish landmarks and to further reduce the risk of incorrect associations. We analyze the integrity performance of the localization algorithm using an automated testbed that generates analytical and empirical pose estimation error distributions.

Keywords

1 INTRODUCTION

In this paper, we develop, analyze, and test new position and orientation (or pose) estimation and integrity monitoring methods using data from light detection and ranging (lidar) and an inertial measurement unit (IMU). Testing is performed in a mapped laboratory environment.

This research is intended for safety evaluation in autonomous vehicles such as automated driving systems (ADSs). ADS testing is necessary but insufficient to provide navigation safety guarantees because accumulating “autonomously driven” road miles does not provide a statistically significant number of traveled miles as compared with manned driving incidents (Kalra & Groves, 2017; Kalra & Paddock, 2016).

To quantify safety risks in ADS navigation, we leverage prior analytical work in global navigation satellite system (GNSS)-based aviation navigation, where safety is assessed in terms of integrity. Integrity is a measure of trust in sensor information (International Civil Aviation Organization, 2006). Several methods have been established to predict aircraft GNSS integrity risk (Radio Technical Commission for Aeronautics (RTCA) Special Committee 159, 1996; Working Group C, 2016). Unfortunately, these methods do not directly apply to ADS because ground vehicles operate under sky-obstructed areas, where GNSS signals can be altered or blocked by buildings and trees.

ADSs require sensors in addition to GNSS, such as IMUs, lidar, camera, and radar. This paper focuses on IMUs and lidar. The integration of lidar with an IMU improves pose prediction because the IMU can be used to coast between lidar pose updates and lidar updates can be used to calibrate IMU biases (Opromolla et al., 2016). Prior work includes tightly coupled implementations in which an IMU is used to determine the lidar tilt angle (Soloviev, 2008; Soloviev et al., 2007). In robotics, lidar-based localization is often achieved by odometry (e.g., from an IMU) and simultaneous localization and mapping (SLAM) (Bresson et al., 2015; Dryanovski et al., 2013; Guivant & Nebot, 2001; Guivant et al., 2000; He et al., 2018; Hess et al., 2016; Joerger et al., 2016; Leonard & Feder, 2000; Montemerlo & Thrun, 2003; Nerurkar & Roumeliotis, 2011; Zheng & Zhang, 2019). However, SLAM is insufficient in safety-critical ADS navigation applications because localization errors drift over distance and loop closures are trajectory-constraining.

In this paper, we assume that an a priori map is available. The first category of map-based lidar localization approaches includes matching and correlation methods. These methods aim to maximize data point correspondences between the lidar point cloud (PC) and the map, whether the map itself is a PC (Guo et al., 2014; Pomerleau et al., 2013; Sappa et al., 2001) or an occupancy grid map, i.e., a tessellated representation of the PC (Fan et al., 2018; Luo et al., 2020; Nuss et al., 2015). However, rigorous safety risk quantification via matching methods is an unsolved and cumbersome problem. Instead, this research uses a landmark-based localization method for which we have developed an integrity risk equation (Hassani et al., 2018; Joerger et al., 2017 2016; Joerger & Pervan, 2017). Landmark-based localization aims to identify landmarks in the lidar PC and to match them with mapped landmarks (Hunde & Ayalew, 2018; Pirovano et al., 2020; Vosselman & Dijkman, 2001).

Landmark-based localization requires two pre-estimator procedures (Hunde & Ayalew, 2018; Pirovano et al., 2020). First, feature extraction aims to identify the most consistently recognizable, viewpoint-invariant landmarks in the lidar PC. Then, data association matches the ordering of mapped landmarks to that of PC-extracted features over successive observations (Bailey, 2002; Bar-Shalom et al., 1990; Cooper, 2005; Joerger & Pervan, 2009). Incorrect association is a well-known algorithmic fault that can cause a loss of navigation integrity.

This paper builds upon the multiple-hypothesis extended Kalman filter (EKF) innovation-based data association method presented by Joerger et al. (2016). This method provides a means for evaluating the incorrect association risk of the matching process while considering all possible combinations and permutations of sensed landmarks to mapped landmarks. The incorrect association probability is then used to bound the integrity risk of lidar-based pose estimation over successive iterations. Joerger et al. (2016) and Joerger & Pervan (2017) showed that the incorrect association probability grows rapidly in cluttered environments. One approach to mitigate this problem is to select a subset of the most distinguishable features in the lidar PC (Joerger et al., 2017). However, this approach reduces the number of redundant associations and lowers the ability to detect unwanted, unmapped landmarks.

In response, in this paper, we enhance data association and integrity monitoring performance by tightly integrating lidar with an IMU and by incorporating lidar return-light intensity measurements. In addition, we design and implement an experimental testbed to evaluate the localization and data association performance of the IMU/lidar algorithm.

We first develop a tightly integrated IMU/lidar process specifically to quantify integrity risk. IMU integration can reduce integrity risk not only by improving pose prediction but also by lowering the risk of incorrect association. Then, we derive a new method to exploit return-light intensity measurements provided by lidar in addition to range and bearing angle observations. Light intensity measurements can improve the system’s ability to distinguish landmarks if their surfaces have different reflectivity properties. For example, lidar intensity can help identify an aluminum pole from a pedestrian.

Section 2 of this paper describes the tightly integrated EKF-based IMU/lidar algorithm. Nonlinear continuous-time dynamic-propagation and measurement equations are derived, linearized, and discretized. Section 3 presents a derivation of the multiple-hypothesis data association and integrity risk bounding methods, with a focus on the contributions of IMU and lidar intensity measurements. In Section 4, we perform direct simulations and a covariance analysis to evaluate the risk reduction brought about by the IMU. In Section 5, we experimentally quantify the reduction in integrity risk achieved when incorporating (a) IMU data, (b) lidar intensity measurements, and (c) both IMU data and lidar intensity.

2 HIGH-INTEGRITY IMU AND LIDAR MEASUREMENT MODELS

2.1 IMU Measurement Equations

In this implementation, we use raw IMU accelerometer and gyroscope measurements. The IMU is fixed in the ADS body frame “B,” which can be approximately oriented along the vehicle’s principal axes of inertia. The IMU accelerometers measure inertial acceleration, i.e., second time derivatives of position with respect to the inertial frame (labeled “I”), and we are interested in the ADS pose expressed in a navigation frame “N” (for example, in the local east, north, up directions). We also define an earth-centered, earth-fixed frame “E” because N may move in E. We use the Newton and Euler methods to describe the ADS translational and rotational motion. The following three equations express the time derivative of the ADS velocity with respect to N and expressed in N, the time derivative of the position with respect to E and expressed in N, and the time derivative of the rotation matrix from B to N, respectively (following equation [3.26] in Titterton et al. (2004)):

1 1

2 2

3 3

where:

Graphicis the 3×1 ground speed vector, i.e., the vehicle velocity vector with respect to E expressed in N,
Ad/dtis a time derivative with respect to frame “A”, where A may stand for B, E, I, or N,
Graphicis the 3×3 rotation matrix from B to N (Titterton et al., 2004),
Graphicis the 3×1 vehicle position expressed in N,
Graphicis the 3×1 IMU-measured and corrected specific force vector expressed in B, as derived in Appendix B,
NωIEis the angular velocity vector of E with respect to I expressed in N,
NωENis the angular velocity vector of N with respect to E expressed in N,
Graphicis the measured and corrected angular velocity vector of B with respect to I expressed in B, as derived in Appendix B,
Ngis the local gravity vector at the IMU axis center expressed in N (Rogers, 2007),
[a×]is the skew symmetric matrix of vector a.

2.2 Lidar Range and Bearing Angle Measurement Equations

A raw lidar PC consists of thousands of data points, each of which individually does not carry useful navigation information. Thus, raw measurements must be processed before they can be used for localization. Feature extraction aims to consistently extract identifiable static landmark features. Figure 1 shows an example of a lidar PC collected in our laboratory testbed. Colors from red to blue designate high to low levels of return-light intensity, respectively.

FIGURE 1

Lidar PC showing return-light intensity (color-coded from blue to red, from low to high intensity)

The experimental testbed in Figure 1 includes six static vertical cylinders with different surface properties, which are easy-to-distinguish landmarks that facilitate feature extraction. We want successful feature extraction because this process is not the primary focus of this paper. Feature extraction aims at finding the center of quasi-circular ellipses formed by the projection of the cylinders’ edges in the lidar zero-elevation angle plane. Figure 2 illustrates our two-step feature extraction algorithm.

  • Segmentation: Within an elevation cone, range differences over the azimuthangle sequence help distinguish cylinders from the background. Point clusters corresponding to cylinders can thus be segmented.

  • Model fitting and feature estimation: The segmented points are projected in the zero-elevation plane, and a two-dimensional (2D) circle-fitting algorithm is used to fit a circle to the projected points. This step assumes that the cylinders are vertical and that the elevation plane is horizontal, which is valid in our lab environment.

FIGURE 2

(a) 3D segmentation of a lidar PC; (b) circle fitting and point-feature measurement extraction

The center of the fitted circle, parameterized by its range and bearing angle with respect to the lidar, is the extracted point feature. Let id and ia, respectively, be the range and bearing angle measurements in B for the point feature of landmark ‘i’, where i = 1…nL and nL is the number of extracted features. The horizontal position of the extracted point feature is time-invariant in the navigation frame N, which, in this paper, is fixed in E. In addition, let ipE and ipN, respectively, be the east and north position coordinates of landmark “i” in N. The ADS position and orientation vectors in N, xADS and eADS, respectively, can be expressed as follows:

4 4

5 5

where xE, xN, xU are the three-dimensional (3D) ADS position coordinates along the east, north, up axes, and ϕ, θ, ψ are the ADS Euler angles. Euler angles can be extracted from the rotation matrix in Equation (3), as described in Appendix (B).

Using these notations, the nonlinear lidar range and angular measurements are respectively given as follows:

6 6

7 7

In Equations (6) and (7), vd and va are random feature measurement errors. Feature extraction error distributions are not Gaussian, but the error’s cumulative distribution function (CDF) can be overbounded by using zero-mean normal CDF models, as described in Appendix (C) (Blanch et al., 2019; DeCleene, 2000; Rife et al., 2006). Throughout the paper, the function “arctan(b/a)”, for Graphic and Graphic, designates a function that equals arctan(b/a) when a > 0, arctan(b/a) + π when a < 0, π / 2 when a = 0 and b > 0, and −π / 2 when a = 0 and b < 0.

We can stack the ranging and angular measurements for all extracted landmarks in a 2nL × 1 vector and write the lidar nonlinear measurement equation as follows:

8 8

9 9

10 10

where:

xis an ns × 1 state vector including ADS position, velocity, orientation, and 3D IMU gyroscope and accelerometer biases; i.e., ns = 15,
vis the 2nL × 1 feature measurement error vector.

Vector v is modeled as a vector of normally distributed random variables with zero mean and covariance matrix V. We use the following notation: v ~ N(0, V). Nonzero elements of the diagonal matrix V are derived in Appendix (C). In Equation (8), the vector function h(x) consists of stacked nonlinear equations (Equations (6) and (7)) arranged as indicated in Equation (9).

In addition, the lidar provides return intensity measurements for each PC point. We evaluate the mean intensity measurement for landmark “i” by averaging intensity values for all points in a point cluster associated with landmark i. The nL × 1 return-light intensity measurement vector is modeled as follows:

11 11

where we use the overbounding distribution derivation in Appendix D to model s as normally distributed with mean s and diagonal covariance matrix Vs. Vector vs is an nL × 1 intensity measurement error vector modeled as vs ~ N(0, Vs), as described in Appendix D.

2.3 Linearization and Discretization of IMU and

First, we linearize the IMU measurement equations. The continuous-time model is linearized by using a first-order Taylor series expansion about reference state parameter values. We use the notation “δ” to indicate deviations of state parameters relative to the reference values. Using Equations (1)–(3) and the accelerometer and gyroscope measurement equations in Appendix B, we can write a continuous-time linearized state propagation model as follows:

12 12

13 13

14 14

15 15

where bgy, and bac are bias vectors of the IMU accelerometers and gyroscopes, I is a 3 × 3 identity matrix, NωIN is the angular velocity vector of I with respect to N expressed in N (which can be defined as NωIN = NωIE + NωEN), and Graphic is the corrected specific force expressed in N. Matrices FH2V and FV2T are defined in Appendix A, and S, M, τ, n, and v are defined in Appendix B for both the accelerometers and gyroscopes.

The discrete-time form of Equation (12) can be written as follows:

16 16

where k is a discrete time step and Φk−1 is an ns × ns state transition matrix over the IMU sampling interval, i.e., between time steps k – 1 and k. The discrete-time IMU measurement equations and the method for computing Φk−1 are found in Appendix B. Then, we linearize the lidar measurement equations. We can linearize Equation (8) about our best prediction of the vehicle pose Graphic. Considering both the lidar angular and ranging measurements, the total number of extracted feature measurements is n = 2nL. Let Graphic be the n × 1 feature measurement vector in Equation (8). We use the overbounding distribution derivation in Appendix C to model Graphic as Graphic. Using a first-order Taylor series approximation, the linear measurement equation can be written in terms of the predicted state vector Graphic under the correct-association hypothesis at time step k as follows:

17 17

where the observation matrix Hk is a linearized measurement-to-state coefficient matrix. The linearized range and bearing angle measurement vectors and their measurement error vectors are denoted as δd, δa and vd, va, respectively. The coefficient matrices Fd,x, Fa,x, and Fa,e are determined by using the state prediction vector and assuming a correct association, as described in Appendix A. It is worth noting that the subscript k is used in both Equation (16) and Equation (17). However, Equation (17) is only relevant when lidar measurements are available, typically at regular 0.1-s intervals (for a 360° azimuth scan), whereas the IMU sampling interval is 10–20 times smaller.

3 INTEGRATED LIDAR/IMU ESTIMATION PROCESS

In this section, we use an EKF to tightly integrate the lidar and IMU, and then, we derive an analytical upper bound on the ADS pose integrity risk that accounts for incorrect associations. The block diagram in Figure 3 outlines the three main steps of this process, which are color-coded and described in detail in Sections 3.13.3. The inputs to the block diagram are the IMU and lidar measurements and the map; the outputs are the pose estimation and integrity risk bound. Section 3.1 describes the IMU-based prediction process and includes the EKF initialization. Section 3.2 presents the EKF measurement update requiring data association; its output feeds into the state propagation equation. Section 3.3 describes the integrity risk bounding process, which accounts for the impact of incorrect associations.

FIGURE 3

Block diagram of lidar/IMU integration (prediction, estimation, and integrity risk evaluation processes)

3.1 EKF Initialization and IMU-Based Pose Prediction Process

Figure 4 shows EKF initialization and prediction of the state vector xk with IMU measurements as input. This figure also shows the initialization of components of the integrity risk bound or probability of hazardous misleading information (HMI). The IMU specific force and angular velocity measurements are employed in two parallel processes. (a) Nonlinear state propagation: We use the discrete-time forms of the nonlinear expressions in Equations (1)–(3), the derivation of attitude using the rotation matrix from B to N, and the IMU measurements described in Appendix B to predict the state vector at each time step (Titterton et al., 2004). (b) Linearization, discretization, and covariance propagation: We apply Van Loan’s algorithm with the linearized form in Equation (12) to compute the discrete-time transition matrix Φk−1 and the process noise covariance matrix Wk−1. Let Graphic be the predicted state estimate and Graphic be the state prediction covariance matrix. The other parameters in Figure 4 are defined as follows:

Graphicis the ns × 1 initial predicted state estimate vector,
Graphicis the ns × ns initial EKF state prediction covariance matrix,
P(HMI0)is the initial value of the probability of HMI,
Bkis the 3 × 3 rotation matrix relating navigation axes at time k – 1 to navigation axes at time k, as given in Appendix B (Titterton et al., 2004),
Wkis the ns × ns process noise covariance at time step k,
Graphicis the ns × 1 predicted state estimate vector at time step k,
Graphicis the ns × ns prediction of the EKF covariance matrix at time step k,
gis the discrete-time form of Equations (1) and (2), defined in Appendix B.
FIGURE 4

Initialization and EKF prediction process with an IMU

We propagate the state vector xk by using the nonlinear expressions in Equations (1)–(3). After each iteration in the EKF dynamic propagation update, we assess whether lidar measurements are present for the current time step k. If such measurements are available, we implement a measurement update, as described in Section 3.2; otherwise, we continue iterating the dynamic propagation equations (Equations (1)–(3)).

3.2 Data Association Criterion and EKF Measurement Update

We want to process the linearized lidar measurement equation (Equation (17)) using the EKF measurement update to obtain a correction δxk, an estimate of the ADS state vector Graphic, and the covariance matrix Graphic. This process requires that lidar measurements be correctly associated with mapped landmarks because their ordering is not necessarily the same.

To perform data association, we use an innovation-based approach (Joerger et al., 2016). The innovation vector under correct association γ0,k is given by the following:

18 18

where A0,k is the n × n permutation matrix that corresponds to the correct association. In practice, we do not know which is the correct permutation matrix A0,k, but we can write an exhaustive set of permutation matrices.

The innovation vector can be interpreted as a measure of consistency between the extracted feature measurements Graphic and the measurement prediction vector Graphic. A more accurate state prediction corresponds to a higher likelihood of correct association. The state prediction is improved, for example, by using IMU data instead of a vehicle kinematic model.

3.2.1 Accounting for Incorrect Data Association

Extracted landmark feature measurements are arbitrarily ordered in vector Graphic. In this paper, the number of measured landmarks in the lidar field of view (FOV) can be predicted by using a reliable map and vehicle pose prediction. In the case of occlusion, if two landmarks are in the same azimuth bin, then only the landmark nearest to the lidar is visible. Other cases such as failed extractions or extracted-but-unmapped landmarks have been addressed in prior work via combination matrices and detection (Hassani et al., 2019; Joerger et al., 2017). This paper focuses on the incorporation of IMU and intensity measurements. Let nL be the number of extracted landmarks that are visible in the lidar FOV. There are (nL!) potential ways for assigning the observed landmarks to the mapped landmarks, which is the number of all possible landmark permutations.

Incorrect association occurs when the ordering of measured landmarks differs from that of mapped landmarks. There is only one correct ordering; thus, the number of incorrect associations is hIAnL! −1. For risk evaluation, we consider all possible orderings of measurements, Graphic and Graphic where i = 0…hIA. In an example scenario with nL = 3 landmarks (both extracted and mapped), the numbers of possible landmark permutations and incorrect associations are 3! and hIA = 5.

The innovation vector γi,k has a zero mean only under correct association. Any other (incorrect) association causes the mean of the innovation vector to be non-zero. Thus, the innovation vector is a good indicator of incorrect association. The innovation vector can be expressed as follows:

19 19

where Ai,k are n × n permutation matrices for i = 0…hIA.

Based on this criterion, we select the candidate association that satisfies the following equation:

20 20

where:

Formula

Figure 5 shows a detailed description of the second block in Figure 3. Lidar measurements, map data, predicted states, and the covariance matrix serve as inputs to the data association process in Equation (20). Then, we proceed to the EKF measurement update, calculate the Kalman gain Kk, and determine the ns × 1 state estimate vector Graphic and the ns × ns estimation error covariance matrix Graphic. The state prediction vector Graphic in Equations (18) and (19) is more accurate when a tightly integrated IMU is used as compared with an ADS kinematic model, which ultimately reduces the risk of incorrect association.

FIGURE 5

Data association and EKF measurement update with lidar

3.2.2 Integration of Lidar Return-Light Intensity to Improve Data Association

The association process can be further improved by using lidar return-light intensity. The difference between the lidar-extracted mean intensity measurements and that provided in the map, which is captured in the intensity-separation vector ξi,k, can be expressed as follows:

21 21

where:

skis an nL × 1 mean return-light intensity vector for all nL visible landmarks,
ŝkis the nL × 1 lidar-measured mean return-light intensity vector; we assume ŝk ~ N(sk, VS,k),
Graphicis the mapped mean return-light intensity vector,
Graphicis an nL × nL covariance matrix capturing the uncertainty in the mean intensity values of the mapped landmarks,
Graphicare nL × nL permutation matrices similar to those in Equation (19) but for scalar permutations, for i = 0…hIA.

Similar to the innovation vector in Equation (18), the intensity separation vector in Equation (21) has a zero mean only if the correct association is found. Landmark intensity parameters are not included in the EKF prediction and estimation processes because they do not provide direct information on ADS states. However, we can still improve the association criterion by augmenting the innovation vector with ξi,k. The resulting 3nL × 1 “separation vector” is defined as follows: Graphic. The association selection criterion for incorporating the return-light intensity is given by the following:

22 22

where:

Formula

3.3 Integrity Risk Bound

The integrity risk P(HMIk), or probability of HMI at time step k, is the probability of the ADS being outside of a specified alert limit box when the vehicle position is estimated to be inside this box (Joerger & Pervan, 2009; Reid et al., 2019). In ADS lane-centering applications, lateral deviations are of primary concern, and the alert limit is defined as the distance between the edge of the car and the edge of the lane when the car is at the center of the lane (Joerger et al., 2016). An analytical bound on the integrity risk that considers all possible incorrect associations has been given by Joerger & Pervan (2019) and is expressed as follows:

23 23

with:

24 24

25 25

where:

Kdesignates a range of time indices: K = {1…k},
Jdesignates a range of time indices: J = {1…j},
Q()is the tail probability function of the standard normal distribution,
is the specified alert limit that defines a hazardous situation,
σkis the standard deviation of the estimation error for the vehicle state of interest,
IALLOC,kis a predefined integrity risk allocation for feature extraction, chosen to be a fraction of the overall integrity risk requirement IREQ,k,
Graphicis a chi-square distributed random variable with a number of degrees of freedom that is the sum of the number of measurements and of states at time step j,
Graphicrepresents the minimum value of the mean landmark feature separation, including intensity separation, at time step j.

The probability of correct association in Equation (25) is a function of Graphic, which defines a probabilistic lower bound on the true value of ζk in Equation (22). This lower bound on landmark separation is set such that the risk of the true value of ζk being smaller than Graphic does not exceed IALLOC,k.

By integrating lidar with an IMU, we can reduce positioning errors, thereby lowering the risk P(HMIk | CAK). In addition, IMU and return-light intensity measurements are instrumental for increasing the ability of the localization system to distinguish landmarks. In Equations (23)–(25), IMU and return-light measurements enable greater separation Graphic values, which increases the probability of correct association P(CAj | CAJ−1) and ultimately reduces P(HMIk). We will quantify this P(HMIk) reduction using simulation and experimental data in the next two sections. Equations (23)–(25) are represented by the “Integrity Risk Evaluation” block in Figure 3, and the output is P(HMIk).

4 INTEGRITY RISK EVALUATION USING SIMULATED DATA

In this section, we analyze the integrity performance of a “lidar-only” scheme compared with the IMU/lidar scheme described in Section 3. In this 2D horizontal simulation, an ADS roves between two landmarks located 10 m apart. The initial pose of the ADS is known, and it is assumed that we have a map of landmark positions in the navigation frame N. The surface reflectivity is identical for all landmarks, and intensity measurements are not used in this first evaluation. The simulation settings and lidar and IMU parameters are listed in Table 1.

View this table:
TABLE 1

Lidar and IMU Simulation Settings

4.1 Covariance Analysis and Integrity Risk Bound (Analytical vs. Direct Simulation)

In Figure 6, the positions of landmarks are represented by black circles, and the ADS trajectory is shown by black triangles. The 2D estimation error covariance ellipses, which represent the spread of pose estimation error, are shown in solid and dashed red lines and are inflated by a factor of 50 for better visualization. The size and shape of the covariance ellipses change as the sensor-to-landmark geometry changes because of ADS motion. The relative lengths of the semimajor and semiminor axes are also related to the standard deviations of the lidar angular and ranging measurements, as explained by Joerger (2009). This figure also shows that the integration of IMU with lidar improves the ADS trajectory estimation.

FIGURE 6

ADS positioning error covariance ellipses obtained by using lidar-only and IMU/lidar schemes for a two-landmark scenario

In Figure 7, we evaluate the analytical integrity risk bound P(HMIk) as compared with the actual integrity risk calculated by direct simulation over 50,000 Monte Carlo (MC) trials for the lidar-only and IMU/lidar schemes. We focus on lateral deviations for integrity risk evaluation; the lateral alert limit is defined in Table 1. As captured in Equation (23), P(HMIk) accounts not only for lateral covariance variations, but also for the probability of correct association P(CAk).

FIGURE 7

Lateral positioning integrity risk when using the lidar-only and IMU/lidar schemes for a two-landmark scenario: analytical bound versus MC simulations over 50,000 trials

In Figure 7, black and red circles represent the integrity risk curves obtained by direct simulation. In parallel, the black and red solid lines are the analytical bounds computed from Equation (23). Both direct simulation and analytical bounds for the IMU/lidar scheme are orders of magnitudes lower than that for the lidar-only scheme at 10–20 m of travel distance when the vehicle is close to the landmarks. Simultaneously using the IMU improves the pose estimation and data association, which results in a reduced integrity risk. The direct simulation and analytical bounds for the lidar-only scheme overlap. Discrepancies occur for low risk values: these discrepancies would disappear if we simulated more than 50,000 trials, but our computational resources are finite.

5 EXPERIMENTAL INTEGRITY RISK EVALUATION

In this section, we quantify navigation integrity for the multi-sensor IMU/lidar system described in Sections 2 and 3. We consider three configurations: lidar-only, lidar+ (incorporating mean intensity measurements with lidar range and bearing angle), and IMU/lidar+ (using all available sensor information). When using lidar only, state prediction Graphic is obtained by using a coarse kinematic model to replace Equation (12). This model propagates ADS states assuming a constant velocity vector between lidar measurement updates. This approach can be inaccurate for rapid ADS dynamics. When using the IMU, we apply Equation (12) to improve state prediction Graphic and also to enhance data association. We performed an experimental test to quantify the risk reductions brought about by incorporating lidar intensity and IMU measurements as compared with the lidar-only approach. An automated sensor platform was moving on a figure-eight track next to a predefined set of landmarks, some of which were occluded over segments of the trajectory. Landmark occlusions can cause an increased risk of incorrect association.

Additional testing results and performance comparisons for different sensor combinations have been reported by Hassani et al. (2019). In this paper, we focus on using lidar with intensity measurements and IMU data. Table 2 lists the parameters and settings of the test. In Figure 10, we use four landmarks, each identified by a number ranging from 1 to 4. The surface properties of the landmarks are not all the same: we use cylinders with a retroreflective surface for Landmark 1, black surfaces for Landmarks 2 and 4, and a white surface for Landmark 3.

View this table:
TABLE 2

Lidar and IMU Parameters and Test Settings

5.1 Experimental Testbed

We designed and built an automated sensor safety evaluation testbed to quantify the impact of incorrect association on the integrity risk P(HMIk). The testbed shown in Figure 8 is composed of a rover housing a sensor platform on a figure-eight track. The rover can operate unattended for many hours to collect large amounts of lidar and IMU data. This testbed provides a means for analyzing the performance of a navigation system over repeated trajectories, over a significant number of such trajectories, and in a controlled environment in which we can focus, for example, on the integrity impacts of landmark occlusions and of landmarks with varying surface properties. Compared with the approach using only lidar range and angle measurements, this approach helps assess the relative performance improvement brought about by additional IMU and light-intensity data. Other experiments using sensors mounted on a car’s roof rack will be performed in future work.

FIGURE 8

Automated testbed setup with a sensor platform repeatedly moving on a figure-eight track

FIGURE 9

(a) IR camera, (b) IR markers on a sensor platform, (c) lidar-VLP-16 Puck, (d) IMU-IGM-A1

In this experiment, cardboard cylinders serve as landmarks to facilitate feature extraction from the lidar PC. These cylinders are covered with white and black felt and retroreflective straps (Landmark 1, first cylinder from the left) to provide different surface reflectivities. As the rover moves, the leftmost landmark (Landmark 1) is periodically occluded behind another landmark (Landmark 2), which tests the ability of the data association process to dynamically distinguish landmarks.

The sensor platform mounted on the rover includes the lidar and IMU stacked vertically to minimize lever arm calibration errors. We used Velodyne’s VLP-16 Puck LTE lidar and NovAtel’s IMU-IGM-A1 coupled with NovAtel’s ProPak6. The IMU was set to record at a 100-Hz sampling rate. Additionally, an infrared (IR) camera motion capture system (VICON) provides reference truth values for the position and orientation of the moving platform and for the static landmarks in the navigation frame. Twelve cameras, i.e., four VICON MX-T20 cameras and eight Vantage 5 cameras, record small retroreflective markers placed on the sensors and landmarks, providing sub-centimeter-level positioning. All three sensors (IR cameras, lidar, and IMU) are time-tagged using the same computer clock.

5.2 Using Lidar Range, Bearing, Intensity, and IMU Measurements

The integrated solution using IMU, lidar range, angle, and intensity measurements is referred to as the IMU/lidar+ configuration. In Figure 10, four landmarks are used. The lidar range limit is such that all landmarks are continuously in view of the lidar except where Landmark 2 occludes Landmark 1. The estimated trajectory is represented by a blue line and the true trajectory by a black line. The estimated and true trajectories overlap. The black arrow shows the direction of motion at the starting point. Background colors help identify segments of the rover trajectory for results presented over time: the rover follows straight line paths in the dark gray area, is in the top loop when in the white area, and is in the bottom loop when in the light gray area. The purple area represents ADS locations at which Landmark 1 is occluded by Landmark 2 in the upper loop of the figure-eight track. This arrangement makes data association challenging because Landmarks 1 and 2 can be mistaken for one another when Landmark 1 comes in and out of sight.

FIGURE 10

Covariance ellipses obtained by using the IMU/lidar scheme with intensity measurements

Figure 10 also shows red covariance ellipses representing the 2D positioning uncertainty for ADS locations taken at regular 0.8-s intervals. Covariance ellipses are inflated by a factor of five to facilitate visualization. Ellipses grow when Landmark 1 is hidden (purple area).

In addition, we derived a bound on the risk of the cross-track positioning error exceeding an example alert limit of 0.35 m (Reid et al., 2019). This integrity risk bound is predictable. The event of the risk bound exceeding the risk requirement causes a loss of availability. Thus, in this paper, we want the integrity risk bound to be as low as possible to achieve high availability performance. Both the actual integrity risk itself and our ability to analytically upper-bound this risk determine the value of the predicted risk bound. Thus, as compared with using lidar ranges, additional information from the IMU and lidar intensity not only helps reduce the actual P(HMI) but also helps tighten the predicted P(HMI) bound. Figure 11 shows P(HMI) curves for the lidar-only, lidar+, and IMU/lidar+ schemes. We find the highest risk bound values in the purple regions because the occlusion of Landmark 1 causes relatively poor landmark geometry. The lidar-only approach performs poorly, with a P(HMI) bound approaching a value of 1 as soon as the first difficult-to-identify landmark geometry is encountered. The lidar+ approach is consistently better except when Landmark 1 is occluded because fewer measurements are available and the risk of incorrect association increases. Finally, as expected, the IMU/lidar+ approach outperforms the other configurations, and our tests show that the resulting P(HMI) bound is at least four orders of magnitude lower than those of the other cases.

FIGURE 11

Integrity risk bounds obtained by using the lidar-only scheme versus the lidar+ and IMU/lidar+ schemes

5.3 Repeated Trajectories

Figure 12 shows the ADS cross-track positioning error and covariance envelopes over 100 laps. The figure-eight track helps evaluate navigation system performance repeatability. The “1σ” one-dimensional envelope represents the boundary within which 68% of the error samples are expected to occur, assuming a zero-mean error.

FIGURE 12 Cross-track error of ADS for 100 laps

The zoomed-in panel shows the IMU-derived positioning drift corrected via lidar updates at regular 0.1-s intervals.

The solid red line in Figure 12 is the analytical covariance envelope. The analytical covariance determines the contribution of P(HMIk | CAK) to P(HMIk) in Equation (23). The dashed red line is the sample covariance envelope, which is smaller than the analytical envelope over the entire 20-s-long trajectory; we want the analytical error bound to be larger than the sample envelope. Cross-track positioning error curves are color-coded from light blue to dark blue as the rover travels from the first to the last lap. Small discrepancies can be observed between early and late laps (e.g., accentuated at the 12- to 14-s time points), which are due to imperfections in the testbed, including a warm-up period causing variations in vehicle speed and sensor performance (Reina & Gonzales, 1997; Ye & Borenstein, 2002). Overall, we find that the pose estimation error curves are conservatively captured by the analytical covariance envelope.

6 CONCLUSIONS

In this paper, we derived a new IMU/lidar integration method that enables integrity risk evaluation while accounting for all possible incorrect associations between observed and mapped landmarks. The IMU improves the state prediction and reduces incorrect association risks. Our method also incorporates lidar return-light intensity measurements with lidar range and bearing data to better distinguish landmarks, which also results in a quantifiable reduction in incorrect association risk. We implemented a new analytical method to quantify the improvement in the probability of correct association. In addition, we evaluated the proposed integrity risk bound using empirical data in a structured, well-understood environment. Compared with the lidar-only approach in this specific testing environment, the performance assessment demonstrated a reduction in integrity risk of several orders of magnitude when the IMU and lidar intensity are used. Future work includes testing these methods in more realistic, unstructured environments.

HOW TO CITE THIS ARTICLE

Hassani, A., & Joerger, M. (2023). Analytical and empirical navigation safety evaluation of a tightly integrated lidar/IMU using return-light intensity. NAVIGATION, 70(4). https://doi.org/10.33012/navi.623

CONFLICT OF INTEREST

The authors declare no potential conflicts of interest.

ACKNOWLEDGMENTS

The authors gratefully acknowledge the National Science Foundation for supporting this research (NSF award CMMI-1637899). The opinions expressed in this paper do not necessarily represent those of any other organization or person.

APPENDIX

A IMU AND LIDAR MEASUREMENTS AND COEFFICIENTS

The IMU measurement coefficient matrices in Equation (14) are defined as follows (Titterton et al., 2004):

A.1 A.1

A.2 A.2

where:

Ris the earth’s radius,
his the vehicle’s altitude,
λis the vehicle’s latitude,
g0is the acceleration of gravity at zero altitude.

The lidar measurement coefficient matrices in Equation (17) are as follows (Joerger, 2009):

A.3 A.3

A.4 A.4

A.5 A.5

where Graphic and Graphic.

B DISCRETE-TIME EQUATIONS FOR THE IMU

The ADS specific force is measured with respect to the inertial frame “I” and expressed in body frame “B” as Bf. The specific force measurement is imperfect: it can be modeled in the continuous-time domain as follows:

B.1 B.1

where:

Bfis the 3×1 true specific force vector of body B with respect to I expressed in body frame B,
Graphicis the measured specific force vector of body B with respect to I expressed in B,
Sac, Macare the true accelerometer calibration scale factor and misalignment matrices in B,
bacis the accelerometer time-varying bias vector in B,
vacis accelerometer measurement white noise error component expressed in B.

In Equation (B.1), the measured specific force Graphic is expressed in terms of the scale factor and misalignment matrices for which manufacturers provide estimates Ŝac and Graphic, respectively. The symbol Graphic in Ŝac designates the estimate of parameter S.

The accelerometer time-varying bias is modeled as a first-order Gauss–Markov process (GMP) (Brown & Hwang, 2012). We can write the corrected specific force Graphic and the continuous-time dynamics of the time-varying bias as follows:

B.2 B.2

B.3 B.3

where:

τacis the GMP time constant,
nacis a 3×1 vector of GMP time-uncorrelated driving noise.

The discrete-time forms of Equations (B.1)–(B.3) can be written as follows:

B.4 B.4

B.5 B.5

B.6 B.6

where:

tsis the IMU sampling interval.

The gyroscope measures the body frame angular velocity with respect to the inertial frame and can be expressed in the body frame as Graphic (Titterton et al., 2004). We can derive equations similar to Equations (B.2)–(B.6) for gyroscope measurements. These equations have been reported by Hassani et al. (2019).

Assuming that the IMU corrected specific force Graphic and angular velocity Graphic remain constant over the short IMU sampling interval ts, between time steps k – 1 and k, we can write the discrete-time form of Equations (1)–(3) and the attitude equations as follows:

B.7 B.7

B.8 B.8

B.9 B.9

B.10 B.10

where:

Formula

The notation Graphic in Equation (B.10) designates the (i, j)-th scalar component of matrix Graphic, i.e., the component in the i-th row and j-th column.

Finally, we use the Van Loan algorithm to determine the discrete-time state propagation Φk−1 and process noise covariance matrices Wk−1 based on the continuous-time matrices F and spectral density function of δw, defined as Q (Brown & Hwang, 2012).

C OVERBOUNDING OF MEASUREMENT ERROR DISTRIBUTIONS

This appendix describes a method for deriving probabilistic models of the extracted feature measurements. This method is based on the overbounding theory described by DeCleene (2000). Overbounding theory is used in aviation navigation to model non-Gaussian sample distributions, even when they are not symmetric, unimodal, or zero-mean (Blanch et al., 2019; Rife et al., 2006). We collected lidar PC data for 4,250 sensor-to-landmark geometries, processed them using our feature extractor, and stored the estimated point-feature range and bearing angle measurements. Figure C1 shows the CDF for the range and bearing angle measurement error in quantile-to-quantile plots. The x-axis scales with theoretical standard normal distribution quantiles. The y-axis scales with the sample measurement error distribution quantiles. If the empirical measurement error distribution were normal, the sample points would lie on a straight line with a slope equal to the sample standard deviation and with the y-intercept equal to the sample mean. Figure C1 shows that the core of the distribution behaves like a normal distribution within ±2σ on the x-axis, i.e., 95% of the time. However, the sample distributions have wide tails. The black lines in Figure C1 are overbounding Gaussian functions, which account for errors occurring 99.5% of the time, i.e., out to “3σ.” The bounding standard deviations are 0.12 m for the range measurement error (versus 0.03 m for the sample standard deviation) and 2° for the bearing angle measurement error (versus 1° for the sample standard deviation). Thus, Gaussian overbounds are conservative compared with sample measurement error distributions, which will impact the pose estimation error distribution.

FIGURE C1

Quantile-to-quantile plots of the lidar PC’s extracted feature error distribution and Gaussian overbounding model for the (a) range measurement and (b) bearing angle measurement (4,250 data points)

D LIDAR INTENSITY MEASUREMENT DISTRIBUTION

In this appendix, we follow the same methodology as in Appendix C to study the lidar intensity measurement error model. The return-light intensity is a function of the object’s surface property and the light beam’s incidence angle. The incidence angle is defined as the angle between the normal vector to the surface and the laser beam. Figure D1 shows a quantile-to-quantile plot of 17,500 intensity samples of a retroreflective object at a 70° incidence angle. This example is selected to illustrate the fact that overbounding theory can be used to define a Gaussian error model for discrete measurements of intensity.

FIGURE D1

Intensity measurement error distribution and Gaussian overbounding model for a retroreflective surface with a 70° incidence angle (17,500 data points)

Figure D2 shows the mean values (thick lines) and standard deviations (thin lines, solid for 1σ envelopes, dashed for 3σ envelopes) of intensity measurement overbounding functions for three different surfaces at three incidence angles. The mean values decrease with increasing incidence angle.

FIGURE D2

Mean and overbounding standard deviations (±1σ and ±3σ) of intensity measurements for black, white, and retroreflective surfaces at 0°, 30°, and 70° incidence angles (total of 166,334 data points, approximately 18,480 data points per configuration over the nine configurations)

This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

REFERENCES

  1. Bailey, T. (2002). Mobile robot localisation and mapping in extensive outdoor environments [Doctoral dissertation, University of Sydney]. https://www.personal.acfr.usyd.edu.au/tbailey/techreports/phdthesis.htm
  2. Bar-Shalom, Y., Fortmann, T. E., & Cable, P. G. (1990). Tracking and data association. The Journal of the Acoustical Society of America, 87(2), 918919. https://www.doi.org/10.1121/1.398863
  3. Blanch, J., Walter, T., & Enge, P. (2019). Gaussian bounds of sample distributions for integrity analysis. IEEE Transactions on Aerospace and Electronic Systems, 55(4), 18061815. https://www.doi.org/10.1109/TAES.2018.2876583
  4. Bresson, G., Féraud, T., Aufrère, R., Checchin, P., & Chapuis, R. (2015). Real-time monocular SLAM with low memory requirements. IEEE Transactions on Intelligent Transportation Systems, 16(4), 18271839. https://www.doi.org/10.1109/TITS.2014.2376780
  5. Brown, R. G., & Hwang, P. Y. C. (2012). Introduction to random signals and applied Kalman filtering: With MATLAB exercises (4th ed.). John Wiley. https://www.wiley.com/en-us/Introduction+to+Random+Signals+and+Applied+Kalman+Filtering+with+Matlab+Exercises%2C+4th+Edition-p-9780470609699
  6. Cooper, A. J. (2005). A comparison of data association techniques for simultaneous localization and mapping [Doctoral dissertation, Massachusetts Institute of Technology]. https://dspace.mit.edu/handle/1721.1/32438?show=full
  7. DeCleene, B. (2000). Defining pseudorange integrity - overbounding. Proc. of the 13th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GPS 2000), Salt Lake City, UT, 19161924. https://www.ion.org/publications/abstract.cfm?articleID=1603
  8. Dryanovski, I., Valenti, R. G., & Xiao, J. (2013). Fast visual odometry and mapping from RGB-D data. Proc. of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 23052310. https://www.doi.org/10.1109/ICRA.2013.6630889
  9. Fan, H., Kucner, T. P., Magnusson, M., Li, T., & Lilienthal, A. J. (2018). A dual PHD filter for effective occupancy filtering in a highly dynamic environment. IEEE Transactions on Intelligent Transportation Systems, 19(9), 29772993. https://www.doi.org/10.1109/TITS.2017.2770152
  10. Guivant, J., & Nebot, E. (2001). Optimization of the simultaneous localization and map-building algorithm for real-time implementation. IEEE Transactions on Robotics and Automation, 17(3), 242257. https://www.doi.org/10.1109/70.938382
  11. Guivant, J., Nebot, E., & Baiker, S. (2000). Localization and map building using laser range sensors in outdoor applications. Journal of Robotic Systems, 17(10), 565583. https://www.doi.org/10.1002/1097-4563(200010)17:10<565::AID-ROB4>3.0.CO;2-6
  12. Guo, Y., Sohel, F., Bennamoun, M., Wan, J., & Lu, M. (2014). An accurate and robust range image registration algorithm for 3D object modeling. IEEE Transactions on Multimedia, 16(5), 13771390. https://www.doi.org/10.1109/TMM.2014.2316145
  13. Hassani, A., Joerger, M., Arana, G. D., & Spenko, M. (2018). Lidar data association risk reduction using tight integration with INS. Proc. of the 31st International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2018), Miami, FL, 24672483. https://www.doi.org/10.33012/2018.15976
  14. Hassani, A., Morris, N., Spenko, M., & Joerger, M. (2019). Experimental integrity evaluation of tightly-integrated IMU/lidar including return-light intensity data. Proc. of the 32nd International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2019), Miami, FL, 26372658. https://www.doi.org/10.33012/2019.17095
  15. He, H., Wang, K., & Sun, L. (2018). A SLAM algorithm of fused EKF and particle filter. Proc. of the 2018 WRC Symposium on Advanced Robotics and Automation (WRC SARA), Beijing, China, 172177. https://www.doi.org/10.1109/WRC-SARA.2018.8584219
  16. Hess, W., Kohler, D., Rapp, H., & Andor, D. (2016). Real-time loop closure in 2D lidar SLAM. Proc. of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 12711278. https://www.doi.org/10.1109/ICRA.2016.7487258
  17. Hunde, A., & Ayalew, B. (2018). Automated multi-target tracking in public traffic in the presence of data association uncertainty. Proc. of the 2018 Annual American Control Conference (ACC), Milwaukee, WI, 300306. https://www.doi.org/10.23919/ACC.2018.8431852
  18. International Civil Aviation Organization. (2006). Annex 10 to the convention on international civil aviation aeronautical telecommunications, radio navigation aids (6th ed., Vol. 1). https://store.icao.int/en/annex-10-aeronautical-telecommunications-volume-i-radio-navigational-aids
  19. Joerger, M. (2009). Carrier phase GPS augmentation using laser scanners and using low earth orbiting satellites [Doctoral dissertation, Illinois Institute of Technology]. https://www.proquest.com/dissertations-theses/carrier-phase-gps-augmentation-using-laser/docview/304898652/se-2
  20. Joerger, M., Arana, G. D., Spenko, M., & Pervan, B. (2017). Landmark data selection and unmapped obstacle detection in lidar-based navigation. Proc. of the 30th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2017), Portland, OR, 18861903. https://www.doi.org/10.33012/2017.15406
  21. Joerger, M., Jamoom, M., Spenko, M., & Pervan, B. (2016). Integrity of laser-based feature extraction and data association. Proc. of the 2016 IEEE/ION Position, Location and Navigation Symposium (PLANS), Savannah, GA, 557571. https://www.doi.org/10.1109/PLANS.2016.7479746
  22. Joerger, M., & Pervan, B. (2009). Measurement-level integration of carrier-phase GPS and laser-scanner for outdoor ground vehicle navigation. Journal of Dynamic Systems, Measurement, and Control, 131(2), 021004. https://www.doi.org/10.1115/1.3072122
  23. Joerger, M., & Pervan, B. (2017). Continuity risk of feature extraction for laser-based navigation. Proc. of the of the 2017 International Technical Meeting of the Institute of Navigation, Monterey, CA, 839855. https://www.ion.org/publications/abstract.cfm?articleID=14899
  24. Joerger, M., & Pervan, B. (2019). Quantifying safety of laser-based navigation. IEEE Transactions on Aerospace and Electronic Systems, 55(1), 273288. https://www.doi.org/10.1109/TAES.2018.2850381
  25. Kalra, N., & Groves, D. G. (2017). The enemy of good: Estimating the cost of waiting for nearly perfect automated vehicles. RAND Corporation. https://www.doi.org/10.7249/RR2150
  26. Kalra, N., & Paddock, S. M. (2016). Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability? Transportation Research Part A: Policy and Practice, 94, 182193. https://doi.org/10.1016/j.tra.2016.09.010
  27. Leonard, J. J., & Feder, H. J. S. (2000). A computationally efficient method for large-scale concurrent mapping and localization. Robotics Research, 169176. Springer. https://www.doi.org/10.1007/978-1-4471-0765-1_21
  28. Luo, Z., Mohrenschildt, M. V., & Habibi, S. (2020). A probability occupancy grid based approach for real-time lidar ground segmentation. IEEE Transactions on Intelligent Transportation Systems, 21(3), 9981010. https://www.doi.org/10.1109/TITS.2019.2900548
  29. Montemerlo, M., & Thrun, S. (2003). Simultaneous localization and mapping with unknown data association using fastSLAM. Proc. of the 2003 IEEE International Conference on Robotics and Automation (ICRA), Taipei, Taiwan, 19851991. https://www.doi.org/10.1109/ROBOT.2003.1241885
  30. Nerurkar, E. D., & Roumeliotis, S. I. (2011). Power-SLAM: A linear-complexity, anytime algorithm for SLAM. The International Journal of Robotics Research, 30(6), 772788. https://www.doi.org/10.1177/0278364910390539
  31. Nuss, D., Yuan, T., Krehl, G., Stuebler, M., Reuter, S., & Dietmayer, K. (2015). Fusion of laser and radar sensor data with a sequential monte carlo bayesian occupancy filter. Proc. of the 2015 IEEE Intelligent Vehicles Symposium (IV), Seoul, Korea (South), 10741081. https://www.doi.org/10.1109/IVS.2015.7225827
  32. Opromolla, R., Fasano, G., Rufino, G., Grassi, M., & Savvaris, A. (2016). Lidar-inertial integration for UAV localization and mapping in complex environments. Proc. of the 2016 International Conference on Unmanned Aircraft Systems (ICUAS), Arlington, VA, 649656. https://www.doi.org/10.1109/ICUAS.2016.7502580
  33. Pirovano, L., Principe, G., & Armellin, R. (2020). Data association and uncertainty pruning for tracks determined on short arcs. Celestial Mechanics and Dynamical Astronomy, 132(1), 123. https://doi.org/10.1007/s10569-019-9947-8
  34. Pomerleau, F., Colas, F., Siegwart, R. Y., & Magnenat, S. (2013). Comparing ICP variants on real-world data sets. Autonomous Robots, 34, 133148. https://www.doi.org/10.1007/s10514-013-9327-2
  35. Radio Technical Commission for Aeronautics (RTCA) Special Committee 159. (1996). Minimum Operational Performance Standards for Global Positioning System/Wide Area Augmentation System Airborne Equipment. RTCA. https://books.google.com/books?id=nXCRQQAACAAJ
  36. Reid, T. G., Houts, S. E., Cammarata, R., Mills, G., Agarwal, S., Vora, A., & Pandey, G. (2019). Localization requirements for autonomous vehicles. arXiv preprint arXiv:1906.01061. https://www.doi.org/10.4271/12-02-03-0012
  37. Reina, A., & Gonzales, J. (1997). Characterization of a radial laser scanner for mobile robot navigation. Proc. of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications (IROS), Grenoble, France, 579585. https://www.doi.org/10.1109/IROS.1997.655070
  38. Rife, J., Pullen, S., Enge, P., & Pervan, B. (2006). Paired overbounding for nonideal LAAS and WAAS error distributions. IEEE Transactions on Aerospace and Electronic Systems, 42(4), 13861395. https://doi.org/10.1109/TAES.2006.314579
  39. Rogers, R. M. (2007). Applied mathematics in integrated navigation systems (3rd ed., Vol. 1). American Institute of Aeronautics and Astronautics (AIAA). https://www.doi.org/10.2514/4.861598
  40. Sappa, A., Restrepo-Specht, A., & Devy, M. (2001). Range image registration by using an edge-based representation. Proc. of the 9th International Symposium on Intelligent Robotic Systems (SIRS), Toulouse, France, Vol. 7, 167176. https://www.researchgate.net/publication/247330589_Range_Image_Registration_by_using_an_Edge-Based_Representation
  41. Soloviev, A. (2008). Tight coupling of GPS, laser scanner, and inertial measurements for navigation in urban environments. Proc. of the 2008 IEEE/ION Position, Location and Navigation Symposium (PLANS), Monterey, CA, 511525. https://www.doi.org/10.1109/PLANS.2008.4570059
  42. Soloviev, A., Bates, D., & Graas, F. V. (2007). Tight coupling of laser scanner and inertial measurements for a fully autonomous relative navigation solution. NAVIGATION, 54(3), 189205. https://www.doi.org/10.1002/j.2161-4296.2007.tb00404.x
  43. Titterton, D., Weston, J. L., & Weston, J. (2004). Strapdown inertial navigation technology (2nd ed., Vol. 17). Institution of Electrical Engineers. OCLC: 64573284 https://doi.org/10.1049/PBRA017E
  44. Vosselman, G., & Dijkman, S. (2001). 3D building model reconstruction from point clouds and ground plans. Proc. of the International Society for Photogrammetry and Remote Sensing Workshop (ISPRS), Annapolis, MD, 3743. http://www.isprs.org/proceedings/XXXIV/3-W4/pdf/Vosselman.pdf
  45. Working Group C. (2016). ARAIM Technical Subgroup, Milestone 3 Report (Tech. Rep.). EU-US Cooperation on Satellite Navigation. https://www.gps.gov/policy/cooperation/europe/2016/working-group-c/ARAIM-milestone-3-report.pdf
  46. Ye, C., & Borenstein, J. (2002). Characterization of a 2D laser scanner for mobile robot obstacle negotiation. Proc. of the 2002 IEEE International Conference on Robotics and Automation (ICRA), Washington, DC, Vol. 3, 25122518. https://www.doi.org/10.1109/ROBOT.2002.1013609
  47. Zheng, B., & Zhang, Z. (2019). An improved EKF-SLAM for Mars surface exploration. International Journal of Aerospace Engineering, 2019. https://doi.org/10.1155/2019/7637469
Loading
Loading
Loading
Loading
  • Share
  • Bookmark this Article