Toward High-Integrity Roadway Applications of Georeferenced Lidar Positioning: A Review

NAVIGATION: Journal of the Institute of Navigation, 72(4), navi.719, December 2025. DOI: https://doi.org/10.33012/navi.719

ABSTRACT

As a step toward founding the new field of lidar integrity, this paper compiles a list of lidar faults, threats, anomalies, and challenges – which we collectively label adversities. Engineers will eventually need to characterize and mitigate these adversities for rigorous quantification of lidar integrity. Lidar adversities manifest at the intersection of environment, hardware, and algorithms. By extension, the specific design approach or architecture of a lidar system, as well as its application, must be specified to define a complete set of adversities and resulting measurement-error distributions, including operationally hazardous errors. To this end, we focus on the application of absolute positioning for high-integrity roadway operations and identify three promising lidar architectures for that application. In comparing and contrasting these architectures, we review the broader literature to identify associated lidar adversities, and we provide a perspective on how those adversities might be mitigated in the future.

1 INTRODUCTION

Although a vast number of lidar-based localization algorithms have been introduced in the research literature, very little work has been conducted to characterize their reliability. This paper seeks to develop a new field of lidar integrity, with the goal of eventual rigorous quantification of the risks associated with adding lidar into sensor-fusion systems for safety-critical navigation. To advance this goal, this paper identifies adversities to error characterization for lidar. Because lidar adversities are system-specific, we focus on the application of lidar-based absolute positioning for roadway operations and qualitatively characterize different architectures that might be used to enable this application.

Our work is motivated by rapid advances in highly automated vehicle (HAV) technology, in an era when self-driving cars and trucks are increasingly being tested on public roadways. How to best ensure the safety of HAV systems remains an open question, especially when considering the wide range of operational domains in which HAVs will drive (Wassaf et al., 2022). Quantifying the integrity of HAV subsystems will likely play a major role in overall system-safety analyses (Reid et al., 2019; Reid et al., 2023). For fully automated driving consistent with the Society of Automotive Engineers (SAE) Level 4 definition (On-Road Automated Driving Committee, 2021), it may be necessary to show that the vehicle localization subsystem causes no more than one accident per billion miles driven (Reid et al., 2019). Achieving this requirement is roughly equivalent to ensuring that the localization sensing system correctly reports cases in which its errors fall inside an acceptable envelope, called an alert limit (Reid et al., 2019; Kigotho & Rife, 2021), for all but approximately one out of ten billion measurements (Rife et al., 2023; Martello et al., 2024).

Lidar is one candidate sensing technology for HAV localization, among many others including global navigation satellite systems (GNSSs), inertial navigation systems (INSs), wheel odometry, cameras, and radar. We investigate lidar integrity with the recognition that the overall integrity of a sensor-fusion system depends on the integrity of each individual measurement. The integrity of the overall solution may become particularly dependent on an individual sensor if other sensors become unavailable. For instance, lidar-positioning integrity can be a critical alternative to satellite navigation in operational domains where satellite signals are spoofed (Kujur et al., 2024) or occluded, as in tunnels, large parking structures, and deep urban canyons (Nagai et al., 2020; Nagai, Ahmed, & Pervan, 2024). Consequently, this paper focuses on lidar-positioning integrity. We consider integrity for lidar architectures that enable georeferenced – or absolute – positioning (Wassaf et al., 2021), in contrast to lidar-odometry architectures (Zhang & Singh, 2014; Cho et al., 2020), which enable dead reckoning – or relative – positioning. However, we focus specifically on architectures that track position from a good initial guess and not on methods that solve the acquisition (Misra & Enge, 2011) or lost-robot (Thrun et al., 2002; Yuan et al., 2018) problem, in which lidar positioning is performed over a large-scale map (Desrochers et al., 2015; Xu et al., 2022).

This review refines and extends a previously published conference paper by Rife et al. (2024). The primary contribution of this review paper is, for the first time, to collect and compare promising architectures for georeferenced lidar positioning and to catalog the adversities that must be addressed to develop an integrity case for these architectures. In the next section, we define the architecture classifications in broad terms. Next, we provide a specific example of each classification. We then apply the lessons learned from these specific implementations as a means of qualitatively characterizing adversities to lidar-positioning integrity and related mitigations. We also identify how the various architectures can complement each other to provide higher levels of availability, integrity, continuity, or resiliency. The paper concludes with a discussion of future work needed to quantify system safety.

2 OVERVIEW OF ARCHITECTURES FOR GEOREFERENCED LIDAR NAVIGATION

This section defines three broad architectures that enable georeferenced positioning with lidar. We focus on map-tracking methods, where georeferenced positions and attitudes are obtained from lidar given a reasonable initial guess of map location. Broadly speaking, the three architectures we consider are differentiated by the types of map data and physical infrastructure required. We label the three approaches as the engineered-target architecture, the landmark-based architecture, and the scan-matching architecture. These approaches are summarized by the block diagram shown in Figure 1.

FIGURE 1

Three fundamental architectures for georeferenced lidar positioning: (a) engineered-target, (b) landmark-based, and (c) scan-matching

The engineered-target architecture relies on purpose-built signage for geolocation. The lidar unit detects these signs and estimates the relative pose (location and orientation). Each sign must be carefully surveyed, to enable users to convert the estimated pose relative to the sign into georeferenced coordinates. Each sign is highly reflective (to enhance detection) and uniquely identifiable (to ensure unambiguous recognition).

The landmark-based architecture relies on existing objects in the built environment rather than introducing new, purpose-built signs. Appropriate objects might include lamp posts, road signs, or walls. The lidar system recognizes and extracts these features from each point cloud. The locations of extracted features are then compared to a globally aligned map of landmarks to infer global pose. Ideally, feature recognition can be achieved without requiring objects to be modified; however, one might enhance feature detection, for example, by applying a retroreflective coating to existing objects in the environment (Nagai et al., 2023; Nagai, Ahmed, & Pervan, 2024).

The scan-matching architecture relies on a high-resolution map of the surroundings. In this approach, features are not explicitly classified. Instead, the entire lidar-generated point cloud is aligned with a high-resolution map to estimate lidar pose. If the map is appropriately georectified, the pose estimate can be converted into a global position. Because features need not be classified and individually associated with targets or landmarks, this approach allows localization in a wide variety of terrains, ranging from urban environments to natural terrain.

Together, the three architectures span a tradeoff space, with each architecture exhibiting different dependencies on the physical environment and map data. These tradeoffs are illustrated in Figure 2. The engineered-target architecture requires minimal map information but necessitates the installation of significant physical infrastructure. The scan-matching architecture places minimal constraints on the physical environment, but it requires a significant quantity of map data. The landmark-based architecture strikes a balance, placing only modest requirements on both the physical environment and the map data.

FIGURE 2

Tradeoff between infrastructure and map data for different system architectures

3 ARCHITECTURE DETAILS

In this section, we provide a concise example of each architecture with an emphasis on practical implementation details. We briefly discuss one specific algorithm for each case, selecting algorithms that our team is developing as part of our broader efforts to characterize lidar-navigation performance. By exploring specific algorithms, we clarify assumptions about each architecture with regard to the built environment and map data; we also highlight similarities and differences among the architectures. A key similarity among architectures is that all can be framed as a nonlinear least-squares solution combining measurements relative to several mapped points (similar to a GNSS solution, which combines measurements relative to several satellites).

3.1 Engineered-Target Architecture

The engineered-target localization methodology is based on the deployment of geolocated, uniquely identifiable targets throughout the operational environment. The user has access to a database (or almanac) corresponding to accurate geographic locations of each target. Such information can be stored in memory local to the lidar and updated as needed over a communication channel. In the algorithm presented here, we treat targets as distinct points (Csanyi & Toth, 2007) rather than as detailed feature-rich objects that enable a direct inference of pose (Huang et al., 2022). As a result, at least two targets in common view are needed to determine the two-dimensional lidar position and yaw, and a minimum of three target positions is needed to determine the full six-degree-of-freedom pose in the navigation frame.

Integral to this process is the use of an encoding scheme that allows for identification of individual targets, such that the targets can be mapped to a specific geolocation. Targets might convey their unique identifiers via a variety of modalities, for example, by embedding the code into the lidar image itself, conveying the code via a secondary sensor such as a camera, or transmitting the identifier over radio frequencies.

Embedding the identifier into the lidar signal is advantageous in that the vehicle system would require no additional sensing or communication components. Data bits would be patterned by placing infrared-reflecting panels on the target to indicate a high bit and absorbing panels to indicate a low bit (Huang et al., 2021). More than two reflectivity levels can be used to increase the density of the data message. For example, an engineered target using twelve panels, each with three values, was built and evaluated by Wassaf et al. (2021). As shown in Figure 3, this prototype provided the equivalent of a 19-bit message ($3^{12} \approx 2^{19}$). A schematic of the data message is shown on the left side of the figure, with each of the twelve data panels labeled according to its reflectance level: R1 (high), R2 (medium), and R3 (low). A point cloud is shown on the right side of the figure, where the cloud was experimentally generated by placing the lidar 6.5 m from the 1.8-m-wide target. Assuming a lidar with an azimuthal resolution of at least 0.2°, the data panels are sized at 30 cm across to ensure the placement of a minimum of two lidar samples across the width of each panel, even when the target is viewed at a range of 30 m and an angle of 45°.

FIGURE 3

A target’s unique identifier can be encoded on the target through a pattern of variable reflectance, as shown in the schematic (left) and experimental point cloud data (right). In the point cloud, color represents the intensity of the return, with red indicating high (R1), cyan indicating medium (R2), and dark blue indicating low (R3) intensity.
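To make the encoding concrete, the sketch below decodes a 12-panel, three-level reflectance pattern like that of Figure 3 into an integer identifier. This is a minimal illustration, not the deployed algorithm: the function name and threshold values are hypothetical, and a fielded system would calibrate thresholds against measured return intensities.

```python
# Hypothetical reflectance thresholds separating the three panel levels
# R1 (high), R2 (medium), R3 (low); real values would come from calibration.
R1_MIN, R2_MIN = 0.66, 0.33

def decode_target_id(panel_intensities):
    """Map 12 measured panel intensities (0-1) to a base-3 integer ID.

    Twelve ternary panels encode 3**12 = 531,441 distinct IDs, slightly
    more than a 19-bit binary message (2**19 = 524,288).
    """
    target_id = 0
    for intensity in panel_intensities:
        if intensity >= R1_MIN:
            digit = 2        # R1: high reflectance
        elif intensity >= R2_MIN:
            digit = 1        # R2: medium reflectance
        else:
            digit = 0        # R3: low reflectance
        target_id = 3 * target_id + digit  # Horner-style base-3 accumulation
    return target_id
```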

A disadvantage of a lidar-only target is its large size. This liability might be offset by re-arranging the panels to conform to the shape of existing infrastructure. For instance, the data panels might be aligned vertically, in a single column, to conform better to the shape of a streetlight or telephone pole.

More compact targets are enabled by leveraging mixed-modality imaging, with both a lidar and a camera sensor viewing the target at the same time. Our team is currently designing a set of mixed-modality targets with uniform reflectance, to promote reliable lidar extraction, and with color panels, to encode identifier data. The color-based encoding uses patterns, similar to Gold codes, that feature auto-correlation and cross-correlation properties designed to allow accurate localization and unique target classification, respectively. As illustrated in Figure 4, our prototype targets use 16 panels, each identified by one of four colors (for a total code length of 32 bits).

FIGURE 4

Two targets identified by distinct color patterns

The proposed mixed-modality target of Figure 4 improves on the original lidar-only target of Figure 3, simultaneously increasing the code from 19 bits to 32 bits, doubling the viewing range, and shrinking the target size to 60 cm × 60 cm, which is comparable to a standard road sign. The viewing range is doubled because the entire target is reflective to lidar (such that two azimuthal lidar points are sampled at a distance of 60 m and an angle of 45° for a lidar with an azimuthal resolution of 0.2°). Because each color-blocked panel is 15 cm × 15 cm, at least three camera pixels are sampled across the width (and height) of each panel for a camera with an angular resolution of 0.03°.
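The panel-sizing arithmetic above follows from simple projective geometry. As a rough check, the sketch below (with a hypothetical helper name and a small-angle model) reproduces the two-lidar-point and three-camera-pixel figures quoted in the text.

```python
import math

def samples_across(panel_width_m, range_m, incidence_deg, resolution_deg):
    """Approximate sensor samples across a panel viewed at an oblique angle.

    The apparent width shrinks by cos(incidence); the subtended angle is
    then divided by the sensor's angular resolution.
    """
    apparent_width = panel_width_m * math.cos(math.radians(incidence_deg))
    subtended_deg = math.degrees(math.atan2(apparent_width, range_m))
    return subtended_deg / resolution_deg

# Full 60-cm target, 0.2-deg lidar, 60 m range, 45-deg incidence:
print(samples_across(0.60, 60.0, 45.0, 0.2))   # ~2 azimuthal lidar points
# One 15-cm color panel, 0.03-deg camera, same geometry:
print(samples_across(0.15, 60.0, 45.0, 0.03))  # ~3 camera pixels
```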

For either the lidar-only or mixed-modality case, the target is extracted from the lidar image via a correlation operation. The correlation entails scaling a replica code pattern and comparing the replica to the acquired image to find a match. After target detection and identification, the target’s bearing is inferred from the correlation peak and augmented with the measured range to create a relative position vector between the lidar and the target. In this paper, we use the notation $y_k^L$ to refer to the centroid vector for each confirmed target k. The trailing superscript indicates the measurement frame, which is the lidar frame L in this case. To solve for lidar pose, i.e., the position and attitude, the first step is to construct a full observation vector $y$, which concatenates the centroid locations for a total of K visible targets:

$$y = \left[ (y_1^L)^T \;\cdots\; (y_K^L)^T \right]^T \tag{1}$$

A simulated example of a vehicle equipped with six cameras and a lidar is shown in Figure 5. The illustration depicts an open area (nominally a parking lot) of roughly 40 m × 40 m. The vehicle has three targets in view (K = 3), including one flat target and two corner targets. The corner targets are deployed to allow viewing from a wider range of directions.

FIGURE 5

Simulation of a vehicle (blue rectangle) traveling along a path (red line). The six blue arrows on the vehicle indicate camera look angles. A spinning lidar collects scans in the round, with the start of each scan aligned with the lidar x-axis (green arrow). Three targets are shown, each with a centroid marked by a magenta asterisk and with surface normals marked by cyan arrows.

A state vector $\xi$ describes the pose of the lidar frame relative to the world (or navigation) frame N. The state vector consists of a set of Euler angles $\psi_{LN} = [\phi \;\; \theta \;\; \psi]^T$ and a translation vector $x_{NL}^L$. The translation vector, which runs to the lidar reference point from the navigation reference point (as indicated by the NL subscript), is written in lidar-frame coordinates (indicated by the L superscript):

$$\xi = \left[ \psi_{LN}^T \;\; (x_{NL}^L)^T \right]^T \tag{2}$$

Using the target’s unique identifier, each measured target location $y_k^L$ is associated with the corresponding georeferenced target location $\bar{y}_k^N$ drawn from the map database. (The overbar indicates surveyed truth as recorded on the map.) The measured and surveyed target locations are related by the following equation, where the rotation matrix $R_{NL}$ converts from the navigation basis to the lidar basis and the vector $\varepsilon_k^L$ describes the measurement error:

$$y_k^L = R_{NL}(\psi_{LN})\,\bar{y}_k^N - x_{NL}^L + \varepsilon_k^L \tag{3}$$

A unique solution can be obtained by simultaneously solving the measurement equations (Equation (3)) for a set of three or more targets k. This set of nonlinear measurement equations for all K targets can be written as follows:

$$y = h(\xi) + \varepsilon \tag{4}$$

Here, the concatenated observation vector $y$ is described by Equation (1), and the state vector $\xi$ is described by Equation (2). The nonlinear function $h$ is the concatenation of the measurement equation (Equation (3)) for each target k, where each set of three rows in $h$ describes a different target. The concatenated measurement error is $\varepsilon = \left[ (\varepsilon_1^L)^T \;\cdots\; (\varepsilon_K^L)^T \right]^T$, a column vector compiling the individual error vectors $\varepsilon_k^L$ for each target k. Given a reasonable starting guess, Equation (4), which is nonlinear, can be solved by linearizing and applying an iterative nonlinear least-squares approach such as the Newton–Raphson, Gauss–Newton, or Levenberg–Marquardt (also known as damped least-squares) approach. These solutions employ a Taylor series expansion, with higher-order terms assumed to be small. The following equation results from the Taylor series expansion of Equation (4):

$$\delta y = H\,\delta\xi + \varepsilon \tag{5}$$

Here, the measurement perturbation, $\delta y = y - h(\xi_0)$, is the difference between the measurement and the model $h(\xi)$ evaluated at a reference pose $\xi_0$. The state perturbation, $\delta\xi = \xi - \xi_0$, is linearly related to the measurement perturbation. The Jacobian $H \in \mathbb{R}^{3K \times 6}$ is constructed by concatenating a set of $3 \times 6$ blocks, one for each target k. Each block is the derivative of $h(\xi)$ in Equation (3) with respect to each of the six states in $\xi$. A weighted least-squares solution is obtained as follows, where the weighting matrix $W$ is the inverse of the measurement covariance matrix $R$, such that $W = R^{-1}$:

$$\delta\xi = \left( H^T W H \right)^{-1} H^T W\,\delta y \tag{6}$$

Equation (6) is re-solved iteratively to update the reference state: $\xi_0 \rightarrow \xi_0 + \delta\xi$. The measurement model is linear in the measurement error $\varepsilon$. Thus, the weighted least-squares process predicts the covariance of the pose error, which is the difference between the estimate $\xi$ and the true pose $\bar{\xi}$. The prediction of the covariance matrix, $P = E\left[ (\xi - \bar{\xi})(\xi - \bar{\xi})^T \right]$, is as follows:

$$P = \left( H^T W H \right)^{-1} \tag{7}$$
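As a concrete illustration of Equations (3)–(7), the sketch below solves for pose via iterated weighted least squares. It is a minimal sketch, not our fielded implementation: it assumes an 'xyz' Euler-angle convention, substitutes a numerical Jacobian for the analytical 3 × 6 blocks described above, and uses hypothetical function names.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def h(xi, map_points):
    """Predict lidar-frame target centroids from pose xi (Equation (3)).

    xi = [phi, theta, psi, x, y, z] stacks the Euler angles psi_LN and the
    translation x_NL^L; R_NL rotates navigation-frame coordinates into the
    lidar frame. map_points holds the surveyed locations y_bar_k^N (K x 3).
    """
    R_NL = Rotation.from_euler('xyz', xi[:3]).as_matrix()
    return (map_points @ R_NL.T - xi[3:]).ravel()

def solve_pose(y, map_points, W, xi0, iters=10):
    """Iterate the weighted least-squares update of Equation (6)."""
    xi = np.array(xi0, dtype=float)
    for _ in range(iters):
        dy = y - h(xi, map_points)          # measurement perturbation
        # Numerical Jacobian H (3K x 6): perturb one state at a time
        H = np.column_stack([
            (h(xi + dxi, map_points) - h(xi, map_points)) / 1e-6
            for dxi in 1e-6 * np.eye(6)])
        xi += np.linalg.solve(H.T @ W @ H, H.T @ W @ dy)
    P = np.linalg.inv(H.T @ W @ H)          # predicted covariance, Equation (7)
    return xi, P
```

As in the text, the weighting matrix W would be set to the inverse of the measurement covariance matrix, and the iteration is seeded with a reasonable initial pose guess.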

One factor in establishing integrity for these equations is addressing the risk of false extraction of the target. By construction, the probability of a false target extraction should be remote for the engineered-target architecture, given that random background noise is very unlikely to match the target code. Even if the false-extraction risk is negligible, there is always a finite risk of a large measurement error that results in a hazard (e.g., the vehicle strays from its lane), even in otherwise nominal operating conditions. Thus, integrity risks must be characterized for both nominal and adverse conditions, possibly including bad weather, broken components, or malicious attacks. These issues will be discussed in more detail in Section 4.

3.2 Landmark-Based Architecture

In the opportunity-feature approach, the lidar identifies existing features in the built environment, such as light or telephone poles. Features in the lidar’s field of view are associated with preregistered landmarks whose locations are stored in a database accessible to the vehicle. Functionally, this approach closely resembles the engineered-target approach visualized in Figure 1, with the exception that features are not uniquely identifiable. Consequently, the opportunity-feature architecture can be implemented by using Equations (1)–(7) from the prior section, with the primary difference being that each measurement vector $y_k^L$ in Equation (1) refers to an opportunity feature and each reference location $\bar{y}_k^N$ in Equation (3) refers to a georeferenced landmark, or more specifically to a specified reference point on each landmark.

The lack of unique identifiers on each landmark adds uncertainty to the process of extracting features from an image and to the process of associating extracted features to the correct database entries. Mismatches can occur during both processes (Bar-Shalom & Fortmann, 1988). If undetected, mistaken matches can cause hazardous errors in pose estimation. To better understand these risks, consider Figure 6. The figure shows a sidewalk as a lidar point cloud (left) as well as a corresponding camera photograph (right). In analysis, two regions of the point cloud were extracted as candidate features, including a pedestrian (blue points) and a lamp post (red points). In early testing, the algorithm identified the pedestrian as the “correct match” to a database record for the lamp post, which implies a potential integrity concern.

FIGURE 6

An illustration of incorrect extraction

The pedestrian (blue) was mistakenly extracted and matched with the lamp post landmark (red), as documented by Nagai et al. (2023).

In the remainder of this section, we characterize the integrity risk associated with uncertain feature matching. To this end, we define four related terms: correct extraction, incorrect extraction, correct association, and incorrect association. A correct extraction (CE) occurs when an extracted feature corresponds to a landmark in the database; conversely, an incorrect extraction (IE) occurs when an extracted feature does not correspond to any landmark in the database. A correct association (CA) is a match of an extracted feature to the correct landmark in the database. An incorrect association (IA) occurs when the extracted feature is erroneously associated with an incorrect landmark in the database. Data association follows feature extraction, and each of these processes can be modeled as a complementary pair of probabilities, as summarized in Table 1.

TABLE 1

Associating Probabilities with Feature-Extraction and Data-Association Faults (H1) and the Fault-Free Hypothesis (H0)

Nagai et al. (2023) derived a bound to account for extraction and association risks. The combined integrity risk was called the probability of hazardously misleading information P(HMI). The bound on P(HMI) was derived for a time epoch j using the law of total probability:

$$P(HMI_j) \leq 1 - \left( 1 - P(HMI_j \mid CA_\mathcal{T}, CE_\mathcal{T}) \right) P(CA_\mathcal{T} \mid CE_\mathcal{T})\, P(CE_\mathcal{T}) \tag{8}$$

Here, $\mathcal{T}$ (upper case) denotes all time increments from time epoch 1 to j: $\mathcal{T} = \{1, 2, 3, \ldots, j\}$. The probabilities $P(CA_\mathcal{T} \mid CE_\mathcal{T})$ and $P(CE_\mathcal{T})$ can be calculated via the following equations (Jamoom, 2016; Joerger et al., 2016; Hassani et al., 2023):

$$P(CA_\mathcal{T} \mid CE_\mathcal{T}) = \prod_{i=1}^{j} P\left( CA_i \mid CA_{\mathcal{I}-1}, CE_\mathcal{I} \right) \tag{9}$$

$$P(CE_\mathcal{T}) = \prod_{i=1}^{j} P\left( CE_i \mid CE_{\mathcal{I}-1} \right) \tag{10}$$

$$P(HMI_j \mid CA_\mathcal{T}, CE_\mathcal{T}) = 2\,\Phi\!\left( \frac{-AL}{\sigma_j} \right) \tag{11}$$

Here, the symbol $\mathcal{I}$ denotes all time increments from time epoch 1 to i, and $\Phi$ represents the cumulative distribution function for a zero-mean, Gaussian overbounding model of the fault-free position-error distribution (in one dimension); the Gaussian overbound is characterized by its standard deviation $\sigma_j$. The maximum allowed error for safe operation is the alert limit AL. A conservative yet practical approximation assumes that any past incorrect extraction or association event results in current HMI. With these assumptions, Equation (8) can account for the combined integrity risks associated with both the fault-free case and the case of faulted data extraction/association.
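Under the stated assumptions, the bound of Equations (8)–(11) reduces to a few lines of code. The sketch below is illustrative only; the function name, per-epoch probabilities, and numerical values are hypothetical placeholders rather than values from the cited analyses.

```python
from math import prod
from scipy.stats import norm

def p_hmi_bound(p_ca_seq, p_ce_seq, sigma_j, alert_limit):
    """Bound P(HMI_j) per Equations (8)-(11).

    p_ca_seq[i] and p_ce_seq[i] are per-epoch conditional probabilities of
    correct association and correct extraction for epochs 1..j; their
    products implement Equations (9) and (10).
    """
    p_ca = prod(p_ca_seq)                              # Equation (9)
    p_ce = prod(p_ce_seq)                              # Equation (10)
    p_hmi_ff = 2.0 * norm.cdf(-alert_limit / sigma_j)  # Equation (11)
    return 1.0 - (1.0 - p_hmi_ff) * p_ca * p_ce        # Equation (8)

# Example: 100 epochs with hypothetical per-epoch probabilities,
# a 0.1-m overbound sigma, and a 0.75-m alert limit
print(p_hmi_bound([1 - 1e-8] * 100, [1 - 1e-8] * 100, 0.1, 0.75))
```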

To meet stringent integrity risk requirements, as described by Reid et al. (2019), it is necessary to minimize the fault-free position integrity risk $P(HMI_j \mid CA_\mathcal{T}, CE_\mathcal{T})$ and maximize the correct-association probability $P(CA_\mathcal{T} \mid CE_\mathcal{T})$ and correct-extraction probability $P(CE_\mathcal{T})$. When navigating via lidar fused with GNSS, a high-quality INS, wheel-speed sensors, and vehicle kinematic constraints, our test system met integrity requirements for a maximum landmark spacing of approximately 14 m (Nagai, Spenko, et al., 2024). The test system focused on extracting pole-like landmarks (e.g., street lamps or traffic lights) because of their location flexibility, relative ubiquity, and well-defined shapes, as demonstrated by Sefati et al. (2017) and Teo and Chiu (2015). Pole-like landmarks typically exist in urban environments, with separations in the range of 2–14 m, a distance appropriate for achieving both fault-free integrity and a low probability of missed association.

3.3 Scan-Matching Architecture

In the scan-matching architecture, a lidar-processing algorithm aligns, or registers, an entire scan to a reference map. The reference map is often assumed to be a high-definition (HD) map, which is a point cloud constructed as a mosaic of many individual lidar scans (Nagy & Benedek, 2018; Li et al., 2022). Point clouds in the HD map are aligned and georeferenced offline through the use of GNSS or other ground-truth measurements. Because HD maps are storage-intensive, compressed maps are also of interest; for example, neural radiance fields offer the potential to compress the map data and allow novel-view reconstruction by encoding the map in a simple neural network (Deng et al., 2023; Huang et al., 2023; Zhang et al., 2024; McDermott & Rife, 2025). If the map itself is georeferenced, then alignment of the current scan to the map results in absolute positioning.

Importantly, the scan-matching process aligns all lidar points to the map simultaneously. This aspect of the scan-matching architecture differs from target- and landmark-based architectures, where the current scan is first converted to a set of features, which must then be registered to a map database (see Figure 1). By extension, an advantage of scan matching is that feature-extraction faults (as shown in Figure 6) do not occur, because features are not extracted. However, a disadvantage of the scan-matching architecture is the ubiquity of scene changes. Whereas targets and landmarks can be selectively defined to be stationary over time, scan matching must handle all parts of a scene, even regions where objects are in motion or have moved since map creation. Figure 7 illustrates several examples of phenomena that can create discrepancies between a lidar scan and a reference map.

FIGURE 7

Although scene changes affect all lidar-positioning architectures, they are ubiquitous in scan-matching. Scene changes can be caused, for example, by (a) wind, (b) pedestrian and vehicle motion as well as construction, (c) repositioning of temporary infrastructure and waste bins, and (d) seasonal growth of foliage and loss of leaves.

To delve deeper into the scan-matching process, we choose to focus on analytical scan-matching methods. Analytical scan-matching methods can be characterized via principles of geometry and physics, and thus, they are fully interpretable. Because of our emphasis on safety analysis, we do not consider scan-matching algorithms that rely on machine learning (ML) models, such as deep neural networks, which ingest the current scan and reference data as inputs and produce a relative pose as an output (Aoki et al., 2019; Zhang et al., 2020; Zhou & Tuzel, 2018). Although efforts are underway to characterize ML-based integrity (Joerger et al., 2022), methods do not yet exist to interpret and analyze faults for general ML-based algorithms (Willers et al., 2020). Our team has implemented practical demonstrations of analytical algorithms that can run in real time, or faster, on a single processor when post-processing real-world lidar data (McDermott & Rife, 2024; Nagai, Spenko, et al., 2024).

Within the space of analytical scan-matching algorithms, our team has focused on implementing a grid-based algorithm that subdivides the full scan into a regular set of volume elements (or voxels) before aligning the textures within each voxel. Voxel-based algorithms are advantageous in that no explicit correspondence is required between points; by contrast, the well-known point-to-point iterative closest point scan-matching algorithm establishes an explicit correspondence between points within two clouds (Besl & McKay, 1992; Segal et al., 2009), a process that may require additional modeling of incorrect association probabilities as described by Equations (8)–(11). The particular voxel-based scan-matching algorithm we have implemented is called the iterative closest ellipsoid transform (ICET) (McDermott & Rife, 2022a; McDermott & Rife, 2024), which is a variant of the well-known normal distributions transform (NDT) (Biber & Straßer, 2003; Stoyanov et al., 2012). Compared with the NDT, the ICET adds processes to better predict output-error covariance and exclude misleading measurements. The pose-correction accuracy is relatively independent of voxel size, so long as the length scale of the grid is significantly larger than the mean distance between lidar points, as discussed in the sensitivity-study appendix of McDermott and Rife (2024).

The mathematical model for this process is described, once again, by Equations (1)–(7). For scan matching, a grid of local textures (the textures within each voxel) replaces recognizable features. Within voxel k, the displacement $\delta y_k$ is the vector shift that must be applied to the local scan texture to best align it with the map texture. In other words, if a reference point for the texture were defined at $y_k^L$ in the current scan, it would best match the point $\bar{y}_k^N$ in the map, and thus the displacement would be written as follows:

$$\delta y_k = y_k^L - h\!\left( \xi; \bar{y}_k^N \right) \tag{12}$$

where the nonlinear function h converts the map point into the lidar frame through the rigid transform in Equation (3): $h(\xi; \bar{y}_k^N) = R_{NL}(\psi_{LN})\,\bar{y}_k^N - x_{NL}^L$. Algorithmically, it is not necessary to designate specific reference points for the textures within each voxel k; local textures are assumed only to translate (and not rotate); thus, the displacement describes the entire texture (and not just one point on the texture).

The displacements $\delta y_k$ are concatenated over all voxels to define an observation vector $\delta y$, which appears in the concatenated measurement equation (Equation (5)). A pose estimate can be obtained iteratively from Equation (6), and covariance can be predicted by Equation (7).

The use of voxels to define texture in a local neighborhood is illustrated in Figure 8. Two complete lidar scans are overlaid: a current scan and a reference scan. The alignment of the scans is imperfect; hence, objects such as trees appear twice (once in red, once in blue), subject to a slight offset. Voxel-based algorithms overlay a regular pattern of voxels across the lidar scan. A representative subset of three adjacent voxels is shown in the figure. The three adjacent voxels are defined in spherical coordinates, over a distinct range of azimuth and elevation angles, with adaptive range limits (McDermott & Rife, 2022b). The portions of the lidar point cloud within the three adjacent voxels include parts of two trees and the gap between the trees. However, the lidar algorithm is unaware that the objects are trees. The algorithm simply recognizes that the local pattern of red dots needs to be shifted by a vector displacement to better align with the local pattern of blue dots within the same voxel. Similar vector displacements $\delta y_k$ are computed for all voxels k across the scan (the majority of which are not shown). A common method of computing the local offset within a voxel, used in both the NDT and the ICET, is to difference the centroids of the two local texture patterns within that voxel. If lidar points are scarce within a voxel, as in the case of the middle voxel of the three shown in Figure 8, a valid displacement cannot be computed, and thus, the voxel is excluded from the pose-estimation process.

FIGURE 8

Voxel-based alignment of local textures

A current and reference scan, in red and blue, respectively, are approximately aligned. Three representative voxels are shown superimposed on the scans. Local offsets between the two texture patterns are evaluated within each voxel. A global pose correction is inferred from the vector field of local offsets computed over all voxels.
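The centroid-differencing computation just described can be sketched compactly, as below. This is a simplified illustration, assuming fixed-size spherical voxels without range bins and a hypothetical minimum-point threshold; the full ICET algorithm adds adaptive range limits and the covariance-prediction machinery noted above.

```python
import numpy as np

MIN_POINTS = 10   # hypothetical cutoff below which a voxel is excluded

def voxel_keys(points, az_res_deg=5.0, el_res_deg=5.0):
    """Assign each point an (azimuth, elevation) spherical-grid index."""
    az = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
    el = np.degrees(np.arctan2(points[:, 2],
                               np.hypot(points[:, 0], points[:, 1])))
    return np.stack([az // az_res_deg, el // el_res_deg], axis=1).astype(int)

def voxel_offsets(scan, reference):
    """Difference scan and reference texture centroids within each voxel.

    Returns a dict mapping voxel index to the local displacement delta_y_k,
    the vector field from which a global pose correction is inferred.
    """
    s_keys, r_keys = voxel_keys(scan), voxel_keys(reference)
    offsets = {}
    for key in set(map(tuple, s_keys)) & set(map(tuple, r_keys)):
        s = scan[(s_keys == key).all(axis=1)]
        r = reference[(r_keys == key).all(axis=1)]
        if len(s) >= MIN_POINTS and len(r) >= MIN_POINTS:  # skip sparse voxels
            offsets[key] = r.mean(axis=0) - s.mean(axis=0)
    return offsets
```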

Similar to the opportunity-feature architecture, the scan-matching architecture relies on a reasonable initial estimate ξ0 of the lidar pose to align lidar imagery to reference data. In our current implementations of voxel-based algorithms, we start with a Global Positioning System (GPS) position to obtain a first fix on the lidar map and then use a lidar-estimated position afterward, updating the guess for the next epoch using a lidar-derived velocity estimate. For a future deployed system, we envision that the initial pose might be provided by an integrated positioning system featuring GNSS, INS, and wheel odometry (Nagai, Spenko, et al., 2024; Liu et al., 2023). Alternatively, the acquisition problem could be solved by applying lidar along with a large-scale map-matching method (Desrochers et al., 2015; Xu et al., 2022).

Our implementation of ICET incorporates several pre- and post-processing steps to mitigate adversities. A key post-processing step is a conventional monitor that checks the converged pose solution by flagging any voxel with a large residual discrepancy (above 10 cm); this test detects many moving objects and scene changes. A key pre-processing step is detection and removal of measurements parallel to extended surfaces, such as along a flat wall or in the ground plane; these measurement directions appear to have finite variance, capped by voxel width, but provide no meaningful localization information (McDermott & Rife, 2024). We also perform a motion-distortion correction that unwarps the effects of vehicle translation and rotation during the time required to form a full scan, typically about 0.1 s (McDermott & Rife, 2023). To conduct a complete integrity analysis, further work is needed to assess the effectiveness of these mitigations and the magnitude of errors that persist after mitigation.
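A minimal version of the residual monitor might look like the sketch below, using the 10-cm flag level quoted above. For brevity, the sketch compares each voxel's offset against a pure-translation prediction, whereas the monitor described above evaluates residuals of the full converged pose solution; the function and argument names are hypothetical.

```python
import numpy as np

RESIDUAL_THRESHOLD = 0.10   # meters; the 10-cm flag level quoted in the text

def flag_inconsistent_voxels(offsets, predicted_shift):
    """Flag voxels whose post-convergence residual remains large.

    offsets: dict of per-voxel displacements delta_y_k (meters);
    predicted_shift: displacement implied by the converged pose solution.
    Voxels dominated by moving objects or scene changes tend to retain
    large residuals and are excluded from the final estimate.
    """
    return [key for key, dy in offsets.items()
            if np.linalg.norm(dy - predicted_shift) > RESIDUAL_THRESHOLD]
```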

4 QUALITATIVE CHARACTERIZATION

In this section, we consider the potential benefits of combining the three architectures. We also create a comprehensive catalog of adversities that affect the three lidar architectures, along with potential mitigations to those adversities.

4.1 Complementarity

The lidar architectures presented in this paper are complementary in the sense that a single vehicle might deploy more than one architecture in parallel. The architectures have different strengths, and their measurement errors and adversities are largely independent. In this section, we explore how the architectures can be leveraged in a complementary manner to better satisfy guidance-quality requirements.

Guidance-quality requirements characterize a navigation sensor’s error distribution in terms of its “width.” More specifically, guidance-quality requirements define limits on (i) the standard deviation of the nominal error distribution and (ii) the probability of the largest error that can be tolerated without a severe consequence, such as a vehicle crash. These two basic requirements correspond to the concepts of accuracy and integrity. Additional requirements (continuity, availability, resiliency) specify how often the navigation sensor needs to operate with its full capability. By briefly defining accuracy, integrity, continuity, availability, and resiliency concepts, we can contextualize how lidar architectures complement each other.

Accuracy: Accuracy is a target for the nominal magnitude of a navigation sensor’s pose-estimation errors. The accuracy requirement is usually defined as a bound that contains 95% of errors, roughly equivalent to twice the standard deviation for a one-dimensional Gaussian distribution.

Integrity: Integrity defines a maximum acceptable instantaneous risk that the sensor error exceeds a predefined upper bound without a timely warning. This bound on large errors is sometimes called an alert limit. Detailed quantifications of integrity risk and alert limits can be found in the navigation literature, such as the work by Liu et al. (1997) and Rife et al. (2023).

Continuity: Continuity refers to maintaining uninterrupted accuracy and integrity, except during interruptions forecast at the start of a journey. A loss of continuity can result from an unanticipated change in conditions, for instance, when an integrity monitor alerts that features extracted from the current lidar scan are clearly inconsistent with available map data (for example, because of construction).

Availability: Availability is the fraction of time that the navigation sensor is predicted, at the start of a journey, to meet its operational requirements for accuracy, integrity, and continuity.

Resiliency: Resiliency quantifies the delivery of degraded navigation capabilities when exposed to a specific sensor adversity, such as severe weather or malicious spoofing. For example, in a blizzard, a local government might leverage the internet of things to declare that a pair of northbound lanes should function together, effectively forming a single wider lane. This action reduces navigation risk at the expense of roadway capacity. The lidar system would support resilient operations if its measurement quality were sufficient to deliver the target integrity-risk level for operations in the double-wide lane.

Complementary architectures might be considered as a strategy to address whichever guidance-quality requirements are hardest to meet. For instance, if the limiting requirement is integrity, an integrity monitor might be defined to compare the outputs of two architectures and to provide an alert if the outputs diverge, with the goal of detecting anomalous conditions that create different errors for different architectures. Alternatively, if the limiting requirement is accuracy, pose estimates generated by different architectures might be fused to enhance accuracy (e.g., in a Kalman filter that exploits the partial independence of estimation errors between architectures). Although benefits are possible, complementary architectures are not a panacea. Experience in designing integrity systems for the aviation domain suggests that complementary architectures will enhance one or perhaps two guidance-quality parameters, but not all of them.

Perhaps the most compelling use of complementary architectures is to enhance the availability of long routes. This aim might be accomplished by running multiple algorithms in parallel and weighting each based on a real-time performance prediction. Predicted performance will change in different operational domains. For example, a vehicle might travel from a heavily forested area (favoring scan matching) to a dense urban center (favoring landmarks) before ending its journey in a parking garage (favoring engineered targets). To understand this aspect of complementarity, one can consider the cost-based performance of each architecture across eight different operating environments, as summarized in Table 2. The table assigns a cost-based performance level to three lidar architectures (columns) operating in eight domains (rows). A fourth column characterizes the performance of GNSS, for comparison. The table assigns three levels of cost-based performance, including use cases that are strongly justified (blue), possibly justified (yellow), and challenging to justify (red). Levels were assigned based on author discussions and field tests (Wassaf et al., 2021; Nagai, Spenko, et al., 2024; McDermott & Rife, 2024), and not on cost–benefit analysis.

TABLE 2

Availability Depends on Lidar Architecture Feasibility in Various Operating Domains as Compared With GPS.

A strong justification (blue) was assigned in only two cases. In these cases, GNSS signal quality is poor, and lidar costs appear to be offset by a clear use case. The first of these strongly justified cases involves extended underground tunnels, where GNSS is not available. Tunnels tend to have relatively few existing landmarks (which is problematic for the landmark-based architecture); thus, it would likely be necessary to install engineered targets to enable absolute lidar positioning. Although installation costs might be high, the infrastructure would likely serve a large number of users, as tunnels themselves are expensive to construct and generally only built on high-value, highly trafficked routes. Additionally, tunnels are often public infrastructure, simplifying negotiation of sign placement; moreover, tunnel walls and ceilings are often blank, implying that sign placement is not likely to compete with aesthetic or commercial interests. The second strongly justified case involves dense urban settings, where landmarks are plentiful. Particularly in urban canyons, where GNSS quality may be limited, exploiting landmark-based lidar positioning has a strong potential benefit at low cost (as new infrastructure need not be installed).

At the other extreme, eleven entries are labeled as challenging to justify (red) because of the high anticipated cost per user for securing land rights, installing signage, and maintaining that infrastructure. The number of users is considered low for roadway infrastructure in areas of moderate or low population density (e.g., rural, forest, and mountain environments). If the density of useful existing landmarks is low (spacings greater than 60 m), then new landmarks would need to be installed, most likely in the form of engineered targets; consequently, the landmark-based and engineered-target architectures are essentially equivalent in these operating environments.

Eleven entries are labeled as possibly justified (yellow). In these cases, the value tradeoffs are difficult to assess without a more formal analysis. For example, consider the case of a parking garage. In this environment, existing structural elements (e.g., columns) may be sufficient to support landmark-based navigation; alternatively, if features are sparse, high traffic density may justify the installation of engineered targets. Similarly, in dense urban environments, engineered targets might be viable in areas where existing landmarks are sparse or ambiguous, as in the case of a roadway lined by uniformly spaced lamp posts. As population density decreases (e.g., in suburban neighborhoods), engineered targets become less viable, although the landmark-based architecture may remain viable if lamp posts (and other existing, identifiable structures) are sufficiently dense. Wide open spaces (e.g., deserts and plains) are a special case. With low population density, these spaces have few existing landmarks. The cost of deploying a dense network of engineered targets is very high relative to usage; however, a low-density network of engineered targets might still be viable if lane-keeping can be accomplished via road-relative navigation. In that case, absolute positioning might be needed only sporadically (e.g., near highway exits), a narrow application that would be well supported by the engineered-target infrastructure.

The scan-matching architecture is listed as possibly justified for all operating environments. We interpret scan matching as a general-purpose tool that does not take advantage of structure in the built environment but that works everywhere (i.e., a “jack of all trades but master of none”). Although physical infrastructure is not needed, map-related costs are higher than for other architectures, both because HD maps are data-intensive (more space on disk) and because they may need to be periodically updated to reflect construction or other durable scene changes. Given these tradeoffs, the scan-matching architecture is the only architecture that is potentially justified in most low-density environments. Even if environmental features are extremely sparse, as in tunnels or flat terrain like deserts or plains, we assume that scan matching can provide “lateral-only” coverage (cross-track but not along-track positioning) by using features of the road itself, such as painted markings and the road edge.

4.2 Adversities to Meeting Integrity Requirements

To leverage the benefits of multiple architectures, or even a single architecture, it will be necessary to resolve the significant technical challenges associated with developing a safety case for lidar positioning. The navigation community has significant experience with developing safety cases for navigation sensing in aviation applications, where the operating environment is highly constrained. In these environments, it has been largely possible to decouple analysis into two components: the mitigation of anomalous errors under faulted conditions and the bounding of far-tail errors under nominal conditions (Enge et al., 1996; Enge, 1999). In the less structured environment of ground transportation, these two cases are blurred, resulting in a spectrum of intermediate conditions between nominal and faulted.

To distinguish these conditions from the word fault, which implies the occurrence of a rare anomaly, we describe intermediate conditions using the word adversity, which implies a challenging condition that is not always present but that is not necessarily rare. In this sense, adversities are a broader category than faults because frequency is not implied; in other words, a fault is a rare adversity. For lidar positioning, we go a step further and define an adversity as any phenomenon diverging from the fundamental assumption that lidar data depict static features that have not changed in size, shape, position, or orientation since the creation of the reference map.

As a reasonable first step toward developing a safety case, this section compiles a list of known adversities for lidar positioning. The compiled list can be found in Table 3. The table characterizes each adversity in terms of a cause (or mode) that may have one or more effects. The entries in the table (checkboxes) link a particular mode (row) to its potential effects (columns).

TABLE 3

Adversity Modes and Effects for Lidar Navigation

The table rows classify adversity modes in eight categories: lidar sampling, signal in space, hardware design, platform motion, map data, algorithm, spoofing, and scene changes. The definitions of these eight categories are expanded below.

The table columns identify five categories of effects: measurement errors (local and scanwise), feature-extraction issues, and data-association issues (local and scanwise). Measurement-error effects introduce random or systematic biases that perturb the locations of the points in a lidar cloud away from their true position. Local measurement errors impact small regions of the point cloud, such that errors are generally independent from one feature to another. Scanwise measurement errors impact the entire point cloud, such that errors can be correlated between features. Feature-extraction issues involve the conversion of the lidar scan into a set of recognizable features for comparison to mapped targets or landmarks. Data-association issues arise in matching scanned features to the map. Local data-association issues include problems in which individual features are not correctly represented in the map database. Scanwise data-association issues include problems in which the scan is not matched to the correct segment of the map, such that most, if not all, scan features are incorrectly matched.

In Table 3, the first set of adversity modes is compiled under the category of lidar sampling, which includes random and systematic effects degrading the position measurement associated with each individual lidar return. Although individual lidar returns are often formatted as three-dimensional position vectors, each point might better be visualized in spherical coordinates, where the range is measured based on the time the lidar beam takes to travel from the transmitter back to the receiver and where the pointing direction is inferred from sensor geometry. For a mechanically rotating lidar unit, the elevation angle is a preset offset from the horizontal, and the azimuth angle is inferred as a fractional rotation based on interpolating a time tag between the known start and end times for the current scan. In this context, we expect the error distribution for the return to be anisotropic, with different extents in the radial, elevation, and azimuthal directions. Factors contributing to the anisotropic error distribution include time-of-flight, beam width, variable scanning speed, calibration bias, and time-tagging bias effects, each discussed below; a short sketch following the list illustrates how such anisotropic noise projects into Cartesian coordinates.

  • Time-of-flight noise: The range measurement is obtained from the two-way time of flight for an infrared pulse traveling from the lidar unit to a surface in the world and back. Noise may be introduced into the ranging measurement from electronic noise in the detector or from clock jitter, for example.

  • Beam width: The lidar beam has finite width. A well-documented example is the Velodyne VLP-16 sensor. When the beam leaves the sensor, it measures only millimeters across in any direction, but it diverges to a height of 8.5 cm and a width of 15 cm at a range of 50 m (Velodyne, 2019). This divergence corresponds to a full-beam angle of just under 0.1° in elevation and just under 0.2° in azimuth. The implication is that the pulse arriving at the detector is the integrated response of returns arriving from across the beam pattern. By extension, the photons that dominate the detected pulse may arrive from azimuth/elevation angles shifted from the beam centerline, and the photons that define the peak of the detected pulse may be blurred if arriving from an oblique or complex surface. In short, range, elevation, and azimuth measurements are not intrinsic properties of the sensor, as they reflect the shape of surfaces visualized across the beam pattern. In some cases, the detector may even measure multiple peaks (see multipath, below).

  • Variable scanning speed: For a rotating sensor, full rotations are measured with an encoder, but fractional azimuthal rotation is indirectly inferred from the time tag of each measurement. Assuming a constant rotation rate, the azimuth of the beam centerline can be computed by interpolating the measurement’s time tag between the rotation start and end times. Systematic errors may occur in inferring the centerline azimuth for each return if the motor speed controller does not maintain a constant rotation rate.

  • Calibration bias: Nominal boresight angles (e.g., azimuth and elevation) are specified for each beam. Small perturbations associated with fabrication may cause the transmitted beam to deviate slightly from its nominal angle. Moreover, thermal expansion may cause boresight angles to vary with ambient temperature.

  • Time-tagging bias: The lidar clock (used to tag each return) experiences a drifting bias relative to the system clock (used to fuse measurements from different sensors). The time offset injects velocity-dependent error into sensor fusion.
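To illustrate how these factors combine, the sketch below propagates independent range, azimuth, and elevation noise into a Cartesian covariance through the Jacobian of the spherical-to-Cartesian conversion. The sigma values are placeholders for illustration, not specifications of any particular sensor.

```python
import numpy as np

def return_covariance(r, az, el, sig_r=0.02, sig_az_deg=0.2, sig_el_deg=0.1):
    """Project anisotropic (range, azimuth, elevation) noise into Cartesian.

    The 3x3 covariance grows elongated tangentially at long range because
    angular noise scales with distance; sigma values here are placeholders.
    """
    sig_az, sig_el = np.radians([sig_az_deg, sig_el_deg])
    ca, sa, ce, se = np.cos(az), np.sin(az), np.cos(el), np.sin(el)
    # x = r*ce*ca, y = r*ce*sa, z = r*se; columns are d/dr, d/d(az), d/d(el)
    J = np.array([[ce * ca, -r * ce * sa, -r * se * ca],
                  [ce * sa,  r * ce * ca, -r * se * sa],
                  [se,       0.0,          r * ce]])
    C_spherical = np.diag([sig_r**2, sig_az**2, sig_el**2])
    return J @ C_spherical @ J.T

# At 50 m, the azimuthal sigma (~0.17 m) dwarfs the radial sigma (0.02 m):
print(np.sqrt(np.diag(return_covariance(50.0, 0.0, 0.0))))
```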

Adversity modes affecting the signal in space include random and systematic effects that occur along the path between transmission and reception. Signal-in-space effects generally distort the measured range, making it longer or shorter than expected. Examples include multipath, absorption, transmission, specular reflection, crosstalk, shadowing, and refraction.

  • Multipath: When the beam pattern falls on a surface discontinuity, such as the corner of a building or the branches of a tree, multiple reflections can occur from near and far surfaces. These cases generate distinct peaks at the detector, any of which might be used to generate a range measurement for a given pointing direction. To help resolve this ambiguity, some lidar systems can be configured to record the first peak, the last peak, the strongest peak, or multiple peaks (Velodyne, 2019; Ouster, 2024).

  • Crosstalk: If two lidar units, both operating at the same wavelength, are mounted on vehicles that approach each other, then the transmissions from one unit can be received by the other (Hu, 2024). This adversity might occur for two vehicles passing each other or sitting adjacent in traffic. Crosstalk might also result from static lidar units used in infrastructure, for instance, to count passing vehicles. The effects of the crosstalk signal will be similar to multipath effects, as discussed above. With regard to multipath-ambiguity resolution, note that the crosstalk signal will always be stronger than the true signal (because energy is not lost as a result of scattering) but that the crosstalk is not necessarily the first return.

  • Absorption: Typical surfaces return a significant fraction of energy from the lidar transmitter back to its detector. For cases in which no strong return is detected, the lidar reports a non-return. In most cases, a non-return indicates that no object is present within the detection range of the lidar, with the detection range defined somewhat sharply by spreading losses (inverse of range to the fourth power). However, exceptions are possible, including in cases of highly absorptive materials. If a surface absorbs sufficient energy, the detected power will fall below the threshold triggering a non-return. In such cases, an object may be present but not detectable.

  • Transmission: Certain surfaces may not reliably produce a return. For example, when a lidar beam passes near the edge of an isolated object or interior gaps in a surface (such as a fence or tree branches), a non-return may result. Glass and other transparent surfaces may also fail to produce a return. Consequently, as with the absorption case above, an object may be present but not detectable. In many cases, transmission is highly dependent on lidar configuration, with slight changes of pose resulting in either a valid range measurement or a non-return. Such surfaces might be defined as generating probabilistic returns in the sense that detections are inconsistent, with no easily calibrated pattern describing when the surface will produce valid returns or non-returns.

  • Specular reflection: Typical “matte” surfaces scatter light diffusely, in all directions. By contrast, “shiny” surfaces reflect some fraction of light specularly, like a mirror. Unless the reflecting surface is viewed from the normal direction, specular reflection redirects the lidar beam away from the lidar unit. In most cases, the reflected lidar beam travels into empty sky and dissipates (non-return). If the reflected beam hits a secondary surface, it may scatter back along the original ray path, bouncing off the reflector and returning to the lidar unit’s detector, thereby generating a non-line-of-sight (NLOS) measurement. Whereas the non-return case represents missing data, which can make it difficult to reconstruct surface shape, the NLOS case manifests as a positive bias in the measured range. Either way, care is needed to infer scene geometry when windows, mirrors, polished metal, or shiny paint are present.

  • Shadowing: When viewing a scene from different angles, foreground objects may occlude background objects from certain angles, effectively shadowing the background objects from view (McDermott & Rife, 2022b). Objects with curved surfaces can also shadow themselves, a phenomenon sometimes called self-shadowing or perspective error (Rife & McDermott, 2024). Shadowing effects may distort the apparent shape of an object, making it harder to extract features and/or match the map. Shadow edges can also create “spurious” features along walls or across the ground.

  • Refraction: Lidar systems assume that light travels in a straight line from the emitter to the scattering surface and back to the detector. Refraction effects violate this assumption by bending the trajectory of light. Refraction occurs when light travels through a temperature and density gradient in the air, such as when passing above an asphalt roadway heated above the ambient air temperature by solar radiation (Yang et al., 2019). When the lidar beam’s trajectory is bent by refraction, the result is an increased time of flight and, accordingly, distortion of the inferred position of the scattering surface.

Adversity modes related to hardware design include spontaneous hardware faults and glitches related to system angular resolution.

  • Hardware fault: A lidar unit might experience a partial-power condition, or a subcomponent might wear out, resulting in reduced measurement quality and a risk of hazardously misleading information (HMI). Such events are expected to be rare and to occur stochastically, with little or no advance warning.

  • Low angular resolution: Low lidar resolution (i.e., a low number of scan lines) makes it more difficult to identify small-scale features. When complicated objects with high spatial frequencies (relative to resolution) are imaged, small-scale features may be under-resolved, smoothed, or aliased, depending on the design of the lidar unit.
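As a back-of-the-envelope illustration of this resolution limit, the sketch below computes the gap between adjacent scan lines at a given range; the channel count, field of view, and feature size are assumed values.

```python
import numpy as np

# At range R, adjacent beams with angular spacing dtheta land roughly
# R * dtheta apart; features narrower than that spacing may be missed
# or aliased.
range_m = 50.0
channels = 32                       # assumed number of scan lines
fov_deg = 30.0                      # assumed vertical field of view
dtheta = np.deg2rad(fov_deg / (channels - 1))
spacing_m = range_m * dtheta        # vertical gap between samples at 50 m
print(f"sample spacing at {range_m:.0f} m: {spacing_m:.2f} m")
# A 0.1 m bracket on a lamp post is far smaller than this spacing and can
# fall entirely between scan lines, preventing reliable extraction.
```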

Adversity modes involving platform motion include bulk translation or rotation of the vehicle on which the lidar unit is mounted. Platform motion can cause apparent changes to the scene when the scene has not actually changed.

  • Scan boundary: Platform motion can cause objects to partially appear or disappear near the edges of the viewable domain. This adversity can impact accuracy and feature-extraction probabilities. For a mechanical rotating lidar, the boundaries of the viewable domain are usually defined by an upper and a lower elevation limit. Some units also have boundaries in the azimuthal direction. The viewable domain may be further restricted if parts of the ego vehicle carrying the lidar appear inside the field of view (in which case, a mask can be defined to exclude those regions).

  • Motion distortion: Sometimes called motion blur or rolling shutter, this adversity arises when the vehicle translates or rotates during the interval needed to capture a complete scan. For a rotating lidar unit, the time to complete a full 360° scan is typically 50–200 ms, far from instantaneous. During this interval, any non-zero translation or rotation will influence the position of the points in the scan. For example, a complete 360° rotation of the lidar rotor relative to the vehicle might occur as the vehicle rotates by 10° relative to the world. In the world frame, the resulting scan will capture either 350° (missing a slice of the scene) or 370° (overlapping a slice of the scene), depending on the relative directions of the two rotations. Motion distortion can be partially mitigated by estimating the vehicle translational and rotational speeds and applying a correction assuming that those speeds are constant; however, accelerations and vibrations remain difficult to correct.
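The constant-rate correction described in the motion-distortion bullet can be sketched as follows. This is a minimal two-dimensional illustration, assuming per-point timestamps and externally estimated velocity and yaw rate are available; it is not the specific correction used by any of the cited methods.

```python
import numpy as np

def deskew(points, t, v, omega_z):
    """Constant-rate motion-distortion correction (minimal 2-D sketch).

    points:  (N, 2) array of x, y returns in the sensor frame at capture time
    t:       (N,) per-point capture times, seconds, relative to scan start
    v:       (2,) assumed-constant vehicle velocity, m/s, in the scan-start frame
    omega_z: assumed-constant yaw rate, rad/s

    Each point is mapped into the sensor frame at scan start (t = 0), i.e.,
    where it would have appeared had the scan been captured instantaneously.
    """
    out = np.empty_like(points)
    for i, (p, ti) in enumerate(zip(points, t)):
        ang = omega_z * ti
        c, s = np.cos(ang), np.sin(ang)
        R = np.array([[c, -s], [s, c]])   # rotation accumulated by time ti
        out[i] = R @ p + v * ti           # rigid transform into the t = 0 frame
    return out

# Example: a 100 ms scan while driving 15 m/s and yawing ~10 deg/s
pts = np.array([[10.0, 0.0], [0.0, 10.0]])
times = np.array([0.0, 0.1])
print(deskew(pts, times, v=np.array([15.0, 0.0]), omega_z=0.17))
```

As the bullet notes, this correction assumes constant rates over the scan interval; accelerations and vibrations violate that assumption and remain difficult to remove.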

Another set of adversity modes relate to the map data used for localization. These modes include the following:

  • Data anomalies: In rare instances, map data may be corrupted by a communication fault (e.g., a bit error or a buffer overflow), a human mistake (e.g., scheduling the wrong upload), or the corruption of stored data (e.g., by a cosmic ray or a power surge). Examples of bad data uploads have not been discussed in the lidar literature, but they have been observed in the operation of satellite navigation systems (Pullen et al., 2001). Data anomalies are rare and unintentional.

  • Artifact or mislabeled feature: Map data are assumed to be highly reliable. However, an artifact may appear, i.e., a fictitious piece of data may be injected during map construction that does not match the real world. It is also possible for a feature to be mislabeled, such as the case in which two lamp posts are very close together and the map labels both as reliable features, or labels only one of them as unreliable. (Recall that an extraction fault can occur for closely spaced features, as discussed in Section 3.2.)

  • Large-scale ambiguity: Large-scale ambiguities occur in a map if two or more disparate locations have similar appearances. Examples might include similar intersections located on opposite sides of a city, patterns of lamp post spacings that appear repeatedly on different streets, or even a uniformly spaced series of features (e.g., cables on a suspension bridge). When large-scale ambiguities exist in a database, it may be difficult to initialize a pose solution. This situation is akin to an acquisition problem in GNSS (Misra & Enge, 2011) or a lost-robot problem in the field of robotics (Majdik et al., 2010).

The lidar algorithm itself can also result in adversity modes. In particular, we are concerned with aspects of the algorithm that are implementation-dependent and that can result in a significantly incorrect pose estimate.

  • Poor initialization: A nonlinear set of equations such as that in Equation (4) can be solved iteratively. The result of the iterations depends on having a reasonable initial guess. If the initial guess is outside the zone of convergence for the correct solution, then the estimated pose will converge to another local optimum, away from the true solution (see the sketch following this list).

  • Divergence: Nonlinear equations admit multiple solutions. Moreover, if the measurements are inconsistent, the geometry is sparse, or the linearization is ill-conditioned, the iterations may diverge from the true solution, even with a good initial guess. If the solution diverges severely, divergence is easy to detect, and the solution can be excluded. However, if the solution starts to diverge but later locks on to a secondary solution, the solver will report a wrong answer. The first case is a continuity problem (the adversity is detectable but prevents use of the lidar data), and the second case is an integrity problem (the adversity introduces a localization error yet is not detectable).
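Both algorithmic adversities can be illustrated with a toy solver. The sketch below is a generic Gauss–Newton loop applied to a scalar problem with two roots; it is not the solver for Equation (4). With a poor initial guess, the iteration converges confidently to the wrong solution, while only gross divergence is flagged.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, max_iter=50, tol=1e-9, blowup=1e6):
    """Toy Gauss-Newton loop with a crude divergence monitor.

    Returns (x, status), where status is 'converged', 'diverged', or
    'max_iter'. This is a generic sketch, not the paper's Equation (4).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]
        x = x + dx
        if np.linalg.norm(x) > blowup:   # gross divergence: detectable,
            return x, "diverged"         # a continuity problem
        if np.linalg.norm(dx) < tol:     # converged -- but possibly to a
            return x, "converged"        # wrong local optimum (integrity problem)
    return x, "max_iter"

# Scalar problem with two roots: r(x) = x^2 - 4 = 0 (roots at +2 and -2).
res = lambda x: np.array([x[0] ** 2 - 4.0])
jac = lambda x: np.array([[2.0 * x[0]]])
print(gauss_newton(res, jac, [3.0]))    # good guess -> converges to +2
print(gauss_newton(res, jac, [-3.0]))   # bad guess  -> converges to -2; the
                                        # solver reports success either way
```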

Another adversity mode is intentional interference with lidar operation, in the form of spoofing. Spoofing can be either electronic or physical. Because spoofing is a malicious cyberattack, this adversity cannot be considered a rare, probabilistic event. Instead, engineers should assume that an attack could happen and address it with a combination of hardening, monitoring, and resilient procedures.

  • Electronic spoofing: Modes of electronic spoofing include lidar repeaters, transmitters, and infrared jammers. These devices might implant intentional patterns of false returns into a scene (similar to crosstalk, as discussed above, but intentional in nature). An electronic spoofing attack might also be launched via the transmission of a false data message (e.g., an alert to update the map data in an intentionally disruptive manner).

  • Physical spoofing: Perhaps the easiest way to attack lidar navigation is to intentionally disrupt the physical scene. One might transform a scene like a movie set, by bringing in props or covering otherwise visible features.

A final category of adversity mode is scene change. Scene changes include all categories of moving objects within an environment, other than spoofing attacks. Scene changes can have localized effects (on one feature) or scanwise effects (on all lidar points in a scene), depending on the nature of the effect. Scene changes can create local errors, false or failed feature extraction, or problems in associating features with the map data. The variety of scene changes is broad enough that we have introduced a separate table (Table 4) to categorize them. Notably, by construction, the engineered-target and landmark-based architectures rely on objects in the environment that change infrequently; by contrast, the scan-matching architecture includes all visible objects, and thus, changes are expected to have a more prominent impact on scan matching.

TABLE 4
Categorizing Scene Changes

Although we have been diligent in constructing the table of adversities (Table 3), it is possible that other adversities will be discovered in time. Our team is continuing to develop tools with the goal of isolating adversities in real data sets, in order to assess the severity of known adversities and identify new adversities should they appear (Choate & Rife, 2024).

4.3 Mitigations

The goal of identifying adversities is to mitigate them. The simplest adversities (e.g., noise and beam width effects) are nominal effects that should be included in a nominal error distribution. Rare adversities (e.g., hardware faults and data anomalies) may be mitigated by using reliable equipment and redundancy to ensure a low prior probability of a fault. For the most part, other adversities are neither nominal behaviors nor rare faults; thus, additional mitigations are necessary. It is our belief that such mitigations are possible via a combination of careful design practices and achievable research.

In this section, we explore potential avenues to mitigate adversities. For each adversity identified in this paper, a subset of reasonable mitigations is listed in Table 5. The table includes five categories of mitigation, each listed as a distinct column. The first mitigation approach is to manage risks, by leveraging a fault tree or other risk-management tools to account for rare events (Pullen et al., 2006). In this approach, system-safety engineers leverage the low prior probability of a fault and component redundancy to ensure that the overall risk is acceptable. Other approaches are needed to reduce system risk if adversities are not rare events. For instance, a second mitigation approach is to design systems to suppress risks. Safety-by-design approaches can eliminate certain well-characterized adversities through a combination of careful engineering, verification, and validation, before the system ever reaches the field. A third mitigation approach is to correct systematic errors. This approach addresses adversities algorithmically by removing repeatable biases, applying geometric constraints, or deterministically eliminating certain measurements under unsafe conditions. A fourth mitigation approach is to monitor the lidar signal to ensure its quality. Signal quality monitoring acts in real time to classify navigation measurements as either healthy or unsafe to use (Mitelman et al., 2000). A final mitigation approach is to bound errors that remain after other mitigations have been applied. Bounding involves establishing an upper limit on the magnitude of large errors as a confidence interval for the allowed integrity risk (DeCleene, 2000; Rife et al., 2006). Error bounding helps to manage sources of random error, as well as small deterministic biases.

TABLE 5
Adversity Mode Mitigations

Fault management (first mitigation column in Table 5) is a classical approach to accounting for and mitigating risks for rare events. The list of adversities includes only two rare events: hardware component failures and data anomalies. For both of these adversities, it seems reasonable to characterize a fault probability and combine that probability with appropriate redundancy to ensure an adequate upper bound on associated integrity risks (Lee et al., 1985).
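As a toy illustration of this bookkeeping, the sketch below combines an assumed per-unit fault probability with two-unit redundancy and compares the result against an assumed risk allocation; every number here is a placeholder, not a requirement from this paper.

```python
# Illustrative fault-tree arithmetic: with independent redundant lidar
# units, the probability that both are faulted in the same exposure
# interval is the product of the individual prior probabilities.
p_hw_fault = 1e-4          # assumed prior probability of a hardware fault
p_both = p_hw_fault ** 2   # two independent units -> 1e-8
risk_budget = 1e-7         # assumed allocation from the integrity budget
print(f"P(both units faulted) = {p_both:.1e}, "
      f"within budget: {p_both <= risk_budget}")
```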

Safety-by-design methods (second mitigation column in Table 5) promise to mitigate a wide range of adversities, particularly adversities that are frequent and well-structured. As examples, we believe that the following adversities can be resolved by developers, as a matter of sound engineering design: time-tagging biases, crosstalk, low angular resolution, data artifacts/mislabeling, and large-scale map ambiguities. For instance, time-tagging issues can potentially be resolved with an appropriate clock-triggering mechanism; crosstalk, by monitoring for high received-signal power; and low angular resolution, by using emerging high-resolution lidar systems. Map issues such as artifacts, mislabeling, and large-scale ambiguities can be mitigated by rigorous verification of the map data before use.

Correction methods (third mitigation column in Table 5) promise to address adversities that are largely geometric in nature or that are well modeled by physics. Examples of proposed correction algorithms include methods to address shadowing effects (McDermott & Rife, 2022b), field-of-view limitations (Choate & Rife, 2024), and motion distortion (Hong et al., 2010; Zhang & Singh, 2014; Al-Nuaimi et al., 2016; Inui et al., 2017; Setterfield et al., 2023; McDermott & Rife, 2023). In some cases, the same correction may address more than one adversity; for instance, a generalized form of motion-distortion correction could potentially address variable scanning speed. Although calibration of lidar intrinsic parameters has been studied (Bergelt et al., 2017; Lv et al., 2022), more work may be needed to ensure longer-term stability of calibration parameters. Similarly, models of porous or permeable scenes have been previously introduced (Browning et al., 2012), but they have not yet been adapted for use in a safety-critical system.

Online monitors (fourth mitigation column in Table 5) can exclude bad measurements or provide alerts if measurement quality degrades. Monitors may be particularly helpful for detecting scene changes or dynamic obstacles. To this end, a simple residual-based consistency check provides significant benefit (McDermott & Rife, 2024), and more sophisticated dynamic-object removal methods have been proposed to enhance performance (Wang et al., 2015; Pagad et al., 2020; Cai et al., 2023; Wang et al., 2023). Researchers are also tackling the subject of monitoring for cyberattacks, including physical and electronic spoofing (Cao et al., 2019; Changalvala & Malik, 2019; Liu & Park, 2021; You et al., 2021; Sun et al., 2020; Hu et al., 2024). One useful anti-spoofing technique, for instance, may be to monitor for returns with unusually high power. Work remains to characterize monitor integrity performance, especially considering the range of different scene changes described in Table 4. From an integrity perspective, feature-extraction monitors are among the best developed of current lidar monitors (Joerger et al., 2024).
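One minimal form of such a residual-based consistency check is sketched below: compute the root-mean-square (RMS) of post-registration point residuals and flag the scan when the RMS exceeds a threshold. The threshold value and synthetic data are illustrative assumptions only.

```python
import numpy as np

def residual_monitor(scan_pts, map_pts, threshold=0.25):
    """Flag a registered scan whose post-fit residuals look inconsistent.

    scan_pts, map_pts: (N, 3) arrays of matched points after registration
    threshold:         assumed RMS-residual limit, in meters

    Returns (rms, healthy). A large RMS suggests scene change, dynamic
    objects, or a wrong pose, so the measurement should be excluded.
    """
    residuals = np.linalg.norm(scan_pts - map_pts, axis=1)
    rms = float(np.sqrt(np.mean(residuals ** 2)))
    return rms, rms <= threshold

rng = np.random.default_rng(0)
m = rng.uniform(-20, 20, size=(500, 3))                  # synthetic map points
print(residual_monitor(m + 0.03 * rng.standard_normal((500, 3)), m))  # healthy
bad = m.copy()
bad[:100] += 2.0                 # e.g., a parked truck that is not in the map
print(residual_monitor(bad + 0.03 * rng.standard_normal((500, 3)), m))  # flagged
```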

Weather-related scene changes represent a particular concern for all architectures; many researchers, including Levinson et al. (2007), Filgueira et al. (2017), Heinzler et al. (2019), and Chang et al. (2023), have documented the impact of weather on lidar measurements. Monitors may help detect weather conditions, but they may not be able to mitigate all associated integrity risks. In some cases, however, particularly for extraction errors, landmark detection can be enhanced by careful system design, for example, by adding reflective features that maximize the distinction between landmarks and other non-reflective objects (Nagai, Ahmed, & Pervan, 2024) or by using additional information for classification, such as feature height (Nagai & Pervan, 2024).

Error bounding (fifth mitigation column in Table 5) captures residual effects that cannot be addressed by other mitigation methods. The error-bounding process, sometimes called uncertainty quantification, characterizes the probability density function for random and systematic errors. Error characterization is nontrivial in that lidar accuracy can rival that of many ground-truth systems (for example, carrier-phase GPS positioning), making it difficult to directly measure the lidar pose-estimation error. More work is also needed to generate reduced-order models that capture both scene dependencies and different error mechanisms including time-of-flight, beam width, variable scanning speed, calibration bias, multipath, specular reflection, and porous object issues. Fortunately, lidar uncertainty quantification is a topic of growing interest in the research community (Stoyanov et al., 2012; Joerger et al., 2022; Yuan et al., 2023; McDermott & Rife, 2024).
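The flavor of error bounding can be illustrated with a deliberately simple construction: inflate the empirical standard deviation, add the mean offset, and scale by a Gaussian tail multiplier. The inflation factor and synthetic errors below are assumptions; this loosely echoes, but is not, the paired-overbounding method of Rife et al. (2006), and a rigorous bound would be validated against the far tails of real data (DeCleene, 2000).

```python
import numpy as np

def overbound_sigma(errors, inflation=1.5):
    """Crude Gaussian overbound of an empirical error sample.

    Inflates the sample standard deviation and adds the mean offset so
    that the bounding Gaussian dominates the empirical distribution; the
    inflation factor is an assumption, not a validated value.
    """
    mu = float(np.mean(errors))
    sigma = inflation * float(np.std(errors))
    return abs(mu) + sigma

rng = np.random.default_rng(1)
err = rng.standard_normal(10_000) * 0.05 + 0.01  # synthetic position errors, m
sigma_ob = overbound_sigma(err)
k = 5.33   # Gaussian multiplier for ~1e-7 two-sided tail probability
print(f"protection level ~ {k * sigma_ob:.3f} m")
```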

4.4 Mitigation Through Complementary Architectures

While mitigation techniques can be developed and implemented within a single architecture, the deployment of parallel architectures is potentially a useful approach for mitigating many adversities. As suggested in Section 4.1, the same adversity may impact each architecture differently. As one example, consider spoofing. Engineered targets can be replicated and/or manipulated to spoof or deny positioning information. Infrastructure monitoring and/or target code authentication methods might help address this risk; however, another simple approach is to compare the engineered-target solution to the solution of a feature-based or scan-matching architecture. It is particularly difficult to spoof a scan-matching architecture, because the system uses the entire scene and not just a handful of extracted features.
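A cross-architecture comparison of this kind might look like the sketch below, which tests two independently derived position solutions against their combined uncertainty; the interface, covariances, and threshold are all hypothetical.

```python
import numpy as np

def cross_check(pose_a, pose_b, cov_a, cov_b, k=5.33):
    """Consistency test between two independently derived pose solutions
    (e.g., engineered-target vs. scan-matching).

    Hypothetical interface: poses are 2-D positions; covariances are 2x2.
    If the solution separation is large relative to the combined
    uncertainty, raise an alert rather than trusting either solution.
    The k**2 threshold is illustrative; a calibrated test would use a
    chi-square quantile for this 2-D statistic.
    """
    d = np.asarray(pose_a) - np.asarray(pose_b)
    S = np.asarray(cov_a) + np.asarray(cov_b)    # assumes independent errors
    stat = float(d @ np.linalg.solve(S, d))      # squared Mahalanobis distance
    return stat, stat <= k ** 2

I2 = 0.05 ** 2 * np.eye(2)
print(cross_check([10.00, 5.00], [10.02, 5.01], I2, I2))  # consistent
print(cross_check([10.00, 5.00], [12.00, 5.00], I2, I2))  # alert: possible spoofing
```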

As another example, consider scene changes. The engineered-target architecture is relatively resilient to undetected nominal scene change adversities. Even if a parked vehicle blocks an engineered target, it is unlikely that other features will be mistaken for the actual engineered target, given that the target is coded as shown in Figure 3 and Figure 4. In other words, the parked vehicle might cause a loss of continuity but not a loss of integrity. This resilience to scene changes could support change identification in a scan-matching system, particularly for cases in which the map becomes outdated over a large area (because of construction, for instance).

5 DISCUSSION

The road to developing an integrity case for lidar-based navigation will be a long one. As discussed in the prior section, many adversities exist that could result in unsafe measurements. While some of these adversities are rare (e.g., hardware failures), others are relatively common (e.g., scene changes).

Given the wide range of system adversities and their different frequencies of appearance, it is no surprise that development efforts to date have primarily focused on characterizing system-level safety in terms of miles driven. Waymo, for instance, reports that its vehicles have driven 40 million miles as of April 2025 (Waymo, 2025). Unfortunately, system-level safety claims represent time-averaged safety, rather than an instantaneous value for a given operating domain. An average risk quantification is insufficient to ensure that a system is safe. Consider a case in which the risk of the lidar measurement error is particularly high for a brief period; in that case, the system should annunciate and mitigate instantaneous risk (e.g., via a different route plan).

By cataloging adversities to high-integrity lidar positioning, this paper provides a starting point for quantification of specific integrity. Future analysis will be needed to provide theoretical or data-driven models of resulting errors and to quantify adversity probabilities, with and without mitigation. Controlled benchmarks will be needed to characterize the impact of adversities on nominal errors (e.g., root-mean-square error) as well as rarer errors (e.g., in the far tail of the error distribution). Latencies and monitor response times should be characterized. Additionally, as new mitigation strategies are introduced, it will be important to consider their practicality, including tradeoffs between robustness and design factors such as processing power or code complexity.

As future work begins characterizing lidar adversities and mitigations to those adversities, it will become clearer which lidar architectures provide the best balance between safety and practicality and whether vehicles will need to implement multiple architectures in order to provide consistent integrity for a given route. Later, once the integrity case for lidar-based absolute positioning is better defined, it will also be possible to provide a more definitive integrity characterization for sensor-fusion systems that integrate lidar with other sensors (including GNSS, IMU, odometry, cameras, and radar) or that leverage emerging evolutions of lidar technology (e.g., recording multiple returns, ambient infrared, Doppler, or multi-frequency data).

6 SUMMARY

The goal of this paper was to provide a broad overview of geometry-based methods for estimating lidar attitude and position for safety-of-life applications. In the process, we identified three lidar architectures for estimating geo-registered pose, including engineered-target, landmark-based, and scan-matching methods. Each architecture offers different cost–benefit tradeoffs, in terms of the effort required to install physical infrastructure and the amount of map data that must be surveyed and distributed to users. By providing details about our implementations for each architecture, we revealed their differences but also their similarities. One notable similarity is that all of the architecture implementations we considered can be applied via the same linearized measurement equations, where the only difference is that each architecture uses a different type of feature. One notable difference among architectures is their value proposition in different environments. At a high level, the architectures also share a similar set of adversity modes, effects, and mitigations; however, the detailed mechanisms by which these adversities impact each architecture are largely different. Together, these similarities and differences make the lidar architectures complementary. Thus, we identified potential integrity benefits of combining the architectures into a single system.

HOW TO CITE THIS ARTICLE

Rife, J.H., Khanafseh, S., Pervan, B., & Wassaf, H. (2025). Toward high-integrity roadway applications of georeferenced lidar positioning: A review. NAVIGATION, 72(4). https://doi.org/10.33012/navi.719

ACKNOWLEDGMENTS

The authors wish to thank and acknowledge Kana Nagai, who aided in preparing some illustrations used in this manuscript. The authors also thank the U.S. Department of Transportation Joint Program Office and the Office of the Assistant Secretary for Research and Technology for sponsorship of this work. Opinions discussed here are those of the authors and do not necessarily represent those of the Department of Transportation or other affiliated agencies.

This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

REFERENCES

  1. Al-Nuaimi, A., Lopes, W., Zeller, P., Garcea, A., Lopes, C., & Steinbach, E. (2016). Analyzing LiDAR scan skewing and its impact on scan matching. Proc. of the 2016 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Alcala de Henares, Spain, 1–8. https://doi.org/10.1109/IPIN.2016.7743598
  2. Aoki, Y., Goforth, H., Srivatsan, R. A., & Lucey, S. (2019). PointNetLK: Robust & efficient point cloud registration using PointNet. Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, 7163–7172. https://doi.org/10.1109/CVPR.2019.00733
  3. Bar-Shalom, Y., & Fortmann, T. (1988). Tracking and data association. Mathematics in Science and Engineering, 179. Academic Press.
  4. Bergelt, R., Khan, O., & Hardt, W. (2017). Improving the intrinsic calibration of a Velodyne LiDAR sensor. Proc. of the 2017 IEEE SENSORS, Glasgow, UK, 1–3. https://doi.org/10.1109/ICSENS.2017.8234357
  5. Besl, P. J., & McKay, N. D. (1992). Method for registration of 3-D shapes. In Sensor Fusion IV: Control Paradigms and Data Structures, Vol. 1611, 586–606. SPIE. https://doi.org/10.1117/12.57955
  6. Biber, P., & Straßer, W. (2003). The normal distributions transform: A new approach to laser scan matching. In Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), Las Vegas, NV, 2743–2748. IEEE. https://doi.org/10.1109/IROS.2003.1249285
  7. Browning, B., Deschaud, J. E., Prasser, D., & Rander, P. (2012). 3D mapping for high-fidelity unmanned ground vehicle lidar simulation. The International Journal of Robotics Research, 31(12), 1349–1376. https://doi.org/10.1177/0278364912460288
  8. Cai, Y., Li, B., Zhou, J., Zhang, H., & Cao, Y. (2023). Removing moving objects without registration from 3D LiDAR data using range flow coupled with IMU measurements. Remote Sensing, 15(13), 3390. https://doi.org/10.3390/rs15133390
  9. Cao, Y., Xiao, C., Cyr, B., Zhou, Y., Park, W., Rampazzi, S., Chen, Q., Fu, K., & Mao, Z. M. (2019). Adversarial sensor attack on lidar-based perception in autonomous driving. Proc. of the 2019 ACM SIGSAC Conference on Computer and Communications Security, Taipei, Taiwan, 2267–2281. https://dl.acm.org/doi/abs/10.1145/3319535.3339815
  10. Chang, J., Hu, R., Huang, F., Xu, D., & Hsu, L. T. (2023). LiDAR-based NDT matching performance evaluation for positioning in adverse weather conditions. IEEE Sensors Journal, 23(20), 25346–25355. https://doi.org/10.1109/JSEN.2023.3312911
  11. Changalvala, R., & Malik, H. (2019). LiDAR data integrity verification for autonomous vehicle. IEEE Access, 7, 138018–138031. https://doi.org/10.1109/ACCESS.2019.2943207
  12. Cho, Y., Kim, G., & Kim, A. (2020). Unsupervised geometry-aware deep lidar odometry. Proc. of the IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 2145–2152. https://doi.org/10.1109/ICRA40945.2020.9197366
  13. Choate, D., & Rife, J. H. (2024). Characterizing lidar point-cloud adversities using a vector field visualization. Proc. of the 37th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2024), Baltimore, MD, 1771–1784. https://doi.org/10.33012/2024.19864
  14. Csanyi, N., & Toth, C. K. (2007). Improvement of lidar data accuracy using lidar-specific ground targets. Photogrammetric Engineering & Remote Sensing, 73(4), 385–396. https://doi.org/10.14358/PERS.73.4.385
  15. DeCleene, B. (2000). Defining pseudorange integrity-overbounding. Proc. of the 13th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GPS 2000), Salt Lake City, UT, 1916–1924. https://www.ion.org/publications/abstract.cfm?articleID=1603
  16. Deng, J., Wu, Q., Chen, X., Xia, S., Sun, Z., Liu, G., Yu, W., & Pei, L. (2023). NeRF-LOAM: Neural implicit representation for large-scale incremental lidar odometry and mapping. Proc. of the IEEE/CVF International Conference on Computer Vision, Paris, France, 8218–8227. https://openaccess.thecvf.com/ICCV2023
  17. Desrochers, B., Lacroix, S., & Jaulin, L. (2015). Set-membership approach to the kidnapped robot problem. Proc. of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 3715–3720. http://doi.org/10.1109/IROS.2015.7353897
  18. Enge, P. (1999). Local area augmentation of GPS for the precision approach of aircraft. Proceedings of the IEEE, 87(1), 111–132. http://doi.org/10.1109/5.736345
  19. Enge, P., Walter, T., Pullen, S., Kee, C., Chao, Y. C., & Tsai, Y. J. (1996). Wide area augmentation of the Global Positioning System. Proceedings of the IEEE, 84(8), 1063–1088. http://doi.org/10.1109/5.533954
  20. Filgueira, A., González-Jorge, H., Lagüela, S., Díaz-Vilariño, L., & Arias, P. (2017). Quantifying the influence of rain in LiDAR performance. Measurement, 95, 143–148. https://doi.org/10.1016/j.measurement.2016.10.009
  21. Hassani, A., & Joerger, M. (2023). Analytical and empirical navigation safety evaluation of a tightly integrated LiDAR/IMU using return-light intensity. NAVIGATION, 70(4). https://navi.ion.org/content/70/4/navi.623
  22. Heinzler, R., Schindler, P., Seekircher, J., Ritter, W., & Stork, W. (2019). Weather influence and classification with automotive lidar sensors. Proc. of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 1527–1534. https://doi.org/10.1109/IVS.2019.8814205
  23. Hong, S., Ko, H., & Kim, J. (2010). VICP: Velocity updating iterative closest point algorithm. Proc. of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, 1893–1898. https://doi.org/10.1109/ROBOT.2010.5509312
  24. Hu, X., Liu, T., Shu, T., & Nguyen, D. (2024). Spoofing detection for LiDAR in autonomous vehicles: A physical-layer approach. IEEE Internet of Things Journal, 11(1), 20673–20689. https://doi.org/10.1109/JIOT.2024.3371378
  25. Huang, J. K., Clark, W., & Grizzle, J. W. (2022). Optimal target shape for LiDAR pose estimation. IEEE Robotics and Automation Letters, 7(2), 1238–1245. https://doi.org/10.1109/LRA.2021.3138779
  26. Huang, J. K., Wang, S., Ghaffari, M., & Grizzle, J. W. (2021). LiDARTag: A real-time fiducial tag system for point clouds. IEEE Robotics and Automation Letters, 6(3), 4875–4882. https://doi.org/10.1109/LRA.2021.3070302
  27. Huang, S., Gojcic, Z., Wang, Z., Williams, F., Kasten, Y., Fidler, S., Schindler, K., & Litany, O. (2023). Neural lidar fields for novel view synthesis. Proc. of the IEEE/CVF International Conference on Computer Vision, Paris, France, 18236–18246. https://openaccess.thecvf.com/ICCV2023
  28. Inui, K., Morikawa, M., Hashimoto, M., & Takahashi, K. (2017). Distortion correction of laser scan data from in-vehicle laser scanner based on Kalman filter and NDT scan matching. Proc. of the 14th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2017), Madrid, Spain, 329–334. https://www.scitepress.org/papers/2017/64223/64223.pdf
  29. Jamoom, M. B. (2016). Unmanned aircraft system sense and avoid integrity and continuity [PhD dissertation, Illinois Institute of Technology]. http://www.navlab.iit.edu/uploads/5/9/7/3/59735535/jamoomthesis_final.pdf
  30. Joerger, M., Hassani, A., Spenko, M., & Becker, J. (2024). Wrong association risk bounding using innovation projections for landmark-based LiDAR/inertial localization. Proc. of the 37th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2024), Baltimore, MD, 268–282. https://doi.org/10.33012/2024.19680
  31. Joerger, M., Jamoom, M., Spenko, M., & Pervan, B. (2016). Integrity of laser-based feature extraction and data association. Proc. of the 2016 IEEE/ION Position, Location and Navigation Symposium (PLANS), Savannah, GA, 557–571. https://ieeexplore.ieee.org/document/7479746
  32. Joerger, M., Wang, J., & Hassani, A. (2022). On uncertainty quantification for convolutional neural network LiDAR localization. Proc. of the 2022 IEEE Intelligent Vehicles Symposium (IV), Aachen, Germany, 1789–1794. https://doi.org/10.1109/IV51971.2022.9827445
  33. Kigotho, O. N., & Rife, J. H. (2021). Comparison of rectangular and elliptical alert limits for lane-keeping applications. Proc. of the 34th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2021), St. Louis, MO, 93–104. https://doi.org/10.33012/2021.17904
  34. Kujur, B., Khanafseh, S., & Pervan, B. (2024). Optimal INS monitor for GNSS spoofer tracking error detection. NAVIGATION, 71(1). https://doi.org/10.33012/navi.629
  35. Lee, W. S., Grosh, D. L., Tillman, F. A., & Lie, C. H. (1985). Fault tree analysis, methods, and applications: A review. IEEE Transactions on Reliability, 34(3), 194–203. https://doi.org/10.1109/TR.1985.5222114
  36. Levinson, J., Montemerlo, M., & Thrun, S. (2007). Map-based precision vehicle localization in urban environments. In Robotics: Science and systems, Vol. 4, 1. https://doi.org/10.15607/RSS.2007.III.016
  37. Li, Q., Wang, Y., Wang, Y., & Zhao, H. (2022). HDMapNet: An online HD map construction and evaluation framework. In 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, 4628–4634. http://doi.org/10.1109/ICRA46639.2022.9812383
  38. Liu, F., Murphy, T., & Skidmore, T. A. (1997). LAAS signal-in-space integrity monitoring description and verification plan. Proc. of the 10th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GPS), Kansas City, MO, 485–497. https://www.ion.org/publications/abstract.cfm?articleID=2752
  39. Liu, J., & Park, J. M. (2021). Seeing is not always believing: Detecting perception error attacks against autonomous vehicles. IEEE Transactions on Dependable and Secure Computing, 18(5), 2209–2223. https://doi.org/10.1109/TDSC.2021.3078111
  40. Liu, X., Wen, W., & Hsu, L. T. (2023). GLIO: Tightly-coupled GNSS/LiDAR/IMU integration for continuous and drift-free state estimation of intelligent vehicles in urban areas. IEEE Transactions on Intelligent Vehicles, 9(1), 1412–1422. https://doi.org/10.1109/TIV.2023.3323648
  41. Lv, J., Zuo, X., Hu, K., Xu, J., Huang, G., & Liu, Y. (2022). Observability-aware intrinsic and extrinsic calibration of LiDAR-IMU systems. IEEE Transactions on Robotics, 38(6), 3734–3753. https://doi.org/10.1109/TRO.2022.3174476
  42. Majdik, A., Popa, M., Tamas, L., Szoke, I., & Lazea, G. (2010). New approach in solving the kidnapped robot problem. Proc. of the ISR 2010 (41st International Symposium on Robotics) and ROBOTIK 2010 (6th German Conference on Robotics), Munich, Germany, 1–6. https://ieeexplore.ieee.org/abstract/document/5756811
  43. Martello, T., Rife, J. H., & Wassaf, H. (2024). Proximity-event quantification for navigating automated vehicles in concurrent traffic. Proc. of the 2024 International Technical Meeting of the Institute of Navigation (ITM), Long Beach, CA, 186–199. https://doi.org/10.33012/2024.19515
  44. McDermott, M., & Rife, J. (2022a). Enhanced laser-scan matching with online error estimation for highway and tunnel driving. Proc. of the 2022 International Technical Meeting of the Institute of Navigation, Long Beach, CA, 643–654. https://doi.org/10.33012/2022.18249
  45. McDermott, M., & Rife, J. (2022b). Mitigating shadows in lidar scan matching using spherical voxels. IEEE Robotics and Automation Letters, 7(4), 12363–12370. https://doi.org/10.1109/LRA.2022.3216987
  46. McDermott, M., & Rife, J. (2023). Correcting motion distortion for LiDAR scan-to-map registration. IEEE Robotics and Automation Letters, 9(2), 1516–1523. https://doi.org/10.1109/LRA.2023.3346757
  47. McDermott, M., & Rife, J. (2024). ICET online accuracy characterization for geometric based laser scan matching of 3D point clouds. NAVIGATION, 71(2). https://doi.org/10.33012/navi.647
  48. McDermott, M., & Rife, J. (2025). A probabilistic formulation of LiDAR mapping with neural radiance fields. IEEE Robotics and Automation Letters (early access). https://doi.org/10.1109/LRA.2025.3557301
  49. Misra, P., & Enge, P. (2011). Global Positioning System: Signals, measurements, and performance (2nd ed.). Ganga-Jamuna Press. https://www.gpstextbook.com/
  50. Mitelman, A. M., Phelts, R. E., Akos, D. M., Pullen, S. P., & Enge, P. K. (2000). A real-time signal quality monitor for GPS augmentation systems. Proc. of the 13th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GPS 2000), Salt Lake City, UT, 862–871. https://www.ion.org/publications/abstract.cfm?articleID=1483
  51. Nagai, K., Ahmed, S., & Pervan, B. (2024). Integrity with lidar incorrect extraction faults in adverse weather conditions. Proc. of the 2024 International Technical Meeting of the Institute of Navigation (ITM), Long Beach, CA, 1085–1094. https://doi.org/10.33012/2024.19535
  52. Nagai, K., Chen, Y., Spenko, M., Henderson, R., & Pervan, B. (2023). Integrity with extraction faults in lidar-based urban navigation for driverless vehicles. Proc. of the 2023 IEEE/ION Position, Location and Navigation Symposium (PLANS), Monterey, CA, 1099–1106. https://doi.org/10.1109/PLANS53410.2023.10140132
  53. Nagai, K., Fasoro, T., Spenko, M., Henderson, R., & Pervan, B. (2020). Evaluating GNSS navigation availability in 3-D mapped urban environments. Proc. of the 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS), Portland, OR, 639–646. https://doi.org/10.1109/PLANS46316.2020.9109929
  54. Nagai, K., & Pervan, B. (2024). Enhanced integrity of lidar localization: A study on feature extraction techniques. Proc. of the 37th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2024), Baltimore, MD, 1491–1505. https://doi.org/10.33012/2024.19735
  55. Nagai, K., Spenko, M., Henderson, R., & Pervan, B. (2024). Fault-free integrity of urban driverless vehicle navigation with multi-sensor integration: A case study in downtown Chicago. NAVIGATION, 71(1). https://doi.org/10.33012/navi.631
  56. Nagy, B., & Benedek, C. (2018). Real-time point cloud alignment for vehicle localization in a high resolution 3D map. Proc. of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany. https://openaccess.thecvf.com/ECCV2018
  57. On-Road Automated Driving Committee. (2021). Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. SAE International. https://www.sae.org/standards/content/j3016_202104/
  58. Ouster, Inc. (2024). Sensor data: Dual return profile. Ouster sensor docs (version 1.0). https://static.ouster.dev/sensor-docs/image_route1/image_route2/sensor_data/sensor-data.html#dual-return-v2-x
  59. Pagad, S., Agarwal, D., Narayanan, S., Rangan, K., Kim, H., & Yalla, G. (2020). Robust method for removing dynamic objects from point clouds. Proc. of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 10765–10771. https://doi.org/10.1109/ICRA40945.2020.9197168
  60. Pullen, S., Lee, J., Luo, M., Pervan, B., Chan, F. C., & Gratton, L. (2001). Ephemeris protection level equations and monitor algorithms for GBAS. Proc. of the 14th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GPS 2001), Salt Lake City, UT, 1738–1749. https://www.ion.org/publications/abstract.cfm?articleID=1852
  61. Pullen, S., Rife, J., & Enge, P. (2006). Prior probability model development to support system safety verification in the presence of anomalies. Proc. of IEEE/ION Position, Location, and Navigation Symposium, Coronado, CA, 1127–1136. https://doi.org/10.1109/PLANS.2006.1650720
  62. Reid, T. G., Houts, S. E., Cammarata, R., Mills, G., Agarwal, S., Vora, A., & Pandey, G. (2019). Localization requirements for autonomous vehicles. SAE International Journal of Connected and Automated Vehicles, 2(3), 173–190. https://doi.org/10.4271/12-02-03-0012
  63. Reid, T. G. R., Neish, A., & Manning, B. (2023). Localization & mapping requirements for level 2+ autonomous vehicles. Proc. of the 2023 International Technical Meeting of the Institute of Navigation (ITM), Long Beach, CA, 107–123. https://doi.org/10.33012/2023.18634
  64. Rife, J., Pullen, S., Enge, P., & Pervan, B. (2006). Paired overbounding for nonideal LAAS and WAAS error distributions. IEEE Transactions on Aerospace and Electronic Systems, 42(4), 1386–1395. https://doi.org/10.1109/TAES.2006.314579
  65. Rife, J. H., Elwood, P., & Wassaf, H. (2023). Event-based risk assessment for alert limits in automotive lane keeping. Proc. of the 2023 IEEE/ION Position, Location and Navigation Symposium (PLANS), Monterey, CA, 621–629. https://doi.org/10.1109/PLANS53410.2023.10140129
  66. Rife, J. H., Khanafseh, S., Pervan, B., & Wassaf, H. (2024). Fundamental architectures for high-integrity georeferenced lidar positioning. Proc. of the 37th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2024), Baltimore, MD, 245–267. https://doi.org/10.33012/2024.19870
  67. Rife, J. H., & McDermott, M. (2024). Characterizing perspective error in voxel-based lidar scan matching. NAVIGATION, 71(1). https://doi.org/10.33012/navi.627
  68. Sefati, M., Daum, M., Sondermann, B., Kreisköther, K. D., & Kampker, A. (2017). Improving vehicle localization using semantic and pole-like landmarks. Proc. of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, 13–19. https://doi.org/10.1109/IVS.2017.7995692
  69. Segal, A., Haehnel, D., & Thrun, S. (2009). Generalized-ICP. In Robotics: Science and systems, Vol. 2, No. 4. https://doi.org/10.15607/rss.2009.v.021
  70. Setterfield, T. P., Hewitt, R. A., Espinoza, A. T., & Chen, P. T. (2023). Feature-based scanning LiDAR-inertial odometry using factor graph optimization. IEEE Robotics and Automation Letters, 8(6), 3374–3381. https://doi.org/10.1109/LRA.2023.3266701
  71. Stoyanov, T., Magnusson, M., Andreasson, H., & Lilienthal, A. J. (2012). Fast and accurate scan registration through minimization of the distance between compact 3D NDT representations. The International Journal of Robotics Research, 31(12), 1377–1393. https://doi.org/10.1177/0278364912460895
  72. Sun, J., Cao, Y., Chen, Q. A., & Mao, Z. M. (2020). Towards robust LiDAR-based perception in autonomous driving: General black-box adversarial sensor attack and countermeasures. Proc. of the 29th USENIX Security Symposium (USENIX Security 20), Boston, MA, 877–894. https://www.usenix.org/system/files/sec20-sun.pdf
  73. Teo, T.-A., & Chiu, C.-M. (2015). Pole-like road object detection from mobile lidar system using a coarse-to-fine approach. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 8(10), 4805–4818. https://doi.org/10.1109/JSTARS.2015.2467160
  74. Thrun, S., Burgard, W., & Fox, D. (2002). Probabilistic robotics. MIT Press. https://mitpress.mit.edu/9780262201629/probabilistic-robotics/
  75. Wang, D. Z., Posner, I., & Newman, P. (2015). Model-free detection and tracking of dynamic objects with 2D lidar. The International Journal of Robotics Research, 34(7), 1039–1063. https://doi.org/10.1177/0278364914562237
  76. Wang, Y., Yao, W., Zhang, B., Fu, J., Yang, J., & Sun, G. (2023). DRR-LIO: A dynamic-region-removal-based LiDAR inertial odometry in dynamic environments. IEEE Sensors Journal, 23(12), 13175–13185. https://doi.org/10.1109/JSEN.2023.3269861
  77. Wassaf, H., Bernazzani, K., Dodova, M., Gandhi, P., Lu, J., Van Dyke, K., Barron, M., Rife, J., Flake, J., Herman, M., & Shallberg, K. (2022). Positioning, navigation, and timing (PNT) technology readiness for safe highly automated vehicle (HAV)/automated driving system (ADS) operations. Final report. Volpe National Transportation Center.
  78. Wassaf, H., Bernazzani, K., Gandhi, P., Lu, J., Van Dyke, K., Shallberg, K., Ericson, S., Flake, J., & Herman, M. (2021). Highly automated vehicle absolute positioning using lidar unique signatures. Proc. of the 34th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2021), St. Louis, MO, 2252. https://doi.org/10.33012/2021.17878
  79. Waymo. (2025). Refine how you move around Phoenix. https://waymo.com/
  80. Willers, O., Sudholt, S., Raafatnia, S., & Abrecht, S. (2020). Safety concerns and mitigation approaches regarding the use of deep learning in safety-critical perception tasks. Proc. of the International Conference on Computer Safety, Reliability, and Security SAFECOMP 2020 Workshops, 336–350. https://doi.org/10.1007/978-3-030-55583-2_25
  81. Xu, D., Liu, J., Liang, Y., Lv, X., & Hyyppä, J. (2022). A LiDAR-based single-shot global localization solution using a cross-section shape context descriptor. ISPRS Journal of Photogrammetry and Remote Sensing, 189, 272–288. https://doi.org/10.1016/j.isprsjprs.2022.05.005
  82. Yang, S., Zhang, J. Y., Yang, Y. Y., Huang, J. Y., Bai, Y. R., Zhang, Y., & Lin, X. C. (2019). Automatic compensation of thermal drift of laser beam through thermal balancing based on different linear expansions of metals. Results in Physics, 13. https://doi.org/10.1016/j.rinp.2019.102201
  83. You, C., Hau, Z., & Demetriou, S. (2021). Temporal consistency checks to detect LiDAR spoofing attacks on autonomous vehicle perception. Proc. of the 1st Workshop on Security and Privacy for Mobile AI, New York, NY, 13–18. https://dl.acm.org/doi/abs/10.1145/3469261.3469406
  84. Yuan, M., Yau, W. Y., & Li, Z. (2018). Lost robot self-recovery via exploration using hybrid topological-metric maps. Proc. of the TENCON 2018–2018 IEEE Region 10 Conference, Jeju, Korea (South), 0188–0193. http://doi.org/10.1109/TENCON.2018.8650236
  85. Yuan, R. H., Taylor, C. N., & Nykl, S. L. (2023). Accurate covariance estimation for pose data from iterative closest point algorithm. NAVIGATION, 70(2). http://doi.org/10.33012/navi.562
  86. Zhang, J., & Singh, S. (2014). LOAM: Lidar odometry and mapping in real-time. Proceedings of Robotics: Science and Systems, 2(9), 1–9. https://doi.org/10.15607/RSS.2014.X.007
  87. Zhang, J., Zhang, F., Kuang, S., & Zhang, L. (2024). NeRF-Lidar: Generating realistic lidar point clouds with neural radiance fields. Proceedings of the AAAI Conference on Artificial Intelligence, 38(7), 7178–7186. https://ojs.aaai.org/index.php/AAAI/article/view/28546
  88. Zhang, Z., Dai, Y., & Sun, J. (2020). Deep learning based point cloud registration: An overview. Virtual Reality & Intelligent Hardware, 2(3), 222–246. https://doi.org/10.1016/j.vrih.2020.05.002
  89. Zhou, Y., & Tuzel, O. (2018). VoxelNet: End-to-end learning for point cloud based 3D object detection. Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, 4490–4499. https://doi.org/10.1109/CVPR.2018.00472