Abstract
This paper proposes the cooperative use of zero velocity update (ZU) in a decentralized extended Kalman filter (DEKF)-based localization algorithm for multi-robot systems. The filter utilizes inertial measurement unit (IMU), ultra-wideband (UWB), and odometer-based velocity measurements to improve the localization performance of the system in a GNSS-denied environment. In this work, we evaluate the benefits of using ZU in a DEKF-based localization algorithm. The algorithm was tested with real hardware in a video motion capture facility and a robot operating system (ROS)-based simulation environment for unmanned ground vehicles (UGVs). Both simulation and real-world experiments were performed to determine the effectiveness of using ZU in one robot to reinstate the localization of the others in a multi-robot system. Experimental results from GNSS-denied simulation and real-world environments revealed that using ZU in the DEKF together with simple heuristics significantly improved the three-dimensional localization accuracy.
1 INTRODUCTION
Mobile robots rely on accurate localization estimates to perform specific tasks, including exploration, navigation, detection and tracking of objects, map building, and autonomous movement through space. Robots can improve their ability to locate themselves accurately within an environment by fusing information from multiple sources. Positioning information is typically obtained using the global navigation satellite system (GNSS), which includes the global positioning system (GPS) and other similar systems. However, the availability of this system is often unreliable in urban, forested, and indoor areas because of obstructions that block access to signals from satellites (Merry & Bettinger, 2019).
Given the challenges posed by limited GNSS availability in specific environments, cooperative localization emerges as a valuable alternative, as it may permit mobile robots to achieve accurate positioning. Cooperation among multiple robots is desirable for many tasks because several robots working together can perform many tasks more efficiently and robustly than a single robot (Gautam & Mohan, 2012). Cooperative localization is a method in which multiple robots share information and perform relative measurements of one another to obtain more precise estimates of their locations than could be achieved by a single robot (Gao et al., 2019). Robots can achieve this level of cooperation when they detect each other and share state estimates and covariances in the presence of relative measurements, as shown by Luft et al. (2018). The individual state estimates of the robots are commonly obtained by fusing measurements from both proprioceptive (e.g., IMU) and exteroceptive (e.g., satellite signal receiver, laser scanner, camera) sensors.
An agent in a cooperative multi-robot system may still benefit from GNSS information even in the absence of direct access if its counterparts have access to this information (Qu & Zhang, 2011). The fusion of IMU-based dead reckoning and visual odometry (VO) is useful toward solving this problem in GNSS-denied/degraded environments and can be used cooperatively (Queralta et al., 2022). However, visual methods depend on the characteristics of the environment and become unreliable on uniform terrain with few landmarks.
Zero velocity update (ZU) is widely used to aid pedestrian inertial navigation (Foxlin, 2005; Kwakkel et al., 2008; Norrdine et al., 2016; Zhang et al., 2017) and is one of the important techniques used for error suppression and compensation in high-precision positioning systems (Skog et al., 2010). The main advantages of ZU for the localization task are that these updates can provide a boundary for the velocity error, accurately calibrate the IMU sensor biases, and limit the rate of INS localization drift (Groves, 2013). Of note, ZU can be used in wheeled robots when stationary conditions are detected. ZU can be utilized passively, as an opportunistic navigational update, such as when wheeled robots need to stop for external reasons (Gutierrez et al., 2022), or actively, with robots programmed for periodic stopping (Kilic et al., 2019) and/or those that decide when to stop autonomously (Kilic et al., 2021). Because ZU can only be used when the robot is stationary, it may be challenging to create a scenario in which all the robots stop simultaneously. However, in a cooperative localization system, only some of the robots may need to stop actively. The other robots can leverage ZU in an opportunistic way, such as when they need to stop for other reasons (e.g., avoiding obstacles, planning, waiting for pedestrians, or stopping at traffic lights).
Cooperative localization for multi-robot systems can be classified into centralized and decentralized methods. In centralized methods, each robot in the system transmits its measurements to a central server that provides localization estimations for all robots in the group. This usually results in high communication and computational costs to the server (Bailey et al., 2011). Also, the failure of a central processing unit in a centralized system usually leads to catastrophic results. By contrast, decentralized methods are more resilient to failures and result in decreased computational and communication costs (Bailey et al., 2011). For example, a decentralized multi-robot system may still function in the event of a malfunction of a single robot in the group.
The impact of individual updates in a decentralized localization system can be noticeably beneficial to the localization performance of the entire system. For example, one segment of the multi-robot system that is capable of performing GNSS updates can enhance the state estimates for the entire group. One can envision a scenario in which some of the agents in the multi-robot system are placed in an area in which the GNSS signal is interrupted whereas the other robots can obtain sufficient signals. In this situation, robots with adequate positioning information can share this information with other members of the system during relative updates. This notion extends readily to other updates. For example, when a robot is in a stationary condition, it can perform ZU to calibrate the IMU biases and keep the INS-based localization reliable when other sensor measurements (e.g., GNSS and VO) are not available. Thus, the other robots in the system can benefit from this update even though they do not use ZU themselves.
2 PROBLEM STATEMENT
Here we explore the potential of leveraging the benefits of ZUs observed in decentralized cooperative localization studies. In this paper, we assume that each agent in a multi-robot system is capable of performing INS-based dead-reckoning. The localization algorithm can leverage ZU under certain conditions depending on the robot type, including landing (Groves, 2013) and hovering (Gross et al., 2019) for aerial robots or stopping and using non-holonomicity (Kilic et al., 2019) for ground robots. These specific conditions allow the algorithm to perform ZU. We adopt a decentralized architecture for the filter estimator that enables robots to decouple their states, thereby reducing computational costs via distributed computation (Bailey et al., 2011). The robots utilize ultra-wideband (UWB) sensors to obtain relative ranging measurements and perform relative updates, which couple the states and covariances of the robots participating in the update. The individual robots can take advantage of ZUs, which collectively improve their localization performance in feature-poor areas without significant changes to robot operations. This enhancement, similar to the effect of using a GNSS update for a single robot, benefits the overall localization performance of the multi-robot system during relative updates.
All sensors are noisy, and a robot's knowledge of its state rests on assumptions about how faithfully its internal representations match the real world. As a result, the error in the robot state can accumulate over time (Engelson & McDermott, 1992; Thrun et al., 2005). Substantial error accumulation increases the robot's uncertainty with respect to its localization estimates and can also generate a false belief about where it is located (Choset et al., 2005). The kidnapped robot problem represents a case in which a robot is moved to another position without being told (Engelson & McDermott, 1992; Fox et al., 2001). Many authors have explored the kidnapped robot problem in numerous different contexts (Bukhori & Ismail, 2017; Choset et al., 2005; Thrun et al., 2005; Yi & Choi, 2011; Yu et al., 2020). In our case, similar to the kidnapped robot problem, we evaluated several lost-robot cases, in which the robots have significant position errors and covariances even before they are moved to another position. Lost-robot cases can arise when the primary localization sensors are disabled or when the accumulated localization error grows larger than can be tolerated. In this study, we focus on a specific instance of this problem: how to maintain reliable localization and how to reinstate the localization of robots that have become lost in the cooperative DEKF architecture.
In our previous work, we analyzed several schemes that might be used to help us to decide when to introduce the information from pseudo-measurements (e.g., ZU) to improve the localization performance in a DEKF-based cooperative localization system. Our findings were carefully analyzed and compared in a simulation environment with ground robots (Gutierrez et al., 2022).
In this work, our specific contributions are summarized as follows:
We further generalize the method described previously by Gutierrez et al. (2022) and significantly expand on it by implementing the algorithm in real-world experiments with small ground robots.
We include an additional algorithm that uses velocity information from odometry (wheel and/or visual) in the DEKF.
We demonstrate the benefits of using ZU for the lost robot cases in a multi-robot DEKF system by qualitatively comparing the localization performance with a video motion capture system solution.
We have made our software and data sets publicly available1.
The rest of this paper is organized as follows. Section 3 details and explains the implementation of the algorithm used. Section 4 describes the simulation and real-world experimental setups and the robots used. Section 5 presents the results of the simulation and real-world experiments. Finally, Section 6 concludes the paper and discusses future studies that might improve the system.
3 METHODOLOGY
In this work, the base DEKF algorithm is implemented by closely following the work described by Luft et al. (2016). This algorithm is embedded into ROS for localizing N vehicles navigating in a GNSS-denied/degraded environment. Apart from the re-implementation of the base DEKF algorithm in ROS, an error-state rather than a total-state EKF was used. This keeps the accumulated error more manageable over long sequences of predictions without measurement updates. Also, modeling the error states is more straightforward because they behave in a less complicated fashion than the highly non-linear total states (Kilic, 2021; Roumeliotis et al., 1999).
Additionally, the available sensor modalities are reduced such that the sensor fusion is performed using only IMU, UWB, wheel encoders, and zero velocity updates. For example, we assume that the robots cannot acquire bearing measurements from each other through the use of exteroceptive sensors, are unable to identify any landmarks within the environment, and operate in a GNSS-denied setting. This differs from the approach presented by Luft et al. (2016). The algorithm uses the IMU as the primary source for localization estimates. Because the INS-based localization may drift over time, the estimate must be improved or corrected. Wheel encoders and cameras that provide velocity information are widely utilized to improve INS-based localization through sensor fusion techniques (Cho et al., 2011; Cobos et al., 2010). While GNSS updates can provide reliable information for correcting the localization drift, these updates are not always available. ZU, which can be used whenever the robot is stationary, improves the localization estimate by calibrating the IMU and bounding the velocity error. The relative update is performed using only UWB-ranging information. This update is also utilized as a communication bridge between robots to facilitate pairwise sharing of state information.
In the error-state DEKF architecture, each robot can perform three different algorithms to estimate and improve the overall state:
INS-based dead-reckoning, in which each robot propagates its error state and updates its total state using IMU measurements
Private updates2, in which each robot receives a measurement update based on sensor availability and robot-environment constraints that are not shared directly with the group, (e.g., odometry velocity, ZU, and GNSS)
Relative updates2, which are performed when two robots are within a specified proximity that facilitates coupling of the states, covariances, and cross-correlated values.
These three algorithms are presented in detail in the following subsections. Figure 1 highlights the algorithms and the sensors used in the ROS system.
3.1 INS-Based Dead-Reckoning
The INS implementation constitutes the state propagation step of the error-state DEKF. The error-state formulation is presented in a local navigation frame (i.e., the East-North-Up frame) based on the formulation described by Groves (2013) and Kilic et al. (2021). The formulation is further simplified under the assumption that the robots have low-velocity profiles, which allows us to neglect the Earth's rotation and craft-rate terms in the INS mechanization. This simplifies INS initialization so that it does not depend on the robot's absolute position on Earth, with limited performance impact for low-cost microelectromechanical systems (MEMS) IMU sensors.
The error-state vector, $\mathbf{x}^{err}$, is formed in a local navigation frame as shown in Equation (1):

$$\mathbf{x}^{err} = \begin{bmatrix} \delta\boldsymbol{\psi}^{\top} & \delta\mathbf{v}^{\top} & \delta\mathbf{p}_b^{\top} & \mathbf{b}_a^{\top} & \mathbf{b}_g^{\top} \end{bmatrix}^{\top} \tag{1}$$

where $\delta\boldsymbol{\psi}$ is the attitude error, $\delta\mathbf{v}$ is the velocity error, $\delta\mathbf{p}_b$ is the position error, $\mathbf{b}_a$ is the IMU acceleration bias, and $\mathbf{b}_g$ is the IMU gyroscope bias. Once the inertial biases $\mathbf{b}_a$ and $\mathbf{b}_g$ are estimated, they are removed from the raw IMU measurements before their use in the filter time-propagation update. The error-state vector is defined as in Equation (1), and the total-state vector, $\mathbf{x}$, is given in Equation (2):

$$\mathbf{x} = \begin{bmatrix} \boldsymbol{\psi}^{\top} & \mathbf{v}^{\top} & \mathbf{p}_b^{\top} \end{bmatrix}^{\top} \tag{2}$$

where each of the nine total states corresponds to the first nine error states. In the inertial navigation equations, following the notation in Groves (2013), the symbols (−) and (+) indicate the values at the beginning and end of the processing cycle of the navigation equations, respectively. The attitude update is given, with the assumption that we can neglect the rotation of the Earth and the transport rate, as shown in Equation (3):

$$\mathbf{C}_b^n(+) = \mathbf{C}_b^n(-)\left(\mathbf{I}_3 + \boldsymbol{\Omega}_{ib}^b \,\Delta t_I\right) \tag{3}$$

where $\mathbf{C}_b^n$ is the coordinate transformation matrix from the body frame to the local level frame, $\mathbf{I}_3$ is a 3-by-3 identity matrix, $\boldsymbol{\Omega}_{ib}^b$ is the skew-symmetric matrix of the IMU angular-rate measurement, and $\Delta t_I$ is the IMU sampling interval. The velocity update is as indicated in Equation (4):

$$\mathbf{v}(+) = \mathbf{v}(-) + \left(\mathbf{C}_b^n(-)\,\mathbf{a}_{IMU} + \mathbf{g}^n\right)\Delta t_I \tag{4}$$

where $\mathbf{v}(+)$ is the updated velocity, $\mathbf{a}_{IMU}$ is the acceleration measurement from the IMU sensor, and $\mathbf{g}^n$ is the gravity vector, defined as $\mathbf{g}^n = \begin{bmatrix} 0 & 0 & -g \end{bmatrix}^{\top}$.

The position update is as shown in Equation (5):

$$\mathbf{p}_b(+) = \mathbf{p}_b(-) + \frac{\Delta t_I}{2}\left(\mathbf{v}(-) + \mathbf{v}(+)\right) \tag{5}$$

The error-state dynamics are linearized as shown in Equation (6):

$$\dot{\mathbf{x}}^{err} = \mathbf{F}\,\mathbf{x}^{err}, \qquad \mathbf{F} = \begin{bmatrix} \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{C}_b^n \\ -\left(\mathbf{C}_b^n\,\mathbf{a}_{IMU}\right)^{\wedge} & \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{C}_b^n & \mathbf{0}_3 \\ \mathbf{0}_3 & \mathbf{I}_3 & \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{0}_3 \\ \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{0}_3 \\ \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{0}_3 \end{bmatrix} \tag{6}$$

where $\wedge$ denotes the skew-symmetric matrix of a vector.

Then, the first-order error-state transition matrix, $\boldsymbol{\Phi}$, is given as shown in Equation (7):

$$\boldsymbol{\Phi} = \mathbf{I}_{15} + \mathbf{F}\,\Delta t_I \tag{7}$$

which is used for propagating the error-state vector, $\mathbf{x}^{err}$, the covariance, $\mathbf{P}$, and the cross-correlated terms, $\sigma_{AB}$, between robots. The state estimates and covariance that are propagated over time are denoted using the superscript "−"; those after the measurement updates described in the following subsections are indicated using the superscript "+", as shown in Equations (8), (9), and (10):

$$\mathbf{x}^{err,-} = \boldsymbol{\Phi}\,\mathbf{x}^{err,+} \tag{8}$$

$$\mathbf{P}^{-} = \boldsymbol{\Phi}\,\mathbf{P}^{+}\boldsymbol{\Phi}^{\top} + \mathbf{Q}_{INS} \tag{9}$$

$$\sigma_{AB}^{-} = \boldsymbol{\Phi}\,\sigma_{AB}^{+} \tag{10}$$
where $\mathbf{Q}_{INS}$ is the INS system noise covariance matrix, generated under the assumption of small propagation intervals, based on the work described by Groves (2013). The input power spectral density for the error states is modeled by considering not only white noise in the accelerations and angular rates but also the biases and scale factor errors inherent in the inertial sensors. The construction of the $\mathbf{Q}_{INS}$ matrix accounts for bias and noise sources such as gyro in-run bias stability, angular random walk (ARW), accelerometer in-run bias stability, and velocity random walk (VRW). These are converted to their respective power spectral densities (PSDs) and scaled according to the time interval. The resulting covariance matrix, $\mathbf{Q}_{INS}$, is formed by populating the diagonal blocks with the computed PSDs for each noise source, ensuring an accurate representation of the noise characteristics of the system.
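To make the propagation step concrete, the following minimal Python sketch assembles $\mathbf{F}$, the first-order transition matrix, and a diagonal-block $\mathbf{Q}_{INS}$, following Equations (6)-(10) with the 15-state ordering of Equation (1). All function and variable names, and the treatment of the bias PSD inputs, are our own illustration under the stated assumptions rather than the released implementation.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def make_q_ins(arw, vrw, accel_bias_psd, gyro_bias_psd, dt):
    """Diagonal-block Q_INS from noise PSDs, assuming small propagation
    intervals; the inputs are PSD-level quantities (illustrative only)."""
    Q = np.zeros((15, 15))
    Q[0:3, 0:3] = arw**2 * dt * np.eye(3)             # attitude error (ARW)
    Q[3:6, 3:6] = vrw**2 * dt * np.eye(3)             # velocity error (VRW)
    Q[9:12, 9:12] = accel_bias_psd * dt * np.eye(3)   # accel in-run bias
    Q[12:15, 12:15] = gyro_bias_psd * dt * np.eye(3)  # gyro in-run bias
    return Q

def propagate(x_err, P, sigma_AB, C_b_n, a_imu, dt, Q_ins):
    """One time-propagation step of the 15-state error-state filter,
    Equations (6)-(10). sigma_AB maps robot id -> cross-term factor."""
    F = np.zeros((15, 15))
    F[0:3, 12:15] = C_b_n                  # gyro bias -> attitude error rate
    F[3:6, 0:3] = -skew(C_b_n @ a_imu)     # attitude error -> velocity error rate
    F[3:6, 9:12] = C_b_n                   # accel bias -> velocity error rate
    F[6:9, 3:6] = np.eye(3)                # velocity error -> position error rate
    Phi = np.eye(15) + F * dt              # Eq. (7)
    x_err = Phi @ x_err                    # Eq. (8)
    P = Phi @ P @ Phi.T + Q_ins            # Eq. (9)
    sigma_AB = {rid: Phi @ s for rid, s in sigma_AB.items()}  # Eq. (10)
    return x_err, P, sigma_AB
```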
3.2 Private Updates
Following the nomenclature described by Luft et al. (2016), private updates are performed individually by each robot and are not shared with the group directly. These updates depend on the robot's individual relationships with the environment, and the information from some of these updates may not be available depending on the mission environment profile.
3.2.1 GNSS and Motion Capture Update
One standard private update leverages GNSS measurements, which are assumed to correct the localization drift and provide reliable position estimates. As in our previous work (Gutierrez et al., 2022), the GNSS update can be performed in a loosely coupled manner with the following structure. Although GNSS updates were not used in this work because of the indoor testing setting, we provide the structure of this update here for the sake of completeness.
The measurement innovation is given as shown in Equation (11):

$$\delta\mathbf{z}_{PU} = \begin{bmatrix} \mathbf{v}_{PU} - \hat{\mathbf{v}} \\ \mathbf{r}_{PU} - \hat{\mathbf{r}} \end{bmatrix} \tag{11}$$

where $\mathbf{v}_{PU}$ represents the velocity and $\mathbf{r}_{PU}$ represents the position measurement obtained from the GNSS receiver, and $\hat{\mathbf{v}}$ and $\hat{\mathbf{r}}$ are the corresponding state estimates. In our case, the velocity and position measurement coordinate frames were set to match the corresponding states, and the GNSS antenna lever arm was assumed to be located at the IMU sensor. The Kalman gain for the private update is calculated as shown in Equation (12):

$$\mathbf{K}_{PU} = \mathbf{P}^{-}\mathbf{H}_{PU}^{\top}\left(\mathbf{H}_{PU}\mathbf{P}^{-}\mathbf{H}_{PU}^{\top} + \mathbf{R}_{PU}\right)^{-1} \tag{12}$$

where $\mathbf{H}_{PU}$ represents the Jacobian of the GNSS measurement model and can be given as indicated in Equation (13):

$$\mathbf{H}_{PU} = \begin{bmatrix} \mathbf{0}_3 & -\mathbf{I}_3 & \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{0}_3 \\ \mathbf{0}_3 & \mathbf{0}_3 & -\mathbf{I}_3 & \mathbf{0}_3 & \mathbf{0}_3 \end{bmatrix} \tag{13}$$

where $\mathbf{I}_3$ represents the identity matrix. The measurement noise covariance matrix, $\mathbf{R}_{PU}$, is given as indicated in Equation (14):

$$\mathbf{R}_{PU} = \mathrm{diag}\left(\sigma_{v_x}^2,\ \sigma_{v_y}^2,\ \sigma_{v_z}^2,\ \sigma_{r_x}^2,\ \sigma_{r_y}^2,\ \sigma_{r_z}^2\right) \tag{14}$$

where $\sigma_{v_x}^2$, $\sigma_{v_y}^2$, and $\sigma_{v_z}^2$ are the variances of the velocity measurement noise and $\sigma_{r_x}^2$, $\sigma_{r_y}^2$, and $\sigma_{r_z}^2$ are the variances of the position measurement noise in the x, y, and z directions, respectively. Using the calculated Kalman gain, the error state and covariance are updated as shown in Equations (15) and (16):

$$\mathbf{x}^{err,+} = \mathbf{x}^{err,-} + \mathbf{K}_{PU}\left(\delta\mathbf{z}_{PU} - \mathbf{H}_{PU}\,\mathbf{x}^{err,-}\right) \tag{15}$$

$$\mathbf{P}^{+} = \left(\mathbf{I} - \mathbf{K}_{PU}\mathbf{H}_{PU}\right)\mathbf{P}^{-} \tag{16}$$

Finally, the terms that cross-correlate with the rest of the robots in the system are updated as shown in Equation (17):

$$\sigma_{AB}^{+} = \left(\mathbf{I} - \mathbf{K}_{PU}\mathbf{H}_{PU}\right)\sigma_{AB}^{-} \tag{17}$$

where $\sigma_{AB}$ represents the cross-correlation terms that link an individual robot to the rest of the robots in the system.
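Every private update shares the linear-update skeleton of Equations (12)-(17); only the innovation, Jacobian, and noise covariance change. The following sketch, under the same 15-state ordering as above, illustrates this shared structure (the function name and data layout are our own, not the released code):

```python
import numpy as np

def private_update(x_err, P, cross_terms, dz, H, R):
    """Generic private measurement update, Equations (12)-(17).
    dz: measurement innovation; H: measurement Jacobian; R: noise covariance.
    cross_terms maps robot id -> cross-correlation factor with that robot."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # Eq. (12)
    x_err = x_err + K @ (dz - H @ x_err)                  # Eq. (15)
    I_KH = np.eye(P.shape[0]) - K @ H
    P = I_KH @ P                                          # Eq. (16)
    cross_terms = {rid: I_KH @ s
                   for rid, s in cross_terms.items()}     # Eq. (17)
    return x_err, P, cross_terms
```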
A video motion capture system (VICON) positioning update can be used as a private update to generate a reliable localization estimation for indoor settings and as a testing proxy for GNSS inside a laboratory. The VICON update can be used in a manner similar to the GNSS update framework. In our tests, we used the VICON system to obtain ground truth and also to initialize the robot’s position. Assuming the VICON solution frame matches the robot’s navigation frame, we have Equation (18):
$$\delta\mathbf{z}_{vicon} = \mathbf{r}_{vicon} - \hat{\mathbf{r}} \tag{18}$$

where $\mathbf{r}_{vicon}$ represents the position measurement obtained from the VICON system. The Jacobian of the measurement model and the measurement noise covariance matrix for VICON can be given as shown in Equations (19) and (20):

$$\mathbf{H}_{vicon} = \begin{bmatrix} \mathbf{0}_3 & \mathbf{0}_3 & -\mathbf{I}_3 & \mathbf{0}_3 & \mathbf{0}_3 \end{bmatrix} \tag{19}$$

$$\mathbf{R}_{vicon} = \mathrm{diag}\left(\sigma_{r_x}^2,\ \sigma_{r_y}^2,\ \sigma_{r_z}^2\right) \tag{20}$$

where $\sigma_{r_x}^2$, $\sigma_{r_y}^2$, and $\sigma_{r_z}^2$ are the variances of the VICON measurement noise.
3.2.2 Odometry Velocity Update
The odometry velocity update can be utilized to further improve the state estimation of individual robots. In this update, the algorithm uses only the velocity information from the associated sensor. For example, wheeled robots can use wheel encoders, and any external odometry solution can also be used, including velocity information from visual odometry. This update is omitted if no such sensors are available.
After the frame rotation between the velocity sensor and the navigation frame, i.e., with vVU in the navigation frame, the measurement innovation is given as shown in Equation (21):
$$\delta\mathbf{z}_{VU} = \mathbf{v}_{VU} - \hat{\mathbf{v}} \tag{21}$$
where vVU represents the velocity measurement obtained from the odometry source. Updates of the error state, covariance, and cross-correlated terms follow the same structure through Equations (12)–(17). The Jacobian of the measurement model, HVU, can be given as shown in Equation (22):
$$\mathbf{H}_{VU} = \begin{bmatrix} \mathbf{0}_3 & -\mathbf{I}_3 & \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{0}_3 \end{bmatrix} \tag{22}$$
and the measurement noise covariance matrix, RVU, can be constructed by the variances of the velocity measurement noise.
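Under these conventions, the odometry velocity update reduces to building the innovation of Equation (21) and the Jacobian of Equation (22) and reusing the private-update skeleton sketched above; again a sketch with illustrative names:

```python
import numpy as np

def odometry_velocity_update(x_err, P, cross_terms, v_est, v_vu, R_vu):
    """Odometry velocity private update, Equations (21)-(22); v_vu must
    already be rotated into the navigation frame."""
    dz = v_vu - v_est                 # innovation, Eq. (21)
    H = np.zeros((3, 15))
    H[:, 3:6] = -np.eye(3)            # velocity error states, Eq. (22)
    return private_update(x_err, P, cross_terms, dz, H, R_vu)
```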
3.2.3 Zero Velocity Update
In this work, ZU was applied using a combination of linear and angular velocity measurements. To use this update properly, stationary conditions must be identified accurately; otherwise, incorrect updates will corrupt the rover's state and degrade localization performance (Ramanandan et al., 2012). We used two different indicators to identify stationary conditions: the velocity command provided by the autonomous controller that determines the movement of the robot and the wheel encoder measurements. We assume that the robots do not slip under the conditions used to determine stationarity and that they do not perform any turning maneuvers when they stop. The measurement innovation for ZU is given as indicated in Equation (23):
$$\delta\mathbf{z}_{ZU} = \begin{bmatrix} \mathbf{0} - \hat{\mathbf{v}} \\ \mathbf{0} - \boldsymbol{\omega}_{IMU} \end{bmatrix} \tag{23}$$
where ωIMU represents gyro-rate measurements. Similarly, updates to the error state, covariance, and cross-correlated terms follow the same structure through Equations (12)–(17). The Jacobian of the ZU measurement model is described as shown in Equation (24):
$$\mathbf{H}_{ZU} = \begin{bmatrix} \mathbf{0}_3 & -\mathbf{I}_3 & \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{0}_3 \\ \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{0}_3 & -\mathbf{I}_3 \end{bmatrix} \tag{24}$$
The 6-by-6 measurement noise covariance matrix for ZU, $\mathbf{R}_{ZU}$, can be constructed in the same manner as in the other private updates described previously.
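A minimal sketch of this update, again reusing the private-update skeleton and the sign conventions of Equations (23)-(24) (names illustrative), is:

```python
import numpy as np

def zero_velocity_update(x_err, P, cross_terms, v_est, omega_imu, R_zu):
    """Zero velocity private update, Equations (23)-(24): a stationary robot
    'measures' zero linear velocity and zero angular rate, so any residual
    gyro rate is attributed to the gyro bias states."""
    dz = np.concatenate([np.zeros(3) - v_est,        # 0 - estimated velocity
                         np.zeros(3) - omega_imu])   # 0 - measured gyro rate
    H = np.zeros((6, 15))
    H[0:3, 3:6] = -np.eye(3)       # velocity error states
    H[3:6, 12:15] = -np.eye(3)     # gyro bias states
    return private_update(x_err, P, cross_terms, dz, H, R_zu)
```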
3.3 Relative Updates
Relative updates are performed with pairwise ranging and are assumed to occur only when two robots are within a specified distance of one another. This relative update model is based on the work published by Luft et al. (2016). In our work, UWB range measurements were used to trigger the relative updates. Whenever a relative update is performed, one robot detects the other robot to be included in the update and receives its state, covariance, and cross-correlated terms. The decentralized architecture of the error-state EKF allows the robots to decouple their individual states, covariances, and cross-correlated terms and to construct a combined covariance matrix as shown in Equation (25):
$$\mathbf{P}_{global} = \begin{bmatrix} \mathbf{P}_A & \Sigma_{AB} \\ \Sigma_{AB}^{\top} & \mathbf{P}_B \end{bmatrix} \tag{25}$$
where Pglobal represents the combined covariance matrix, PA represents the individual covariance matrix for robot A, PB represents the individual covariance matrix for robot B, and ΣAB represents the combined correlated values from robot A and B.
The Jacobian of the ranging measurement model is described as shown in Equation (26):
$$\mathbf{H}_{RU} = \begin{bmatrix} \mathbf{0}_{1\times 6} & \dfrac{\left(\mathbf{r}_A - \mathbf{r}_B\right)^{\top}}{\lVert \mathbf{r}_A - \mathbf{r}_B \rVert} & \mathbf{0}_{1\times 6} & \mathbf{0}_{1\times 6} & \dfrac{\left(\mathbf{r}_B - \mathbf{r}_A\right)^{\top}}{\lVert \mathbf{r}_A - \mathbf{r}_B \rVert} & \mathbf{0}_{1\times 6} \end{bmatrix} \tag{26}$$
where rA and rB represent the 3D position values for robot A and B, respectively, such that rA =[xA, yA, zA] and rB =[xB, yB, zB]. The non-linear measurement model is described as shown in Equation (27):
$$h\left(\mathbf{r}_A, \mathbf{r}_B\right) = \sqrt{\left(x_A - x_B\right)^2 + \left(y_A - y_B\right)^2 + \left(z_A - z_B\right)^2} \tag{27}$$
The Kalman gain for the relative update is calculated as indicated in Equation (28):
$$\mathbf{K}_{RU} = \mathbf{P}_{global}\mathbf{H}_{RU}^{\top}\left(\mathbf{H}_{RU}\mathbf{P}_{global}\mathbf{H}_{RU}^{\top} + \mathbf{R}_{UWB}\right)^{-1} \tag{28}$$
Using the calculated Kalman gain, the error state is updated as shown in Equation (29):
$$\begin{bmatrix} \mathbf{x}_A^{err,+} \\ \mathbf{x}_B^{err,+} \end{bmatrix} = \begin{bmatrix} \mathbf{x}_A^{err,-} \\ \mathbf{x}_B^{err,-} \end{bmatrix} + \mathbf{K}_{RU}\left(z_{UWB} - h\left(\hat{\mathbf{r}}_A, \hat{\mathbf{r}}_B\right)\right) \tag{29}$$
where $\mathbf{x}_A^{err}$ and $\mathbf{x}_B^{err}$ represent the error states of the robots performing the relative update and $z_{UWB}$ represents the ranging measurement. Also, the covariance is updated as indicated in Equation (30):
$$\mathbf{P}_{global}^{+} = \left(\mathbf{I} - \mathbf{K}_{RU}\mathbf{H}_{RU}\right)\mathbf{P}_{global}^{-} \tag{30}$$
Once the relative update has been completed, the updated error state and covariance estimates of the detected robot are returned to it. Following the same decomposition described by Luft et al. (2016), the robot that detected its counterpart and performed the update keeps the updated correlated term, σAB ← ΣAB, whereas the correlated term of the robot that was detected is reset to the identity matrix, σBA ← I.
Finally, the cross-correlated terms of the robots performing the update with the robots that were not present are updated with Equation (31).
$$\sigma_{AC}^{+} = \left(\mathbf{I} - \mathbf{K}_{RU}\mathbf{H}_{RU}\right)_{AA}\,\sigma_{AC}^{-} \tag{31}$$
where σAC represents the terms that were cross-correlated with the robots that were not included in the update and (I − KRU HRU)AA denotes the diagonal block of the joint update matrix associated with the states of the robot performing the update.
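The following sketch ties Equations (25)-(31) together for one pairwise UWB update. The robot objects, with attributes P (covariance), sigma (dict of cross-term factors), pos(), and apply(), are illustrative containers of our own design, and the sign conventions are assumed consistent with the private updates; this is a sketch of the update flow, not the released implementation.

```python
import numpy as np

def relative_update(robot_A, robot_B, z_uwb, r_uwb_var):
    """Pairwise UWB range update, Equations (25)-(31), run on robot A after
    it detects robot B and receives B's state, covariance, and cross term."""
    # Joint covariance, Eq. (25); Sigma_AB factored per Luft et al. (2016)
    Sigma_AB = robot_A.sigma[robot_B.id] @ robot_B.sigma[robot_A.id].T
    P = np.block([[robot_A.P, Sigma_AB],
                  [Sigma_AB.T, robot_B.P]])
    # Range prediction and Jacobian, Eqs. (26)-(27)
    d = robot_A.pos() - robot_B.pos()
    rng = np.linalg.norm(d)
    H = np.zeros((1, 30))
    H[0, 6:9] = d / rng          # position error states of robot A
    H[0, 21:24] = -d / rng       # position error states of robot B
    # Kalman gain and joint update, Eqs. (28)-(30)
    K = P @ H.T / (H @ P @ H.T + r_uwb_var)
    dx = (K * (z_uwb - rng)).ravel()
    I_KH = np.eye(30) - K @ H
    P_new = I_KH @ P
    robot_A.apply(dx[:15], P_new[:15, :15])
    robot_B.apply(dx[15:], P_new[15:, 15:])
    # Decomposition: performer keeps the cross term, detected robot gets I
    robot_A.sigma[robot_B.id] = P_new[:15, 15:]
    robot_B.sigma[robot_A.id] = np.eye(15)
    # Cross terms with robots C absent from the update, Eq. (31);
    # robot B applies the analogous update with its own block on its side
    for rid in robot_A.sigma:
        if rid != robot_B.id:
            robot_A.sigma[rid] = I_KH[:15, :15] @ robot_A.sigma[rid]
```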
4 EXPERIMENTS
Two different experiments were performed to evaluate the performance of the algorithm: one set in a simulation environment and another set in a real-world environment using wheeled robots. The simulation environment was built in Gazebo/ROS, and the real-world experiments were performed in a video motion capture facility with instrumented iRobot Create platforms. In both experiments, our goal was to determine the effectiveness of using ZU to reinstate the localization of lost robots. In the following subsections, the experimental settings, robots, and sensors used are explained in detail.
4.1 Simulation Experiment Setup
Our intention in the simulation tests was to reinstate the localization of lost robots in a subterranean environment. Two of the robots are assumed to have unreliable localization estimates and therefore stop moving; another robot is then deployed to restore their localization estimates. The scenario begins as Robot 2 enters a cave just after receiving a set of reliable GNSS signals. The other robots (Robot 0 and Robot 1) cannot use visual perception or GNSS for localization purposes because of the poor lighting conditions in the cave and the blockage of GNSS signals. Robot 2 traverses a straight line along the x-axis, using its IMU sensor and wheel encoders to estimate its position while leveraging ZU under stationary conditions. The robots can perform relative updates in a pairwise fashion whenever they are within a predetermined proximity threshold of one another. This case is illustrated in Figure 2, where the gray area in the highlighted view (sub-figure on the right) represents the GNSS-denied subterranean environment.
The tests in the simulation environment include three simulated TurtleBot3 robots (Amsters & Slaets, 2020) with sensor models for wheel encoders, IMU, GNSS, and UWB. White Gaussian noise is added to the default outputs of the sensors to simulate the effects of random processes. The rates and noise characteristics of the sensors are shown in Table 1.
In the simulation tests, communication and ranging measurements were limited to distances shorter than 2.5 m to simulate an obstructed line of sight. All robots move with a velocity of 0.2 m/s. Although the simulation environment was assumed to be a flat region, the algorithm provides three-dimensional position estimation. Robot 2 was able to stop autonomously and perform ZU whenever any of the diagonal elements of its position error covariance reached a predetermined threshold of 5 m². Given the sensor parameters and scenario settings, this threshold was set based on engineering judgment with respect to the reliability of the localization estimate and the traversal rate. If the robot was not able to reduce the position error covariance below the defined threshold, it began to perform periodic ZU to keep the localization performance as reliable as possible. In this respect, the selection of the ZU scheme depends on several factors, as elaborated in our previous work (Gutierrez et al., 2022). An autonomous stopping heuristic presents an advantage over periodic stopping in that it increases the traversal rate by invoking ZU only when the estimated position error covariance exceeds the specified threshold. This is particularly useful for mitigating the effects of IMU drift while minimizing interruptions to the robot's mission. However, the robot's ability to estimate its position error covariance reliably may diminish because of prolonged external aiding outages, sensor errors, and/or environmental conditions. In such cases, performing ZU at fixed time intervals, regardless of the estimated error, provides consistent and regular correction of the IMU drift at the cost of a reduced traversal rate. For these reasons, we opted to use the autonomous stopping criterion first and then switched to the periodic stopping criterion. This permitted us to take advantage of the benefits of both schemes while minimizing their drawbacks.
The stopping condition was verified through the robot's wheel encoders and velocity command output after a stop command was received. After verifying that it has stopped completely, the robot waits 0.5 s before applying the zero velocity update. This stopping duration was conservatively selected based on engineering judgment and considerations of reliable ROS message delivery during the stopping phase. After a defined waiting time, the robot resumes moving. The actions the robot takes, from verifying its stopping condition to resuming movement, are dictated by a Boolean-based state machine framework, which models the behavior of the system using a finite number of states, each associated with a set of actions; the system transitions between these states based on certain conditions.
In the context of the rover-stopping mechanism, it is important to mention potential challenges, including false positives and false negatives. If a robot mistakenly assumes that it has stopped (false positive), errors will be introduced into the filter. By contrast, if the robot fails to recognize a stop (false negative), drift-correction opportunities are missed, which also leads to less accurate localization. To handle these challenges, the state machine framework uses Boolean indicator flags in ROS to determine when the robot should stop and when it should start moving again, based on feedback from the wheel encoders and velocity command output. The threshold parameters for autonomous ZU, the duration of movement during periodic ZU, and the stopping duration can be easily customized in the code provided and adjusted to accommodate the mission scenario and robot constraints. A minimal sketch of this logic appears below.
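The sketch below illustrates the stop-verification logic described above: a covariance-threshold trigger, confirmation from both stationarity indicators, and a settling wait before ZU is applied. The state and flag names are illustrative; the 5-m² threshold and 0.5-s settling time are the simulation values reported in this section.

```python
from enum import Enum, auto

class StopState(Enum):
    MOVING = auto()
    STOPPING = auto()      # stop commanded, awaiting confirmation
    STATIONARY = auto()    # confirmed stopped; ZU may be applied

class ZuStateMachine:
    """Boolean-flag state machine for the stopping heuristic (sketch)."""

    def __init__(self, cov_threshold=5.0, settle_time=0.5):
        self.state = StopState.MOVING
        self.cov_threshold = cov_threshold   # m^2, autonomous-ZU trigger
        self.settle_time = settle_time       # s, wait before applying ZU
        self.stop_stamp = None

    def step(self, t, pos_cov_diag, cmd_vel_zero, encoders_zero):
        """Returns True on cycles in which it is safe to apply ZU."""
        if self.state == StopState.MOVING:
            if max(pos_cov_diag) > self.cov_threshold:
                self.state = StopState.STOPPING   # command an autonomous stop
        elif self.state == StopState.STOPPING:
            # require BOTH indicators to agree, reducing false positives
            if cmd_vel_zero and encoders_zero:
                self.state = StopState.STATIONARY
                self.stop_stamp = t
        elif self.state == StopState.STATIONARY:
            if t - self.stop_stamp >= self.settle_time:
                return True                       # apply ZU this cycle
        return False

    def resume(self):
        """Called after the defined waiting time to resume motion."""
        self.state = StopState.MOVING
        self.stop_stamp = None
```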
4.2 Real-World Experiment Setup
The real-world experiments also aim to restore the localization of lost robots. However, in contrast to the simulation setup, the lost robots do not remain stationary before their first relative update. The real-world experiments were performed in a 3 × 3-m room with a video motion capture system (VICON), which was used to simulate a GNSS update in the indoor test setting, to initialize the Robot 2 pose, and to generate the ground-truth solution for all robots. The robots are assumed to be unable to use visual odometry because of poor lighting conditions and an overall lack of sufficient features in the environment; however, they can use their IMUs, wheel encoders, and UWB sensors.
In these experiments, which are similar to the cave simulation experiments, it is assumed that Robot 2 begins to move with a known and accurate localization, whereas Robot 0 and Robot 1 become lost while on patrol in this area. Robot 2 patrols the x-axis, Robot 0 moves diagonally across the room, and Robot 1 patrols the x-axis with an offset in the y-axis to ensure that Robot 2 and Robot 1 can only perform relative updates with Robot 0 (i.e., Robot 2 and Robot 1 cannot detect or communicate with each other). With this setup, the localization performance of Robot 1 depends on that of Robot 0, and whether or not Robot 2 uses ZU affects the entire system. The robots travel with a velocity of 0.2 m/s, with the detection limit set to 1 m because of the dimension restrictions of the room. The test environment is shown in Figure 3.
Instrumented iRobot Create platforms were used in the real-world experiments. To share information between robots and record the VICON solution data, the robots and the VICON system were connected to the same network via Wi-Fi. The specifications and noise characteristics of the IMU (ADIS16405) are shown in Table 2. The ranging accuracy of the UWB sensor (DWM1001-DEV) is specified as ±15 cm in 2D and ±30 cm in 3D. In these experiments, we followed the same autonomous stopping heuristic previously described for Robot 2 in the simulation experiments. In this case, the stopping threshold was predetermined at 2 m², which permitted us to observe the effectiveness of ZU before the first relative update between robots despite the dimension restrictions. Note that, while the environment used in our experiments was rigid, flat, and benign, with well-behaved dynamics on the iRobot Create and slow velocity inputs, we acknowledge that in harsher environments or with excessive slippage, additional indicators may be needed to ensure that the robot has stopped completely.
5 EVALUATION
5.1 Simulation Results
For the simulation experiments, we first analyze the localization performance of Robot 2 with and without ZU leveraged during its traversal. The position errors for both cases and the average improvement for each metric are shown in Table 3. Applying ZU can bound the velocity error, calibrate the IMU sensor biases (see Figure 4), and limit the rate of INS localization error growth (Groves, 2013).
In environments in which robots cannot use their visual sensors or rely on GNSS signals, ZU provides a significant improvement and maintains a reliable localization estimate in all axes as shown in Table 3.
Leveraging ZU in a single robot can benefit the overall performance of the multi-robot system in a cooperative localization framework, as shown in Table 4 and Figure 5. Table 4 shows the total horizontal error correction of the lost robots and the percentage improvement for the cases in which Robot 2 does or does not utilize ZU. The correction is calculated from the initial and end-of-mission horizontal errors for each robot. The approximate time of each pairwise relative update between robots and the horizontal errors are shown in Figure 5. The red dots represent the horizontal errors of each robot when Robot 2 does not use ZU, and the blue dots represent the horizontal errors when Robot 2 does use ZU. Green and orange rectangles indicate the Robot 2-to-Robot 0 and Robot 0-to-Robot 1 relative updates, respectively. At the beginning of the test, Robot 2 is assumed to start moving with an accurate position estimate. Robot 0 remains stationary with a high localization estimation error until it communicates with Robot 2; similarly, Robot 1 has a high localization error and remains stationary. During its traversal, Robot 2 performs autonomous ZU to maintain a reliable estimate. As seen in Figure 5, the use of ZU by Robot 2 facilitates additional positioning correction for Robot 0. Once the relative update between Robot 2 and Robot 0 has been completed, Robot 0 traverses along the y-axis to Robot 1. Once Robot 1 has completed a relative update via interactions with Robot 0, it gains a better localization estimate and begins to move along the x-axis. Note that only Robot 2 performs ZU; Robot 1 and Robot 0 do not leverage ZU. However, Robot 0 corrects more error when Robot 2 uses ZU, which also benefits Robot 1 when it performs a relative update with Robot 0. In other words, the benefit of utilizing ZU in Robot 2 is carried through Robot 0 to Robot 1, even though Robot 2 and Robot 1 never share information directly.
To visualize the impact of ZU more effectively, the position estimates of the robots in the ENU frame, together with the 3σ error covariance values and the ground truth, are shown in Figure 6. As shown, the error for Robot 2 does not change significantly even when it performs relative updates with lost robots, because the filter properly weighs which robot's localization is more likely to be accurate based on the scale of the covariance. Leveraging ZU increases the effectiveness of relative updates between robots. For example, because Robot 0 starts the test with an incorrect estimate and high covariance, its position error and covariance can only be reduced when it performs a relative update with Robot 2; this interaction also affects Robot 1's localization performance. The dominant errors in the vertical direction can be attributed primarily to the fact that the robots estimate their positions from IMU and wheel odometry (WO) measurements. While WO measurements provide information about the robot's translational motion on the x and y axes, they do not directly account for vertical motion (i.e., on the z-axis); the robots rely on IMU measurements to estimate their position in the vertical direction. As a result, the localization system is inherently more dependent on IMU measurements for the estimation of the vertical position, which makes it more susceptible to errors from sensor noise, biases, and inaccuracies in the gravity model. Another contributing factor is the alignment of the gravity vector with the vertical axis, which can lead to increased drift in the vertical estimates; this drift is more pronounced in the vertical than in the horizontal direction because of the inherent sensitivity of vertical motion to inaccuracies in the gravity model and sensor biases. The use of a more accurate gravity model and IMU sensor model could help to reduce the vertical estimation error.
5.2 Real-World Experimental Results
To verify and evaluate the algorithm further, we analyzed the localization performance of Robot 2 in an indoor environment with real robots and real sensors for the cases in which ZU is and is not leveraged throughout its traversal. Table 5 shows the 3D position error for both cases, along with the average improvement for each metric. Applying ZU can limit the velocity error, calibrate the IMU sensor biases, and limit the rate of INS localization error growth even in a leveled, low-slip environment and over short distances. The improvement observed in Table 5 is a clear indication of the effectiveness of using ZU in a single robot to maintain a reliable localization estimate.
Analogous to the simulation results, the real-world experiments show a similar trend with regard to error correction and confirm that the use of ZU by one robot improves the overall localization performance of the entire multi-robot system. Figure 7 uses the same color coding as the simulation experiments (i.e., red dots represent the estimation error when Robot 2 does not use ZU, and blue dots represent the estimation error when Robot 2 does use ZU). The experiment begins with a good initial estimate and a small covariance error. As shown in Figure 7, the horizontal error for Robot 2 remains stable even when it performs relative updates with lost robots, because the covariance of the lost robots is much higher than that of Robot 2. The benefit of ZU is clearly observed in this case, because the robots exchange information more effectively when one of them is able to perform ZU.
The position estimates of the robots in the ENU frame, together with the 3σ error covariance values and ground truth, are presented in Figure 8 and facilitate visualization of the covariance changes during relative updates. For example, because Robot 0 begins the test with an incorrect estimate and high covariance, its position error and covariance can only be reduced when it performs a relative update with Robot 2; this also affects Robot 1's localization performance. Similarly, the position error in the U-axis is substantially reduced when Robot 2 uses ZU, which leads to a significant amount of correction along the same axis after the relative update with Robot 0. Because the robots used in this study can only limit the velocity error in the E and N axes by using wheel encoder measurements, the position error in the U-axis can be reduced only by ZU or a relative update.
6 CONCLUSION
In this paper, we have proposed an error-state DEKF algorithm for the cooperative localization of mobile robots in GNSS-denied/degraded environments using ZU, IMU, UWB, and odometry measurements. This work significantly expands on our previous work (Gutierrez et al., 2022), as it generalizes the utilization of ZU to improve the localization performance in a DEKF-based cooperative localization system. The proposed algorithm was implemented and tested with real hardware in a video motion capture facility and an ROS-based simulation environment for unmanned ground vehicles (UGVs) in which our goal was to re-localize robots that had become lost in the system.
The main contributions of this work include (1) a novel method to leverage ZU in a decentralized cooperative localization framework, (2) integration of odometry velocity measurements into the DEKF algorithm, (3) the use of ZU for reinstating lost robots in a multi-robot system, and (4) real-world validation of the algorithm with multiple robots. Our analyses and results demonstrate that using ZU in a cooperative DEKF algorithm greatly benefits the performance of localization estimations. Thus, this may be a potential fail-safe mechanism in case other methods fail or become unreliable in settings that might include warehouse stocking, factory automation, and retail spaces.
While ZU provides significant benefits with respect to localization, it is worth noting several potential misuse scenarios. For instance, misuse could occur through over-reliance on ZU in environments in which it is challenging to determine stationary conditions. Overuse of ZU could also reduce the robot's traversal rate. Additionally, the necessity and impact of ZU depend on the quality and availability of external aids. In high-quality systems that provide uninterrupted positioning data (e.g., high-end GNSS, lidar SLAM), there may be minimal need to rely on ZU. However, in environments with unreliable or absent external aids, the use of ZU becomes more critical. This point was highlighted in this study, which featured a system equipped with only an IMU, wheel encoders, and UWB.
In our future work, we plan to apply different constraints (e.g., non-holonomicity, hovering, and landing) for other types of locomotion. This will allow us to observe the performance of the algorithm in different situations. Additionally, we plan to incorporate obstacle avoidance strategies to ensure the safety of the robots and prevent any potential collisions, especially in cases where the position uncertainty is larger than the UWB detection range. Moreover, we plan to explore the use of other types of sensors, including cameras or lidars, as a means to enhance localization performance. We also plan to apply the proposed method to different types of robots, for example, aerial or underwater vehicles. Finally, we plan to incorporate adaptive stopping strategies (e.g., determining optimal frequency and duration of stopping) using machine learning techniques to optimize the cooperative localization performance.
HOW TO CITE THIS ARTICLE
Kilic, C., Gutierrez, E., & Gross, J. N. (2023). Evaluation of the benefits of zero velocity update in decentralized extended Kalman filter-based cooperative localization algorithms for GNSS-denied multi-robot systems. NAVIGATION, 70(4). https://doi.org/10.33012/navi.608
CONFLICT OF INTEREST
The authors declare no potential conflict of interest.
ACKNOWLEDGMENTS
This work was supported in part through a subcontract with Kinnami Software Corporation under the STTR project FA864921P1634. The authors thank Dr. Yu Gu for allowing us to use the instrumented iRobot Create platforms and Jonas Bredu and Shounak Das for assisting with the tests.
Footnotes
2. Since the base DEKF algorithm is implemented from the work described by Luft et al. (2016), we opted to use the same naming conventions.
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.