Navigating Together: The CoNaV Testbed and Framework for Benchmarking Cooperative Localization

  • NAVIGATION: Journal of the Institute of Navigation
  • December 2025, 72(4), navi.722
  • DOI: https://doi.org/10.33012/navi.722

Abstract

This paper presents CoNaV, a comprehensive framework for creating a multi-vehicle cooperative localization (CL) testbed designed to support the benchmarking, development, and deployment of cooperative navigation algorithms. Given the essential role of CL in improving localization accuracy for both defense and civilian applications, CoNaV provides a robust environment for rigorously validating algorithms under real-world conditions. By establishing a benchmark for CL algorithms, CoNaV lays a foundation for advancing research into more sophisticated and distributed CL solutions. This framework highlights the potential of cooperative navigation to enhance multi-vehicle operations and offers a scalable, practical approach for future developments in CL technology.

1 INTRODUCTION

Precise localization is fundamental to autonomous navigation systems, with global navigation satellite systems (GNSSs) as the most widely used solution. However, GNSS-based localization faces significant vulnerabilities, including susceptibility to adversarial threats such as jamming and spoofing (Iyidir & Ozkazanc, 2004; Warner & Johnston, 2003), as well as challenges in urban or obstructed environments where occlusions and multipath effects are prevalent (Osechas et al., 2015). To overcome these limitations, alternative methods such as vision-aided navigation (Arafat et al., 2023; Lu et al., 2018), magnetic anomaly-based navigation (MagNav) (Canciani & Raquet, 2017), and signal-of-opportunity (SoOP) navigation (Kassas et al., 2020) have been developed.

Solutions for vision-aided navigation problems can generally be categorized as either map-based or map-independent, depending on how features are detected and used in the localization process. In environments with limited distinguishable features, additional computational resources are required to process dense information across all images. Additionally, vision-based navigation systems may experience performance degradation during rapid motion or in low-light conditions, reducing their effectiveness in operationally challenging environments. Addressing these challenges while performing a task is particularly difficult for a single vehicle, owing to constraints on power and computational resources.

MagNav relies on significant magnetic anomaly gradients for accurate localization; however, its precision decreases in regions with weak or shallow gradients. Furthermore, existing magnetic maps contain spatially correlated errors that are not well characterized (Hiatt & Taylor, 2022), leading to increased navigation inaccuracies. Despite these limitations, MagNav can still be suitable for applications involving high-speed navigation over large areas (McNeil, 2022). SoOPs can be utilized through techniques such as those based on the angle of arrival (AOA), time of arrival (TOA), received signal strength (RSS), and time difference of arrival (TDOA). Each method presents its own challenges: the AOA is susceptible to directional errors, the TOA requires precise clock synchronization between transmitters and receivers, the RSS technique is affected by interference and multipath propagation, and the TDOA depends on data sharing between multiple receivers. Moreover, SoOP-based navigation requires a sufficient number of detectable signals or transmitters for accurate triangulation, which is particularly difficult to achieve in environments with sparse signal availability or a limited sensing range (Kapoor et al., 2017).

Cooperative localization (CL) leverages the distributed sensing and computing capabilities of multiple vehicles to enhance localization accuracy and system resilience. The benefits of CL are most evident in feature-sparse or contested environments where reliance on MagNav or visual-inertial odometry alone is insufficient (Yang et al., 2018). By sharing sensor data and state estimates, CL improves situational awareness and robustness, making it especially valuable in GNSS-denied settings. Applications span defense and civilian sectors, ranging from autonomous munitions (Frelinger et al., 1998) and surveillance (Matlock et al., 2009; Saptharishi et al., 2002) to package delivery (Betti Sorbelli et al., 2022) and urban air mobility (Rajendran & Srinivas, 2020). Aggregating data from multiple agents allows CL systems to sustain high localization performance, even when individual sensors are degraded, a feature that is critical for reliable navigation in adversarial environments.

Numerous CL algorithms have been developed using sequential filters such as the extended Kalman filter (EKF) (Sharma & Taylor, 2008) and particle filter (Fox et al., 2000) and batch methods such as maximum likelihood estimation (Howard et al., 2002) and maximum a posteriori (MAP) methods (Nerurkar et al., 2009). Centralized CL serves as a benchmark for distributed approaches, which aim to reduce bandwidth demands and avoid scalability and single-point failure issues (Chakraborty et al., 2019). Distributed CL often relies on synchronous communication, which is impractical in many real-world settings. To address this, approximate fusion methods such as covariance intersection have been proposed, although they often yield conservative estimates (Carrillo-Arce et al., 2013; Zhu & Kia, 2019). Batch processing methods offer more accurate estimates despite communication delays (Nerurkar et al., 2009; Sahawneh & Brink, 2017), but have primarily been tested in simulations or ground robot settings. Extending their evaluation to aerial platforms is critical for assessing their effectiveness in dynamic, large-scale environments.

Several researchers have contributed open-source, multi-modal data sets with diverse sensor suites to support experimental validation in multi-vehicle navigation (Liu et al., 2025; Rizzoli et al., 2023; Yuan et al., 2024). These data sets, designed for tasks such as simultaneous localization and mapping, object detection, and tracking, often include heterogeneous sensors. Ultra-wideband (UWB) sensors, particularly those using TDOA measurements, are well suited for CL because of their low computational cost and minimal data association requirements, leading to the release of UWB-based data sets, primarily from indoor environments using Decawave modules (Queralta et al., 2020). Additional data sets include those based on single unmanned aerial system (UAS) operations in diverse settings (Arjmandi et al., 2020) and multi-UAS formation flying (Zhang et al., 2025). However, there remains a need for outdoor multi-UAS data sets tailored to validate CL algorithms under generic, unconstrained flight trajectories.

Testbeds are crucial for validating CL algorithms and establishing performance benchmarks under real-world conditions. Several indoor testbeds have been employed to validate CL with ground robots. For instance, omnidirectional cameras have been utilized for bearing measurements in cooperative setups (Sharma et al., 2013), and UWB-based Decawave sensors have been tested on ground robots (Lv et al., 2017). Outdoor testbeds, however, introduce additional complexities related to long-range sensing and communication. UWB-based software-defined radios (TDSR), for example, have been implemented to facilitate inter-vehicle and vehicle-to-landmark range measurements in quadrotor systems (Guo et al., 2016).

This work introduces a new outdoor testbed for validating CL in real-world UAS applications. Built with commercially available components, such as P440 TDSR radios, the system supports both inter-vehicle and absolute landmark ranging to form a relative position measurement graph (RPMG), where UASs are nodes and range measurements define the edges. To address sensor noise and unknown biases, particularly in accelerometer and range measurements, bias correction methods are integrated to improve estimation accuracy. The testbed enables rigorous benchmarking of centralized and distributed CL algorithms using both sequential (EKF) and batch (factor graph [FG]-based MAP) estimation techniques. A central challenge in CL is ensuring system observability, as unobservable systems produce unreliable state estimates. While theoretical results suggest that at least two landmarks are sufficient for observability with range-only or range-and-bearing measurements (Sharma et al., 2013; Shi et al., 2019; Sun et al., 2018), real-world conditions often demand more landmarks owing to sensor and environmental limitations. This work investigates how localization performance relates to observability through variations in landmark count and placement, UAS trajectories, and network topology.

The key contributions of this work include the following: (1) The development of an open-source, multi-agent UAS testbed for CL, supporting diverse platforms and sensors. All setup instructions and data sets have been publicly released for reproducibility and are available in the CoNaV data set (Boyinine et al., 2025). (2) Implementation and benchmarking of a centralized CL algorithm using both EKF and MAP approaches, with compensation for sensor biases to ensure robust performance across varying conditions. (3) A comprehensive observability analysis using both theoretical and experimental methods, examining the effects of landmark geometry, UAS motion, and network structure on localization accuracy.

The remainder of this paper is organized as follows: Section 2 presents the centralized cooperative pose estimation algorithm, along with the motion and sensor models used for validation. Section 3 discusses the concept of observability and the unobservability index in multi-UAS systems. Section 4 describes the design of the multi-UAS testbed. Section 5 compares localization performance and system observability across different RPMG topologies. Finally, Section 6 concludes the paper and outlines directions for future research.

2 COOPERATIVE LOCALIZATION

In CL, vehicles share sensor data and jointly estimate their states, enhancing sensing coverage and enabling cross-correlations across all vehicle states. This approach improves overall system efficiency and robustness. While decentralized and distributed CL methods exist, centralized CL remains optimal for accurately maintaining these cross-correlations. This section presents a centralized cooperative pose estimation framework for multi-rotor UASs, estimating both position and heading using sequential (EKF) and batch (MAP) filtering techniques. The approach is adaptable to other vehicle types with suitable motion models. Each UAS is equipped with an inertial measurement unit (IMU), GNSS receiver, and UWB TDSR radio, with inter-vehicle communication via Wi-Fi. Challenges such as latency, packet loss, and synchronization in distributed sensing are addressed through a loosely coupled estimation framework. Pose estimates are validated against GNSS data to assess the performance of the UWB-based CL system. Building on prior work (Ammapalli et al., 2024), this study incorporates both a four-state model for scenarios with reliable odometry and an extended seven-state model for cases lacking odometry, improving localization robustness.

Let us consider $N$ multi-rotors flying simultaneously. $\mathbf{p}^i = [p_n^i, p_e^i, p_d^i]^\top$ is the position vector of the $i$-th multi-rotor, with $i \in \mathcal{M} = \{1, 2, \ldots, N\}$, where $p_n$, $p_e$, and $p_d$ are the north, east, and down positions. $\mathbf{v}^i = [u^i, v^i, w^i]^\top$ is the velocity vector, where $u$, $v$, and $w$ are the body-frame velocities. $\Phi^i = [\phi^i, \theta^i, \psi^i]^\top$ represents the attitude, where $\phi$, $\theta$, and $\psi$ represent the roll, pitch, and yaw of the multi-rotor, and $\Omega^i = [p^i, q^i, r^i]^\top$ represents the angular velocity vector. Multi-rotor motion can be modeled by using the following equations:

$$\begin{aligned}
\dot{\mathbf{p}}^i &= R_{b_i}^{v}\,\mathbf{v}^i \\
\dot{\mathbf{v}}^i &= -(\Omega^i \times \mathbf{v}^i) + g\begin{bmatrix} -\sin\theta^i \\ \cos\theta^i \sin\phi^i \\ \cos\theta^i \cos\phi^i \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ -T^i/m^i \end{bmatrix} \\
\begin{bmatrix} \dot{\phi}^i \\ \dot{\theta}^i \\ \dot{\psi}^i \end{bmatrix} &= \begin{bmatrix} 1 & \sin\phi^i \tan\theta^i & \cos\phi^i \tan\theta^i \\ 0 & \cos\phi^i & -\sin\phi^i \\ 0 & \sin\phi^i/\cos\theta^i & \cos\phi^i/\cos\theta^i \end{bmatrix} \Omega^i \\
\dot{\Omega}^i &= (J^i)^{-1}\left(-\Omega^i \times J^i \Omega^i + [\tau_\phi^i,\ \tau_\theta^i,\ \tau_\psi^i]^\top\right)
\end{aligned} \tag{1}$$

where $m^i$ and $J^i$ are the mass and moment-of-inertia matrix of the multi-rotor, and $T^i$, $\tau_\phi^i$, $\tau_\theta^i$, and $\tau_\psi^i$ are the thrust and torques generated by the propellers. $R_{b_i}^{v}$ is the rotation matrix from the body frame to the inertial frame, given by the following:

$$R_{b_i}^{v} = R(\psi^i)R(\theta^i)R(\phi^i), \quad R(\psi^i) = \begin{bmatrix} \cos\psi^i & -\sin\psi^i & 0 \\ \sin\psi^i & \cos\psi^i & 0 \\ 0 & 0 & 1 \end{bmatrix}, \ R(\theta^i) = \begin{bmatrix} \cos\theta^i & 0 & \sin\theta^i \\ 0 & 1 & 0 \\ -\sin\theta^i & 0 & \cos\theta^i \end{bmatrix}, \ R(\phi^i) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi^i & -\sin\phi^i \\ 0 & \sin\phi^i & \cos\phi^i \end{bmatrix} \tag{2}$$

When reliable odometry measurements are available, the following model can be used for estimating the pose of each multi-rotor:

$$\begin{bmatrix} \dot{\mathbf{p}}^i \\ \dot{\psi}^i \end{bmatrix} = \begin{bmatrix} R_{b_i}^{v}\,\mathbf{v}^i \\ q^i\sin\phi^i\sec\theta^i + r^i\cos\phi^i\sec\theta^i \end{bmatrix} \tag{3}$$

where the velocity vector $\mathbf{v}^i$, roll $\phi^i$, pitch $\theta^i$, and angular velocities $p^i$, $q^i$, and $r^i$ are available from the odometry topic and are treated as the input $\mathbf{u}^i$. It should be noted that although the heading $\psi^i$ is also available from the magnetometer in the Pixhawk, it is not always reliable owing to interference from onboard electronics; hence, the heading is estimated.

When odometry measurements are not available, velocity must also be estimated from the body-frame acceleration vector $\mathbf{a}^i = [a_1^i, a_2^i, a_3^i]^\top$ available from the IMU topic. In most cases, the raw measurements from an IMU have an unknown bias owing to several factors. Estimating these biases and removing them from the sensor data are crucial steps for accurate estimation. A constant unknown bias $\mathbf{b}_a^i = [b_{a_1}^i, b_{a_2}^i, b_{a_3}^i]^\top$ is assumed to be present in the acceleration data; hence, this bias is also estimated along with the pose and velocity. The following model is used for estimating pose and velocity, along with the accelerometer biases:

$$\begin{bmatrix} \dot{\mathbf{p}}^i \\ \dot{\psi}^i \\ \dot{\mathbf{v}}^i \\ \dot{\mathbf{b}}_a^i \end{bmatrix} = \begin{bmatrix} R_{b_i}^{v}\,\mathbf{v}^i \\ q^i\sin\phi^i\sec\theta^i + r^i\cos\phi^i\sec\theta^i \\ -(\Omega^i\times\mathbf{v}^i) + (R_{b_i}^{v})^\top\mathbf{g} + \mathbf{a}^i - \mathbf{b}_a^i \\ 0_{3\times1} \end{bmatrix} \tag{4}$$

where $\mathbf{g} = [0, 0, g]^\top$ is the acceleration due to gravity. For convenience, the models described in Equation (3) and Equation (4) are referred to as the four-state model and seven-state model, respectively, throughout the paper.
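To make the kinematics concrete, the seven-state model of Equation (4) can be sketched in Python as follows. The function and variable names are illustrative (not from the testbed code), and the rotation matrix follows Equation (2):

```python
import numpy as np

def R_bv(phi, theta, psi):
    """Body-to-vehicle rotation matrix, Equation (2): R(psi) R(theta) R(phi)."""
    cps, sps = np.cos(psi), np.sin(psi)
    cth, sth = np.cos(theta), np.sin(theta)
    cph, sph = np.cos(phi), np.sin(phi)
    Rpsi = np.array([[cps, -sps, 0.0], [sps, cps, 0.0], [0.0, 0.0, 1.0]])
    Rth = np.array([[cth, 0.0, sth], [0.0, 1.0, 0.0], [-sth, 0.0, cth]])
    Rph = np.array([[1.0, 0.0, 0.0], [0.0, cph, -sph], [0.0, sph, cph]])
    return Rpsi @ Rth @ Rph

def seven_state_dot(p, psi, v, b_a, a_meas, phi, theta, Omega, g=9.81):
    """Time derivative of the seven-state model, Equation (4)."""
    R = R_bv(phi, theta, psi)
    p_dot = R @ v
    q, r = Omega[1], Omega[2]
    psi_dot = q * np.sin(phi) / np.cos(theta) + r * np.cos(phi) / np.cos(theta)
    g_vec = np.array([0.0, 0.0, g])
    # Body-frame velocity dynamics: Coriolis term, gravity rotated into the
    # body frame, and bias-corrected accelerometer measurement
    v_dot = -np.cross(Omega, v) + R.T @ g_vec + (a_meas - b_a)
    b_dot = np.zeros(3)  # constant-bias model
    return p_dot, psi_dot, v_dot, b_dot
```

In steady hover with a bias-free accelerometer reading of $[0, 0, -g]^\top$, all derivatives vanish, as expected from the model.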

Each multi-rotor is equipped with a TDSR radio. Some radios are placed in the environment at known locations, serving as known features, $\mathcal{F} = \{1, 2, \ldots, L\}$, which are useful for localization. The TDSR radios on the multi-rotors can also measure the range to each other. The range measurements for UAS $i \in \mathcal{M}$ from landmark $j \in \mathcal{F}$ and from another UAS $k \in \mathcal{M}\setminus\{i\}$ are modeled as follows:

$$\rho_{lm_j}^i = \sqrt{(p_n^i - lm_{n_j})^2 + (p_e^i - lm_{e_j})^2 + (p_d^i - lm_{d_j})^2} \tag{5a}$$

$$\rho_k^i = \sqrt{(p_n^i - p_n^k)^2 + (p_e^i - p_e^k)^2 + (p_d^i - p_d^k)^2} \tag{5b}$$

The range measurements from TDSR radios tend to be noisy and have an unknown bias, owing to an offset between the TDSR radio sensor frame and the body frame of the UAS, coupled with possible experimental errors in determining the exact locations of the landmarks. The importance of estimating the bias in range measurements and its effect on localization were demonstrated in the conference version of this work (Ammapalli et al., 2024); hence, it is crucial to estimate the bias in the range measurements as a state. Let $b_{lm_j}^i$ and $b_k^i$ denote the biases in $\rho_{lm_j}^i$ and $\rho_k^i$, respectively. Assuming a constant bias model, the bias states are propagated as $\dot{b}_{j,k}^i = [\dot{b}_{lm_j}^i, \dot{b}_k^i]^\top = [0, 0]^\top$. Assuming a zero-mean Gaussian noise distribution $\mathcal{N}(0, \sigma_\rho^2)$, where $\sigma_\rho$ denotes the standard deviation, the corresponding sensor measurements can be modeled as follows:

$$y_{lm_j}^i = \rho_{lm_j}^i + b_{lm_j}^i + \mathcal{N}(0, \sigma_\rho^2), \qquad y_k^i = \rho_k^i + b_k^i + \mathcal{N}(0, \sigma_\rho^2) \tag{6}$$

Most commercial off-the-shelf UAVs are available with a built-in altimeter to measure height, which can be modeled as follows:

$$h^i = -p_d^i \tag{7}$$
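As a minimal sketch (Python; illustrative function names, not the testbed implementation), the range and altimeter measurement models above are:

```python
import numpy as np

def range_to_landmark(p_i, lm_j, bias=0.0):
    """Biased range from UAS position p_i to landmark lm_j, Equations (5a), (6)."""
    return np.linalg.norm(np.asarray(p_i) - np.asarray(lm_j)) + bias

def range_to_vehicle(p_i, p_k, bias=0.0):
    """Biased inter-vehicle range, Equations (5b), (6)."""
    return np.linalg.norm(np.asarray(p_i) - np.asarray(p_k)) + bias

def altimeter(p_i):
    """Altimeter height: the negative of the NED down position."""
    return -np.asarray(p_i)[2]
```

In practice, the bias arguments correspond to the estimated bias states $b_{lm_j}^i$ and $b_k^i$, and zero-mean Gaussian noise would be added when simulating measurements.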

When odometry measurements are available, they can be updated in the seven-state model instead of switching back to the four-state model. The corresponding measurement model equation is given as follows:

$$\hat{\mathbf{v}}^i = [u^i \ \ v^i \ \ w^i]^\top \tag{8}$$

Therefore, the state-space model of each UAS for the four-state and seven-state models is given as follows:

$$\dot{\mathbf{x}}^i = [\dot{\mathbf{p}}^i, \dot{\psi}^i, \dot{b}_{j,k}^i]^\top, \quad \mathbf{u}^i = [\mathbf{v}^i, \phi^i, \theta^i, \Omega^i]^\top, \quad \mathbf{y}^i = \{y_{lm_j}^i\} \cup \{y_k^i\} \tag{9a}$$

$$\dot{\mathbf{x}}^i = [\dot{\mathbf{p}}^i, \dot{\psi}^i, \dot{\mathbf{v}}^i, \dot{\mathbf{b}}_a^i, \dot{b}_{j,k}^i]^\top, \quad \mathbf{u}^i = [\mathbf{a}^i, \phi^i, \theta^i, \Omega^i]^\top, \quad \mathbf{y}^i = \{y_{lm_j}^i\} \cup \{y_k^i\}, \quad \forall i \in \mathcal{M},\ j \in \mathcal{F},\ k \in \mathcal{M}\setminus\{i\} \tag{9b}$$

Here, the cumulative state, control, and measurement vectors are given as follows:

$$\mathbf{x} = [\mathbf{x}^1, \mathbf{x}^2, \ldots, \mathbf{x}^N], \quad \mathbf{u} = [\mathbf{u}^1, \mathbf{u}^2, \ldots, \mathbf{u}^N], \quad \mathbf{y} = [\mathbf{y}^1, \mathbf{y}^2, \ldots, \mathbf{y}^N] \tag{10}$$

The state estimation problem can be posed as determining the hidden states $\mathbf{x}_t$ from the external sensor data $\mathbf{y}_t$ and control inputs $\mathbf{u}_{t-1}$, where $t \in \{1, 2, \ldots, \tau\}$ represents the time step:

$$\mathbf{x}^* = \arg\max_{\mathbf{x}}\; p(\mathbf{x}_0, \mathbf{x}_1, \ldots, \mathbf{x}_\tau \mid \mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_\tau, \mathbf{u}_0, \mathbf{u}_1, \ldots, \mathbf{u}_{\tau-1}) \tag{11}$$

With the use of Bayes’ rule, the Markov property, and the conditional independence of the measurements, this expression simplifies as follows:

$$\mathbf{x}^* = \arg\max_{\mathbf{x}}\; \prod_{t=1}^{\tau} p(\mathbf{y}_t \mid \mathbf{x}_t)\, p(\mathbf{x}_t \mid \mathbf{x}_{t-1}, \mathbf{u}_{t-1})\, p(\mathbf{x}_0) \tag{12}$$

The above equation can be solved simultaneously or recursively by using Bayesian filtering. Both of these approaches are outlined below:

ALGORITHM 1

EKF: Prediction

 Initialize $\hat{\mathbf{x}}_0 = \mathbf{x}_0 + \mathcal{N}(\mu, \sigma^2)$

for $m$ in $1 \ldots N_N$, where $N_N$ is the number of predictions before a measurement update, do

  $\hat{\mathbf{x}}_t^{+} = \hat{\mathbf{x}}_t + (T_s/N_N)\,\dot{\hat{\mathbf{x}}}_t$

  $A = \partial\dot{\hat{\mathbf{x}}}_t / \partial\hat{\mathbf{x}}_t$

  $P^{+} = P + (T_s/N_N)\left(AP + PA^\top + Q\right)$

end for

ALGORITHM 2

EKF: Measurement Update

 if $y_{lm_j}^i$ is received, where the range measurement is between the $i$-th vehicle and the $j$-th landmark, then

  $H_{lm_j}^i = \partial y_{lm_j}^i / \partial\hat{\mathbf{x}}$ (Jacobian of Equation (6))

  $L_{lm_j}^i = P^{+}H_{lm_j}^{i\top}\left(\sigma_\rho^2 + H_{lm_j}^i P^{+} H_{lm_j}^{i\top}\right)^{-1}$

  $\hat{\mathbf{x}}_t^{++} = \hat{\mathbf{x}}_t^{+} + L_{lm_j}^i\left(y_{lm_j}^i - \hat{y}_{lm_j}^i\right)$

  $P^{++} = \left(I - L_{lm_j}^i H_{lm_j}^i\right)P^{+}$

end if

if $y_k^i$ is received and $i \neq k$, where the range measurement is between the $i$-th and $k$-th vehicles, then

  $H_k^i = \partial y_k^i / \partial\hat{\mathbf{x}}$ (Jacobian of Equation (6))

  $L_k^i = P^{+}H_k^{i\top}\left(\sigma_\rho^2 + H_k^i P^{+} H_k^{i\top}\right)^{-1}$

  $\hat{\mathbf{x}}_t^{++} = \hat{\mathbf{x}}_t^{+} + L_k^i\left(y_k^i - \hat{y}_k^i\right)$

  $P^{++} = \left(I - L_k^i H_k^i\right)P^{+}$

end if

ALGORITHM 3

Levenberg–Marquardt Algorithm

Require: $\hat{\mathbf{x}}_c = \{\hat{\mathbf{x}}_{q:\tau};\ \mathbf{u}_{q:\tau-1};\ \mathbf{y}_{q:\tau}\}$

Initialize $\lambda$, $i$

while True do

  $i \leftarrow i + 1$

  Compute $\mathbf{b}_c$ and $L_c$ at $\hat{\mathbf{x}}_c$ using Equations (16) and (17)

  Solve $\left(L_c^\top L_c + \lambda I\right)\Delta\hat{\mathbf{x}}_c = L_c^\top\mathbf{b}_c$

  Compute $\mathbf{b}_n$ at $\hat{\mathbf{x}}_n = \hat{\mathbf{x}}_c + \Delta\hat{\mathbf{x}}_c$ using Equation (16)

  if $\|\mathbf{b}_n\| < \|\mathbf{b}_c\|$ then

   $\hat{\mathbf{x}}_c \leftarrow \hat{\mathbf{x}}_n$

   $\lambda \leftarrow \lambda/10$

  else

   $\lambda \leftarrow \lambda \times 3$

  end if

  if ($i > i_{max}$) or ($\|\mathbf{b}_n - \mathbf{b}_c\| < \varepsilon$) then

   break

  end if

end while

return $\hat{\mathbf{x}}_c^*$

ALGORITHM 4

Marginalization of Factor Graph

Require: $\hat{\mathbf{x}}_c = \{\hat{\mathbf{x}}_{q-1:\tau};\ \mathbf{u}_{q-1:\tau-1};\ \mathbf{y}_{q-1:\tau}\}$

 Reorganize $\hat{\mathbf{x}}_c$ such that $\hat{\mathbf{x}}_{q-1}$ is on top

 Construct $\mathbf{b}$ and $L$ at $\hat{\mathbf{x}}_c$ from Equations (16) and (17)

 Define $L_s \leftarrow \left[L_{i,q-1:q} : L_{i,q-1} \neq 0\right]$

 Define $\mathbf{b}_s \leftarrow \left[\mathbf{b}_i\right]_{q-1:q}$

 $[Q_f, R_f] = \mathrm{QRfactorization}(L_s)$

$\mathbf{b}_s \leftarrow Q_f^\top\,\mathbf{b}_s$

$\bar{\mathbf{x}}_q \leftarrow \hat{\mathbf{x}}_q + R_{f_q}^{-1}\,\mathbf{b}_{s_q}$

$\Sigma_q \leftarrow \left(R_{f_q}^\top R_{f_q}\right)^{-1}$

return $\hat{\mathbf{x}}_c \leftarrow \{\hat{\mathbf{x}}_{q:\tau},\ \mathbf{u}_{q:\tau-1},\ \mathbf{y}_{q:\tau},\ \bar{\mathbf{x}}_q,\ \Sigma_q\}$

2.1 Sequential Filtering: Extended Kalman Filter

Sequential filters are recursive, rendering them computationally light and memory-efficient. The EKF, a variant of the Kalman filter, is a recursive Bayesian filter for nonlinear systems with Gaussian process and measurement noise. An EKF has a prediction step defined by the underlying dynamics and an update step defined by the external measurements. After initialization, the prediction step is usually run at a high frequency, whereas the update step is called whenever external sensor information becomes available. The prediction and measurement update algorithms are shown in Algorithms 1 and 2, respectively.
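The two steps can be sketched as follows in Python. This is a simplified illustration, not the testbed implementation: the state layout (position first, range bias last) and the finite-difference Jacobian are assumptions for compactness; an analytic Jacobian would normally be used.

```python
import numpy as np

def ekf_predict(x, P, f, Q, Ts, NN=5, eps=1e-6):
    """EKF prediction (Algorithm 1): NN Euler sub-steps between updates,
    with the Jacobian A of the dynamics f obtained by finite differences."""
    dt = Ts / NN
    n = x.size
    for _ in range(NN):
        xdot = f(x)
        A = np.empty((n, n))
        for j in range(n):  # numerical Jacobian of f at the current x
            dx = np.zeros(n)
            dx[j] = eps
            A[:, j] = (f(x + dx) - xdot) / eps
        x = x + dt * xdot
        P = P + dt * (A @ P + P @ A.T + Q)
    return x, P

def ekf_range_update(x, P, lm, y, sigma_rho):
    """EKF measurement update (Algorithm 2) for one landmark range, assuming
    the first three states are position and the last state is the range bias."""
    p = x[:3]
    d = np.linalg.norm(p - lm)
    rho_hat = d + x[-1]                      # predicted biased range
    H = np.zeros((1, x.size))
    H[0, :3] = (p - lm) / d                  # d(range)/d(position)
    H[0, -1] = 1.0                           # d(range)/d(bias)
    S = sigma_rho**2 + H @ P @ H.T           # innovation covariance
    K = (P @ H.T) / S                        # Kalman gain
    x = x + (K * (y - rho_hat)).ravel()
    P = (np.eye(x.size) - K @ H) @ P
    return x, P
```

The inter-vehicle update is identical in form, with the landmark position replaced by the other vehicle's estimated position (and the Jacobian gaining entries for both vehicles' states in the joint filter).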

2.2 Batch Filtering: Factor Graphs

In batch filtering, the state estimation problem posed in Equation (12) is solved across all time steps simultaneously. Although this methodology is effective in terms of accuracy, it demands significant computational and memory resources. The FG is a bipartite graph that effectively illustrates the state estimation problem (see Figure 1). Each node within the graph represents the state vector at a specific time step. Nodes that are interrelated are linked by edges, with the corresponding odometry measurement factors, indicated by blue squares, utilized to connect the nodes. Additionally, any external measurements available at a specific time step are depicted by gray squares and are linked to the relevant node. In centralized CL, the state vector spanning all vehicles can be represented as a single unified FG, where each factor corresponds to the states of all vehicles involved. Assuming a Gaussian process and measurement noise model, Equation (12) translates to a weighted nonlinear least-squares formulation given by the following:

$$\mathbf{x}^* = \arg\min_{\mathbf{x}}\left(\frac{1}{2}\left\|\bar{\mathbf{x}}_0 - \mathbf{x}_0\right\|_{\Sigma_0}^2 + \frac{1}{2}\sum_{t=1}^{\tau}\left\|\mathbf{y}_t - h(\mathbf{x}_t)\right\|_{R}^2 + \frac{1}{2}\sum_{t=1}^{\tau}\left\|\mathbf{x}_t - f(\mathbf{x}_{t-1}, \mathbf{u}_{t-1})\right\|_{Q}^2\right) \tag{13}$$

FIGURE 1

Schematic of an FG with marginalization, with a sliding window of three factors

where $h(\mathbf{x}_t)$ represents the measurement estimate at time step $t$ and $f(\mathbf{x}, \mathbf{u})$ is the state transition function. $\bar{\mathbf{x}}_t$ and $\Sigma_t$ define the prior factor at time step $t$.

When a new measurement is available from any vehicle, a new factor is generated and incorporated into the graph alongside the new measurement. This process results in a swift expansion of the FG over time. To mitigate the computational and memory challenges posed by this growing size, a sliding window approach is employed, which retains only a fixed number of the most recent factors for processing. Factors that exit the sliding window are marginalized and integrated into the graph as a single prior factor, as illustrated in Figure 1. If we take nf as the number of factors in the sliding window and set q = τ − (nf − 1), then Equation (13) can be rewritten for the sliding window as follows:

$$\mathbf{x}_s^* = \arg\min_{\mathbf{x}_s}\left(\frac{1}{2}\left\|\bar{\mathbf{x}}_q - \mathbf{x}_q\right\|_{\Sigma_q}^2 + \frac{1}{2}\sum_{t=q}^{\tau}\left\|\mathbf{y}_t - h(\mathbf{x}_t)\right\|_{R}^2 + \frac{1}{2}\sum_{t=q}^{\tau}\left\|\mathbf{x}_t - f(\mathbf{x}_{t-1}, \mathbf{u}_{t-1})\right\|_{Q}^2\right) \tag{14}$$

The formulation in Equation (14) is equivalent to the following:

$$\Delta\mathbf{x}^* = \arg\min_{\Delta\mathbf{x}}\left\|L\Delta\mathbf{x} - \mathbf{b}\right\|^2 \tag{15}$$

Here, $\mathbf{b}$ is created by gathering all of the residuals at each time step into a column vector, and $L$ is the Jacobian of $\mathbf{b}$ with respect to a small perturbation $\Delta\mathbf{x}$, both of which are outlined in Equations (16) and (17), where $F$ and $H$ are the Jacobians of $f$ and $h$, respectively:

$$\mathbf{b} = \begin{bmatrix} \Sigma_q^{-1/2}\left(\bar{\mathbf{x}}_q - \mathbf{x}_q\right) \\ R^{-1/2}\left(\mathbf{y}_q - h_q(\mathbf{x}_q)\right) \\ Q^{-1/2}\left(\mathbf{x}_{q+1} - f(\mathbf{x}_q, \mathbf{u}_q)\right) \\ \vdots \\ R^{-1/2}\left(\mathbf{y}_\tau - h_\tau(\mathbf{x}_\tau)\right) \\ Q^{-1/2}\left(\mathbf{x}_\tau - f(\mathbf{x}_{\tau-1}, \mathbf{u}_{\tau-1})\right) \end{bmatrix} \tag{16}$$

$$L = \begin{bmatrix} \Sigma_q^{-1/2} & & & \\ R^{-1/2}H(\mathbf{x}_q) & & & \\ Q^{-1/2}F(\mathbf{x}_q, \mathbf{u}_q) & Q^{-1/2} & & \\ & R^{-1/2}H(\mathbf{x}_{q+1}) & & \\ & Q^{-1/2}F(\mathbf{x}_{q+1}, \mathbf{u}_{q+1}) & \ddots & \\ & & Q^{-1/2}F(\mathbf{x}_{\tau-1}, \mathbf{u}_{\tau-1}) & Q^{-1/2} \\ & & & R^{-1/2}H(\mathbf{x}_\tau) \end{bmatrix} \tag{17}$$

Iterative algorithms such as the gradient descent, Gauss–Newton (GN), or Levenberg–Marquardt (LM) algorithm can be used for solving the FG optimization problem. The LM algorithm is robust, combining the properties of the gradient descent and GN algorithms, and has the capability to adapt based on the closeness of the current solution to the minimum. Hence, the LM algorithm outlined in Algorithm 3 is used in this paper.
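A compact sketch of the LM iteration in Algorithm 3 (Python, with generic residual and Jacobian callables; the adaptation factors 10 and 3 follow the algorithm above) is:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1.0, i_max=100, eps=1e-6):
    """Levenberg-Marquardt per Algorithm 3: accept a step when it reduces the
    residual norm (then lam /= 10); otherwise increase lam by a factor of 3."""
    x = np.asarray(x0, dtype=float).copy()
    b = residual(x)
    for _ in range(i_max):
        L = jacobian(x)
        # Damped normal equations: (L^T L + lam I) dx = L^T b
        dx = np.linalg.solve(L.T @ L + lam * np.eye(x.size), L.T @ b)
        b_new = residual(x + dx)
        if np.linalg.norm(b_new) < np.linalg.norm(b):
            converged = np.linalg.norm(b_new - b) < eps
            x, b = x + dx, b_new
            lam /= 10.0
            if converged:
                break
        else:
            lam *= 3.0
    return x
```

Here the residual is of the form observed-minus-predicted (as in Equation (16)) and the Jacobian is that of the prediction, so the damped normal equations reduce to the Gauss-Newton step as the damping parameter shrinks.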

The $L$ matrix is sparse, and its sparsity can be exploited (e.g., via LU decomposition or QR factorization) when solving Equation (15). $L^\top L$ computed at $\mathbf{x}^*$ is the Fisher information matrix of the system; hence, the covariance of the system is given by $(L^\top L)^{-1}$. Additionally, the sparse structure of $L$ can be exploited for marginalization. The marginalization algorithm for eliminating the $(q-1)$-th factor from $\hat{\mathbf{x}}_{q-1:\tau}$ is described in Algorithm 4. The initial value of the damping parameter is set to 1, and the tuning process follows practical recommendations from the MathWorks Optimization Toolbox (The MathWorks, Inc., 2024). In practice, when the resulting search direction does not reduce the error, the damping parameter is increased by a factor of 3 to improve the search. Although this increased damping can slow the identification of optimal search directions, it was never observed to become a significant issue. The convergence parameter is set to $10^{-6}$ based on practical observations; the algorithm terminates when the change in error falls below this threshold. Additional details about FG formulation and marginalization can be found in the work by Taylor and Gross (2024). For both estimation approaches, the localization accuracy is contingent upon the observability of the system. In the subsequent section, observability is quantified to facilitate a comparison between estimation accuracy and observability.
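Because $L^\top L$ at the optimum is the Fisher information matrix, the posterior covariance can be recovered via a QR factorization of $L$ rather than by forming the normal equations explicitly. A dense NumPy sketch (a real implementation would exploit the sparsity of $L$):

```python
import numpy as np

def covariance_from_L(L):
    """Posterior covariance (L^T L)^{-1} via QR factorization of the Jacobian L.
    Since L = Q_f R_f with orthonormal Q_f, L^T L = R_f^T R_f, and inverting
    the small upper-triangular R_f is numerically well behaved."""
    _, R = np.linalg.qr(L)         # reduced QR: R is n x n upper-triangular
    R_inv = np.linalg.inv(R)
    return R_inv @ R_inv.T          # (R^T R)^{-1} = (L^T L)^{-1}
```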

3 OBSERVABILITY AND UNOBSERVABILITY INDEX

The observability of a system refers to its ability to uniquely determine its states from external observations. A system is fully observable if its observability matrix has full rank, which bounds the estimation errors and covariances; conversely, unobservable systems exhibit diverging estimation errors. The observability matrix has been defined by Silverman and Meadows (1967) for linear systems and by Hermann and Krener (1977) for nonlinear systems. Nonlinear observability relies on a rank condition evaluated over a short time interval about a local state $\mathbf{x}(t_0)$: if the observability matrix has rank $n$, the dimension of the state, then the system is locally observable.

The classical rank condition uses Lie derivatives, requiring smooth measurement functions and high-order derivative computations, which are impractical with discrete sensor data. Alternatively, local observability can be approximated by using a sensitivity matrix with respect to the initial state x(t0) (Van Willigenburg et al., 2022). This approach, based on linear sensitivity dynamics, provides computational efficiency and approximates system sensitivity via the output sensitivity matrix over t ∈ [t0, tF]:

$$y_{x(t_0)} = \begin{bmatrix} y_{x(t_0)}(t_0) \\ y_{x(t_1)}(t_1) \\ \vdots \\ y_{x(t_F)}(t_F) \end{bmatrix} \tag{18}$$

where $y_{x(t_k)} = \frac{\partial h(\mathbf{x}(t_k))}{\partial \mathbf{x}(t_k)}\,\Phi(t_0, t_k)$ and $\Phi(t_0, t_k)$ is the state transition matrix. Full observability requires $\mathrm{rank}\!\left(y_{x(t_0)}\right) = n$, which can be achieved only if $(F+1)m \geq n$, with $m$ measurements per time step.
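Numerically, the output sensitivity matrix of Equation (18) is simply the stack of measurement Jacobians propagated through the state transition matrix. A sketch (Python, illustrative names):

```python
import numpy as np

def output_sensitivity(H_list, Phi_list):
    """Stack H(x(t_k)) @ Phi(t_0, t_k) over t_0..t_F, Equation (18)."""
    return np.vstack([H @ Phi for H, Phi in zip(H_list, Phi_list)])

def is_locally_observable(H_list, Phi_list, n):
    """Local observability holds when the stacked sensitivity has rank n."""
    return np.linalg.matrix_rank(output_sensitivity(H_list, Phi_list)) == n
```

For example, a constant-velocity scalar system with position-only measurements becomes observable once two time steps are stacked, since the velocity enters through the transition matrix.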

The theoretical observability rank condition has limitations: (1) It may be unreliable with noisy, discrete data. (2) It does not quantify the ease of observability. For instance, let us consider a planar system with and without odometry measurements:

$$\dot{\mathbf{x}} = \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\psi} \end{bmatrix} = \begin{bmatrix} v\cos\psi \\ v\sin\psi \\ \omega \end{bmatrix}, \qquad \dot{\mathbf{x}} = \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\psi} \\ \dot{v} \end{bmatrix} = \begin{bmatrix} v\cos\psi \\ v\sin\psi \\ \omega \\ a \end{bmatrix} \tag{19}$$

Let O1 and O2 represent the observability matrices of the two systems. Observability requires the range or bearing from at least two landmarks (Sharma et al., 2013; Shi et al., 2019). The systems in Equation (19) have a rank deficiency of 1 with range from only one landmark. The reduced row echelon form (RREF) of the corresponding observability Gramian is as follows:

$$\mathrm{RREF}\!\left(O_1^\top O_1\right) = \begin{bmatrix} 1 & 0 & -(y_{lm}-y) \\ 0 & 1 & (x_{lm}-x) \\ 0 & 0 & 0 \end{bmatrix}, \quad \mathrm{RREF}\!\left(O_2^\top O_2\right) = \begin{bmatrix} 1 & 0 & -(y_{lm}-y) & 0 \\ 0 & 1 & (x_{lm}-x) & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix} \tag{20}$$

Although the theoretical observability Gramian is singular, practical numerical data often yield a Gramian of full rank with a minimum eigenvalue $\varepsilon$, a small positive quantity. This minimum eigenvalue serves as a measure of observability. The unobservability index (UI), defined as the inverse of the minimum singular value of the Gramian, quantifies the observability (Krener & Ide, 2009), with a high UI indicating weak observability. Table 1 shows UI values observed when using one or two landmarks, demonstrating a clear decrease in UI as the system transitions from unobservable to observable with two landmarks. When two landmarks are used, the minimum eigenvalue exceeds 1, leading to a UI below 1, which appears as a negative value on a logarithmic scale. Hence, the UI serves as a valuable metric for measuring the observability of the system.

TABLE 1

Average UI Observed on a Logarithmic Scale as the Number of Landmarks Increases from 1 to 2
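As a sketch, the UI can be computed directly from a numerically constructed observability matrix (Python; the Gramian here is $O^\top O$):

```python
import numpy as np

def unobservability_index(O):
    """Unobservability index (Krener & Ide, 2009): inverse of the minimum
    singular value of the observability Gramian O^T O. A large UI indicates
    weak observability; an exactly singular Gramian yields an infinite UI."""
    s_min = np.linalg.svd(O.T @ O, compute_uv=False)[-1]
    return np.inf if s_min == 0.0 else 1.0 / s_min
```

In practice, $O$ would be the stacked sensitivity matrix of Equation (18) evaluated along the flown trajectory, and the UI would be averaged over the flight as in Table 1.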

The observability property reflects the quality of landmark and inter-vehicle measurements, which can be improved to enhance local observability. Such improvements may involve actively controlling vehicle motion to increase connectivity (Bai & Taylor, 2020; Sharma, 2014; Yu et al., 2011) or improving the quality of inter-vehicle measurements (Boyinine et al., 2022). Alternatively, landmark placement can be optimized within the operational environment (Boyinine et al., 2023; Wang et al., 2019). However, owing to the dynamic RPMG topology of multi-agent networks, designing observability-aware landmark placement and control strategies remains a significant challenge and an active area of research. In the results section, the observability of the multi-UAS system is compared against estimation results for different sensing conditions and RPMG topologies. The analysis compares the UI across varying numbers of landmarks, different landmark configurations, and diverse trajectory patterns. The next section outlines the design of the testbed.

4 DESCRIPTION OF THE MULTI-UAS TESTBED

The multi-UAS testbed is designed to enable CL and consists of the following primary components: (1) vehicle platform, (2) autopilot or flight control unit (FCU), (3) onboard computer, (4) sensors, (5) communication system, (6) software infrastructure, and (7) ground station. This section provides an overview of each component, detailing the setup and integration of these elements to create a robust platform for CL research.

4.1 UAS Platform

For this testbed, a multi-rotor UAS, specifically the Aurelia X6 Standard hexacopter, was selected based on its payload-carrying capability and flight endurance. The X6 is powered by two 16,000-mAh lithium-polymer (LiPo) batteries, providing a flight time of approximately 35 min without an additional payload. Two auxiliary power cables are included for powering other payloads. The platform has a payload capacity of 11 lb, which is crucial for carrying sensors and onboard computers, and is modular enough to accommodate custom payloads. Sensors and components required for enabling CL are housed in a custom-designed 3D-printed compartment, as illustrated in Figure 3(b), and mounted onto the platform as shown in Figure 3(a). These key components are further detailed in Figure 4.

FIGURE 2

Testbed flow chart showing the flow of information from individual sensors to a centralized ground station

FIGURE 3

Aurelia X6 platform with critical components attached: (a) Aurelia X6 platform; (b) carrying compartment

Note: POE = Power over Ethernet

FIGURE 4

Key components in the platform: (a) Pixhawk Cube Orange as the FCU, (b) TDSR P440 ranging radio, (c) Intel NUC WSKi5 as an onboard computer, (d) Ubiquiti Rocket AC with a 13-dBi omnidirectional antenna as the Wi-Fi access point, (e) Ubiquiti Bullet M5/AC Wi-Fi station

4.2 Software Infrastructure

The testbed relies on the robot operating system (ROS) as the middleware for facilitating communication between onboard systems and the ground station (Shane, 2024). ArduPilot, an open-source autopilot system capable of supporting different types of small uncrewed vehicles, is used in the FCU. ArduPilot supports the Micro Aerial Vehicle Link (MAVLink) protocol for communication with the ground station, which is complemented by the MAVROS package (Voon, 2024) that translates MAVLink messages into ROS topics. This setup allows for efficient data management and processing across the system, supporting various components, including telemetry, navigation, and localization.

4.3 Flight Control Unit

The Pixhawk Cube Orange+ is used as the FCU for the UAS (Ardupilot, 2024). This FCU includes three IMU sensors, two barometers, and a magnetometer, providing accurate and reliable flight data. The Pixhawk is connected to a Here 3+ GNSS module, enabling precise localization for autonomous missions managed through the Mission Planner software. The ArduPilot firmware is used in all FCUs, along with MAVROS, facilitating easy integration with the ROS. Essential flight data, such as linear accelerations, angular velocities, and attitude (roll, pitch, and yaw), are published as IMU messages. The firmware also runs multiple instances of EKFs that estimate the vehicle position in the inertial frame, velocities, and attitude by fusing gyroscope, accelerometer, compass (magnetometer), Global Positioning System (GPS), airspeed, and barometric pressure measurements, publishing both raw and filtered position estimates and odometry data. Additionally, the FCU can generate low-level control commands, such as motor torques, based on high-level mission requirements.

4.4 Onboard Computer

An onboard computer provides real-time perception and decision-making capabilities by processing sensor information and handling localization and control operations without relying on other external sources. An Intel NUC WSKi5 mini-PC (Intel, 2024) was chosen because of its compact form factor and ability to operate in a range of environmental conditions. Although single-board computers, such as the Odroid XU4 (Odroid, 2024), can be used for localization with UWB sensors, they tend to overheat at high temperatures. Therefore, the use of single-board computers with active cooling in extreme conditions is recommended. The mini-PC is powered by a dedicated power bank, instead of relying on the auxiliary LiPo power source of the UAS, to prevent voltage fluctuations and extend flight duration.

4.5 Sensors

The testbed includes a range of sensors to support CL. In addition to the Pixhawk’s onboard sensors, P440 UWB modules from TDSR serve as extrinsic sensors (TDSR, 2024). These UWB radios perform ranging via the two-way time-of-flight (TOF) method, in which a requester module sends a packet to a responder and the responder returns it. The range between the two communicating modules is computed from the round-trip time between the transmitted and received packets, scaled by the speed of light. Modules can be configured as tags or anchors, with tags connected to onboard computers on each UAS and anchors positioned at known locations as landmarks. The P440 modules have their own power source, as shown in Figure 6(b).
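As a minimal illustration of the two-way TOF principle, the one-way flight time is half the round trip minus the responder's known turnaround delay; the function and timestamp names below are ours, not the P440 firmware's API:

```python
C = 299_792_458.0  # speed of light (m/s)

def two_way_tof_range(t_send: float, t_receive: float, t_turnaround: float) -> float:
    """Estimate range from a two-way time-of-flight exchange.

    t_send / t_receive are the requester's local timestamps (s);
    t_turnaround is the responder's known internal delay (s).
    """
    tof_one_way = 0.5 * ((t_receive - t_send) - t_turnaround)
    return C * tof_one_way

# A 100-m separation: round trip of 2*(100/C) s plus a 1-us turnaround.
rng = two_way_tof_range(t_send=0.0,
                        t_receive=2 * 100.0 / C + 1e-6,
                        t_turnaround=1e-6)
```

Because both timestamps come from the requester's clock, the method needs no clock synchronization between modules, only a calibrated turnaround delay.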

An ROS wrapper was developed to integrate the UWB radios with the ROS, publishing range measurements and module IDs as ROS topics to provide extrinsic measurements for localization. Performance comparisons between UWB modules and GNSS measurements are shown in Figure 5. Figures 5(a), 5(b), and 5(c) show range comparisons between UAS-1 and landmarks 1 and 4, between UAS-2 and landmarks 1 and 4, and between UAS-1 and UAS-2, respectively. Observations include varying biases across landmarks and inter-vehicle ranges, time-varying biases, and outliers. To investigate the source and magnitude of UWB ranging bias, static calibration tests were conducted following the manufacturer’s recommendations to ensure consistent antenna positioning, revealing low biases of 2–63 mm and standard deviations of approximately 50 mm for UAS-to-UAS and 350 mm for UAS-to-landmark measurements, as summarized in Table 2. Despite hardware-level mitigations, such as proper antenna orientation and elevated placement, we observed time-varying range biases during the experiment, as shown in Figure 5, which can be due to Fresnel zone interference and multipath effects, consistent with TDSR radio behavior. Additional factors such as radio frequency (RF) compression at close ranges and small antenna misalignments during flight can further distort TOF accuracy. To address these issues, conservative standard deviation values of 3–4 m are used during flight to prevent estimator overconfidence, and bias states for each measurement pair are introduced into the estimation framework. This approach is essential for maintaining consistent and reliable performance in the presence of real-world measurement imperfections.
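The bias-state augmentation described above can be sketched as a range measurement model whose state vector carries an additive bias term; this is a minimal illustration with our own names, not the paper's exact implementation:

```python
import numpy as np

def range_with_bias(p_vehicle, p_landmark, bias):
    """Predicted range measurement with an additive bias state:
    h(x) = ||p_v - p_l|| + b.
    Returns h and the Jacobian w.r.t. the augmented state [p_v (3), b].
    """
    d = p_vehicle - p_landmark
    rho = np.linalg.norm(d)
    H = np.hstack([d / rho, [1.0]])  # dh/dp_v = unit vector, dh/db = 1
    return rho + bias, H

# Toy evaluation: vehicle at (3, 4, 0), landmark at origin, 0.5-m bias.
h, H = range_with_bias(np.array([3.0, 4.0, 0.0]), np.zeros(3), bias=0.5)
```

Inflating the measurement standard deviation (here, to the 3–4 m used in flight) while estimating the bias lets the filter absorb slowly varying offsets without becoming overconfident.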

FIGURE 5

Comparison between the measured range and true range between landmarks and UASs (a) ρlm1 and ρlm4 measured from UAS-1 (b) ρlm1 and ρlm4 measured from UAS-2 (c) ρ21 measured between UAS-2 and UAS-1

TABLE 2

Static UWB Range Test Results for Different Measurement Pairs

4.6 Communication System

A local Wi-Fi network enables communication across the testbed area, using a Ubiquiti Rocket paired with a 13-dBi omnidirectional antenna, as shown in Figure 4(d). Each UAS is equipped with a Ubiquiti Bullet unit for connecting to the access point. A step-up transformer powers each Bullet using the auxiliary LiPo battery power onboard the UAS. A ground station computer connects to the access point, establishing a link with the onboard computers on each UAS (Figure 6(a)).

FIGURE 6

Ground-level infrastructure used in the testbed (a) Ground station computer connected to Wi-Fi access point (b) TDSR P440 used as landmarks

ROS messages from telemetry and ranging radio topics are transmitted from the onboard computers to the ground station. Performance evaluations of the network are conducted to ensure reliable latency and data transfer rates across the testbed, supporting real-time CL experiments. The communication latency measured with three UASs operating simultaneously is observed to range from 5 to 12 ms, with a mean of 7 ms per UAS.
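A latency characterization like the one reported above can be computed from matched send/receive timestamps; a stdlib sketch, assuming the timestamp pairing and clock synchronization (e.g., via NTP) are handled elsewhere:

```python
import statistics

def latency_stats_ms(t_sent, t_received):
    """One-way message latency statistics in milliseconds, from matched
    timestamp pairs (seconds, on a common clock)."""
    lat_ms = [(r - s) * 1e3 for s, r in zip(t_sent, t_received)]
    return min(lat_ms), max(lat_ms), statistics.mean(lat_ms)

# Hypothetical timestamps for three messages (s).
lo, hi, mean = latency_stats_ms([0.000, 1.000, 2.000],
                                [0.005, 1.012, 2.007])
```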

4.7 Ground Station

A ground station computer provides centralized control and monitoring capabilities, connecting to the UAS via the Wi-Fi access point. The ground station receives ROS messages from each UAS, allowing for real-time tracking and data logging. Mission Planner software enables autonomous mission management by providing precise localization and telemetry data. The ground station setup is shown in Figure 6(a).

The testbed and CL algorithms are designed to be adaptable to different vehicle platforms, with modifications to the vehicle models as needed. Detailed hardware configuration instructions are provided at this link. Figure 2 illustrates the interconnections among the different components of the testbed.

5 RESULTS AND DISCUSSION

This section describes the experimental setup, presents the localization results for the two state-equation models presented in Section 2 under different sensing conditions, and numerically evaluates the observability of the CL setup.

5.1 Experimental Setup

Flight tests were conducted using five landmarks and three hexacopter-type UASs, designated as UAS-1, UAS-2, and UAS-3, shown in Figure 3(a). The ground station and Wi-Fi access point are depicted in Figure 6(a). Landmarks were mounted on tripods (Figure 6(b)) to mitigate ground effects. Tests were performed under clear sky conditions to ensure robust satellite connectivity for accurate GNSS-based positioning. UAS-1 was additionally equipped with a real-time kinematic GPS module to provide high-precision reference data. The experimental setup was arranged to maintain an unobstructed line of sight between all vehicles, enabling reliable inter-vehicle ranging and communication.

The UAS trajectories and landmark locations are shown in Figure 8, designed to provide diverse range measurements crucial for observability and localization performance. Two UASs follow opposing figure-8 trajectories (curves and straight lines), while the third UAS follows a rectangular path for additional variation. All UASs operate autonomously via the ArduPilot Mission Planner. UAS-2 and UAS-3 recorded data at the intended rate of 20 Hz, whereas UAS-1 experienced power issues, resulting in a reduced rate of 8 Hz. UAS-2 briefly lost communication at 20–40 s, during which time the CL maintained reasonable accuracy by using inter-vehicle ranging. The flight test video is available at this link, and data sets and code are accessible at this repository.

The estimator tuning parameters were empirically determined via iterative offline testing using flight data. Both the EKF and FG estimators were evaluated. Initially, conservative (higher) noise values were assigned to odometry and ranging sensors, while bias uncertainties were adjusted to produce stable state estimates. These parameters were then fine-tuned by progressively lowering the measurement noise levels until consistent and reliable performance was observed across test runs. The tuning process emphasized estimator consistency and robustness to real-world sensor noise and dropouts. The localization performance was evaluated based on two primary metrics: (i) estimation errors computed by comparing estimated states with GPS ground truth and (ii) standard deviations (σ) derived from the covariance matrix of the estimator. The analysis focused on planar localization (specifically, heading and horizontal [north, east] positions) excluding vertical estimates, as altitude was directly measured in the state vector.
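The two evaluation metrics can be sketched as follows; the array shapes and the assumption that pn and pe occupy the first two state slots are ours, for illustration:

```python
import numpy as np

def localization_metrics(est, truth, P_hist):
    """(i) Position error vs. ground truth and (ii) 1-sigma values from
    the estimator covariance, for planar [north, east] localization.

    est, truth: (T, 2) position histories; P_hist: (T, n, n) covariance
    history with pn, pe assumed to be the first two states.
    """
    err = np.linalg.norm(est - truth, axis=1)       # per-step error (m)
    rmse = float(np.sqrt(np.mean(err ** 2)))
    sigma = np.sqrt(P_hist[:, [0, 1], [0, 1]])      # sqrt of diag(P) entries
    return rmse, err, sigma

# Toy check: two steps with 1-m errors and diagonal covariances.
rmse, err, sigma = localization_metrics(
    np.array([[1.0, 0.0], [0.0, 1.0]]), np.zeros((2, 2)),
    np.array([np.diag([4.0, 9.0, 1.0])] * 2))
```

Comparing the error history against the corresponding σ history (e.g., the 3σ envelope) is what the consistency plots in Figure 16 visualize.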

5.2 Odometry

This section compares the EKF and FG approaches while evaluating the effects of the number and placement of landmarks, vehicle trajectories (see Figure 14), and network topologies that satisfy the minimum observability criteria (see Figure 7) for the four-state model, assuming the availability of odometry measurements. The standard deviations (σ) for the IMU measurements are set as follows: 0.01 m/s for linear velocities u and v; 0.05 m/s for w; 0.02 rad for Euler angles φ and θ; 0.001 rad/s for angular rates p and q; and 0.01 rad/s for r. The range measurement noise is set to 3 m for UAS-to-landmark measurements and 4 m for UAS-to-UAS measurements. The uncertainty in bias is set to 0.79 m for landmark ranges and 0.94 m for inter-vehicle ranges.
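For reference, the noise settings above can be collected into a single configuration; the key names are our own shorthand:

```python
# Sensor noise configuration for the four-state (odometry) model,
# transcribed from the values in the text.
NOISE_ODOM = {
    "vel_uv_std": 0.01,   # m/s, linear velocities u, v
    "vel_w_std": 0.05,    # m/s, linear velocity w
    "euler_std": 0.02,    # rad, Euler angles phi, theta
    "rate_pq_std": 0.001, # rad/s, angular rates p, q
    "rate_r_std": 0.01,   # rad/s, angular rate r
    "range_lm_std": 3.0,  # m, UAS-to-landmark range
    "range_uv_std": 4.0,  # m, UAS-to-UAS range
    "bias_lm_std": 0.79,  # m, landmark-range bias uncertainty
    "bias_uv_std": 0.94,  # m, inter-vehicle-range bias uncertainty
}
```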

FIGURE 7

Different sensing topologies created by a system with three multi-rotors and two landmarks, satisfying the minimum sufficient observability conditions (a) RPMG-1 (b) RPMG-2 (c) RPMG-3 (d) RPMG-4 (e) RPMG-5

FIGURE 8

Inertial frame trajectories of the three UASs, as observed from above, along with the positions of landmarks used in the flight test

Prior to an analysis of the localization performance under varying sensing conditions, the impact of sliding window size in FG optimization was evaluated (Figure 9). The localization accuracy remains largely unaffected across window sizes of 10, 20, and 30; however, the computation time increases from 0.05 s to 0.29 s. A window size of 10 enables real-time execution at the estimator frequency, whereas larger sizes offer diminishing accuracy gains at higher computational costs. A window size of 20 was chosen for subsequent analysis to balance performance and measurement utilization, especially under low-frequency ranging conditions.

FIGURE 9

Comparing the effect of sliding window size in the FG framework on localization accuracy (a) Comparison of position estimation errors for different sliding window sizes (b) Comparison of heading estimation errors for different sliding window sizes

The effectiveness of CL is demonstrated in Figure 10, where inter-UAS communication enables information sharing from landmarks accessible to only a subset of UASs. In this setup, landmarks 1 and 2 are visible only to UAS-1, landmark 3 is visible only to UAS-2, and landmarks 4 and 5 are visible only to UAS-3. Enabling CL reduces position errors by approximately 30% for UAS-1 and 15% for UAS-3. Pairwise t-tests confirm the statistical significance of these improvements at the 5% level. UAS-2 shows no difference in errors between cooperative and non-cooperative cases, as it already achieves low errors using only IMU data; notable improvement occurs only after the addition of landmark measurements (Figure 12(b)), especially with the second landmark. Additional landmarks have a limited effect, and cooperative performance is observed to degrade, suggesting that poor estimates from neighboring UASs can degrade the localization accuracy of well-performing UASs. Heading errors improve slightly for the EKF and more significantly with cooperation, whereas the FG performs worse for ψ estimation unless cooperation is enabled. Across all UASs, the confidence in pose estimates improves with CL, further validated by significant t-test results.
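Paired significance tests of this kind can be reproduced with SciPy; the error arrays below are synthetic stand-ins for the per-run average errors from the flight logs:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic per-run average position errors (m) for one UAS,
# with and without cooperation (values are illustrative only).
err_noncoop = rng.normal(1.0, 0.15, size=20)
err_coop = 0.7 * err_noncoop + rng.normal(0.0, 0.05, size=20)

# Paired t-test: the same runs observed under two sensing conditions.
t_stat, p_value = stats.ttest_rel(err_noncoop, err_coop)
significant = bool(p_value < 0.05)  # 5% level, as in the text
```

A paired (rather than independent-samples) test is appropriate here because the cooperative and non-cooperative errors come from the same runs and trajectories.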

FIGURE 10

Comparison of estimation errors and uncertainty σ across vehicles, showcasing the advantage of CL (a) Comparison of errors in position (b) Comparison of σ in pn (c) Comparison of errors in ψ (d) Comparison of σ in ψ

Trends in the UI align with estimation uncertainty (σ) as the number of landmarks increases, as shown in Figure 11. CL yields tighter uncertainty bounds than non-cooperative methods, consistent with expectations. The FG and EKF show similar estimation uncertainties. The position errors in Figure 12 reveal that CL generally reduces errors for all UASs except UAS-2, where little improvement is observed. In some cases (e.g., 1–3 landmarks for UAS-3 and 1 landmark for UAS-1), FG performs slightly better in non-cooperative mode. This trend occurs when a vehicle already has sufficient landmark access; fusing less accurate inter-vehicle data may degrade performance. UAS-2 shows consistently strong localization independently, offering limited gains from CL, while UAS-1 obtains the greatest benefit owing to sparser IMU data and limited landmark access. FG outperforms EKF for UAS-1 and UAS-2 with ≥ 2 landmarks, whereas EKF performs better for UAS-3.

FIGURE 11

Estimation results showcasing the average of metrics such as the UI and estimation uncertainty σ with odometry measurements as the landmark number increases from 1 to 5, with a comparison across cooperative and non-cooperative scenarios (a) Evolution of the average UI as the landmark number increases from 1 to 5 (b) Evolution of the average σ in pn as the landmark number increases from 1 to 5 (c) Evolution of the average σ in ψ as the landmark number increases from 1 to 5 (d) Evolution of the average σ in pe as the landmark number increases from 1 to 5

FIGURE 12

Average position error for each UAS with respect to the number of landmarks for corresponding cooperative and non-cooperative scenarios (a) Average position error for UAS-1 as the landmark number increases (b) Average position error for UAS-2 as the landmark number increases (c) Average position error for UAS-3 as the landmark number increases

Although the overall UI in Figure 11(a) shows a decreasing trend, it does not clearly differentiate observable from unobservable systems. However, analyzing the pose-related submatrix of the observability Gramian reveals that the UI becomes negative on a logarithmic scale (Figure 13(a)) when more than two landmarks are used, aligning with the simulation results in Table 1. This trend may be less apparent under limited sensing because of the discrete and variable nature of real sensor data. The state estimator remains consistent across scenarios with defined noise parameters. The temporal evolution of the errors and 3σ bounds for UAS-1 are shown for the EKF in Figures 16(a), 16(d), and 16(g) and for FG in Figures 16(b), 16(e), and 16(h). Additionally, Table 3 presents the UI, estimation errors, and σ for RPMG topologies (Figure 7) meeting minimum observability criteria. As the number of landmarks remains constant, UI magnitudes and average errors are consistent across topologies, with reduced uncertainty observed when more vehicles receive direct measurements.
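The submatrix analysis above depends on the spectrum of a local observability Gramian. Following Krener & Ide (2009), a small smallest eigenvalue signals a weakly observable direction; the discrete-time accumulation below is our generic sketch, not the paper's exact procedure:

```python
import numpy as np

def unobservability_index(F_list, H_list):
    """Accumulate a discrete-time local observability Gramian
    G = sum_k Phi_k^T H_k^T H_k Phi_k and return (G, UI), where the UI
    grows as the smallest eigenvalue of G shrinks (unobservable -> inf).
    """
    n = F_list[0].shape[0]
    G, Phi = np.zeros((n, n)), np.eye(n)
    for F, H in zip(F_list, H_list):
        G += Phi.T @ H.T @ H @ Phi
        Phi = F @ Phi                      # propagate the transition matrix
    lam_min = np.linalg.eigvalsh(G)[0]     # eigenvalues in ascending order
    return G, (1.0 / lam_min if lam_min > 0 else np.inf)

I2 = np.eye(2)
# Fully observed 2-state system vs. one with an unmeasured direction.
_, ui_obs = unobservability_index([I2, I2], [I2, I2])
_, ui_unobs = unobservability_index([I2], [np.array([[1.0, 0.0]])])
```

Restricting G to the rows and columns of the pose states, as done for Figure 13, isolates how well-constrained those states are independently of the bias states.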

FIGURE 13

Evolution of the UI of the submatrix that excludes all bias states, as the landmark number increases from 1 to 5 (a) With odometry measurements (b) Without odometry measurements

TABLE 3

Comparison of Average Position Errors from the Four-State Model for Three UASs Under the Various Sensing Topologies Shown in Figure 7

To evaluate the impact of landmark placement and relative vehicle trajectories on localization accuracy, the scenarios shown in Figure 14 are analyzed. Corresponding average position errors, estimation uncertainties, and UI values are summarized in Table 4. The scenarios in Figures 14(a)–14(e) explore different four-landmark configurations. Scenario 3 yields the lowest UI, indicating the highest observability, which correlates with reduced localization error and uncertainty. However, some scenarios show deviations from this trend, likely due to nonlinear measurement models, estimator inconsistencies, and sensor noise variations, as discussed in Section 4.5. Scenarios 6 and 7 assess the impact of vehicle trajectory profiles on localization accuracy. In both cases, UAS-2 localizes using range measurements to five landmarks. In scenario 6, UAS-1 relies on inter-vehicle sensing with UAS-2, whereas in scenario 7, UAS-3 does so instead. The differing trajectories of UAS-1 and UAS-3 enable a comparison of trajectory effects on performance. Scenario 7 exhibits lower position error and estimation uncertainty than scenario 6, aligning with its lower UI and underscoring the importance of relative vehicle positioning for accurate localization.

TABLE 4

Comparison of Average Position Errors from the Four-State Model for Three UASs Under the Various Sensing Topologies Shown in Figure 14

FIGURE 14

Various scenarios are considered to evaluate the effects of landmark positioning and vehicle trajectories on localization accuracy. (a) Scenario 1 (b) Scenario 2 (c) Scenario 3 (d) Scenario 4 (e) Scenario 5 (f) Scenario 6 (g) Scenario 7

5.3 Without Odometry

This section presents localization results for the seven-state EKF, assuming that odometry measurements are unavailable. The standard deviations (σ) for the IMU measurements are set as follows: 10 m/s² for linear accelerations a1, a2, and a3; 0.2 rad for Euler angles φ and θ; and 0.1 rad/s for angular rates p, q, and r. The range measurement noise is set to 3 m for UAS-to-landmark measurements and 4 m for UAS-to-UAS measurements. The uncertainty in bias is set to 0.79 m for landmark ranges, 0.94 m for inter-vehicle ranges, and 0.64 m/s² for accelerometer measurements.

The estimation uncertainty σ with increasing landmark number is shown in Figure 15 for both cooperative and non-cooperative scenarios, where CL consistently outperforms the non-cooperative case, especially when landmark availability is limited. Corresponding estimation errors are shown in Figure 17. Despite satisfying the theoretical observability, high estimation errors and σ values are observed in the absence of odometry. Additional landmarks reduce both metrics, although improvements taper off as the landmark count increases. CL yields lower errors overall, except in the three-landmark case for UAS-2, where the performance degrades slightly because of fusion with less reliable estimates from UAS-3. Meanwhile, UAS-3 benefits from this cooperation, highlighting the trade-off in shared information under asymmetric estimation quality.

FIGURE 15

Estimation results showcasing the average of metrics such as the UI and estimation uncertainty σ in the absence of odometry measurements as the number of landmarks increases from 1 to 5, with a comparison across cooperative and non-cooperative scenarios (a) Evolution of the average UI as the landmark number increases from 1 to 5 (b) Evolution of the average σ in pn as the landmark number increases from 1 to 5 (c) Evolution of the average σ in ψ as the landmark number increases from 1 to 5 (d) Evolution of the average σ in pe as the landmark number increases from 1 to 5

As shown in Figure 15(a), the UI exhibits a decreasing trend with more landmarks, similar to previous observations. However, a clear distinction between observable and unobservable cases is not evident. Analyzing the submatrix corresponding to only pose and velocity states reveals a trend in which the UI becomes negative on the logarithmic scale, although this behavior is less pronounced than in the odometry-enabled case. The consistency of the estimator is illustrated through the temporal evolution of errors and associated 3σ bounds for UAS-1 in the five-landmark cooperative scenario (Figures 16(c), 16(f), and 16(i)).

FIGURE 16

Evolution of error-3σ plots over time for UAS-1, for the five-landmark case with CL, based on the availability of odometry measurements (a) Error evolution in pn for the EKF over time with integrated odometry measurements (b) Error evolution in pn for the FG over time with integrated odometry measurements (c) Error evolution in pn over time in the absence of odometry measurements (d) Error evolution in pe for the EKF over time with integrated odometry measurements (e) Error evolution in pe for the FG over time with integrated odometry measurements (f) Error evolution in pe over time in the absence of odometry measurements (g) Error evolution in ψ for the EKF over time with integrated odometry measurements (h) Error evolution in ψ for the FG over time with integrated odometry measurements (i) Error evolution in ψ over time in the absence of odometry measurements

FIGURE 17

Average position error for each UAS with respect to the number of landmarks for corresponding cooperative and non-cooperative scenarios (a) Average position error for UAS-1 as the number of landmarks increases (b) Average position error for UAS-2 as the number of landmarks increases (c) Average position error for UAS-3 as the number of landmarks increases

6 CONCLUSIONS AND FUTURE WORK

CoNaV, a novel multi-vehicle testbed for CL, facilitates CL algorithm development, benchmarking, and deployment. Key contributions include an open-source, scalable testbed supporting multiple vehicle types and sensors, a centralized CL algorithm benchmarked with sequential and batch estimation techniques, and observability analysis. Experimental results show enhanced localization accuracy and robustness, highlighting the effectiveness of cooperative navigation in multi-agent systems. The current open-source data set includes two trajectory profiles, which will be expanded to include practical applications such as surveillance, area monitoring, target encirclement, and tracking.

Future work will expand CoNaV’s capabilities to support decentralized CL architectures by developing point-to-point communication networks, reducing reliance on a centralized ground station, and enabling greater scalability and resilience. Real-time processing will be enhanced to achieve reliable performance in high-density, low-latency network environments. Additionally, future efforts will investigate the impact of communication latency and packet loss on localization accuracy and observability. Expanding CoNaV’s compatibility with advanced estimation techniques and robust protocols will further solidify its role as a key resource for advancing CL technologies.

HOW TO CITE THIS ARTICLE

Boyinine, R., Ammapalli, J., Chakraborty, A., Sharma, R., Brink, K., & Taylor, C. (2025). Navigating together: The CoNaV testbed and framework for benchmarking cooperative localization. NAVIGATION, 72(4). https://doi.org/10.33012/navi.722

CONFLICT OF INTEREST

The authors declare no conflicts of interest.

ACKNOWLEDGMENTS

This research was supported by the Air Force Research Laboratory Munitions Directorate under Grant No. FA8651-21-1-0020, Distributed Cooperative Navigation & Path Planning. The authors thank the AirMasters RC Flying Club (North Bend, OH) for access to their flight test facility and the Robust Intelligent Sensing and Controls Lab members for their assistance with multi-vehicle testing. The views expressed are those of the authors and do not represent the official policy of the Department of Defense or the U.S. Government.

This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

REFERENCES

  1. Ammapalli, J., Boyinine, R., Thayer, R., Chakraborty, A., & Sharma, R. (2024). Developing RF ranging-based multi-rotor test-bed for cooperative localization. Proc. of the AIAA SCITECH 2024 Forum, Orlando, FL. https://doi.org/10.2514/6.2024-1780
  2. Arafat, M. Y., Alam, M. M., & Moh, S. (2023). Vision-based navigation techniques for unmanned aerial vehicles: Review and challenges. Drones, 7(2), 89. https://doi.org/10.3390/drones7020089
  3. Ardupilot. (2024). Pixhawk cube orange overview. Retrieved February 3, 2024, from https://ardupilot.org/copter/docs/common-thecubeorange-overview.html
  4. Arjmandi, Z., Kang, J., Park, K., & Sohn, G. (2020). Benchmark dataset of ultra-wideband radio based UAV positioning. Proc. of the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, 1–8. https://doi.org/10.1109/itsc45102.2020.9294440
  5. Bai, H., & Taylor, C. N. (2020). Future uncertainty-based control for relative navigation in GPS-denied environments. IEEE Transactions on Aerospace and Electronic Systems, 56(5), 3491–3501. https://doi.org/10.1109/taes.2020.2974052
  6. Betti Sorbelli, F., Corò, F., Das, S. K., Palazzetti, L., & Pinotti, C. M. (2022). Greedy algorithms for scheduling package delivery with multiple drones. Proc. of the 23rd International Conference on Distributed Computing and Networking, 31–39. https://doi.org/10.1145/3491003.3491028
  7. Boyinine, R., Ammapalli, J., Chakraborty, A., Sharma, R., Brink, K., & Taylor, C. (2025). Navigating together: The CoNaV testbed and framework for benchmarking cooperative localization [dataset] [MIT License]. https://doi.org/10.5281/zenodo.15844813
  8. Boyinine, R., Khanapuri, E., Chakraborty, A., & Sharma, R. (2023). On-demand landmark activation to aid navigation for advanced air mobility. Proc. of the AIAA SCITECH 2023 Forum, National Harbor, MD. https://doi.org/10.2514/6.2023-2707
  9. Boyinine, R., Sharma, R., & Brink, K. (2022). Observability based path planning for multi-agent systems to aid relative pose estimation. Proc. of the 2022 International Conference on Unmanned Aircraft Systems (ICUAS), Dubrovnik, Croatia, 912–921. https://doi.org/10.1109/icuas54217.2022.9836088
  10. Canciani, A., & Raquet, J. (2017). Airborne magnetic anomaly navigation. IEEE Transactions on Aerospace and Electronic Systems, 53(1), 67–80. https://doi.org/10.1109/TAES.2017.2649238
  11. Carrillo-Arce, L. C., Nerurkar, E. D., Gordillo, J. L., & Roumeliotis, S. I. (2013). Decentralized multi-robot cooperative localization using covariance intersection. Proc. of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 1412–1417. https://doi.org/10.1109/iros.2013.6696534
  12. Chakraborty, A., Misra, S., Sharma, R., Brink, K. M., & Taylor, C. N. (2019). Cooperative localization: Challenges and future directions. In Cooperative localization and navigation, 493–519. CRC Press. http://dx.doi.org/10.1201/9780429507229-24
  13. Fox, D., Burgard, W., Kruppa, H., & Thrun, S. (2000). A probabilistic approach to collaborative multi-robot localization. Autonomous Robots, 8(3), 325–344. https://doi.org/10.1023/a:1008937911390
  14. Frelinger, D. R., Kvitky, J., & Stanley, W. (1998). Proliferated autonomous weapons: An example of cooperative behavior. RAND Corporation. https://www.rand.org/pubs/documented_briefings/DB239.html
  15. Guo, K., Qiu, Z., Miao, C., Zaini, A. H., Chen, C.-L., Meng, W., & Xie, L. (2016). Ultra-wideband-based localization for quadcopter navigation. Unmanned Systems, 4(1), 23–34. https://doi.org/10.1142/s2301385016400033
  16. Hermann, R., & Krener, A. (1977). Nonlinear controllability and observability. IEEE Transactions on Automatic Control, 22(5), 728–740. https://doi.org/10.1109/tac.1977.1101601
  17. Hiatt, J., & Taylor, C. N. (2022). A comparison of correlation-agnostic techniques for magnetic navigation. Proc. of the 2022 25th International Conference on Information Fusion (FUSION), Linköping, Sweden, 1–7. https://doi.org/10.23919/fusion49751.2022.9841293
  18. Howard, A., Matark, M. J., & Sukhatme, G. S. (2002). Localization for mobile robot teams using maximum likelihood estimation. Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 1, Lausanne, Switzerland, 434–439. https://doi.org/10.1109/irds.2002.1041428
  19. Intel. (2024). Intel NUC board/kit/minipc. Retrieved April 18, 2024, from https://www.intel.com/content/dam/support/us/en/documents/intel-nuc/NUC12WS-TechProdSpec.pdf
  20. Iyidir, B., & Ozkazanc, Y. (2004). Jamming of GPS receivers. Proc. of the IEEE 12th Signal Processing and Communications Applications Conference, 2004, Kusadasi, Turkey, 747–750. https://doi.org/10.1109/siu.2004.1338639
  21. Kapoor, R., Ramasamy, S., Gardi, A., & Sabatini, R. (2017). UAV navigation using signals of opportunity in urban environments: A review. Energy Procedia, 110, 377–383. https://doi.org/10.1016/j.egypro.2017.03.156
  22. Kassas, Z. M., Khalife, J., Abdallah, A., & Lee, C. (2020). I am not afraid of the jammer: Navigating with signals of opportunity in GPS-denied environments. Proc. of the 33rd International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2020), 1566–1585. https://doi.org/10.33012/2020.17737
  23. Krener, A. J., & Ide, K. (2009). Measures of unobservability. Proc. of the 48th IEEE Conference on Decision and Control (CDC) held jointly with the 2009 28th Chinese Control Conference, Shanghai, China, 6401–6406. https://doi.org/10.1109/cdc.2009.5400067
  24. Liu, Y., Sun, Z., Xi, L., Zhang, L., Dong, W., Chen, C., Lu, M., Fu, H., & Deng, F. (2025). MMFW-UAV dataset: Multi-sensor and multi-view fixed-wing UAV dataset for air-to-air vision tasks. Scientific Data, 12(1). https://doi.org/10.1038/s41597-025-04482-2
  25. Lu, Y., Xue, Z., Xia, G.-S., & Zhang, L. (2018). A survey on vision-based UAV navigation. Geo-spatial Information Science, 21(1), 21–32. https://doi.org/10.1080/10095020.2017.1420509
  26. Lv, Q., Wei, H., Lin, H., & Zhang, Y. (2017). Design and implementation of multi robot research platform based on UWB. Proc. of the 2017 29th Chinese Control and Decision Conference (CCDC), Chongqing, China, 7246–7251. https://doi.org/10.1109/ccdc.2017.7978492
  27. Matlock, A., Holsapple, R., Schumacher, C., Hansen, J., & Girard, A. (2009). Cooperative defensive surveillance using unmanned aerial vehicles. Proc. of the 2009 American Control Conference, St. Louis, MO, 2612–2617. https://doi.org/10.1109/ACC.2009.5160051
  28. McNeil, A. J. (2022). Magnetic anomaly absolute positioning for hypersonic aircraft [Master’s thesis, Air Force Institute of Technology]. https://scholar.afit.edu/etd/5457
  29. Nerurkar, E. D., Roumeliotis, S. I., & Martinelli, A. (2009). Distributed maximum a posteriori estimation for multi-robot cooperative localization. Proc. of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 1402–1409. https://doi.org/10.1109/robot.2009.5152398
  30. Odroid. (2024). Odroid XU4. Retrieved March 19, 2024, from https://wiki.odroid.com/odroid-xu4/odroid-xu4
  31. Osechas, O., Kim, K. J., Parsons, K., & Sahinoglu, Z. (2015). Detecting multipath errors in terrestrial GNSS applications. Proc. of the 2015 International Technical Meeting of the Institute of Navigation, Dana Point, CA, 465–474. https://www.ion.org/publications/abstract.cfm?articleID=12645
  32. Queralta, J. P., Martinez Almansa, C., Schiano, F., Floreano, D., & Westerlund, T. (2020). UWB-based system for UAV localization in GNSS-denied environments: Characterization and dataset. Proc. of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, 4521–4528. https://doi.org/10.1109/iros45743.2020.9341042
  33. Rajendran, S., & Srinivas, S. (2020). Air taxi service for urban mobility: A critical review of recent developments, future challenges, and opportunities. Transportation Research Part E: Logistics and Transportation Review, 143, 102090. https://doi.org/10.1016/j.tre.2020.102090
  34. Rizzoli, G., Barbato, F., Caligiuri, M., & Zanuttigh, P. (2023). SynDrone – multi-modal UAV dataset for urban scenarios. arXiv. https://doi.org/10.48550/arxiv.2308.10491
  35. Sahawneh, L. R., & Brink, K. M. (2017). Factor graphs-based multi-robot cooperative localization: A study of shared information influence on optimization accuracy and consistency. Proc. of the 2017 International Technical Meeting of the Institute of Navigation, Monterey, CA, 819–838. https://doi.org/10.33012/2017.14895
  36. Saptharishi, M., Spence Oliver, C., Diehl, C. P., Bhat, K. S., Dolan, J. M., Trebi-Ollennu, A., & Khosla, P. K. (2002). Distributed surveillance and reconnaissance using multiple autonomous ATVs: CyberScout. IEEE Transactions on Robotics and Automation, 18(5), 826–836. https://doi.org/10.1109/tra.2002.804501
  37. Shane, L. (2024). ROS Noetic. Retrieved March 19, 2024, from https://wiki.ros.org/noetic
  38. Sharma, R. (2014). Observability based control for cooperative localization. Proc. of the 2014 International Conference on Unmanned Aircraft Systems (ICUAS), Orlando, FL, 134–139. https://doi.org/10.1109/icuas.2014.6842248
  39. Sharma, R., Quebe, S., Beard, R. W., & Taylor, C. N. (2013). Bearing-only cooperative localization: Simulation and experimental results. Journal of Intelligent & Robotic Systems, 72(3–4), 429–440. https://doi.org/10.1007/s10846-012-9809-z
  40. Sharma, R., & Taylor, C. (2008). Cooperative navigation of MAVs in GPS-denied areas. Proc. of the 2008 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Seoul, Korea (South), 481–486. https://doi.org/10.1109/mfi.2008.4648041
  41. Shi, Q., Cui, X., Zhao, S., Wen, J., & Lu, M. (2019). Range-only collaborative localization for ground vehicles. Proc. of the 32nd International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2019), Miami, FL, 2063–2077. https://doi.org/10.33012/2019.16886
  42. Silverman, L. M., & Meadows, H. E. (1967). Controllability and observability in time-variable linear systems. SIAM Journal on Control, 5(1), 64–73. https://doi.org/10.1137/0305005
  43. Sun, Q., Tian, Y., & Diao, M. (2018). Cooperative localization algorithm based on hybrid topology architecture for multiple mobile robot system. IEEE Internet of Things Journal, 5(6), 4753–4763. https://doi.org/10.1109/jiot.2018.2812179
  44. Taylor, C., & Gross, J. (2024). Factor graphs for navigation applications: A tutorial. NAVIGATION, 71(3). https://doi.org/10.33012/navi.653
  45. TDSR. (2024). UWB ranging and localization development kit. Retrieved February 3, 2024, from https://tdsr-uwb.com/ranging-and-localization-kit
  46. The MathWorks, Inc. (2024). Optimization toolbox user’s guide (R2024a). The MathWorks, Inc. https://www.mathworks.com/help/optim/ug/least-squares-model-fitting-algorithms.html
  47. Van Willigenburg, L. G., Stigter, J. D., & Molenaar, J. (2022). Sensitivity matrices as keys to local structural system properties of large-scale nonlinear systems. Nonlinear Dynamics, 107(3), 2599–2618. https://doi.org/10.1007/s11071-021-07125-4
  48. Voon. (2024). MAVROS. Retrieved March 19, 2024, from https://github.com/mavlink/mavros
  49. Wang, B., Rathinam, S., & Sharma, R. (2019). Landmark placement for cooperative localization and routing of unmanned vehicles. Proc. of the 2019 International Conference on Unmanned Aircraft Systems (ICUAS), Atlanta, GA, 33–42. https://doi.org/10.1109/icuas.2019.8798276
  50. Warner, J. S., & Johnston, R. G. (2003). GPS spoofing countermeasures. Homeland Security Journal, 25(2), 19–27. https://rntfnd.org/wp-content/uploads/GPS-Spoofing-Countermeasures-Los-Alamos-2003.pdf
  51. Yang, C., Strader, J., Gu, Y., Hypes, A., Canciani, A., & Brink, K. (2018). Cooperative UAV navigation using inter-vehicle ranging and magnetic anomaly measurements. Proc. of the 2018 AIAA Guidance, Navigation, and Control Conference, Kissimmee, FL. https://doi.org/10.2514/6.2018-1595
  52. Yu, H., Sharma, R., Beard, R. W., & Taylor, C. N. (2011). Observability-based local path planning and collision avoidance for micro air vehicles using bearing-only measurements. Proc. of the 2011 American Control Conference, San Francisco, CA, 4649–4654. https://doi.org/10.1109/acc.2011.5991095
  53. Yuan, S., Yang, Y., Nguyen, T. H., Nguyen, T.-M., Yang, J., Liu, F., Li, J., Wang, H., & Xie, L. (2024). MMAUD: A comprehensive multi-modal anti-UAV dataset for modern miniature drone threats. Proc. of the 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2745–2751. https://doi.org/10.1109/icra57147.2024.10610957
  54. Zhang, C., Tang, C., Wang, H., Lian, B., & Zhang, L. (2025). Data set for UWB cooperative navigation and positioning of UAV cluster. Scientific Data, 12(1). https://doi.org/10.1038/s41597-025-04808-0
  55. Zhu, J., & Kia, S. S. (2019). Cooperative localization under limited connectivity. IEEE Transactions on Robotics, 35(6), 1523–1530. https://doi.org/10.1109/tro.2019.2930404