Review Article
Open Access

Integrity of Visual Navigation—Developments, Challenges, and Prospects

Chen Zhu, Michael Meurer, and Christoph Günther
NAVIGATION: Journal of the Institute of Navigation June 2022, 69 (2) navi.518; DOI: https://doi.org/10.33012/navi.518
Chen Zhu (1), Michael Meurer (1, 2), and Christoph Günther (1, 3)

(1) Institute of Communications and Navigation, German Aerospace Center (DLR), Oberpfaffenhofen, Germany
(2) Chair of Navigation, RWTH Aachen University, Aachen, Germany
(3) Chair of Communications and Navigation, Technical University of Munich (TUM), Munich, Germany

Correspondence: [email protected]

REFERENCES

  1. Al Hage, J., Xu, P., & Bonnifait, P. (2019). High integrity localization with multi-lane camera measurements. 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France. https://doi.org/10.1109/ivs.2019.8813988
  2. Bay, H., Tuytelaars, T., & Van Gool, L. (2006). SURF: Speeded up robust features. Computer Vision – ECCV 2006, Graz, Austria, 404–417. https://doi.org/10.1007/11744023_32
  3. Bhamidipati, S., & Gao, G. X. (2019). SLAM-based integrity monitoring using GPS and fish-eye camera. Proc. of the 32nd International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2019), Miami, FL, 4116–4129. https://doi.org/10.33012/2019.17117
  4. Bhamidipati, S., & Gao, G. X. (2020). Integrity monitoring of Graph-SLAM using GPS and fish-eye camera. NAVIGATION, 67(3), 583–600. https://doi.org/10.1002/navi.381
  5. Blanch, J., Ene, A., Walter, T., & Enge, P. (2007). An optimized multiple hypothesis RAIM algorithm for vertical guidance. Proc. of the 20th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS 2007), Fort Worth, TX, 2924–2933. https://www.ion.org/publications/abstract.cfm?articleID=7644
  6. Blanch, J., Walter, T., Enge, P., Lee, Y., Pervan, B., Rippl, M., Spletter, A., & Kropp, V. (2015). Baseline advanced RAIM user algorithm and possible improvements. IEEE Transactions on Aerospace and Electronic Systems, 51(1), 713–732. https://doi.org/10.1109/taes.2014.130739
  7. Bulusu, S., Kailkhura, B., Li, B., Varshney, P. K., & Song, D. (2020). Anomalous instance detection in deep learning: A survey. 42nd IEEE Symposium on Security and Privacy, Princeton, NJ. https://www.osti.gov/servlets/purl/1631092
  8. Cadena, C., Carlone, L., Carrillo, H., Latif, Y., Scaramuzza, D., Neira, J., Reid, I., & Leonard, J. J. (2016). Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Transactions on Robotics, 32(6), 1309–1332. https://doi.org/10.1109/tro.2016.2624754
  9. Calhoun, S. M., & Raquet, J. (2016). Integrity determination for a vision based precision relative navigation system. 2016 IEEE/ION Position, Location, and Navigation Symposium (PLANS), Savannah, GA. https://doi.org/10.1109/plans.2016.7479713
  10. Calhoun, S. M., Raquet, J., & Peterson, G. (2015). Vision-aided integrity monitor for precision relative navigation systems. Proc. of the 2015 International Technical Meeting of the Institute of Navigation, Dana Point, CA, 756–767. https://www.ion.org/publications/abstract.cfm?articleID=12668
  11. Campos, C., Elvira, R., Rodríguez, J. J. G., Montiel, J. M. M., & Tardós, J. D. (2021). ORB-SLAM3: An accurate open-source library for visual, visual–inertial, and multimap SLAM. IEEE Transactions on Robotics, 37(6), 1874–1890. https://doi.org/10.1109/tro.2021.3075644
  12. Chen, C., Wang, B., Lu, C. X., Trigoni, N., & Markham, A. (2020). A survey on deep learning for localization and mapping: Towards the age of spatial machine intelligence. https://www.arxiv-vanity.com/papers/2006.12567
  13. Cvišić, I., Ćesić, J., Marković, I., & Petrović, I. (2017). SOFT-SLAM: Computationally efficient stereo visual simultaneous localization and mapping for autonomous unmanned aerial vehicles. Journal of Field Robotics, 35(4), 578–595. https://doi.org/10.1002/rob.21762
  14. Delmerico, J., Cieslewski, T., Rebecq, H., Faessler, M., & Scaramuzza, D. (2019). Are we ready for autonomous drone racing? The UZH-FPV drone racing dataset. 2019 International Conference on Robotics and Automation (ICRA), Montreal, Canada. https://doi.org/10.1109/icra.2019.8793887
  15. DeTone, D., Malisiewicz, T., & Rabinovich, A. (2018). SuperPoint: Self-supervised interest point detection and description. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, 224–236. https://doi.org/10.1109/cvprw.2018.00060
  16. Engel, J., Koltun, V., & Cremers, D. (2018). Direct sparse odometry. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(3), 611–625. https://doi.org/10.1109/tpami.2017.2658577
  17. Engel, J., Schöps, T., & Cremers, D. (2014). LSD-SLAM: Large-scale direct monocular SLAM. In D. Fleet, T. Pajdla, B. Schiele, & T. Tuytelaars (Eds.), Computer Vision – ECCV 2014. Lecture Notes in Computer Science (Vol. 8690, pp. 834–849). Springer. https://doi.org/10.1007/978-3-319-10605-2_54
  18. European Space Agency. (2011). Integrity. Navipedia. https://gssc.esa.int/navipedia/index.php/Integrity
  19. EUSPA. (2021a). Report on aviation user needs and requirements (Report No. EUSPA-MKD-AV-UREQ-250287). European Union Agency for the Space Programme. https://www.gsc-europa.eu/sites/default/files/sites/all/files/Report_on_User_Needs_and_Requirements_Aviation.pdf
  20. EUSPA. (2021b). Report on rail user needs and requirements (Report No. GSA-MKD-RL-UREQ-250286). European Union Agency for the Space Programme. https://www.gsc-europa.eu/sites/default/files/sites/all/files/Report_on_User_Needs_and_Requirements_Rail.pdf
  21. EUSPA. (2021c). Report on road user needs and requirements (Report No. GSA-MKD-RD-UREQ-250283). European Union Agency for the Space Programme. https://www.gsc-europa.eu/sites/default/files/sites/all/files/Report_on_User_Needs_and_Requirements_Road.pdf
  22. Fischler, M. A., & Bolles, R. C. (1981). Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6), 381–395. https://doi.org/10.1145/358669.358692
  23. Fryer, J. G., & Brown, D. C. (1986). Lens distortion for close-range photogrammetry. Photogrammetric Engineering and Remote Sensing, 52(1), 51–58. https://www.asprs.org/wp-content/uploads/pers/1986journal/jan/1986_jan_51-58.pdf
  24. Fu, L., Zhang, J., Li, R., Cao, X., & Wang, J. (2015). Vision-aided RAIM: A new method for GPS integrity monitoring in approach and landing phase. Sensors, 15(9), 22854–22873. https://doi.org/10.3390/s150922854
  25. Gao, X., Wang, R., Demmel, N., & Cremers, D. (2018). LDSO: Direct sparse odometry with loop closure. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 2198–2204. https://doi.org/10.1109/IROS.2018.8593376
  26. Geiger, A., Lenz, P., & Urtasun, R. (2012). Are we ready for autonomous driving? The KITTI vision benchmark suite. 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI. https://doi.org/10.1109/cvpr.2012.6248074
  27. Hartley, R., & Zisserman, A. (2003). Multiple view geometry in computer vision. Cambridge University Press. https://doi.org/10.1017/cbo9780511811685
  28. Heikkila, J., & Silven, O. (1997). A four-step camera calibration procedure with implicit image correction. Proc. of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, 1106–1112. https://doi.org/10.1109/cvpr.1997.609468
  29. Joerger, M., Chan, F.-C., & Pervan, B. (2014). Solution separation versus residual-based RAIM. NAVIGATION, 61(4), 273–291. https://doi.org/10.1002/navi.71
  30. Kaess, M., Johannsson, H., Roberts, R., Ila, V., Leonard, J. J., & Dellaert, F. (2011). iSAM2: Incremental smoothing and mapping using the Bayes tree. The International Journal of Robotics Research, 31(2), 216–235. https://doi.org/10.1177/0278364911430419
  31. Kendall, A., & Cipolla, R. (2016). Modelling uncertainty in deep learning for camera relocalization. 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 4762–4769. https://doi.org/10.1109/icra.2016.7487679
  32. Kendall, A., Grimes, M., & Cipolla, R. (2015). PoseNet: A convolutional network for real-time 6-DOF camera relocalization. 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 2938–2946. https://doi.org/10.1109/iccv.2015.336
  33. Kumar, A., Braud, T., Tarkoma, S., & Hui, P. (2020). Trustworthy AI in the age of pervasive computing and big data. 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), Austin, TX. https://doi.org/10.1109/percomworkshops48775.2020.9156127
  34. Langley, R. B. (1999). The integrity of GPS. GPS World, 10(3), 60–63. http://www2.unb.ca/gge/Resources/gpsworld.march99.pdf
  35. Lepetit, V., Moreno-Noguer, F., & Fua, P. (2009). EPnP: An accurate O(n) solution to the PnP problem. International Journal of Computer Vision, 81(2). https://doi.org/10.1007/s11263-008-0152-6
  36. Liu, W., Wang, Z., Liu, X., Zeng, N., Liu, Y., & Alsaadi, F. E. (2017). A survey of deep neural network architectures and their applications. Neurocomputing, 234, 11–26. https://doi.org/10.1016/j.neucom.2016.12.038
  37. Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), 91–110. https://doi.org/10.1023/b:visi.0000029664.99615.94
  38. Lu, X. X. (2018). A review of solutions for perspective-n-point problem in camera pose estimation. Journal of Physics: Conference Series, 1087(5). https://doi.org/10.1088/1742-6596/1087/5/052009
  39. Lynen, S., Zeisl, B., Aiger, D., Bosse, M., Hesch, J., Pollefeys, M., Siegwart, R., & Sattler, T. (2020). Large-scale, real-time visual–inertial localization revisited. The International Journal of Robotics Research, 39(9), 1061–1084. https://doi.org/10.1177/0278364920931151
  40. Mario, C., & Rife, J. (2010). Integrity monitoring of vision-based automotive lane detection methods. Proc. of the 23rd International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS 2010), Portland, OR, 245–255. https://www.ion.org/publications/abstract.cfm?articleID=9152
  41. Meer, P., Mintz, D., Rosenfeld, A., & Kim, D. Y. (1991). Robust regression methods for computer vision: A review. International Journal of Computer Vision, 6(1), 59–70. https://doi.org/10.1007/bf00127126
  42. Mur-Artal, R., & Tardós, J. D. (2017). ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Transactions on Robotics, 33(5), 1255–1262. https://doi.org/10.1109/TRO.2017.2705103
  43. Polic, M., Steidl, S., Albl, C., Kukelova, Z., & Pajdla, T. (2020). Uncertainty based camera model selection. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA. https://doi.org/10.1109/cvpr42600.2020.00603
  44. Qin, T., Li, P., & Shen, S. (2018). VINS-Mono: A robust and versatile monocular visual-inertial state estimator. IEEE Transactions on Robotics, 34(4), 1004–1020. https://doi.org/10.1109/tro.2018.2853729
  45. Radwan, N., Valada, A., & Burgard, W. (2018). VLocNet++: Deep multitask learning for semantic visual localization and odometry. IEEE Robotics and Automation Letters, 3(4), 4407–4414. https://doi.org/10.1109/lra.2018.2869640
  46. RTCA. (2004). Minimum aviation system performance standards for local area augmentation system (Report No. RTCA DO-245).
  47. Rublee, E., Rabaud, V., Konolige, K., & Bradski, G. (2011). ORB: An efficient alternative to SIFT or SURF. 2011 International Conference on Computer Vision, Barcelona, Spain. https://doi.org/10.1109/iccv.2011.6126544
  48. Sattler, T., Zhou, Q., Pollefeys, M., & Leal-Taixé, L. (2019). Understanding the limitations of CNN-based absolute camera pose regression. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA. https://doi.org/10.1109/cvpr.2019.00342
  49. Seibold, C., Hilsmann, A., & Eisert, P. (2017). Model-based motion blur estimation for the improvement of motion tracking. Computer Vision and Image Understanding, 160, 45–56. https://doi.org/10.1016/j.cviu.2017.03.005
  50. Shi, J., & Tomasi, C. (1994). Good features to track. Proc. of IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA. https://doi.org/10.1109/cvpr.1994.323794
  51. Shytermeja, E., Garcia-Pena, A., & Julien, O. (2014). Proposed architecture for integrity monitoring of a GNSS/MEMS system with a fisheye camera in urban environment. International Conference on Localization and GNSS, Helsinki, Finland. https://doi.org/10.1109/ICL-GNSS.2014.6934179
  52. Sinha, A., Namkoong, H., Volpi, R., & Duchi, J. (2018). Certifying some distributional robustness with principled adversarial training. 6th International Conference on Learning Representations, Vancouver, Canada.
  53. Sturm, P., Ramalingam, S., Tardif, J.-P., Gasparini, S., & Barreto, J. (2011). Camera models and fundamental concepts used in geometric computer vision. Foundations and Trends in Computer Graphics and Vision, 6(1–2). https://doi.org/10.1561/0600000023
  54. Tossaint, M., Samson, J., Toran, F., Ventura-Traveset, J., Hernandez-Pajares, M., Juan, J. M., Sanz, J., & Ramos-Bosch, P. (2007). The Stanford – ESA integrity diagram: A new tool for the user domain SBAS integrity assessment. NAVIGATION, 54(2), 153–162. https://doi.org/10.1002/j.2161-4296.2007.tb00401.x
  55. Wang, S., Zhan, X., Fu, Y., & Zhai, Y. (2020). Feature-based visual navigation integrity monitoring for urban autonomous platforms. Aerospace Systems, 3(3), 167–179. https://doi.org/10.1007/s42401-020-00057-8
  56. Yang, C., Vadlamani, A., Soloviev, A., Veth, M., & Taylor, C. (2018). Feature matching error analysis and modeling for consistent estimation in vision-aided navigation. NAVIGATION, 65(4), 609–628. https://doi.org/10.1002/navi.265
  57. Zhang, Z. (2000). A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11), 1330–1334. https://doi.org/10.1109/34.888718
  58. Zhao, H., Shi, Y., Tong, X., Ying, X., & Zha, H. (2020). A simple yet effective pipeline for radial distortion correction. 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates. https://doi.org/10.1109/icip40778.2020.9191107
  59. Zhou, L., & Kaess, M. (2019). An efficient and accurate algorithm for the perspective-n-point problem. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China. https://doi.org/10.1109/iros40897.2019.8968482
  60. Zhou, Q., Sattler, T., Pollefeys, M., & Leal-Taixé, L. (2020). To learn or not to learn: Visual localization from essential matrices. 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France. https://doi.org/10.1109/icra40945.2020.9196607
  61. Zhu, C. (2020). Cooperative vision for swarm navigation [Doctoral dissertation, Technische Universität München]. https://mediatum.ub.tum.de/doc/1486567/1486567.pdf
  62. Zhu, C., Giorgi, G., & Günther, C. (2017). Planar pose estimation using a camera and single-station ranging measurements. Proc. of the 30th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2017), Portland, OR. https://doi.org/10.33012/2017.15219
  63. Zhu, C., Giorgi, G., & Günther, C. (2018). 2D relative pose and scale estimation with monocular cameras and ranging. NAVIGATION, 65(1), 25–33. https://doi.org/10.1002/navi.223
  64. Zhu, C., Joerger, M., & Meurer, M. (2020). Quantifying feature association error in camera-based positioning. 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS), Portland, OR. https://doi.org/10.1109/plans46316.2020.9109919
  65. Zhu, C., Steinmetz, C., Belabbas, B., & Meurer, M. (2019a). Feature error model for integrity of pattern-based visual positioning. Proc. of the 32nd International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2019), Miami, FL, 2254–2268. https://doi.org/10.33012/2019.16956
  66. Zhu, C., Steinmetz, C., Belabbas, B., & Meurer, M. (2019b). Six degrees-of-freedom dilution of precision for integrity of camera-based localization. Proc. of the 32nd International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2019), Miami, FL. https://doi.org/10.33012/2019.17020
  67. Zhu, Z., & Taylor, C. (2017). Conservative uncertainty estimation in map-based vision-aided navigation. IEEE Transactions on Aerospace and Electronic Systems, 53(2), 941–949. https://doi.org/10.1109/taes.2017.2667278
Keywords: integrity, robust, safety, vision

Unless otherwise noted, NAVIGATION content is licensed under a Creative Commons CC BY 4.0 License.
