
Recovering the position and orientation of a mobile robot from a single image of identified landmarks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

6 Scopus citations

Abstract

This paper introduces a novel self-localization algorithm for mobile robots that recovers the robot's position and orientation from a single image of identified landmarks taken by an onboard camera. The visual angle between two landmarks can be derived from their projections in the same image. The distances from the optical center to the landmarks can then be calculated from the visual angles and the known landmark positions using the law of cosines. The robot's position is determined by trilateration, and its orientation is then computed from the recovered position, the landmark positions, and their projections. Extensive simulations have been carried out, and a comprehensive error analysis provides insight into how to improve localization accuracy.
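The trilateration step described in the abstract can be sketched in a few lines. This is a minimal 2-D illustration, not the paper's implementation: it assumes the ranges from the optical center to three non-collinear landmarks have already been recovered (e.g., via the law-of-cosines step), and all names and the synthetic geometry are invented for the example.

```python
import math

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Recover a 2-D position from three known landmark positions and ranges.

    Subtracting the circle equation centered at p1 from those centered at
    p2 and p3 cancels the quadratic terms, leaving a 2x2 linear system in
    (x, y), solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero iff the three landmarks are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Synthetic check: place a robot, compute exact ranges, recover the position.
landmarks = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
true_pos = (1.0, 2.0)
ranges = [math.dist(true_pos, p) for p in landmarks]
est = trilaterate(*landmarks, *ranges)  # est ≈ (1.0, 2.0)
```

With noise-free ranges the recovery is exact up to floating-point error; the paper's error analysis concerns how errors in the measured visual angles propagate through the range computation into this step.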

Original language: English
Title of host publication: Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2007
Pages: 1065-1070
Number of pages: 6
DOIs
State: Published - 2007
Event: 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2007 - San Diego, CA, United States
Duration: Oct 29, 2007 – Nov 2, 2007

Publication series

Name: IEEE International Conference on Intelligent Robots and Systems

Conference

Conference: 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2007
Country/Territory: United States
City: San Diego, CA
Period: 10/29/07 – 11/2/07

