

Real-Time Appearance-Based Mapping





RTAB-Map (Real-Time Appearance-Based Mapping) is an RGB-D, stereo and lidar graph-based SLAM approach built on an incremental appearance-based loop closure detector. The loop closure detector uses a bag-of-words approach to determine how likely it is that a new image comes from a previous location rather than a new one. When a loop closure hypothesis is accepted, a new constraint is added to the map's graph, and a graph optimizer then minimizes the errors in the map. A memory management approach limits the number of locations used for loop closure detection and graph optimization, so that real-time constraints are always respected even in large-scale environments. RTAB-Map can be used alone with a handheld Kinect, a stereo camera or a 3D lidar for 6DoF mapping, or on a robot equipped with a laser rangefinder for 3DoF mapping.
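The loop closure idea above can be sketched in a few lines of Python. This is a simplified illustration, not RTAB-Map's actual implementation: the visual vocabulary, the cosine-similarity score and the acceptance threshold here are placeholder assumptions standing in for RTAB-Map's trained vocabulary, Bayesian filter and memory management.

```python
# Simplified sketch of appearance-based loop closure detection with a
# bag-of-words model. Images are assumed to be already quantized into
# lists of visual-word ids; real systems quantize feature descriptors
# (e.g. SURF/ORB) against a vocabulary tree.
from collections import Counter
import math

def bow_histogram(visual_words):
    """Visual-word ids -> word-frequency histogram."""
    return Counter(visual_words)

def similarity(h1, h2):
    """Cosine similarity between two bag-of-words histograms."""
    dot = sum(h1[w] * h2[w] for w in set(h1) & set(h2))
    n1 = math.sqrt(sum(v * v for v in h1.values()))
    n2 = math.sqrt(sum(v * v for v in h2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def detect_loop_closure(new_words, memory, threshold=0.5):
    """Return the id of the most similar past location, or None.

    `memory` maps location id -> histogram; in RTAB-Map only the
    working-memory subset of locations would be searched here.
    """
    h_new = bow_histogram(new_words)
    best_id, best_score = None, 0.0
    for loc_id, h_old in memory.items():
        s = similarity(h_new, h_old)
        if s > best_score:
            best_id, best_score = loc_id, s
    return best_id if best_score >= threshold else None

# Usage: two remembered locations, then a query image that revisits
# location 0 (shares three of its four visual words).
memory = {0: bow_histogram([1, 2, 3, 4]), 1: bow_histogram([5, 6, 7, 8])}
print(detect_loop_closure([1, 2, 3, 9], memory))  # prints 0
```

When a match is returned, the corresponding loop closure constraint would be added to the pose graph before optimization; when `None` is returned, the image is stored as a new location.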

Illumination-Invariant Visual Re-Localization

Lidar and Visual SLAM

Simultaneous Planning, Localization and Mapping (SPLAM)

Multi-session SLAM

Loop closure detection


Available for: ROS, Ubuntu, Mac OS X, Windows, iOS, Google Tango, Raspberry Pi, Docker. Videos on YouTube.





Privacy Policy

The RTAB-Map App on the Google Play Store or Apple Store requires access to the camera to record images that are used to create the map. When saving, a database containing these images is created and stored locally on the device (on the SD card, under the RTAB-Map folder). While location permission is required to install RTAB-Map Tango, GPS coordinates are not saved by default; the option “Settings->Mapping…->Save GPS” must be enabled first. RTAB-Map requires read/write access only to the RTAB-Map folder, in order to save, export and open maps. RTAB-Map doesn't access any other information outside the RTAB-Map folder, and doesn't share information over the Internet unless the user explicitly exports a map to Sketchfab or anywhere else, for which RTAB-Map needs the network. In that case, the user will be asked for authorization (OAuth2) by Sketchfab (see their Privacy Policy here).

This website uses Google Analytics. See their Privacy Policy here.


What’s new

November 2023

We had new papers published this year from a very fun project on underground mine scanning. Here is a video of the SLAM part of the project, done with RTAB-Map (an early field test we did before going into the mines): Watch the video

Related papers:

March 2023

New release v0.21.0!

June 2022

A new paper has been published: Multi-Session Visual SLAM for Illumination-Invariant Re-Localization in Indoor Environments. The general idea is to map the same environment multiple times to capture the illumination variations caused by natural and artificial lighting, so that the robot can afterwards localize at any hour of the day. For more details, see this page and the linked paper. The paper includes great comparisons of robustness to illumination variations between binary descriptors (BRIEF/ORB, BRISK), float descriptors (SURF/SIFT/KAZE/DAISY) and learned descriptors (SuperPoint).

January 2022

Added demo for car mapping and localization with CitySim simulator and CAT Vehicle: Watch the video

December 2021

Added an indoor drone visual navigation example using move_base, PX4 and mavros: Watch the video

More info on the rtabmap-drone-example github repo.

June 2021

December 2020

New release v0.20.7!

August 2020

New release v0.20.3!

July 2020

New release v0.20.2!

November 2019


September 2017

July 2017

March 2017

February 2017

October 2016

July 2016

June 2016

February 2016

October 2015

September 2015

August 2015

October 2014

September 2014

August 2014

July 2014

June 2014