Enabling my robots to wander about and then successfully return to their charging dock has been the major unfulfilled goal of my last seven years of coding with the GoPiGo3 platform (and, briefly, the Create3 platform).
I have investigated:
- Dead Reckoning with odometry
- Dead Reckoning with two different inertial measurement units plus odometry
- OpenCV image and ArUco marker recognition
- ROS Mapping, Localization, and Navigation with odometry, inertial measurement unit, and LIDAR sensors
- and, briefly, ROS 3D Visual Localization and Mapping with the OAK-D-Lite sensor (RTABmap with the Create3)
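As a sketch of the simplest approach above, differential-drive dead reckoning integrates wheel encoder ticks into a pose estimate. The constants and function below are illustrative, not actual GoPiGo3 specifications or API calls:

```python
import math

# Illustrative constants -- NOT actual GoPiGo3 specifications
WHEEL_DIAMETER_M = 0.066   # assumed wheel diameter in meters
WHEEL_BASE_M = 0.117       # assumed distance between wheels in meters
TICKS_PER_REV = 360        # assumed encoder ticks per wheel revolution

M_PER_TICK = math.pi * WHEEL_DIAMETER_M / TICKS_PER_REV

def update_pose(x, y, theta, d_left_ticks, d_right_ticks):
    """Advance an (x, y, theta) pose estimate from encoder tick deltas."""
    d_left = d_left_ticks * M_PER_TICK
    d_right = d_right_ticks * M_PER_TICK
    d_center = (d_left + d_right) / 2.0            # distance traveled by robot center
    d_theta = (d_right - d_left) / WHEEL_BASE_M    # change in heading (radians)
    # Integrate using the midpoint heading for better accuracy over an arc
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    # Normalize heading to (-pi, pi]
    theta = (theta + d_theta + math.pi) % (2 * math.pi) - math.pi
    return x, y, theta

# Driving straight: equal tick deltas advance x; heading is unchanged
x, y, theta = update_pose(0.0, 0.0, 0.0, 360, 360)
```

Because each update compounds encoder quantization, wheel slip, and wheel-base error, the estimate drifts without bound, which is exactly why dead reckoning alone never got my robots reliably back to the dock.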
The RTABmap technology was the primary driver for upgrading the GoPiGo3-based Dave to the Raspberry Pi 5.
GoPi5Dave has still not reached the capabilities of my Pi5 Create3-Wali bot, which could wander using dead reckoning and dock autonomously when the dock was “in sight”.
I have long felt that “dumbing down / slowing down” very smart robotics behaviors, and relying on vision, will be the key to enabling a Raspberry Pi powered GoPiGo3 robot to achieve “highly intelligent autonomy”.
The research below seems to point to exactly this approach: