May get there one day.
Dave “ran” a 1k with me “driving” - it would be an advance for Dave to be able to “sidewalk lane follow” for 1k.
With Create3-Wali, I managed to get RTABmap to produce a 3D point cloud a couple of times before the Create3 crashed. It would be interesting to understand what Dave can actually “see” with the wide camera and a “won’t die” Dave node.
Dave is a “16 tick encoder” GoPiGo3, which means his encoders actually tick off 5.33 ticks per degree of wheel rotation, even though the API only reports whole degrees. It may be that Dave could have better heading odometry if I coded the ros2_gopigo3_node to use the raw encoder counts instead of the whole-degree values from the GoPiGo3 API. The wheel slip would still be there, and the not driving straight, and …
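(A minimal sketch of what “use the raw ticks” might look like for heading odometry. The 5.33 comes from 16 ticks per motor revolution times the 120:1 gearing divided by 360. The constants and the raw-tick deltas are assumptions for illustration, not the actual ros2_gopigo3_node code or GoPiGo3 API.)

```python
# Heading odometry from raw encoder ticks instead of the whole-degree values
# the GoPiGo3 API returns.  Constants below are assumed, not measured on Dave.

import math

TICKS_PER_MOTOR_REV = 16        # "16 tick encoder"
GEAR_RATIO = 120                # assumed GoPiGo3 gear motor ratio
TICKS_PER_WHEEL_DEGREE = TICKS_PER_MOTOR_REV * GEAR_RATIO / 360.0  # ~5.33
WHEEL_DIAMETER_M = 0.0665       # assumed wheel diameter
WHEEL_BASE_M = 0.117            # assumed distance between wheels

def wheel_travel_m(delta_ticks):
    """Convert a raw tick delta to linear wheel travel in meters."""
    wheel_degrees = delta_ticks / TICKS_PER_WHEEL_DEGREE
    return math.pi * WHEEL_DIAMETER_M * (wheel_degrees / 360.0)

def heading_change_rad(delta_left_ticks, delta_right_ticks):
    """Differential-drive heading change from raw tick deltas."""
    d_left = wheel_travel_m(delta_left_ticks)
    d_right = wheel_travel_m(delta_right_ticks)
    return (d_right - d_left) / WHEEL_BASE_M

# A one-tick difference between the wheels is ~0.19 wheel-degrees,
# much finer than the 1-degree steps the API's integer degrees allow.
print(math.degrees(heading_change_rad(0, 1)))
```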
There are some robots that do not have wheel encoders. It would be interesting to know if Dave would navigate more reliably without the encoder odometry feeding into the localization - pure LIDAR, or, since Dave has an MPU9250 IMU that I never built a publisher node for, perhaps LIDAR plus IMU would work better than LIDAR plus encoder odometry.
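(That never-built IMU publisher might start out something like this sketch - a bare rclpy node publishing sensor_msgs/Imu. The read_mpu9250() helper is a placeholder for whatever I2C driver library I would end up using, and the topic name and rate are guesses.)

```python
# Minimal sketch of an MPU9250 publisher node for Dave - not working code,
# just the shape of it.  read_mpu9250() is a stand-in for a real driver.

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu

def read_mpu9250():
    """Placeholder: return ((ax, ay, az) m/s^2, (gx, gy, gz) rad/s) from the IMU."""
    return (0.0, 0.0, 9.8), (0.0, 0.0, 0.0)

class ImuPublisher(Node):
    def __init__(self):
        super().__init__('mpu9250_node')
        self.pub = self.create_publisher(Imu, 'imu/data_raw', 10)
        self.timer = self.create_timer(0.02, self.publish_imu)   # ~50 Hz

    def publish_imu(self):
        accel, gyro = read_mpu9250()
        msg = Imu()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'imu_link'
        msg.linear_acceleration.x, msg.linear_acceleration.y, msg.linear_acceleration.z = accel
        msg.angular_velocity.x, msg.angular_velocity.y, msg.angular_velocity.z = gyro
        msg.orientation_covariance[0] = -1.0   # no orientation estimate published
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = ImuPublisher()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```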
The last two are “platform limitation workarounds” which I just cannot stomach anymore, especially after seeing how wonderful the Create3 odometry was. (And it could find its dock and dock itself, had bumpers, and and and … if only it hadn’t died when I fired up RTABmap, and now perhaps if only the iRobot company wasn’t sinking.)
It just feels so “for what?” in light of reading what a team of PhDs did for Astro:
This is what I had in mind for Dave to be able to do:
but in truth, I don’t understand the vocabulary, so it is unlikely I could have taught Dave “where to find objects”, let alone learned how to extend the object recognition model as needed. I’m just not able to “go where no man has gone before” alone. (Interesting that they were using a TurtleBot3 for that investigation - it also has far superior odometry, and their “home” was very sterile compared with my real nasty home.)
Then there is the whole field of machine learning (the Donkey Car approach), where I would drive Dave to the kitchen and back to his dock twenty times, and Dave would build a model from the encoders and drive commands so he could drive himself to the kitchen and back. And then add a “drive around the house” self-learned model from me driving him around the house twenty times. It works for Donkey Cars with a couple of ultrasonic sensors, I think.
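(A toy sketch of that record-then-replay idea: log (sensor, drive-command) pairs while a human drives, then at drive time pick the command from the most similar logged situation. A real Donkey Car trains a small neural network rather than doing a nearest-neighbor lookup, and the feature vector of encoder counts plus a couple of distance readings here is a hypothetical stand-in, not working Dave code.)

```python
# Toy behavior-cloning sketch: record what the human did, replay the command
# recorded for the closest matching situation.

import math

training_log = []   # list of (features, (linear, angular)) pairs

def record_sample(features, linear, angular):
    """Call this each control cycle while a human is driving."""
    training_log.append((tuple(features), (linear, angular)))

def predict_command(features):
    """Return the drive command recorded for the most similar situation."""
    return min(training_log, key=lambda sample: math.dist(sample[0], features))[1]

# Recording phase (twenty runs to the kitchen would fill training_log):
record_sample([120, 118, 0.75, 0.80], linear=0.15, angular=0.0)
record_sample([560, 420, 0.30, 0.80], linear=0.10, angular=0.4)

# Self-driving phase: look up what "I" did in the closest matching situation.
print(predict_command([130, 125, 0.70, 0.78]))
```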