Rather than “translate” the HoROS code, I went searching for “ROS 2 scan wander”, confident someone had already “done the work”. I found a very complete set of “ROS 2 Explorer/Wanderer” nodes built around the TurtleBot3.
After a few fixes to bring the code up to ROS 2 Humble conventions, I was able to launch my “ROS2 Wanderer”, but it simply spun around claiming there were obstacles all around.
Many examples are designed for simulation, where message passing never drops a topic and sensors are perfect.
The code had no “real sensor” accommodation, so the zero readings looked like obstacles. The HoROS Chapter 8 wanderAround node likewise assumes a perfect sensor and a perfect message channel.
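A minimal sketch of the kind of “real sensor” accommodation I mean: treat zero, NaN, and out-of-window readings as “nothing seen” instead of “obstacle at 0 m”. The cutoff values below are placeholders; a real node should use `scan_msg.range_min` and `scan_msg.range_max` from the actual message.

```python
import math

def sanitize_ranges(ranges, range_min=0.12, range_max=10.0):
    """Replace invalid LaserScan readings with +inf so they are
    never mistaken for a nearby obstacle.

    A reading is treated as invalid if it is exactly 0.0 (a common
    'no return' marker), NaN, or outside the sensor's rated window.
    """
    clean = []
    for r in ranges:
        if r == 0.0 or math.isnan(r) or r < range_min or r > range_max:
            clean.append(math.inf)  # "nothing seen", not "obstacle at 0 m"
        else:
            clean.append(r)
    return clean
```

With this in place, `min(sanitize_ranges(scan_msg.ranges))` gives the closest real obstacle instead of a phantom wall at zero distance.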
I understand why books such as Hands On ROS For Robotics Programming and ROS.org tutorials introduce concepts in simulation (simplifies learning, standardizes learning environment), but the “Hands On” with an autonomous, self-contained, physical GoPiGo3 robot with actual sensors in a real world environment is my definition of success.
Why does that always end up being an “exploration”?
*** range index - left 420 front 279 back 560 right 140
left: 0.801 cnt: 310
front: 2.519 cnt: 113 <<-- 70% of readings were zero!
back: 0.733 cnt: 378
right: 1.398 cnt: 174 <<-- 54% of readings were zero!
scan_msg.ranges: 2.562 always 0? no
scan_msg.ranges: 2.526 always 0? no
scan_msg.ranges: 2.519 always 0? no
scan_msg.ranges: 2.583 always 0? no
scan_msg.ranges: 2.726 always 0? no
In my limited learning of ROS, one thing I saw is the importance of coding to the limitations of the sensors. For example, if a range sensor is known to be inaccurate within (near) or beyond (far) certain limits, one’s code must take that into account. In addition, if there are “unlikely” or “impossible” readings, that also needs to be taken into account. Finally, there is coding to deal with a malfunction.
A range reading of exactly 0.000000… could fall into one or more of these categories. In the absence of true knowledge, a reading of exactly zero sure “feels” like an error. How (and when and how often) is the lidar calibrated?
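Those categories (too near, too far, impossible, malfunction) can be sketched as a tiny classifier. The thresholds here are illustrative, not the X4’s actual spec:

```python
import math

def classify_reading(r, range_min=0.12, range_max=10.0):
    """Bucket a single range reading into the categories above.
    range_min/range_max are placeholder values for illustration."""
    if r == 0.0:
        return "no-return"    # exactly zero: sensor saw nothing, or an error
    if math.isnan(r) or r < 0.0:
        return "impossible"   # malfunction or corrupted message
    if r < range_min:
        return "too-near"     # inside the sensor's blind zone
    if r > range_max:
        return "too-far"      # beyond the reliable range
    return "valid"
```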
The YDLIDAR X4 driver has some parameters to set the minimum valid range and the maximum range to report, and a million other parameters. The LIDAR doesn’t need calibration, but your URDF laser_frame link has to be accurate. I documented a procedure for tuning the laser_frame link.
Right, but most of the examples folks teach with are for simulated sensors in a simulated world, so they conveniently omit the coding students will need to move from the idea of the sensor to the reality of the sensor.
The most important reality ignored is that the real world does not have big, non-black, floor-to-above-sensor-height walls, so robots with LIDAR will stall their motors on office chair legs, my black filing cabinet, my two black UPSes, and my black computer case (unless you are @keithW and you build a simulated-reality playground for your robot).
The second most important reality ignored is sending an “all stop” /cmd_vel if the user presses Ctrl-C. I always have to have teleop_keyboard running in a separate window, and immediately after pressing Ctrl-C to quit a program, I have to quickly click into the teleop_keyboard terminal window and press the spacebar to stop the robot. Simulations are so convenient, and reality is so not.
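The fix I have in mind is roughly this pattern: wrap the spin loop so that one last zero /cmd_vel always goes out on the way down, whether the program exits normally, crashes, or gets a Ctrl-C (which rclpy surfaces as a KeyboardInterrupt). A sketch, where `publisher` stands in for a /cmd_vel publisher and `zero_msg` for a zeroed geometry_msgs Twist:

```python
def spin_with_estop(spin_fn, publisher, zero_msg):
    """Run the node's spin loop, guaranteeing one final all-stop
    command on the way out: normal exit, crash, or Ctrl-C.

    spin_fn would be something like `lambda: rclpy.spin(node)`;
    publisher is anything with a publish(msg) method."""
    try:
        spin_fn()
    finally:
        publisher.publish(zero_msg)  # robot halts no matter how we exit
```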
Interestingly I wrote some “motor stall sensing” functions for Carl, but was not satisfied with the performance. Stall sensing really should be stressed more in robot programming. (Another iRobot Create3 status easily accessible that I did not try to implement in my ROS 2 GoPiGo3 node.)
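A stall detector can be sketched without any ROS plumbing: if we are commanding motion but the wheel encoders have stopped changing over a short window, something is probably jammed. The window size and tick threshold below are guesses for illustration, not measured GoPiGo3 values:

```python
from collections import deque

class StallDetector:
    """Flag a probable motor stall: commanded motion plus frozen
    encoders over a short window of control cycles."""
    def __init__(self, window=5, min_ticks=2):
        self.window = window        # samples to examine
        self.min_ticks = min_ticks  # minimum encoder movement expected
        self.history = deque(maxlen=window)

    def update(self, commanded_speed, encoder_ticks):
        """Call once per control cycle; returns True on suspected stall."""
        self.history.append(encoder_ticks)
        if abs(commanded_speed) < 1e-3 or len(self.history) < self.window:
            return False  # not moving on purpose, or still warming up
        moved = abs(self.history[-1] - self.history[0])
        return moved < self.min_ticks  # commanded motion but wheels frozen
```

On a suspected stall, the node could zero /cmd_vel immediately, before the drain takes the Raspberry Pi down with it.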
Actually, this may be the new “most important” missing item from my ROS 2 GoPiGo3 node. In testing my wanderer program all was going well until Dave got stuck under a cabinet, which stalled the motors, which put so much drain on the power system that the Raspberry Pi was instantly “out cold”.
(No red light on processor board … this is not good…)
Also learned Dave cannot walk and talk at the same time under ROS (at least not in the same spin cycle).
I’m going to have to create a ROS 2 TTS node and send it stuff to say.
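One way to keep talking from blocking walking is to decouple the TTS call from the subscriber callback with a worker thread: the callback just enqueues text and returns immediately, so the spin cycle never waits on speech. A sketch, where `speak_fn` stands in for whatever blocking TTS call the node ends up using (e.g. a subprocess running a speech engine):

```python
import queue
import threading

class AsyncSpeaker:
    """Decouple speech from the control loop: callers enqueue text,
    and a daemon worker thread does the slow TTS call."""
    def __init__(self, speak_fn):
        self.speak_fn = speak_fn
        self.q = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def say(self, text):
        """Call this from the ROS callback; returns immediately."""
        self.q.put(text)

    def _worker(self):
        while True:
            text = self.q.get()
            self.speak_fn(text)  # blocks here, not in the spin cycle
            self.q.task_done()
```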
A robust robot would not allow a faulty program to kill the robot. Full stop.
Now one could suggest that after more than five years of folks building ROS GoPiGo3 robots, the ROS GoPiGo3 node would contain self-protection features. Perhaps, though, most ROS GoPiGo3 users were interested in learning the basics of ROS / ROS 2, and with careful, correct programming of their ROS programs, a basic ROS GoPiGo3 node has served everyone well.
I, likewise, decided not to get distracted trying to improve the ROS 2 GoPiGo3 node, and worked carefully to correct and tune my ROS 2 GoPiGo3 Wanderer.
This morning Humble Dave was able to wander for 10 minutes before needing assistance. It seems like another 50 mm of safe distance (350 mm “closest reading to a wall”) will do the trick.
It would be easy to go deep attempting to improve the exception-case handling for my wanderer program, but it is not a good idea. The purpose of the wander program was to build a map without manually driving Dave around the room. My wanderer.py does that well, with two exception cases:
- Does not reliably avoid a forward, non-black, 4-6 inch obstacle with sufficient clearance.
- Will appear indecisive when boxed in on three close sides, turning left, right, left, right, but will eventually escape.
Bottom line: the program works well to wander and create a map of an area with no obstacles in the center. Once navigation using the full 360-degree scan ranges is available, the wander concept should be revisited.
If the clearance is set high, the irregularities in the “walls” of a not-flush refrigerator, stove, or dishwasher, the high-gloss surfaces of those appliances, or even “a cove” do not cause problems, but the wandering bot only explores the center of the playground, never getting close to any walls.
But if I put a small obstacle in the playground, the “only looking in 5 discrete directions” design fails regardless of the clearance value, because the obstacle can sneak too far into the wedge-shaped space between the forward beam and the 45-degrees-off-forward beams.
The wanderer algorithm needs to take advantage of the 360 degree scan information, rather than just 5 discrete ranges, but I am too lazy to learn how to do that either with or without ROS. I am sure I will revisit this wanderer concept after I learn how to use ROS Nav2.
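For the record, a 360-degree version is not much code: take the minimum valid return over every beam inside an angular cone, so nothing can sneak between discrete rays. A sketch against the standard LaserScan fields (angle_min, angle_increment), skipping zero and NaN returns the same way as before:

```python
import math

def min_range_in_cone(scan_ranges, angle_min, angle_increment,
                      center_angle, half_width):
    """Closest valid return within +/- half_width radians of
    center_angle, using every beam in that cone instead of a
    single discrete ray.  All angles are in radians."""
    closest = math.inf
    for i, r in enumerate(scan_ranges):
        angle = angle_min + i * angle_increment
        # wrap the angular difference into [-pi, pi] so a forward
        # cone works even when the scan spans 0..2*pi
        diff = math.atan2(math.sin(angle - center_angle),
                          math.cos(angle - center_angle))
        if abs(diff) <= half_width and r > 0.0 and not math.isnan(r):
            closest = min(closest, r)
    return closest
```

A small obstacle anywhere in the cone then shows up in the minimum, instead of hiding between the forward and 45-degree beams.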
I’m so terrible about rushing forward without really learning ROS. It is just too huge. I didn’t stop to learn to use launch files (three types to learn). I didn’t stop to learn to use lifecycles or composition. I didn’t stop to learn about mapping or localization with SLAM-Toolbox. I’m not stopping to implement twist_mux so I can always override autonomous nodes. Now I’m rushing into Nav2. I want to understand everything, but ROS is just too big to even try to understand.