Reality Strikes GoPiGo3 Dave Again

Reality again hit me today.

Rather than “translate” the HoROS (Hands-On ROS for Robotics Programming) code, I went searching for “ROS 2 scan wander” with the confidence that someone had already “done the work”. I found a very complete set of “ROS 2 Explorer/Wanderer” nodes built around the TurtleBot3.

After a few fixes to bring the code up to ROS 2 Humble conventions, I was able to launch my “ROS2 Wanderer”, but it simply spun around claiming there were obstacles all around.

Reality strikes:

  • Many examples are designed for simulation, where message passing never drops a message and sensors are perfect.
  • I was reminded of a discovery @KeithW made: the YDLIDAR X4 sometimes returns a zero reading (even when the serial bit rate is configured correctly).
  • The code had no “real sensor” accommodation, so the zero readings look like obstacles at zero distance.
  • The HoROS Chapter 8 wanderAround node also assumes a perfect sensor and a perfect message channel.
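To make the zero-reading problem concrete, here is a minimal sketch (plain Python; the function name and the thresholds are my assumptions, not code from the wanderer) of discarding dropouts before taking a sector minimum, so a 0.0 return is never treated as an obstacle at zero distance:

```python
import math

def sector_min(ranges, lo, hi, range_min=0.12, range_max=10.0):
    """Minimum valid reading over ranges[lo:hi], ignoring dropouts.

    A YDLIDAR X4 dropout shows up as exactly 0.0 (sometimes inf/nan),
    so anything outside [range_min, range_max] is discarded rather
    than treated as an obstacle at 0 m. The limits here mirror the
    range_min/range_max fields of sensor_msgs/LaserScan.
    """
    valid = [r for r in ranges[lo:hi]
             if math.isfinite(r) and range_min <= r <= range_max]
    return min(valid) if valid else float('inf')

# A burst of dropouts must not look like a wall at 0 m:
scan = [0.0, 0.0, 2.52, 0.0, 2.58, float('inf'), 2.73]
print(sector_min(scan, 0, len(scan)))  # 2.52
```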

I understand why books such as Hands-On ROS for Robotics Programming and tutorials introduce concepts in simulation (it simplifies learning and standardizes the learning environment), but the “Hands On” part, with an autonomous, self-contained, physical GoPiGo3 robot with actual sensors in a real-world environment, is my definition of success.

Why does that always end up being an “exploration”?

Ouch: Same robot, Same code - two years later:

*** range[561] index - left 420 front 279 back 560 right 140
left:  0.801 cnt: 310
front: 2.519 cnt: 113   <<-- 70% of readings were zero!
back:  0.733 cnt: 378
right: 1.398 cnt: 174   <<-- 54% of readings were zero!

scan_msg.ranges[277]: 2.562 always 0? no
scan_msg.ranges[278]: 2.526 always 0? no
scan_msg.ranges[279]: 2.519 always 0? no
scan_msg.ranges[280]: 2.583 always 0? no
scan_msg.ranges[281]: 2.726 always 0? no

Dave’s reality has changed somehow.


In my limited learning of ROS, one thing I have seen is the importance of coding to the limitations of the sensors. For example, if a range sensor is known to be inaccurate within (near) or beyond (far) certain limits, one’s code must take that into account. In addition, “unlikely” or “impossible” readings also need to be accounted for. Finally, there is coding to deal with an outright malfunction.

A range reading of exactly 0.000000… could fall into one or more of these categories. In the absence of true knowledge, a reading of exactly zero sure “feels” like an error. How (and when and how often) is the lidar calibrated?


The YDLIDAR X4 driver has parameters to set the minimum valid range and the maximum range to report, and a million other parameters. The LIDAR doesn’t need calibration, but your URDF laser_frame link has to be accurate. I documented a procedure for tuning the laser_frame link.

Right, but most of the examples folks teach with use simulated sensors in a simulated world, so they conveniently ignore the coding students will need to move from the idea of the sensor to the reality of the sensor.

The most important reality ignored is that the real world does not have big, non-black, floor-to-above-sensor-height walls, so robots with LIDAR will stall their motors on office chair legs, my black filing cabinet, my two black UPSes, and my black computer case (unless you are @keithW and you build a simulated-reality playground for your robot).

The second most important reality ignored is sending an “all stop” /cmd_vel if the user presses Ctrl-C. I always have to keep teleop_keyboard running in a separate window, and immediately after pressing Ctrl-C to quit a program I have to quickly click into the teleop_keyboard terminal window and press the spacebar to stop the robot. Simulations are so convenient, and reality is so not.
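One way to get the “all stop” behavior without the teleop scramble is to wrap the drive loop so a zero /cmd_vel always goes out on the way down, whether the exit is Ctrl-C or a crash. A minimal sketch with a fake publisher standing in for a real rclpy publisher (all names here are hypothetical, not from any GoPiGo3 node):

```python
# Sketch: guarantee an all-stop /cmd_vel on the way out. A dict stands
# in for geometry_msgs/Twist so the pattern runs without ROS installed.

def zero_twist():
    """Field values for a stopped Twist-like command."""
    return {"linear_x": 0.0, "angular_z": 0.0}

def run(publish, step):
    """Drive until interrupted; the finally-block always publishes stop."""
    try:
        while True:
            publish(step())          # normal wander commands
    except KeyboardInterrupt:
        pass                         # Ctrl-C lands here, not in the OS
    finally:
        publish(zero_twist())        # robot halts even on an exception

# Tiny self-test with a fake publisher:
sent = []
def fake_step():
    if len(sent) >= 3:
        raise KeyboardInterrupt      # simulate pressing Ctrl-C
    return {"linear_x": 0.1, "angular_z": 0.0}

run(sent.append, fake_step)
print(sent[-1])  # {'linear_x': 0.0, 'angular_z': 0.0}
```

In a real node, `publish` would be the `publish` method of a /cmd_vel publisher; the try/finally shape is the part that matters.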

Interestingly, I wrote some “motor stall sensing” functions for Carl, but was not satisfied with the performance. Stall sensing really should be stressed more in robot programming. (It is another easily accessible iRobot Create3 status that I did not try to implement in my ROS 2 GoPiGo3 node.)
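A stall detector can be as simple as noticing that the wheels are commanded to move but the encoders are not changing. A sketch, with guessed thresholds rather than measured GoPiGo3 values:

```python
class StallDetector:
    """Flag a stall when wheels are commanded to move but encoders
    barely change for several consecutive checks. The thresholds are
    assumptions for illustration, not measured GoPiGo3 values."""

    def __init__(self, min_ticks=2, trip_count=5):
        self.min_ticks = min_ticks    # encoder delta that counts as motion
        self.trip_count = trip_count  # consecutive stuck checks before tripping
        self._last = None
        self._stuck = 0

    def update(self, commanded, encoder_ticks):
        """Call once per control cycle; returns True when stalled."""
        if self._last is None:
            self._last = encoder_ticks
            return False
        moved = abs(encoder_ticks - self._last) >= self.min_ticks
        self._last = encoder_ticks
        if commanded and not moved:
            self._stuck += 1
        else:
            self._stuck = 0           # any motion resets the counter
        return self._stuck >= self.trip_count

det = StallDetector()
# Commanded forward but encoders frozen: trips after 5 stuck checks.
flags = [det.update(True, 100) for _ in range(7)]
print(flags)  # [False, False, False, False, False, True, True]
```

On a stall, the node could publish a zero /cmd_vel before the motor current drags the battery down far enough to brown out the Pi.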


Actually, this may be the new “most important” missing item from my ROS 2 GoPiGo3 node. In testing my wanderer program all was going well until Dave got stuck under a cabinet, which stalled the motors, which put so much drain on the power system that the Raspberry Pi was instantly “out cold”.

(No red light on processor board … this is not good…)

I also learned that Dave cannot walk and talk at the same time under ROS (at least not in the same spin cycle). I’m going to have to create a ROS 2 TTS node and send it stuff to say.
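The usual pattern for the walk-and-talk problem is to keep the slow speech call out of the spin cycle: the callback only enqueues text, and a worker thread speaks it. A minimal sketch (the `speak` stub stands in for a real TTS call such as espeak; nothing here is ROS-specific):

```python
import queue
import threading

say_q = queue.Queue()
spoken = []                      # stand-in for actual audio output

def speak(text):
    spoken.append(text)          # real robot: block here while talking

def tts_worker():
    """Drain the queue until the None sentinel arrives."""
    while True:
        text = say_q.get()
        if text is None:         # shutdown sentinel
            break
        speak(text)

worker = threading.Thread(target=tts_worker, daemon=True)
worker.start()

# The driving callback returns immediately -- Dave keeps walking:
say_q.put("Obstacle ahead")
say_q.put("Turning left")
say_q.put(None)
worker.join()
print(spoken)  # ['Obstacle ahead', 'Turning left']
```

A dedicated ROS 2 TTS node is the same idea with the queue replaced by a subscribed topic.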


A robust robot would not allow a faulty program to kill the robot. Full stop.

Now one could suggest that after more than five years of folks building ROS GoPiGo3 robots, the ROS GoPiGo3 node would contain self-protection features. Perhaps, though, most ROS GoPiGo3 users were interested in learning the basics of ROS / ROS 2, and with careful, correct programming of their ROS programs, a basic ROS GoPiGo3 node has served everyone well.

I likewise decided not to get distracted trying to improve the ROS 2 GoPiGo3 node, and instead worked carefully to correct and tune my ROS 2 GoPiGo3 Wanderer.

This morning Humble Dave was able to wander for 10 minutes before needing assistance. It seems another 50mm of safe distance (350mm “closest reading to a wall”) will do the trick.

UPDATE: Humble Dave wandered for 33 minutes until I shut down the program because the “battery last leg” warning light activated. I’m considering my wander node complete and correct (for kitchen runs).


I think the test two years ago was inside a controlled space, while today’s test faced the reality of my room, with black stuff and fabric stuff all around.

Much better when Dave is in his crib:

************* DEBUG 151 **********
*** Entering Scan Client Callback
left: 0.344 cnt: 1359
front: 0.360 cnt: 1359
back: 0.376 cnt: 1347    <<--- only 0.9% of readings were zero - 99.1% were good
right: 0.173 cnt: 1357   <<-- only 0.2% zero - 99.8% good
************* DEBUG **********

Since there are “black holes” on both sides of Dave’s dock, I’m thinking his dock needs some “walls” to allow solid LIDAR returns as he navigates back to his dock.

|   D  |


I decided to endurance test Humble Dave with the wander node, and changed the shape of his “kitchen playground” a little.

Discovered a particular condition where the wanderer recognizes an obstacle but does not command an action to escape. He sat there, frozen in indecision.

After I gave him a manual forward command, he continued for another half hour before freezing in the exact same place. Again, manual assistance allowed him to continue.

His “battery on last leg” light activated at 1h 14m, so I stopped the test to prevent a safety shutdown.

Now to walk through my string of “if a or b…elif x and y or c…elif xyzzy…else” to diagnose Dave’s mini-stroke.


It would be easy to go deep attempting to improve the exception-case handling in my wanderer program, but it is not a good idea. The purpose of the wander program was to build a map without manually driving Dave around the room. Mine does that well, with two exception cases:

  • Does not reliably avoid a forward, non-black, 4-6 inch obstacle with sufficient clearance
  • Will appear indecisive when boxed in on three close sides, turning left, right, left, right but will eventually escape.

Bottom line: the program works well to wander and create a map of an area with no obstacles in the center. Once navigation with 360-degree scan ranges is available, the wander concept should be revisited.


Is it an issue with how the clearances are set for the navigation program?


Yes, and no:

  • If the clearance is set high, the irregularities in the “walls” of a not-flush refrigerator, stove, or dishwasher, the high-gloss surfaces of those appliances, or even “a cove” do not cause problems, but the wandering bot only explores the center of the playground, never getting close to any walls.
  • But if I put a small obstacle in the playground, the “only looking in 5 discrete directions” design fails regardless of the clearance value, because the obstacle can sneak too far into the wedge-shaped space between the forward beam and the 45-degrees-off-forward beams.

The wanderer algorithm needs to take advantage of the 360-degree scan information rather than just 5 discrete ranges, but I am too lazy to learn how to do that, either with or without ROS. I am sure I will revisit this wanderer concept after I learn how to use ROS Nav2.
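The fix would look something like this: instead of reading one discrete beam per direction, take the minimum valid return over an angular cone, so a chair leg hiding between two beams is still seen. A sketch using sensor_msgs/LaserScan-style field names (the thresholds and cone width are my assumptions):

```python
import math

def min_in_cone(ranges, angle_min, angle_inc, center, half_width,
                range_min=0.12, range_max=10.0):
    """Closest valid return within +/- half_width radians of `center`,
    using every beam in the cone instead of one discrete index.
    Parameter names mirror sensor_msgs/LaserScan fields; the zero/
    out-of-range filtering is the same dropout guard as before."""
    best = float('inf')
    for i, r in enumerate(ranges):
        angle = angle_min + i * angle_inc
        if abs(angle - center) <= half_width:
            if math.isfinite(r) and range_min <= r <= range_max:
                best = min(best, r)
    return best

# 8 beams spread over the front 90 degrees; a chair leg sits between
# the "forward" and "45-degree" beams but is still caught by the cone:
inc = math.radians(90.0 / 7)
ranges = [3.0, 3.0, 3.0, 0.35, 3.0, 3.0, 3.0, 3.0]
print(min_in_cone(ranges, -math.pi / 4, inc, 0.0, math.radians(25)))
```

With one call per sector (front, left 45, right 45, and so on), the five-direction structure of the wanderer could stay the same while each “direction” becomes a cone instead of a single ray.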

I’m so terrible about rushing forward without really learning ROS. It is just too huge. I didn’t stop to learn to use launch files (three types to learn). I didn’t stop to learn to use lifecycles or composition. I didn’t stop to learn about mapping or localization with SLAM-Toolbox. I’m not stopping to implement twist_mux so I can always override autonomous nodes. Now I’m rushing into Nav2. I want to understand everything, but ROS is just too big to even try to understand.


I’d have them taper out slightly.

I’d also consider an alignment guide like the one you did for Carl’s dock.


That’s for sure - but that’s the final mile fix. I need Dave to be able to know the office from the kitchen, and to know how to “find his desk” in the office.

Maybe. I have to figure out what helps slam-toolbox localization be the fastest and most accurate.


How about a flashing female GoPiGo robot? :wink: