Did a test with Dave mapping my dining area using synchronous online mapping versus asynchronous online mapping. The dining area has two uncluttered walls, one mirrored wall, and no fourth wall, so it is perhaps not the best test case, but better than the office.
When Dave did async mapping in my office room (next to Carl’s dock), the slam_toolbox kept changing its mind about where Dave was in the map by half a meter:
I don’t want to sound like I’m criticizing, but those outlines seem awfully messy.
Is it possible to gain precision by having Dave move around? (i.e. Is it possible that the closer he is to an obstacle, the better the accuracy?)
Is it possible to preserve accuracy/precision as you move away/somewhere else? (i.e. After mapping near one wall/obstacle and getting a reasonably clean map on one side, can you “keep” it as you move other places, building up an accurate map by accumulating and combining accurate pieces?)
It seems to me, based on the results I have seen so far, that the distance accuracy leaves much to be desired.
Or perhaps some kind of data filtering needs to take place (like an FFT-type transform or the RMS of all the readings) to help collapse them into a smaller set of readings?
Agreed, but I am also supremely astounded by how closely the displayed “belief network” matches reality.
Indeed, the algorithms already require movement to incrementally build the belief probability from “unknown” to “occupied” or “empty”.
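For flavor, here is a minimal log-odds occupancy-grid update. This is the generic technique, not slam_toolbox’s actual code, and the grid size and increment values are made-up illustrations. It shows why a cell needs several observations before its belief crosses from “unknown” toward “occupied” or “empty”:

```python
import numpy as np

# Minimal illustration (NOT slam_toolbox internals): a log-odds occupancy
# grid. Each cell starts at 0 ("unknown", p=0.5); every scan that sees the
# cell as a hit or a miss nudges the log-odds, so it takes several
# observations to push the probability toward "occupied" or "empty".

L_HIT, L_MISS = 0.85, -0.4      # log-odds increments (assumed tuning values)
L_MIN, L_MAX = -4.0, 4.0        # clamp so cells can change their mind later

grid = np.zeros((100, 100))     # log-odds; 0.0 means unknown

def update_cell(grid, row, col, hit):
    """Fold one observation of a cell into the grid."""
    delta = L_HIT if hit else L_MISS
    grid[row, col] = np.clip(grid[row, col] + delta, L_MIN, L_MAX)

def probability(grid, row, col):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(grid[row, col]))

# Three scans in a row seeing the same cell occupied:
for _ in range(3):
    update_cell(grid, 50, 50, hit=True)
print(probability(grid, 50, 50))   # ~0.93 -- belief built up incrementally
```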
The preservation of accuracy is dependent on the accuracy of knowing the robot’s location when it has moved away. The slam_toolbox in “map development mode” is constantly estimating the true new location for the robot (and thus the incoming sensor data about the world around it) by combining the input “odometry” and a “local localization” from its belief network built to that point.
In the current level of sophistication of the ROS2 GoPiGo3, the odometry topic (the robot’s estimate of x, y, z, theta, along with dx/dt, dy/dt, dz/dt, dtheta/dt, a quaternion, and a quaternion velocity) is based entirely on some barely understood, unfiltered trigonometry code I lifted from Japon, who lifted it from Rauch, for generating raw odometry from the GoPiGo3 encoders. That odometry input to the mapper is, I believe, the source of the “belief inaccuracy” when returning to the starting position.
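For anyone curious, the heart of that encoder-to-odometry trigonometry is classic differential-drive dead reckoning, something like the sketch below. The wheel diameter, wheel base, and ticks-per-revolution constants here are my assumptions, not values pulled from the actual node:

```python
import math

# A hedged sketch of encoder-only odometry: classic differential-drive
# dead reckoning. Constants below are assumed, not the real node's values.

WHEEL_DIAMETER = 0.0665   # meters (assumption)
WHEEL_BASE = 0.117        # meters between wheel centers (assumption)
TICKS_PER_REV = 720       # encoder ticks per wheel revolution (assumption)
M_PER_TICK = math.pi * WHEEL_DIAMETER / TICKS_PER_REV

x = y = theta = 0.0       # pose estimate in the odom frame

def update_odometry(d_left_ticks, d_right_ticks):
    """Integrate one pair of encoder deltas into the (x, y, theta) estimate."""
    global x, y, theta
    d_left = d_left_ticks * M_PER_TICK
    d_right = d_right_ticks * M_PER_TICK
    d_center = (d_left + d_right) / 2.0          # forward travel
    d_theta = (d_right - d_left) / WHEEL_BASE    # heading change (radians)
    # Any encoder or wheel-slip error here accumulates forever -- there is
    # no filtering, which is exactly the weakness described above.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = math.atan2(math.sin(theta + d_theta), math.cos(theta + d_theta))
```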
When I command the GoPiGo3 to drive forward 1 meter, the bot drives forward 1 meter to within a few millimeters, but it also drifts a centimeter or two to the right or left. Looking at the odometry output, the change in the x value will be very close (usually within 0.3%) to the commanded 1 meter, the change in the y value can be off by as much as 100% of the actual left/right travel, and the change in heading will be within 5% of the actual heading (I can’t measure beyond that level; those numbers come from tests I ran several years ago).
After that single movement, the robot’s new estimate of its location will be updated from that error-laden input, to which the mapper has applied a number of error-reduction techniques. (It takes a PhD just to understand that step, and the user has multiple options to choose from a buffet list for each component of it.) I’m not qualified to play with those options.
My assessment is that the distance accuracy is phenomenal, but the angular accuracy leaves much to be desired.
Just about every ROS robot, except for the ROS2 GoPiGo3, uses an Extended Kalman Filter (EKF) to “fuse” multiple position-sensor data streams. In the case of the TurtleBot4, the Create3 base feeds wheel encoders, a 6DOF IMU, and a visual “flow” sensor into an EKF that outputs the “filtered odometry”.
My next investigation is to learn how to configure a ROS2 EKF node to fuse the GoPiGo3 encoder odometry with the GoPiGo3 IMU (9DOF, with a “BNO055 Fusion Processor”) to produce filtered odometry with much better angular accuracy; hopefully the maps generated by the slam_toolbox will then be cleaner.
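The usual route is the robot_localization package’s ekf_node. Here is a hedged launch-file sketch of what I expect the configuration to look like; the /odom and /imu topic names, frame names, and all of the fuse/ignore choices are my assumptions, not a tested setup:

```python
# Hedged sketch: launching robot_localization's ekf_node to fuse GoPiGo3
# encoder odometry with the IMU. Topic/frame names and the fuse choices
# below are assumptions, not a verified configuration.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='robot_localization',
            executable='ekf_node',
            name='ekf_filter_node',
            output='screen',
            parameters=[{
                'frequency': 30.0,
                'two_d_mode': True,    # planar robot: ignore z, roll, pitch
                'publish_tf': True,
                'odom_frame': 'odom',
                'base_link_frame': 'base_link',
                'world_frame': 'odom',
                # Each *_config is 15 booleans in the order:
                # [x, y, z, roll, pitch, yaw,
                #  vx, vy, vz, vroll, vpitch, vyaw,
                #  ax, ay, az]
                # Encoders: trust x, y, and forward velocity, NOT heading.
                'odom0': '/odom',
                'odom0_config': [True,  True,  False,
                                 False, False, False,
                                 True,  False, False,
                                 False, False, False,
                                 False, False, False],
                # IMU: trust yaw and yaw rate from the BNO055 fusion output.
                'imu0': '/imu',
                'imu0_config': [False, False, False,
                                False, False, True,
                                False, False, False,
                                False, False, True,
                                False, False, False],
            }],
        ),
    ])
```

The fused estimate comes out on the node’s odometry/filtered topic, which the mapper would then be pointed at instead of the raw /odom.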
A few years ago I did a square-path-return-to-start “best of my ability” comparison of GoPiGo3 encoder alone, DI IMU alone, and “Alan’s hokey fusion of encoders for linear, and IMU for angular” and concluded I’m not smart enough for this topic:
Hopefully after reading every Extended Kalman Filter Tutorial Google can feed me, I’ll be able to wrangle the ROS2 GoPiGo3 odometry topic to more accurately represent where in space Dave has wandered.
Is it REALLY necessary to know exactly where an obstacle is, with centimeter accuracy, from 20’ away? (Or even 5’ away?)
My thought is that accurate navigation doesn’t require that level of precision from that far away. Rather it is sufficient to know that there may be an obstacle in that direction - and as you get closer you can define the obstacle with greater detail.
From 5’, knowing that there is something that might be a door or passageway should be sufficient. As you get closer you can define it more carefully, and you can then explore it if you wish.
This should be a way to reduce the complexity of the calculations, at least at longer distances.
As I mentioned, the SLAM (“Simultaneous Localization and Mapping”) toolbox has modes. Most of the time the localization is only “local”, meaning it bases its position and heading estimates only on the nearby map.
The mapping function can be in three modes:
- map development
- given-map extension
- fixed map
Many folks do:
- a map-development “walk-around”,
- save the map,
- clean and correct the map in a graphics editor,
- then run in fixed-map mode (a launch sketch follows this list).
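For that last step, here is a hedged sketch of launching slam_toolbox in its localization (“fixed map”) mode. The executable and parameter names come from the slam_toolbox package, but the map path and starting-pose choice are my assumptions:

```python
# Hedged sketch: slam_toolbox in localization ("fixed map") mode after a
# map has been saved and cleaned up. The map path is hypothetical.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='slam_toolbox',
            executable='localization_slam_toolbox_node',
            name='slam_toolbox',
            output='screen',
            parameters=[{
                'mode': 'localization',
                # Serialized pose-graph saved at the end of the walk-around
                # (hypothetical path; slam_toolbox expects no file extension)
                'map_file_name': '/home/ubuntu/maps/dining_area',
                'map_start_at_dock': True,
            }],
        ),
    ])
```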
This improves localization accuracy and allows the bot to distinguish temporary obstacles from permanent obstacles. The Navigation package (which I have not learned how to use yet) uses the permanent-obstacle map to plan a “lowest cost” route to an end position and heading. During execution of the plan, Navigation “replans” around temporary obstacles it encounters (but only if they are predicted to influence the “cost” of the plan).
So, “is it really necessary to know exactly where an obstacle is, with centimeter accuracy?” Not really. The SLAM toolbox came with the resolution set to 5cm, and I set it down to 1cm to see if it cleaned up the visual of the map or improved the algorithm’s “where am I” estimates. This too (along with the other 20-30 option selections) is a parameter that needs investigation at some point. My goal was to install, configure, and program my way to “build a first map”, good or bad.
Now I can move on to improving the localization by fusing IMU with encoder data.
(And first, a small diversion from that plan: I built a ROS2 battery-watching safety shutdown node.
It announces the battery voltage quietly, and prints it to the console, once a minute for a few minutes before shutting down the robot for safety. I should finish the testing phase today.)
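A minimal sketch of the idea looks something like this, assuming the robot publishes a sensor_msgs/BatteryState on a /battery_state topic and has espeak available for the quiet voice announcements (both assumptions, not the actual node’s interface):

```python
import subprocess
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import BatteryState

# Hedged sketch of a battery-watching safety shutdown node. The topic name,
# thresholds, and espeak call are assumptions for illustration.

SHUTDOWN_VOLTS = 9.5            # assumed cutoff voltage
WARNINGS_BEFORE_SHUTDOWN = 3    # low readings to tolerate before halting

class BatteryWatcher(Node):
    def __init__(self):
        super().__init__('battery_watcher')
        self.low_count = 0
        self.latest_volts = None
        self.create_subscription(BatteryState, '/battery_state',
                                 self.battery_cb, 10)
        self.create_timer(60.0, self.check_cb)   # once a minute

    def battery_cb(self, msg):
        self.latest_volts = msg.voltage

    def check_cb(self):
        if self.latest_volts is None:
            return
        self.get_logger().info(f'Battery: {self.latest_volts:.2f}v')
        # Announce quietly (espeak amplitude 0-200; 30 is "quiet")
        subprocess.call(['espeak', '-a', '30',
                         f'{self.latest_volts:.1f} volts'])
        if self.latest_volts < SHUTDOWN_VOLTS:
            self.low_count += 1
            if self.low_count >= WARNINGS_BEFORE_SHUTDOWN:
                self.get_logger().warn('Battery critical - shutting down')
                subprocess.call(['sudo', 'shutdown', '-h', 'now'])
        else:
            self.low_count = 0

def main():
    rclpy.init()
    rclpy.spin(BatteryWatcher())

if __name__ == '__main__':
    main()
```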
This leads to the corollary question: How important is it for the robot to know “where it’s at”? (I know that sounds like a stupid question, but I am talking about absolute location as opposed to “I can see my dock from here” kind of location.)
What I am trying to brainstorm here is the possibility of using a less precise location that’s “good enough” rather than a detailed map. Perhaps it would be easier to implement and require fewer resources than finely detailed maps.
Perhaps “landmarks” (like those simple coded square signs) can tell the 'bot where it is and/or which direction to travel to find the dock.
IOW, I am asking if there is a “KISS rule” solution that’s “close enough”.
There are a million options, and I am exploring the typical ROS beginner’s learning path. The tutorials lead everyone to “my robot made a map, and can be told to navigate to a point/pose in that map”, and along the way you learn:
- how to write topic subscribers for things like /odom, which reports the estimated x, y, theta (and a whole lot more) - see the sketch after this list,
- how to write topic publishers for things like /cmd_vel, which tells the robot to drive at a forward velocity or turn at an angular velocity,
- how to write and use services, which are request/response commands such as “point servo 45 left” with no progress feedback, and
- how to write and use actions, which are long-running goal commands such as “drive 30 cm” that provide feedback on the way to the goal and a success/fail result.
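For example, a minimal /odom subscriber along those tutorial lines, recovering theta from the quaternion (a generic sketch, not any particular tutorial’s code):

```python
import math
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry

# Minimal sketch: subscribe to /odom and pull out the x, y, theta estimate.
# /odom carries orientation as a quaternion, so theta must be recovered.

class OdomListener(Node):
    def __init__(self):
        super().__init__('odom_listener')
        self.create_subscription(Odometry, '/odom', self.odom_cb, 10)

    def odom_cb(self, msg):
        p = msg.pose.pose.position
        q = msg.pose.pose.orientation
        # Yaw (theta) from the quaternion; valid for a planar robot
        theta = math.atan2(2.0 * (q.w * q.z + q.x * q.y),
                           1.0 - 2.0 * (q.y * q.y + q.z * q.z))
        self.get_logger().info(
            f'x={p.x:.3f} y={p.y:.3f} theta={math.degrees(theta):.1f} deg')

def main():
    rclpy.init()
    rclpy.spin(OdomListener())

if __name__ == '__main__':
    main()
```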
Of course a robot can use “dead reckoning” and use the LIDAR only as a 360 bumper, but dead reckoning has its limits. In my opinion, the ROS learner should start learning navigation with dead reckoning and the raw LIDAR scan data, to fully appreciate what the advanced SLAM (simultaneous localization and mapping) techniques alleviate, and what headaches they also bring.
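A “360 bumper” can be as simple as this sketch: watch the scan and publish a zero velocity if anything gets too close. The /scan and /cmd_vel topic names follow common ROS convention; the stop threshold is an arbitrary choice:

```python
import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

# Sketch: no mapping at all, just use the LIDAR as a 360-degree bumper.

STOP_RANGE = 0.25   # meters (assumed threshold)

class LidarBumper(Node):
    def __init__(self):
        super().__init__('lidar_bumper')
        self.create_subscription(LaserScan, '/scan', self.scan_cb, 10)
        self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 10)

    def scan_cb(self, msg):
        # Ignore zero/inf readings the driver uses for "no return"
        nearest = min((r for r in msg.ranges
                       if r > msg.range_min and math.isfinite(r)),
                      default=float('inf'))
        if nearest < STOP_RANGE:
            self.cmd_pub.publish(Twist())   # all-zero Twist means stop
            self.get_logger().warn(f'Obstacle at {nearest:.2f} m - stopping')

def main():
    rclpy.init()
    rclpy.spin(LidarBumper())

if __name__ == '__main__':
    main()
```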
People complained when the first Roomba vacuum used a random walk to cover the room. Folks like Shark put a LIDAR on their vacuum, performed a “home tour” scan, and then used directed navigation to vacuum in less time than iRobot’s random-walk algorithm. Some people are worried about outsiders accessing the map of their home built by their vacuum. Meanwhile I’m still using a huge vacuum: I need to pull the car out of the garage just to get access to it, and then push/pull the thing around my home in a brainless attempt to impress the wife, which doesn’t seem to be effective. (She is dead set against having a robot vacuum or robot mop, and keeps asking why I am so bent on having 24/7 robots: “what good are they if they can’t cook or do the laundry?”)
Is she interested in you keeping your mental faculties sharp and in good shape?
Does she care whether or not you are happy?
Then she should understand that this is an “essential nutrient/vitamin” for creative people. If you deprive them of this essential activity, they slowly die, like a musician who cannot play or a scientist who cannot think or research. She will have condemned you to a slow, lingering, and painful death.
Note that this has been researched and proven so many times in the past that it is considered axiomatic and beyond needing proof nowadays.
My oldest brother Tom (the nationally renowned model railroader I mentioned when I was making my power supply) has been passionate about railroading since before he was ten years old.
I am sure that his wife Karen cannot imagine why he is so head-over-heels about railroading. However much she doesn’t understand why, she has fully embraced his railroading passion and his love of his hobby. So much so that their entire house is decorated in “Early Norfolk and Western” with N&W railroad memorabilia everywhere. (And her own office has a N&W “General Manager” sign over the door.)
She doesn’t need to ask why. She understands that this is the way her husband is and accepts it without question.
Not the same wildly eccentric “Tom, the model railroader, classical music expert” that ran the “Evening Classics” radio show at New Mexico State University Classical FM station in 1970-72 is it? (All they let me spin at first was the Saturday morning Latin music show, and I didn’t know any Spanish to announce.)
Unless they simulcast Virginia Polytechnic Institute’s radio and my brother had a side gig nobody else knew about, probably not.
Though the description is almost right-on. What he doesn’t know about classical music or railroading isn’t worth knowing.
Eccentric? Not him. Radio personality? Absolutely not!
That’s more like me. I had an afternoon show at WSCC at Suffolk Community College on Long Island when I studied there, and I was a bit of a screwball.
One of my first COBOL programming assignments was an “80-80 list” - that read 80 columns from the data cards and printed them out on the line-printer.
Instead of printing a few lines of text, I made a HUGE banner that said “IBM - OUR HERO!” and sent a copy to a friend at Brookhaven National Laboratory, where they ran Control Data mainframes.
Same with the Irish (as a reminder my wife is from Ireland, but it’s generally true of all Irish*)
Right, because it’s completely mainstream to decorate your house with memorabilia from a defunct railroad company.
Never did radio - but was on the CCTV crew for my high school. Nerds of a feather I guess…
Back to the original topic - I think the ROS distant/local mapping is actually a good solution for a dynamic environment - provides overall routing efficiency along with avoidance of unexpected obstacles.
/K
* a joke I heard from one of my Irish in-laws:
An Englishman was visiting Dublin. He grew very frustrated by the habit of the Irish to answer every question with another question. So he formulated a plan to get at least one straight answer. On his last day he stood in front of the GPO** and asked a passerby “Excuse me sir, could you tell me where the post office is?”. The Irishman replied “Is it a stamp you’re wantin’?”
** For historical context, the GPO is the General Post Office in Dublin. It served as the headquarters for the Irish during the 1916 Easter Rising, and was stormed by the British. You can still see some of the bullet holes. Every Irish schoolchild learns about the GPO, of course.
But testing on my Raspberry Pi is proving to be interesting…
I built a ROS2 safety shutdown node and tested the gpgMin robot (which “does not have LIDAR” and thus does not do mapping). It ran 4h 1m from start at 11.7v to shutdown at “under 9.5v”, and was responsive to keyboard input in all shells.
Today I started the test on the full-up Dave doing sync mapping… very interesting. The safety-shutdown monitor continues to output voltages every 60 seconds, so I know it is running, but the shell that was running a bash script to monitor load ran through once, printed a load of 6.37, and has not been heard from again; nor can I open another login shell.
It is still a fully functioning ROS2 bot though - I can drive it around and it is outputting /odom at 30Hz, and a map update every 5 seconds as if nothing is strange in its world.
Rviz2 says the ROS time and Wall time are equal, so there does not appear to be any slowdown either.
Not defunct. They bought out about four other railroads (including the Wabash) and eventually changed their name to Norfolk Southern. It’s still primarily managed by the N&W management team.
N&W has become one of the four or five major railroads left operating.