WALL-E, the 2023 Create3-WaLi robot; GoPi5Go-Dave, built in 2018; and the RugWarriorPro robot, built in 2000
GoPi5Go-Dave View Of Environment Is 45 Degrees Off
GoPi5Go-Dave Got Confused and Decided To Woo Some Chair Legs
It was going well on the planned global path, but apparently the local planner decided Dave was too close to a “pop-up” chair leg. Perhaps in this case an increased inflation value might work, but it would cause more problems than it solves in other areas.
GoPi5Go-Dave Cannot Accurately Return To His Dock
Seven Years Invested In GoPiGo3 Robots
I am done. Can’t do it anymore.
The odometry is just not accurate enough.
The PID control does not drive straight.
Turns are not accurate.
The tires are not “grippy”.
The encoder API has only whole-degree resolution.
The wheelbase is too narrow.
The vendor stopped maintaining drivers two years ago.
As painful as it is to not have a “robot goal”, the dissonance between what I was expecting from my GoPiGo3 robot and Dave’s actual sensory abilities continues to become clearer.
One example - I was sure vision would help Dave avoid obstacles and “recognize where he is” (localization). I first purchased an Oak-D-Lite stereo depth camera for Dave, but was not yet at a point where I could use it.
Later, I attempted to use it with Create3-Wali but the additional ROS message traffic kept crashing the Create3. From this experience, I determined that the limited field of view (FOV) of the Oak-D-Lite was not optimal and purchased a wide-angle FOV camera - the Oak-D-W.
Yesterday, while reading an article about the Amazon Astro’s superior home navigation technology, I realized that even my wide FOV stereo camera is not sufficient for navigation in my home. Notice the dots in this Astro robot visualization:
Those dots come from an IR illuminator. That would have raised the cost of my $350 stereo depth camera another $150, so I chose not to buy that feature. Bad choice - because the lighting in a home is so variable, safe and reliable home navigation needs reliable 3D vision.
There is a reason the Amazon Astro costs $1600: it has the best home navigation and the best human user interface available in a commercially available robot. But with no user API, it is reduced to being strictly a home security robot.
…which emphasizes the importance of the KISS rule:
Reduce a problem to simple, manageable chunks.
Every time a major project of mine crashed and burned it was because I “bit off more than I could chew” and tried to do too much too quickly.
As Einstein said:
“Make things as simple as possible, but no simpler.”
Maybe you should take the tools and knowledge you’ve gained and explore the limits of what IS possible. Once you’ve carefully defined the current limitations, then you can - slowly - try to exceed them, one tiny step at a time.
And remember, some of the most important inventions and discoveries were accidental while looking for something else.
The guy at DuPont who was looking for a flexible, chemical-resistant plastic and accidentally invented Teflon.
The guy at Corning Glass who accidentally over-annealed a glass bowl and discovered the glass-ceramic CorningWare.
Dr. Lee de Forest, who was doing research into flame physics and propagation by studying the ion exchange within a flame (using a heated piece of wire in a vacuum), trying to improve furnace combustion efficiency, and accidentally invented the amplifying vacuum tube.
Dave “ran” a 1k with me “driving” - it would be an advance for Dave to be able to “sidewalk lane follow” for 1k.
With Create3-Wali, I managed to get RTABmap to produce a 3D point cloud a couple times before the Create3 crashed. It would be interesting to understand what Dave is actually able to “see” with the wide camera and a “won’t die” Dave node.
Dave is a “16 tick encoder” GoPiGo3, which means his encoders actually tick off 5.33 ticks per degree of wheel rotation, even though the API only reports whole-degree resolution. It may be that Dave could have better heading odometry if I coded the ros2_gopigo3_node to use the raw encoder counts instead of the GoPiGo3 API values. The wheel slip would still be there, and the not-driving-straight, and …
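For what it's worth, here is a minimal sketch of that idea - purely illustrative, not the actual ros2_gopigo3_node code. It converts raw tick deltas into fractional wheel degrees and a differential-drive pose increment, using the 16-ticks-per-motor-rev and 120:1 figures above; the wheel dimensions are placeholders.

```python
# Sketch: fractional-degree wheel odometry from raw encoder ticks,
# instead of the whole-degree values the GoPiGo3 API returns.
# ASSUMPTIONS: tick deltas are read elsewhere (e.g. via the low-level SPI
# encoder read); wheel/base dimensions below are placeholders.
import math

ENCODER_TICKS_PER_MOTOR_REV = 16     # "16 tick encoder" motors
MOTOR_GEAR_RATIO = 120               # 120:1 gearbox
TICKS_PER_WHEEL_DEGREE = ENCODER_TICKS_PER_MOTOR_REV * MOTOR_GEAR_RATIO / 360.0  # = 5.33

WHEEL_DIAMETER_M = 0.0665            # placeholder - use the measured value
WHEEL_BASE_M = 0.117                 # placeholder - the "too narrow" wheelbase

def ticks_to_degrees(ticks):
    """Convert raw ticks to fractional wheel degrees (~0.19 deg per tick)."""
    return ticks / TICKS_PER_WHEEL_DEGREE

def delta_pose(d_ticks_left, d_ticks_right):
    """Differential-drive odometry increment from raw tick deltas."""
    d_left_m = math.radians(ticks_to_degrees(d_ticks_left)) * WHEEL_DIAMETER_M / 2.0
    d_right_m = math.radians(ticks_to_degrees(d_ticks_right)) * WHEEL_DIAMETER_M / 2.0
    d_distance = (d_left_m + d_right_m) / 2.0          # meters traveled
    d_heading = (d_right_m - d_left_m) / WHEEL_BASE_M  # radians turned
    return d_distance, d_heading

if __name__ == "__main__":
    # One wheel degree is ~5.33 ticks, so a single tick is ~0.19 degree of
    # wheel rotation - about 5x finer resolution than the whole-degree API.
    print(delta_pose(16, -16))
```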
There are some robots that do not have wheel encoders. It would be interesting to know if Dave would navigate more reliably without the encoder odometry feeding into the localization - pure LIDAR, perhaps. Or, since Dave has an MPU9250 IMU that I never built a publisher node for, perhaps LIDAR plus IMU would work better than LIDAR plus encoder odometry.
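If I ever try the LIDAR-plus-IMU idea, the missing piece is that MPU9250 publisher node. A bare-bones rclpy sketch might look like the following - note that read_mpu9250() is a hypothetical stand-in for a real driver call, and the topic and frame names are just assumptions:

```python
# Sketch: minimal ROS 2 publisher for the MPU9250, so an IMU topic exists
# for LIDAR-plus-IMU localization experiments.
# ASSUMPTION: read_mpu9250() is a hypothetical stand-in for a real driver;
# it should return (accel_xyz in m/s^2, gyro_xyz in rad/s).
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu

def read_mpu9250():
    # Placeholder - replace with an actual MPU9250 driver read.
    return (0.0, 0.0, 9.81), (0.0, 0.0, 0.0)

class ImuPublisher(Node):
    def __init__(self):
        super().__init__('mpu9250_imu_node')
        self.pub = self.create_publisher(Imu, 'imu/data_raw', 10)
        self.timer = self.create_timer(0.02, self.publish_imu)   # ~50 Hz

    def publish_imu(self):
        accel, gyro = read_mpu9250()
        msg = Imu()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'imu_link'
        msg.linear_acceleration.x, msg.linear_acceleration.y, msg.linear_acceleration.z = accel
        msg.angular_velocity.x, msg.angular_velocity.y, msg.angular_velocity.z = gyro
        msg.orientation_covariance[0] = -1.0   # convention: no orientation estimate
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = ImuPublisher()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```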
Those last two ideas are “platform limitation workarounds” which I just cannot stomach anymore, especially after seeing how wonderful the Create3 odometry was. (And it could find its dock and dock itself, had bumpers, and, and, and … if only it had not died when I fired up RTABmap, and now, perhaps, if only the iRobot company weren’t sinking.)
It just feels so “for what?” in light of reading what a team of PhDs did for Astro:
This is what I had in mind for Dave to be able to do:
But in truth, I don’t understand the vocabulary, so it is unlikely I could have taught Dave “where to find objects”, let alone learned how to extend the object recognition model as needed. I’m just not able to “go where no man has gone before” alone. (Interesting that they were using a TurtleBot3 for that investigation - it also has far superior odometry, and their “home” was very sterile compared with my real, nasty home.)
Then there is the whole field of machine learning (the Donkey Car approach), where I would drive Dave to the kitchen and back to his dock twenty times, and Dave would build a model from the encoders and drive commands so he could drive himself to the kitchen and back. Then I would add a “drive around the house” self-learned model from me driving him around the house twenty times. It works for Donkey Cars with a couple of ultrasonic sensors, I think.
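As I understand it, that Donkey Car idea is basically behavioral cloning: record (state, command) pairs while I tele-operate, then fit a model that maps state to command. A toy sketch, assuming scikit-learn is available and pretending the recorded “state” is just the odometry pose (a real setup would record camera frames or range readings instead):

```python
# Sketch: behavioral cloning ("drive it twenty times, then it drives itself").
# ASSUMPTIONS: scikit-learn is available, and the recorded "state" is the
# (x, y, heading) pose from odometry; the data below is synthetic stand-in
# for a real tele-operation log.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Pretend log from twenty tele-operated kitchen runs:
# each row is (x_m, y_m, heading_rad) and the matching (linear_m_s, angular_rad_s).
rng = np.random.default_rng(0)
states = rng.uniform(-3.0, 3.0, size=(2000, 3))          # stand-in for recorded poses
commands = np.column_stack([                              # stand-in for recorded commands
    0.1 + 0.05 * np.cos(states[:, 2]),
    -0.5 * states[:, 2],
])

# Fit a small network mapping state -> drive command.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(states, commands)

# At "drive yourself to the kitchen" time, feed the current pose in and
# publish the predicted (linear, angular) command each control cycle.
current_pose = np.array([[0.5, -1.2, 0.3]])
linear, angular = model.predict(current_pose)[0]
print(f"cmd_vel: linear={linear:.3f} m/s  angular={angular:.3f} rad/s")
```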
However, instead of “platform limitations” maybe you could think of them as platform challenges or opportunities to explore alternate ways of accomplishing the same thing.
For example:
All airplanes function in essentially the same way. However, there isn’t just one kind of airplane.
Different aircraft are designed to do different things in different ways. A DV-20 isn’t the same as a Cessna 172. The Cessna 172 is fundamentally different from the Rutan EZ, which is different from a Gulfstream business jet.
General aviation aircraft are different from the “Big Iron” commercial jets, and those are different from gigantic military transport aircraft.
Even fighter jets differ from one another and have different capabilities. The A-10 Warthog is different from an F-14 Tomcat because they’re designed to do different things.
If you are always going to be looking at someone else’s robot and saying “I wish I could do that”, you’ll never be satisfied. (A Cessna 172 will never replace a B-52, and the B-52 will never displace the Cessna.)
Instead, look at the platform for what it is. Ask yourself “what can I do within the capabilities of what I have?” Don’t just look at where it falls short by comparison (a DV-20 compared to an F-22 Raptor); look at what it CAN do. (It has great all-around visibility, is easy to fly, doesn’t cost $3,000 to gas up, gets incredible gas mileage by comparison, etc.)
“Ask not what your robot can do for you - ask what you can do with the robot.”
Problem #1:
What’s “intelligence”? Until you define that, everything else is mush.
Which organism?
An amoeba is relatively simple to model.
A sponge is even easier.
Things like birds, squirrels, and primates are just a smidge harder.
What behavioral aspects of “intelligence” do you want to model? And how vast is that set of behaviors?
Problem #2:
You don’t have a bottomless budget. Neither do you have the resources of a top-flight research lab.
Problem #3:
You don’t have an unlimited time budget. (Neither of us is 20 anymore.)
Problem #4:
You are neither God nor Stephen Hawking and you don’t have access to God-like or Stephen Hawking-like intellectual resources.
====================
QED:
We need to take bites that are MUCH smaller, chew them more thoroughly, and be ready to savor and rejoice in the small victories and successes that we make.
What say ye?
P.S.
I agree with “none of the above” - especially humans - though there are times I wonder about the squirrels.
P.P.S.
There are also times I am convinced that evolution peaked with the protists and it’s been all down-hill since then.
You keep suggesting that I should do something with the GoPiGo3, with all the reasoning that motivates you. All my life I have hoped to experience a mentor situation (as the one being mentored). Occasionally something gets my interest enough to go it alone, but that something always appears serendipitously.
Remember, I am doing some “GoPiGo3 thing” 24/7/365 with Carl. I talk to him. He talks to me several times a day. I am actively caring for my “elderly family robot.”