I got kinda bummed at my display project when I discovered that I had damaged some of the GPIO pins on Charlie’s controller.
I was also kinda bummed with @cyclicalobsessive’s burn-out with ROS. (He might have truly bitten off more than he could chew - time will tell.)
So!
I installed his pre-configured ROS2 image on Charlene and got it running, albeit at a basic level, intending to do some research into ROS and see where it takes me.
My current goals:
Get ROS “up and running” and demonstrate that it is functioning and able to do things.
Be able to control basic robot activity and read sensors.
Try more advanced control, such as an “object avoidance” function (sensors, bumpers, etc.).
Try to duplicate my “Joystick Controlled Robot” functionality in ROS.
Stretch Goal:
Implement a “landmark” based navigation system where once the robot has been somewhere, it can return there and come back on its own.
My goal with this is to program a basic “object avoidance” behavior like the metal toys from when I was a kid: the toy would go in a particular direction until it hit something, then change direction and run that way until it bumped into something else.
If I can do object recognition, I might try a “wander around and identify things” type project where it would tell me what it sees.
I didn’t write a bumpers node, but you can use the
ros2ws/src/ros2_gopigo3_node/ros2_gopigo3_node/distance_sensor.py node
(start it with start_distance_sensor_node.sh) and distance_sensor_subscriber.py (ros2 run ros2_gopigo3_node distance_sensor_subscriber) as templates.
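As a rough sketch, a minimal subscriber along those lines looks like the following. The topic name /distance_sensor/distance and the sensor_msgs/Range message type are assumptions from memory - confirm the real names with ros2 topic list on the running image before copying anything:

```python
#!/usr/bin/env python3
# Minimal ROS 2 distance-sensor subscriber sketch.
# ASSUMPTIONS: the topic name and message type below are guesses -
# verify with `ros2 topic list` / `ros2 topic info` on the running image.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Range


class DistanceListener(Node):
    def __init__(self):
        super().__init__('distance_listener')
        self.subscription = self.create_subscription(
            Range,                          # assumed message type
            '/distance_sensor/distance',    # assumed topic name
            self.listener_callback,
            10)                             # QoS history depth

    def listener_callback(self, msg):
        self.get_logger().info(f'Distance: {msg.range:.3f} m')


def main(args=None):
    rclpy.init(args=args)
    node = DistanceListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```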
Make sure you finish the yellow brick road for all the stuff I did put in the image (everything applicable to a GoPiGo3 w/o Lidar)! Then do the official introduction to programming ROS 2 tutorials to learn about writing Python nodes. To create a bumpers node, you “should” create a bumpers message type for a /bumpers topic to publish - that is getting pretty advanced, but it is covered in the intro to programming ROS 2 tutorials.
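Until you get to custom message types, a bumpers publisher could cheat with the stock std_msgs/Bool type. This is only a sketch: the read_bumper() helper is a hypothetical placeholder for whatever GPIO read your installed bumper ends up needing, and the /bumpers topic name just follows the suggestion above:

```python
#!/usr/bin/env python3
# Sketch of a bumpers publisher node using the stock std_msgs/Bool type
# instead of a custom message. NOTE: read_bumper() is a hypothetical
# placeholder for the actual bumper hardware read.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Bool


def read_bumper() -> bool:
    """Hypothetical hardware read - replace with the real bumper GPIO check."""
    return False


class BumpersPublisher(Node):
    def __init__(self):
        super().__init__('bumpers_publisher')
        self.publisher_ = self.create_publisher(Bool, '/bumpers', 10)
        self.timer = self.create_timer(0.05, self.timer_callback)  # 20 Hz poll

    def timer_callback(self):
        msg = Bool()
        msg.data = read_bumper()
        self.publisher_.publish(msg)


def main(args=None):
    rclpy.init(args=args)
    node = BumpersPublisher()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```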
Personally, I think that is all you should set your sights on - follow my docs till you can drive Charlene with the keyboard and know what a node, topic, publisher and subscriber are and how to “see” those.
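“Seeing” them is just the stock ROS 2 command-line tools (substitute whatever topic names ros2 topic list actually shows - the echoed name here is an assumption):

```
ros2 node list                              # show running nodes
ros2 topic list                             # show active topics
ros2 topic echo /distance_sensor/distance   # watch messages (assumed topic name)
```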
I just doubt you want to invest the effort to learn how to create nodes, messages, pubs, subs, services, and actions, with synchronous and asynchronous callbacks, timers, and all the files needed to build. It seems like too much work to recreate what can be done in a single, simple Python program w/o the heavy ROS baggage.
I need to know how to “work with the tools” - and then I set myself a simple goal: be able to use the distance sensor and/or bumper to avoid obstructions.
Since the “toy” didn’t have sensors, all it could do was change direction when it hit something. Charlene has sensors and can use them to avoid obstacles before hitting them.
My first steps are going to be:
Install a bumper(s) on Charlene.
Do the “object avoidance” program in GoPiGo O/S to verify functionality and reasonableness (a minimal sketch follows this list).
Learn about ROS and creating nodes (or whatever they’re called).
Try to implement the object avoidance program in ROS.
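For the GoPiGo O/S “object avoidance” step above, a minimal sketch in plain Python with the easygopigo3 library might look like the following. The 250 mm threshold and 110-degree turn are arbitrary starting guesses, not tested values:

```python
#!/usr/bin/env python3
# Minimal "bump-and-turn" obstacle avoidance sketch in GoPiGo O/S,
# using the easygopigo3 library. The threshold and turn angle are
# arbitrary starting values - tune them on the actual robot.
import time
from easygopigo3 import EasyGoPiGo3

egpg = EasyGoPiGo3()
distance_sensor = egpg.init_distance_sensor()

TOO_CLOSE_MM = 250   # turn away when an obstacle is closer than this
TURN_DEGREES = 110   # deliberately not 180, so the robot wanders

try:
    egpg.forward()
    while True:
        if distance_sensor.read_mm() < TOO_CLOSE_MM:
            egpg.stop()
            egpg.turn_degrees(TURN_DEGREES)
            egpg.forward()
        time.sleep(0.05)  # ~20 Hz polling
except KeyboardInterrupt:
    egpg.stop()
```

Turning 110 degrees instead of 180 keeps the path from degenerating into a back-and-forth line, which mimics the wandering of the old toys.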
Ultimately I want to implement a “landmark” based navigation model.
I have been modeling the way I navigate, and 20-digit odometry isn’t necessary. I primarily navigate based on visual cues and landmarks. (The doctor’s office in building 3 is in that direction, and/or the center of the sidewalk is here.)
What I think I will try first are “artificial” landmarks (i.e., simplified QR codes, with different ones assigned to different landmarks and/or places).
For example (a Python sketch of this loop follows the outline):
Goal: Go to the girls room.
Where am I?
Look for a location marker.
If location = “goal”, stop (I am already there) and announce the location.
If location != “goal”, identify the current location.
Look up the path from “location” to “goal”.
Look for the “next” landmark in the path (i.e., find a visible landmark).
Is that landmark part of the path?
If not, look for another landmark.
If it is, set that landmark as the “next” landmark and proceed toward it.
On arrival, pass by/through the landmark and look for the next landmark.
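Translated into code, the loop might look something like this sketch. Every function name and the PATHS table here are hypothetical placeholders for behaviors that don’t exist yet:

```python
# Sketch of the landmark-navigation loop outlined above. Every name here
# is a hypothetical placeholder - none of this exists yet on Charlene.

# Hand-built route table: (current_location, goal) -> ordered landmark ids.
PATHS = {
    ("living_room", "girls_room"): ["hall_door", "hall_midpoint", "girls_room"],
}


def find_visible_landmarks() -> list[str]:
    """Placeholder: return ids of landmarks the camera can currently see."""
    return ["hall_door", "hall_midpoint", "girls_room"]  # pretend all visible


def search_for_landmarks() -> None:
    """Placeholder: e.g., rotate in place a bit before re-scanning."""


def drive_toward(landmark: str) -> None:
    """Placeholder: steer toward the landmark, stopping on arrival."""


def pass_through(landmark: str) -> None:
    """Placeholder: clear the doorway/marker so the next one is findable."""


def announce(text: str) -> None:
    print(text)


def navigate(location: str, goal: str) -> None:
    if location == goal:
        announce(f"Already at {goal}")
        return
    for next_landmark in PATHS[(location, goal)]:
        # Keep scanning until the landmark we need is actually visible.
        while next_landmark not in find_visible_landmarks():
            search_for_landmarks()
        drive_toward(next_landmark)
        pass_through(next_landmark)
    announce(f"Arrived at {goal}")
```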
Note that “locations” can also be landmarks, but a location indicates that you’ve reached a specific place that’s a potential goal, instead of just being somewhere (at a particular doorway). This might end up being a distinction in search of a difference, but I think not. The robot might be IN the living room but able to see several doors (landmarks). By keeping landmarks distinct from locations (potential goals), the robot won’t be confused by seeing more than one potential “location” at a time.
This implies a rule that there must never be another location visible when at a particular location.
Note that I am placing constraints on the robot’s logic to make determining its location and navigation easier.
I do not believe that, at my current state of knowledge and expertise, I can create a completely unbounded robot environment. (i.e., go outside, travel to the store 1 km away across a major highway, get a carton of milk, and return home - without getting smashed or getting lost.)
Very possibly true, but ROS is “another tool in the toolbox” that can be used to do things and different tools do different things more easily than other tools.
Example:
Program “X” can be done in both Bloxter and Python, so why learn Python?
Though program “X” can be done in both languages, the browser-controlled, joystick-controlled robot can’t be done in Bloxter. Knowing Python (and JavaScript!), along with nginx, certificate management, and browser programming, is absolutely necessary for that project and completely impossible in Bloxter.
Even though program “Y” can be done in Bloxter, Python, and ROS, learning about ROS may provide opportunities to do things with the robot that other languages can only do with greater difficulty, or not at all.
Maybe ROS will be a “beautiful unicorn”, but I won’t know until I get there.
Are you telling me that messing with ROS isn’t potentially worthwhile and that I should try to do it in pure Python? Within GoPiGo-3 O/S??
Given that the correct libraries are available, I am sure I could do this in a multitude of languages - BASIC, FORTRAN, C/C++, Python, Go, APL, Lisp, etc. - but the question isn’t necessarily which language to use; it’s more one of “which tool is the best tool for the job?” (i.e., it’s easier to remove the engine from a car with an engine hoist than with two guys and a couple of 2x4’s.)
Exactly what I’ve been saying: “experience ROS” by following the yellow brick road I paved to demonstrate what ROS is (a set of asynchronous nodes that publish and subscribe to topics) in the context of a ros2 gopigo3 node and a teleop keyboard node. No more, no less.
ROS complicates simple stuff to provide hooks for users to write really, really complicated stuff.
Where I thought I could get navigation, obstacle avoidance, and adaptive path planning, I thought it would “just work”. It turns out the software does not learn from its failures and must be hand-tuned for my home. If I have to teach the robot a solution specific to my home using complex software I didn’t write, I would prefer to teach a simpler approach with software I can understand.
But the ROS experience showed me that I was dreaming in thinking that getting a robot to safely navigate a complex home environment was a “one SMOP” (simple matter of programming) problem - especially without a floor-to-top-of-robot, 360-degree bumper like my RugWarriorPro robot had (and our bodies have).
Let me take a crack at it using a simplified navigation paradigm and let’s see where it takes me.
My thought is that you were trying to do too much all at once and you “hit the wall”.
I understand that no non-trivial navigation paradigm is going to be easy, but I think that with some carefully thought-out simplifications, it should be doable.
One thought is to try that simplification using something other than ROS, taking the ROS complexities out of the equation.
P.S.
What was the name of those big QR-code things you tried with Carl?
“Let me”: You don’t need my permission to take a crack at it.
“Let me”: I will not let you drag me along on your adventure. Sorry, but being real here.
Seven years of carefully thought-out simplifications is enough to know that simplifications are doable, but my dream is not. Not with the current hardware, not with the current software, not with the current “team”.
ROS was an attempt to expand my “team”. It worked to enlighten me about the level of the best current technology relative to my goal.
ArUco tag detection is part of OpenCV. I spent six months doing a hands-on course in computer vision and two years investigating CV with the PiCamera on various RPis, including a subsumption-architecture Carl with ArUco tag navigation. Conclusion - “buy, don’t make”: an Oak-D camera for computer vision. (Two Oak-D cameras going up for sale soon, btw.)
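For reference, basic marker detection is only a few lines with opencv-contrib-python. This sketch assumes the 4.7+ aruco API; the dictionary choice and the test image name are arbitrary:

```python
import cv2

# Sketch: detect ArUco markers in one frame (opencv-contrib-python 4.7+,
# where the aruco module switched to the ArucoDetector class).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("doorway.jpg")   # hypothetical test image
corners, ids, rejected = detector.detectMarkers(frame)
if ids is not None:
    print("Visible landmark ids:", ids.flatten().tolist())
else:
    print("No landmarks visible")
```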