I got kinda bummed at my display project when I discovered that I had damaged some of the GPIO pins on Charlie’s controller.
I was also kinda bummed by @cyclicalobsessive’s burnout with ROS. (He might have truly bitten off more than he could chew - time will tell.)
So!
I installed his pre-configured ROS2 image on Charlene and got it running, albeit at a basic level, intending to do some research into ROS and see where it takes me.
My current goals:
Get ROS “up and running” and be able to demonstrate that ROS is functioning and able to do things.
Be able to control basic robot activity and read sensors.
Try more advanced control, such as an “object avoidance” function (sensors, bumpers, etc.).
Try to duplicate my “Joystick Controlled Robot” functionality in ROS.
Stretch Goal:
Implement a “landmark” based navigation system where once the robot has been somewhere, it can return there and come back on its own.
My goal with this is to program a basic “object avoidance” behavior like the metal toys from back when I was a kid - it would go in a particular direction until it hit something, then it would change direction and run that way until it bumped into something else.
If I can do object recognition, I might try a “wander around and identify things” type project where it would tell me what it sees.
I didn’t write a bumpers node, but you can use the ros2ws/src/ros2_gopigo3_node/ros2_gopigo3_node/distance_sensor.py node (start it with start_distance_sensor_node.sh) and distance_sensor_subscriber.py (ros2 run ros2_gopigo3_node distance_sensor_subscriber) as templates.
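For reference, a subscriber patterned on that template is only a short rclpy script. Here is a minimal sketch - the topic name and the sensor_msgs/Range message type are assumptions to verify against the running node with ros2 topic list:

```python
# Minimal distance-sensor subscriber sketch, patterned after
# distance_sensor_subscriber.py.  The topic name and message type
# below are assumptions - verify with `ros2 topic list` and
# `ros2 topic info <topic>`.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Range


class DistanceSubscriber(Node):
    def __init__(self):
        super().__init__('distance_subscriber')
        self.subscription = self.create_subscription(
            Range,                          # message type (assumed)
            '/distance_sensor/distance',    # topic name (assumed)
            self.listener_callback,
            10)                             # QoS history depth

    def listener_callback(self, msg):
        # Range.range is in meters, per the sensor_msgs convention
        self.get_logger().info(f'Distance: {msg.range:.3f} m')


def main(args=None):
    rclpy.init(args=args)
    node = DistanceSubscriber()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```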
Make sure you finish the yellow brick road for all the stuff I did put in the image (the parts applicable to a GoPiGo3 without lidar)! Then do the official introduction to programming ROS 2 tutorials to learn about writing Python nodes. To create a bumpers node, you “should” create a bumpers message type for a /bumpers topic to publish - that is getting pretty advanced, but it is covered in the intro to programming ROS 2.
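To make that concrete, here is a rough sketch of what such a bumpers node might look like. Everything in it - the gopigo3_interfaces package name, the Bumpers.msg layout, and the switch-reading helpers - is made up for illustration; building the actual message package is what the interface tutorials cover:

```python
# Hypothetical bumpers publisher sketch.  Assumes a custom interface
# package (say, gopigo3_interfaces) defining Bumpers.msg as:
#
#   std_msgs/Header header
#   bool left
#   bool right
#
# All names here are illustrative, not tested code.
import rclpy
from rclpy.node import Node
from gopigo3_interfaces.msg import Bumpers  # hypothetical custom message


class BumpersNode(Node):
    def __init__(self):
        super().__init__('bumpers')
        self.publisher_ = self.create_publisher(Bumpers, '/bumpers', 10)
        # Poll the bumper switches at 20 Hz
        self.timer = self.create_timer(0.05, self.poll_bumpers)

    def poll_bumpers(self):
        msg = Bumpers()
        msg.header.stamp = self.get_clock().now().to_msg()
        # These helpers stand in for however the physical bumpers
        # actually get wired and read.
        msg.left = self.read_left_switch()
        msg.right = self.read_right_switch()
        self.publisher_.publish(msg)

    def read_left_switch(self) -> bool:
        return False  # placeholder

    def read_right_switch(self) -> bool:
        return False  # placeholder


def main(args=None):
    rclpy.init(args=args)
    rclpy.spin(BumpersNode())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```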
Personally, I think that is all you should set your sights on - follow my docs till you can drive Charlene with the keyboard and know what a node, topic, publisher and subscriber are and how to “see” those.
I just doubt you want to invest the effort to learn how to create nodes, messages, pubs, subs, services, and actions, with synchronous and asynchronous callbacks, timers, and all the files needed to build. It seems like too much work to recreate what can be done in a single, simple Python program w/o the heavy ROS baggage.
I need to know how to “work with the tools” - so I set myself a simple goal: use the distance sensor and/or bumper to avoid obstructions.
Since the “toy” didn’t have sensors, all it could do was change direction when it hit something. Charlene has sensors and can use them to avoid obstacles before hitting them.
My first steps are going to be:
Install a bumper(s) on Charlene.
Do the “object avoidance” program in GoPiGo OS to verify functionality and reasonableness. (See the sketch after this list.)
Learn about ROS and creating nodes (or whatever they’re called).
Try to implement the object avoidance program in ROS.
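As a rough sketch of step 2, using the easygopigo3 Python API - the 300 mm threshold and the 110° turn are arbitrary choices, and the bumper port (“AD1”) is an assumption about how the bumper will end up wired:

```python
# "Toy robot" object avoidance sketch for GoPiGo OS, using the
# easygopigo3 library.  The threshold and turn angle are arbitrary;
# the bumper port ("AD1") is an assumption about the wiring.
from easygopigo3 import EasyGoPiGo3

egpg = EasyGoPiGo3()
distance = egpg.init_distance_sensor()
bumper = egpg.init_button_sensor("AD1")  # bumper wired as a button (assumed)

TOO_CLOSE_MM = 300  # start turning before contact

try:
    egpg.forward()
    while True:
        if bumper.read() == 1 or distance.read_mm() < TOO_CLOSE_MM:
            egpg.stop()
            egpg.turn_degrees(110)  # pick a new heading, like the toy did
            egpg.forward()
except KeyboardInterrupt:
    egpg.stop()
```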
Ultimately I want to implement a “landmark” based navigation model.
I have been modeling the way I navigate, and 20-digit odometry isn’t necessary. I primarily navigate based on visual cues and landmarks. (The doctor’s office in building 3 is that direction, and/or the center of the sidewalk is here.)
What I think I will try first are “artificial” landmarks (e.g., simplified QR codes, with different codes assigned to different landmarks and/or places).
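OpenCV’s built-in QR detector should be enough to test that idea. A minimal sketch (the camera index and the convention of encoding the landmark’s name as the QR payload are my assumptions):

```python
# Landmark-spotting sketch using OpenCV's built-in QR detector.
# The camera index (0) and the idea of encoding each landmark's name
# directly in the QR payload are assumptions for illustration.
import cv2

detector = cv2.QRCodeDetector()
cap = cv2.VideoCapture(0)

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        payload, points, _ = detector.detectAndDecode(frame)
        if payload:  # empty string means no QR code in view
            print(f"Landmark in view: {payload}")
finally:
    cap.release()
```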
For example:
Goal: Go to the girls room.
Where am I?
Look for location marker.[1]
If location = “goal” then stop as I am already there and announce location.
If location != “goal” identify current location.
Look up path from “location” to “goal”.
Look for “next” landmark in path.
(i.e. Find a visible landmark.)
Is landmark part of path?
If not, look for another landmark.
If it is, set it as the “next” landmark and proceed toward it.
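In Python, that loop might look something like the sketch below. The path table and the helper functions are placeholders for the real map, vision, and motion code:

```python
# Sketch of the landmark-navigation loop above.  The path table and the
# sensor/motion helpers are placeholder stand-ins for the real code.
import itertools

# (current location, goal) -> ordered list of landmarks to follow
PATHS = {
    ("living_room", "girls_room"): ["hall_door", "hall_middle", "girls_room"],
}

# Placeholder "camera": cycles through markers as if the robot were scanning.
_scan = itertools.cycle(["poster", "hall_door", "hall_middle", "girls_room"])

def find_visible_landmark():
    """Placeholder: find a visible landmark marker and return its name."""
    return next(_scan)

def drive_toward(landmark):
    """Placeholder: steer toward the given landmark until it is reached."""
    print(f"Proceeding toward {landmark}")

def navigate(location, goal):
    if location == goal:                        # already there
        print(f"Already at {goal} - stopping.")
        return
    path = PATHS[(location, goal)]              # look up path from location to goal
    for waypoint in path:                       # "next" landmark in the path
        landmark = find_visible_landmark()
        while landmark != waypoint:             # not part of the path?
            landmark = find_visible_landmark()  # look for another landmark
        drive_toward(waypoint)                  # set as "next" and proceed
    print(f"Arrived at {goal}.")

navigate("living_room", "girls_room")
```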
Note that “locations” can also be landmarks, but a location indicates that you’ve reached a specific place that’s a potential goal, instead of just being somewhere (at a particular doorway, say). This might end up being a distinction looking for a difference, but I think not. The robot might be IN the living room but able to see several doors (landmarks). With landmarks kept distinct from locations (potential goals), the robot won’t be confused by seeing more than one potential “location” at a time.
This implies a rule: there must never be another location visible when the robot is at a particular location.
Note that I am placing constraints on the robot’s logic to make determining its location and navigation easier.
I do not believe that, at my current state of knowledge and expertise, I can create a completely unbounded robot environment. (i.e. Go outside, go to the store 1 km away across a major highway, get a carton of milk, and return home again - without getting smashed or getting lost.)