To me this is one of the main obstacles to using ROS - there are a few tutorials and a few courses, but after that you are supposed to know what to search the ROS index for, or spend a few weeks reading the overview of every node and tool and guessing how each might help you.
Take wall following, for example. I think my only complaint about the GoPiGo3 example suite (which really is a great suite, with examples ranging from simple to mathematically complex) is the lack of a wall follower.
I saw someone on Twitter announcing they had achieved wall following with their ROS robot, and guessed I shouldn’t expect to find a black-box wall-follower node.
I found a wall-following lesson on theconstructsim, but nothing saying “ROS will do that for you too”.
This is such a basic robot task that I thought ROS was mature enough by now to offer a configurable wall recognizer, wall follower, corner recognizer, and opening recognizer, each using whatever sensors are available - a servo-mounted distance sensor, and/or LIDAR, and/or a camera, and perhaps even a bumper.
I admit I have not spent time perusing the ROS index yet, since learning to “Think in ROS” (e.g., that the robot should not publish joint states and “robot states” itself - just let the joint and robot state publishers do that for me) has already been quite an eye-opener. I wanted ROS to do things for me; I just didn’t know what I needed it to do for me.
Then again, I forget that the real purpose of all these things is to enable business or research, not to build a hobby robot for the home. I really shouldn’t expect so much.
It does seem to be hit or miss regarding what nodes/packages already exist. And then getting them to run.
I guess professional programmers and professional engineers have the background and time to do that. But I know from talking with engineers who use ROS for their day jobs that even they don’t know all the nooks and crannies.
(Kind of a “karma” thing) You now have first-hand experience of the angst I experienced when starting with the GoPiGo.
IMHO any computing environment that requires a Ph.D. in rocket science to make the thing do simple things is way too complicated and violates the principle of least astonishment.
If the environment is complex by necessity, (and I disagree with the definition of “necessity” often imposed), there should be clear instructions and tutorials to help a new user understand the system’s capabilities and features.
Again IMHO, I disagree with the assessment of the GoPiGo exercises. It’s not reasonable to expect a relatively inexpensive robotics environment to come with advanced sensors like a LIDAR (or multiple distance sensors) to enable wall following.
Given the available sensors, I think DI did an excellent job.
I believe what you are experiencing is a form of social exclusion via jargon, where complexity is added for its own sake. Perhaps unknowingly, but true just the same.
I admit that I am not an expert, but I am not yet convinced that ROS isn’t “over engineered”.
I believe any system should help the beginner get up to speed and also get out of the expert’s way to let them get work done.
Jim, you are really missing my point about wall following.
In fact, I can even envision a wall-following algorithm for the GoPiGo3 with no additional sensors. The GoPiGo3 has a stall sensor and encoders. By driving until one or both wheels either slow or stall, the GoPiGo3 could detect an obstacle. With a series of short backward, forward, and turning motions, it could characterize the shape of the obstacle and, if flat, the angle of the flat; then back up a set distance, turn parallel to the flat, and drive a distance. Then turn to face the flat again and repeat.
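The core math of that "probe it twice, then turn parallel" idea fits in a few lines of plain Python. Everything below is a hypothetical sketch - the stall check, the "drive straight until contact" stand-in, and the 30 cm sidestep are my own illustrative stand-ins, not part of any GoPiGo3 API - but it shows that two contact points are enough to recover the angle of a flat obstacle:

```python
import math

def is_stalled(encoder_delta, expected_delta, ratio=0.5):
    # Hypothetical stall check: the wheel turned far less than commanded,
    # so assume we are pushing against an obstacle.
    return encoder_delta < ratio * expected_delta

def contact_y(x, wall_angle_rad, wall_intercept_m):
    # Stand-in for "drive straight ahead from lateral position x until the
    # stall check trips": returns how far forward the bot gets before the
    # (simulated) flat wall stops it.
    return wall_intercept_m + math.tan(wall_angle_rad) * x

def estimate_flat_angle(p1, p2):
    # Two recorded contact points characterize the flat: this is the
    # angle to turn through so the bot drives parallel to it.
    return math.atan2(p2[1] - p1[1], p2[0] - p1[0])

# Probe the simulated wall twice: hit it, back up, sidestep 30 cm, hit again.
wall = math.radians(15.0)                # true angle of the flat (unknown to the bot)
p1 = (0.0, contact_y(0.0, wall, 1.0))    # first contact point
p2 = (0.3, contact_y(0.3, wall, 1.0))    # second contact after the sidestep
print(round(math.degrees(estimate_flat_angle(p1, p2)), 1))  # -> 15.0
```

On a real GoPiGo3 the `encoder_delta` would come from reading the wheel encoders before and after a short commanded drive; the geometry is the same either way.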
Another case would use a single fixed distance sensor, and yet another the distance sensor mounted on the servo kit.
Anyway - if DI saw fit to include the fancy-math obstacle avoider that can actually navigate a maze, I think a set of wall-following examples should be on the plate.
Seeing a robot follow a wall for a distance is an example of a behavior that evokes emotions in the viewer. Imagine how “cute and smart” folks would think a GoPiGo3 was upon seeing the bot “guarding a section of wall” - walking a foot along the wall to the left, turning 180, walking a foot along the wall to the right, turning 90 to face away from the wall for a time, then walking guard again. This example could bond students to the bot, making them want to learn about the angles, distances, durations, and logic needed to make a robot perform useful (to a robot) functions.
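For what it’s worth, that “guarding a section of wall” routine reads almost directly as a motion script. A minimal sketch, with method names borrowed from the easygopigo3 API (`drive_cm`, `turn_degrees`) - the foot-long patrol legs and the turn angles come from the description above, while the pause length and turn signs are my own guesses:

```python
def guard_cycle(patrol_cm=30):
    # One full "guarding a section of wall" cycle, as a list of
    # (method_name, argument) steps a driver loop could dispatch to the bot.
    # Turn signs are illustrative; flip them for the opposite patrol direction.
    return [
        ("drive_cm", patrol_cm),   # walk about a foot along the wall to the left
        ("turn_degrees", 180),     # about-face
        ("drive_cm", patrol_cm),   # walk back along the wall to the right
        ("turn_degrees", 90),      # face away from the wall
        ("sleep_s", 2),            # stand watch for a moment (duration is a guess)
        ("turn_degrees", -90),     # face back along the wall and repeat
    ]

for method, arg in guard_cycle():
    print(method, arg)   # a real driver would call getattr(bot, method)(arg)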
Every robot I have either built or bought before GoPiGo3 had a wall following behavior demo.
I don’t think that would be too hard to write with the lidar - just adjust the steering based on the reading at 90 degrees to the left or right. It could be implemented as a PID controller to smooth out the steering, using a rolling average of the readings so one aberrant reading doesn’t throw the whole thing off.
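That lidar-plus-PID idea can be sketched without any robot at all. Below is a minimal PD controller on the 90-degree range reading, with the rolling average mentioned above, checked against a toy kinematic model; the gains, setpoint, speed, and update rate are illustrative assumptions, not values from any released ROS package:

```python
import math
from collections import deque

class WallFollowPD:
    """PD controller on the 90-degree side-range reading.

    The rolling average means a single aberrant range reading only
    shifts the smoothed value by 1/window of the spike.
    """

    def __init__(self, setpoint_m=0.5, kp=1.0, kd=3.0, window=5):
        self.setpoint = setpoint_m       # desired distance to the wall
        self.kp, self.kd = kp, kd        # illustrative gains, not tuned values
        self.readings = deque(maxlen=window)  # rolling-average buffer
        self.prev_error = None

    def update(self, side_range_m, dt=0.1):
        """Return a turn rate (rad/s); positive steers toward the wall."""
        self.readings.append(side_range_m)
        smoothed = sum(self.readings) / len(self.readings)
        error = smoothed - self.setpoint          # positive = too far away
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.kd * deriv  # kd term damps the approach

# Toy kinematic check: start 0.8 m from a straight wall, driving parallel.
v, dt = 0.3, 0.1            # forward speed (m/s), control period (s)
dist, heading = 0.8, 0.0    # heading is the angle toward the wall (rad)
pid = WallFollowPD(setpoint_m=0.5)
for _ in range(300):
    heading += pid.update(dist, dt) * dt
    dist -= math.sin(heading) * v * dt
print(round(dist, 2))       # settles near the 0.5 m setpoint
```

In a ROS node the `side_range_m` argument would typically come from one element of a `sensor_msgs/LaserScan` `ranges` array, and the returned rate would go into the `angular.z` field of a `geometry_msgs/Twist` published on `cmd_vel`.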
You totally lost me with what you disagree with in this statement.
ROS has a tremendous community-maintained set of ROS neophyte tutorials, and the sold-as-ROS platforms all have extensive instruction manuals, examples, and maintained updates for their targeted OS.
The GoPiGo3 has all that for three different OSes (GoPiGo OS, Raspbian For Robots, and raw PiOS), an extensive educator materials/support system, and even several community tutorials and a book on using the GoPiGo3 to learn ROS.
Well, I disagree that “great” is any less than “excellent”, and I visit the example, project, and driver code at least once a week. I can honestly state that familiarity has not bred any contempt.