General update on my Robo voyage for anyone interested

It’s been a while since I’ve posted in this community so I thought I might give an update for anyone interested (I have been quite busy…):

In addition to Charlie (GitHub - jfrancis71/ros2_brickpi3: ROS2 packages to drive BrickPi3 (a Raspberry Pi to Lego EV3 hardware interface)), I have welcomed three types of holonomic robots (GitHub - jfrancis71/ros2_holonomic_lego: Demos of ROS2 enabled Lego EV3 holonomic robots (on a Raspberry Pi with BrickPi3 interface)) and three other mobile robots (GitHub - jfrancis71/ros2_mobile_lego: Collection of Lego mobile robots running ROS2 on Raspberry Pi (using BrickPi3 hardware interface)). Alfie and Thomas are two current favourites in that last repo.

I started a project last year to do vision localisation using YOLO object detection. If you’ve seen some of my videos you may have noticed posters with a cat and a dog in the background. The idea was that by recognising where these are in the camera image (with their locations pre-specified) you can work out where you are in the environment. It somewhat worked. However, I found there were a number of tricky issues: determining the probability distribution for recognising a poster when it’s only half in frame, or quite far away, led to a number of imponderables. It was also quite sensitive to camera calibration and recognition errors. Overall I lost confidence that this would ever work well and have abandoned it.
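For anyone curious what the basic idea looks like in code, here is a minimal sketch (not the abandoned implementation, and all the numbers are hypothetical): if a poster’s corner coordinates are pre-specified in the map frame and a detector locates those corners in the camera image, OpenCV’s solvePnP recovers the camera pose, which is essentially the localisation step described above.

```python
# Minimal sketch of localisation from a pre-specified landmark (hypothetical numbers).
import cv2
import numpy as np

# Corners of one poster, pre-surveyed in the map frame (metres).
object_points = np.array([[2.0, 1.0, 0.5],
                          [2.4, 1.0, 0.5],
                          [2.4, 1.0, 0.2],
                          [2.0, 1.0, 0.2]], dtype=np.float64)

# Pixel coordinates of those same corners as reported by the detector.
image_points = np.array([[310, 180], [420, 185],
                         [415, 300], [305, 295]], dtype=np.float64)

# Camera intrinsics from calibration (fx, fy, cx, cy), hypothetical values.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)
    camera_position = (-R.T @ tvec).ravel()  # camera position in the map frame
    print("Estimated camera position:", camera_position)
```

The half-in-frame and far-away problems mentioned above show up here as missing or noisy image_points, which is exactly where the probabilistic treatment got hard.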

I currently have a project to do visual route following, i.e. you drive along a route (with the software recording snapshot images along the way) and the robot can then drive that route by itself. It looks for the best-matching stored image and then asks whether the live camera image is slightly to the left or to the right of (or spot on) that best match, and makes a small turn adjustment accordingly. It works quite well (at least tested on shortish routes). Mostly finished, but I’m looking at putting a nice video demo together.
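As a rough illustration of the matching step (a hedged sketch, not the actual code from the project; the file layout, image size, gain and deadband are all assumptions), something along these lines picks the best-matching snapshot and turns the horizontal offset into a small steering correction:

```python
# Sketch: visual route following as best-match lookup plus a left/right correction.
import glob
import cv2
import numpy as np

def to_gray32(img, size=(320, 240)):
    """Downscale and convert to float32 grayscale for matching."""
    g = cv2.cvtColor(cv2.resize(img, size), cv2.COLOR_BGR2GRAY)
    return np.float32(g) / 255.0

# Snapshots recorded while driving the route manually (hypothetical path).
snapshots = [to_gray32(cv2.imread(p)) for p in sorted(glob.glob("route/*.png"))]

def steering_correction(frame_bgr, gain=0.005, deadband=5.0):
    """Return a small angular-velocity correction (rad/s) toward the route."""
    frame = to_gray32(frame_bgr)
    # 1. Find the best-matching stored snapshot (normalised cross-correlation).
    scores = [cv2.matchTemplate(frame, s, cv2.TM_CCOEFF_NORMED)[0, 0] for s in snapshots]
    best = snapshots[int(np.argmax(scores))]
    # 2. Estimate the horizontal shift between the live frame and that snapshot.
    (dx, _dy), _response = cv2.phaseCorrelate(frame, best)
    # 3. Spot on, or turn slightly toward the snapshot view.
    if abs(dx) < deadband:
        return 0.0
    return gain * dx  # sign convention depends on the camera/robot setup
```

In a ROS2 node, the returned value would feed the angular component of the cmd_vel command.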

I bought a ROS2 robot earlier this year. It was not a great success: there were some software errors, and it wasn’t quite as open source as I’d understood. The good news is that it came with a 2D Lidar; so I pulled the robot apart (of course), and now all my Lego robots can be fitted with Lidar, which is super fun.

Prompted by my having Lidar, I am currently playing around with ROS2 Nav2; the SLAM Toolbox seems to be working quite well. The general Nav2 stack works less well (I don’t yet have good autonomous navigation), but I haven’t yet fully understood this stack; it’s early days.

Lastly, my current recommended installs are Mamba/Conda. I have been playing around with Docker as an alternative and would be interested in others’ experiences with this. While searching to solve problems I’ve come across cyclicalobsessive’s comments, which were most helpful. So I’m clearly not the only person thinking about this.

Apologies for length of post!

Julian.

3 Likes

I have seen many folks mentioning Conda, but have not yet investigated it.

As for Docker, I went into it pretty heavily before native Ubuntu was released for the Pi5. I did not have any Docker gurus guiding me, so my experience may actually be a total “outlier”.

Difficulties I experienced / thoughts on Docker:

  • Mapping hardware access across the Docker boundary was not well described in tutorials. Most tutorials were about running ROS 2 core packages, and often focused on simulated robots. I had difficulty figuring out what to do to map my joystick, my USB sound mic and speaker, the LIDAR, and the USB stereo depth camera. Some mapped easily in the launch configuration, but some needed group or permission changes which, for unknown reasons, could only be made after the container was launched.

  • Since I was constantly progressing, I was always having to rebuild the image to include more packages, or to include some new driver that had to be downloaded under the Docker build folder so it could be copied into the container. Eventually, I had a “basic ROS 2 Docker image” that I only needed to rebuild when I wanted to update the OS and ROS, and an “extend the basic image” image that I would build to try out various configuration changes or add packages. This sped up the process of building my images, but since each image was about 9GB, I was constantly having to delete images to make space for the new build. It was a real pain to maintain.

  • I tend to do a fair number of reboots when I’m debugging new stuff, especially debugging ROS startup issues like Nav2 components timing out rather than all coming alive as desired. Sometimes I lost track of which packages I needed to add, or of “quick and dirty ROS config file changes” (editing the config under /opt/ros because the package didn’t allow passing a config file on the command line… an early Nav2 issue). I would reboot, then spend a bunch of time debugging, only to realize I hadn’t applied the quick fix, or that some package wasn’t in the build and had to be added yet again.

  • I also did not succeed in getting multiple ROS 2 Docker containers to talk to each other. I don’t know what the issue was, but I took the easy way out and rebuilt my image to include all the required packages, instead of using a vendor container for the vendor’s sensor alongside my robot’s container.

  • Another inconvenience: I felt like I was always doing OS updates twice, once for the Raspberry Pi OS (the Pi5 didn’t have a ROS 2 supported native OS at the time), and then a total Docker image rebuild to update the Ubuntu inside the container. The need to either re-run the OS update inside the container after every boot, or rebuild the Docker image to capture the updates, was constantly weighing on my brain.

  • I feel like a “complete robot” vendor could choose to release a “complete robot Docker image” and simplify life for themselves and for users, but I actually prefer that they release an OS image I can burn to an SD card. The Turtlebot4 software architecture is pretty cool, with maintenance and diagnostics built into the plan. (Additionally, they figured out which SLAM and Nav2 parameters work well together; my efforts to go it alone were successful, but always fell short of the success I desired. Modifying the Turtlebot3 code for the GoPiGo3 gave me a quick success.)

3 Likes

Thanks for the update - you have been busy.
/K

3 Likes

I have been researching a similar type of autonomous navigation using “landmarks” (like ArUco tags) located in various places and a “map” that describes which landmarks to follow to get from “A” to “B”.

The idea is to simulate the logic people use when making a trip:

Goal: Travel to Micro Center in Cambridge.
Current location: My house in Worcester, MA.

  • Travel from “home” to I-290 south towards the Mass Pike.
  • Go east (towards Boston) on the Mass Pike.
  • Continue to exit 13 and exit.
  • Continue to Memorial Drive. (Turn right after crossing the bridge across the Charles River.)
  • Continue toward Micro Center on left.
  • At Micro Center, turn left and park.

Goal has been reached.

Note that no SLAM, LIDAR, or other fancy stuff is needed, as all you do is follow a series of easily recognized landmarks.

The big assumption is that the robot knows how to get to the next landmark from the current one. This can be done with a look-up table.
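A hedged sketch of what that look-up table might look like in Python (the landmark names and actions are just the Micro Center example above, not a real implementation):

```python
# Sketch of a landmark look-up table: (current landmark, goal) -> (next landmark, action).
ROUTE_TABLE = {
    ("home",        "micro_center"): ("i290_south",   "follow driveway to I-290 south"),
    ("i290_south",  "micro_center"): ("mass_pike_e",  "go east toward Boston on the Mass Pike"),
    ("mass_pike_e", "micro_center"): ("exit_13",      "continue to exit 13 and exit"),
    ("exit_13",     "micro_center"): ("memorial_dr",  "turn right after the bridge over the Charles"),
    ("memorial_dr", "micro_center"): ("micro_center", "continue until the store is on the left"),
}

def next_step(current, goal):
    """Return (next landmark, action), or None once the goal is reached."""
    if current == goal:
        return None
    return ROUTE_TABLE[(current, goal)]
```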

3 Likes

Apologies for the late reply; I’d assumed I’d get an email notification of others’ replies, but didn’t receive anything, so I thought my message had gone into the ether…

@cyclicalobsessive
Thank you for your thoughtful reply. I am trying out Docker as an alternative to mamba/conda. I now have everything working, as in I’ve got my desktop and robot talking to each other, and I can also run X Windows programs (like rviz2) from Docker on my desktop. My solution is a bit ‘hacky’ at the moment (faffing around with permissions), so I am not satisfied with it. But if I get a tidier solution I might put it up in a GitHub repo.

Since I last wrote I’ve got Nav2 working much better. My Lego wheel size in the differential drive configuration file was out by about 50%. I probably hadn’t noticed before because, if you’re just driving around with teleop, you might not notice it’s not quite right (unless you’re measuring everything precisely). Anyway, this error played havoc with the odometry used by AMCL, which is why it was working so poorly before.
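To illustrate why that mattered (a hedged back-of-the-envelope with made-up encoder and wheel numbers): in a differential-drive odometry model the reported travel distance scales linearly with the configured wheel radius, so a ~50% wheel-size error scales all the odometry AMCL sees by roughly the same factor.

```python
# Back-of-the-envelope: how a wrong wheel radius skews differential-drive odometry.
import math

TICKS_PER_REV = 360          # hypothetical encoder resolution
true_radius = 0.028          # metres (hypothetical Lego wheel)
configured_radius = 0.042    # ~50% too large

def distance_from_ticks(ticks, radius):
    """Wheel travel for a given encoder count and assumed wheel radius."""
    return ticks * 2.0 * math.pi * radius / TICKS_PER_REV

ticks = 3600                 # ten wheel revolutions
print(distance_from_ticks(ticks, true_radius))        # ~1.76 m actually travelled
print(distance_from_ticks(ticks, configured_radius))  # ~2.64 m reported to AMCL
```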

@jimrh: Fascinating, I’d be interested to see where it goes. I was less ambitious, just trying it out in a small room…

3 Likes

Of course you do need to do something for obstacle avoidance. Difficult to guarantee a clear path in the real world.
/K

2 Likes

I’ve got a link that might be useful for anyone looking into Docker. The desktop setup will presumably be of main interest, but the brickpi3 Docker setup might still be useful as an example of how to run on a robot.

My philosophy for the Docker setup was maximum simplicity (that still does what I want). I am sure others could think of more sophisticated setups, but I’ve gone for… keep it simple.

https://github.com/jfrancis71/ros2_brickpi3/blob/main/Docker.md

I can control the robot from my Dell desktop using my USB joystick (teleop-joy); the robot is the BrickPi3 (Raspberry Pi 3B+).

Hope this is of interest.

2 Likes

Wow, you solved the two-container issue! (The IPC flag.)

Persisting the ROS workspace doesn’t solve the issue that OS and ROS updates still require either an update/upgrade -y after every Docker launch, or rebuilding the Docker image; it was those rebuilds that drove me crazy, trying to keep old Docker images from filling up the SD card.

1 Like

Yes, Nav2 is so loaded with configurable params and plugins that it appears to take a “dissertation effort” to optimize it for a particular environment and set of use cases.

I found my home environment to be more complex than the default configuration of params and plugins could handle, and I was not able to understand the layered and parallel decision making well enough to get reliable navigation. Reliability became the elusive goal, and “localization and navigation” turned out not to be the “drop-in reuse” ROS 2 ability that I went to ROS to gain.

1 Like

Yes, my plan is that once I have an environment that I am happy with I’ll build that image and stick with it. It takes a while to build an image in any case (about half an hour), so this is not something I will want to do very often.

I’ll be spinning up containers basically all of the time; it takes about 1 second to spin one up. I’m using the --rm option, so my containers are automatically cleaned up as soon as I’ve finished with them.

For installing new packages, in theory it might be possible to build a new image by overlaying on top of the current ros2 image. I suspect I will just keep it simple: alter the Dockerfile and rebuild the image. I will certainly have at least tried out the install using the existing image, just to check it is really what I want. Just doing an apt install inside a container won’t persist the install, but it’s good enough to check it is something I want (and that it doesn’t break anything!)

I don’t have a lot of experience with this yet, but I’ve been playing around with it for a bit and it seems quite promising so far.

1 Like

That is what I did: a stable base image that I extended with an experimental Dockerfile. It made the Docker build go very fast, but it meant I always had two 9GB images on my 32GB SD card. When I needed to update the stable base image and the extended image, it meant blowing away both images (and a bunch of well-hidden build artifacts) to rebuild with updated OS and ROS 2 packages.

1 Like

Experimenting with Nav2 is probably the next thing I am going to look at. It is now working for me in a small environment: a smallish study, a hallway and the living room. There is more to my apartment than that… but there are stairs… well, everywhere…

I did build Thomas a little disabled ramp (only 3 steps) to get him into the kitchen, but he took a bit of a tumble… Humpty Dumpty was put back together; no real damage, apart from ego and a couple of broken Lego Technic pins (which are quite cheap). But I’ve abandoned this for the moment.

From memory, to get it working there were some issues over topic names (cmd_vel) and the base_link/base_footprint frames. Also, all my robots use TwistStamped messages on cmd_vel, so that required some changes. It’s a bit hacky at the moment, but I hope to put something out there when I’ve tidied it up.
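For anyone hitting the same TwistStamped mismatch, one possible workaround (a sketch, not what was actually done here; the topic names and frame_id are assumptions you would remap to suit your setup) is a tiny relay node that restamps the Twist messages:

```python
# Sketch: relay geometry_msgs/Twist to geometry_msgs/TwistStamped for robots that
# expect stamped velocity commands on cmd_vel.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist, TwistStamped

class TwistToStamped(Node):
    def __init__(self):
        super().__init__('twist_to_stamped')
        self.pub = self.create_publisher(TwistStamped, 'cmd_vel', 10)
        self.sub = self.create_subscription(Twist, 'cmd_vel_unstamped', self.relay, 10)

    def relay(self, msg: Twist):
        out = TwistStamped()
        out.header.stamp = self.get_clock().now().to_msg()
        out.header.frame_id = 'base_link'  # assumption; match your robot's frame
        out.twist = msg
        self.pub.publish(out)

def main():
    rclpy.init()
    rclpy.spin(TwistToStamped())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```

You would then remap the controller’s unstamped output onto cmd_vel_unstamped so the relay sits between the two.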

2 Likes

Output from my Pi and desktop respectively:

(base) julian@brickpi3:~$ docker image ls
REPOSITORY      TAG              IMAGE ID       CREATED         SIZE
ros2_brickpi3   latest           297802c9dee2   5 hours ago     2.82GB
ros             jazzy-ros-base   b913ca6da461   17 months ago   896MB

(base) julian@julian-Precision-Tower-5810:~/docker$ docker image ls
REPOSITORY      TAG              IMAGE ID       CREATED         SIZE
ros2            latest           ac0c3c7f4fa7   5 hours ago     4.41GB
ros             jazzy-ros-base   94767efcadc9   17 months ago   882MB

I tend to be quite minimalist with my installs, so maybe that accounts for the difference? Also, I have a 64GB SD card on the Pi, which helps (they’re not that expensive).

I plan on being quite minimal on the Pi: really just the controllers, a camera and a lidar, and that’s probably about it. Anything more complicated I’ll do on the desktop (that’s where I’m running the Nav2 stack).

2 Likes

That really is the proper architecture for experimenting with ROS. Since I began building robots in 1977, I have obsessively committed to finding the maximum functionality possible with “no off-board processing”. This is a very constraining philosophy that has limited my robots (but has allowed me to find those limits and to feel my robots are able to utilize all their resources).

I have come to understand that the world of robots is a lot more complex than one person can program, or one processor can handle, to “understand the environment and act with intention”.

I was excited to switch from grunt OpenCV image processing to TensorFlow image recognition, and then to move the whole of the image processing into the OAK-D vision sensor, along with a progression of more processor-efficient speech recognition engines, thinking this would leave my robots the resources to “sense, think, act”, but I continually hit the processing limits before reaching an “intelligent robot”.

I also keep hitting the power limits of my mobile platforms. Carl operates 6-8 hours before needing a 3-hour recharge. The Create3/Pi5 ROS 2 based WaLI operates less than 2 hours before needing a 2-3 hour recharge, when you factor in the navigation ability that Carl does not have. Actually Carl (without ROS) has much more ability with his Pi3 than WaLI has with his Pi5.

LLMs will run on a Pi5, but most folks don’t have the patience to wait for the results. Robots need lots of parallel thinking and control units, a need the RPi architecture is simply not designed to meet.

Today’s “distributed processing” architectures with function-specialized processors stretch my concept of what a “robot” is, such that I feel most robots are actually remote-controlled sensor platforms.

As a single programmer, I cannot keep up with the skills needed to pull it all together. I love seeing your progress, and wish you great success.

1 Like