General update on my Robo voyage for anyone interested

It’s been a while since I’ve posted in this community so I thought I might give an update for anyone interested (I have been quite busy…):

In addition to Charlie (GitHub - jfrancis71/ros2_brickpi3: ROS2 packages to drive BrickPi3 (a Raspberry Pi to Lego EV3 hardware interface)), I have welcomed three holonomic robots (GitHub - jfrancis71/ros2_holonomic_lego: Demos of ROS2 enabled Lego EV3 holonomic robots (on a Raspberry Pi with BrickPi3 interface)) and three other mobile robots (GitHub - jfrancis71/ros2_mobile_lego: Collection of Lego mobile robots running ROS2 on Raspberry Pi (using BrickPi3 hardware interface)). Alfie and Thomas are two current favourites in that last repo.

I started a project last year to do vision localisation using YOLO object detection. If you’ve seen some of my videos, you may have noticed posters of a cat and a dog in the background. The idea was that by recognising where these appear in the camera image (their real-world locations being pre-specified), you can work out where you are in the environment. It somewhat worked. However, I ran into a number of tricky issues: determining the probability distribution for a recognition when the poster is only half in frame, or quite far away, led to a number of imponderables, and the approach was quite sensitive to camera calibration and recognition errors. Overall I lost confidence that this would ever work well and have abandoned it.
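For anyone curious, the core of the idea was roughly as in the sketch below. It’s a toy illustration rather than my actual code: the landmark positions and poses are made up, I’ve added a hypothetical third landmark (bearings from only two don’t pin down a full pose), and I’ve skipped the step that converts a YOLO bounding box into a bearing.

```python
# Toy sketch of bearing-only localisation from known landmarks.
import numpy as np
from scipy.optimize import least_squares

# Pre-specified landmark positions in the map frame (metres).
LANDMARKS = np.array([[0.0, 2.0],    # cat poster
                      [3.0, 2.0],    # dog poster
                      [3.0, 0.0]])   # hypothetical third landmark

def predict_bearings(pose):
    """Bearings (rad) from pose (x, y, yaw) to each landmark, in the robot frame."""
    x, y, yaw = pose
    world = np.arctan2(LANDMARKS[:, 1] - y, LANDMARKS[:, 0] - x)
    return world - yaw

def residuals(pose, measured):
    """Predicted minus measured bearings, wrapped to [-pi, pi]."""
    d = predict_bearings(pose) - measured
    return np.arctan2(np.sin(d), np.cos(d))

# Simulate detections from a known pose; in practice each bearing comes from
# the pixel column of a YOLO bounding box via the camera's focal length.
measured = predict_bearings(np.array([1.0, 0.5, 0.2]))

# Recover the pose by nonlinear least squares from a rough initial guess.
result = least_squares(residuals, x0=[0.0, 0.0, 0.0], args=(measured,))
print("Estimated pose (x, y, yaw):", result.x)
```

Even in this clean form you can see where it gets fragile: everything rests on the measured bearings, so a half-visible poster or a small calibration error feeds straight into the pose estimate.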

Currently I have a project to do visual route following: you drive the robot along a route while the software records snapshot images along the way, and the robot can then drive that route by itself. It looks for the recorded snapshot that best matches the current camera image, asks whether the camera image is slightly to the left or to the right of (or spot on) the best match, and makes a small turn adjustment accordingly. It works quite well, at least on shortish routes. It’s mostly finished, but I’m looking at putting a nice video demo together.
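The matching-and-steering step looks roughly like the sketch below (hypothetical helper names, not my actual code; images are same-size grayscale numpy arrays, and the sign convention would need checking against the real camera):

```python
# Toy sketch of the route-following idea: match the live frame against the
# recorded snapshots, then decide left/right from a small horizontal shift.
import numpy as np

def difference(a, b):
    """Mean absolute pixel difference between two same-size images."""
    return np.mean(np.abs(a.astype(float) - b.astype(float)))

def best_snapshot(frame, snapshots):
    """Index of the recorded snapshot that best matches the live frame."""
    return min(range(len(snapshots)), key=lambda i: difference(frame, snapshots[i]))

def steering_correction(frame, snapshot, shift=10, gain=0.1):
    """Small yaw correction (rad/s, positive = turn left) from comparing the
    live frame against the snapshot shifted a few pixels each way."""
    straight = difference(frame, snapshot)
    # Scene shifted right in the live frame <=> robot rotated slightly left.
    shifted_right = difference(frame[:, shift:], snapshot[:, :-shift])
    shifted_left = difference(frame[:, :-shift], snapshot[:, shift:])
    best = min(straight, shifted_left, shifted_right)
    if best == straight:
        return 0.0  # spot on: carry straight on
    return -gain if best == shifted_right else gain
```

In a ROS2 node this correction would naturally become the angular component of a cmd_vel Twist message.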

I bought a ROS2 robot earlier this year. It was not a great success: there were some software errors, and it wasn’t quite as open source as I’d understood. The good news is that it came with a 2D Lidar, so I pulled the robot apart (of course), and now all my Lego robots can be fitted with Lidar, which is super fun.

Prompted by now having Lidar, I am currently playing around with ROS2 Nav2. The SLAM Toolbox seems to be working quite well; the general Nav2 stack less so (I don’t yet have good autonomous navigation), but I haven’t yet fully understood that stack, so it’s early days.
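For anyone wanting to try the same, the mapping side boils down to a launch file roughly like this (parameter values are illustrative; the stock online_async_launch.py that ships with slam_toolbox does much the same):

```python
# Minimal launch sketch for mapping with slam_toolbox on a Lidar-equipped robot.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='slam_toolbox',
            executable='async_slam_toolbox_node',
            name='slam_toolbox',
            output='screen',
            parameters=[{
                'use_sim_time': False,
                'odom_frame': 'odom',    # frames assumed; match your robot's TF tree
                'base_frame': 'base_link',
                'map_frame': 'map',
            }],
        ),
    ])
```

Once the map looks good it can be saved with nav2_map_server’s map_saver_cli and fed to the rest of the Nav2 stack.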

Lastly, my current recommended installs use Mamba/Conda. I have been playing around with Docker as an alternative and would be interested in others’ experiences with this. While searching to solve problems I’ve come across cyclicalobsessive’s comments, which were most helpful, so I’m clearly not the only person thinking about this.

Apologies for the length of the post!

Julian.