Autonomous driving GoPiGo3 - bachelor project

Did you see my posting on the setup for the raspberry pi camera up above?

https://forum.dexterindustries.com/t/autonomous-driving-gopigo3-bachelor-project/8685/55

There are a few cute settings in the current config file that might be useful - the only caveat is to keep the aspect ratio at 4:3 if you want to use the entire field of view.

2 Likes

Okay, that's a nice and realistic anecdote :smiley:

3 Likes

Yes, I have seen it but haven't tested it yet! Still… very interesting and great input, thanks!

3 Likes

Is there any chance of finding a data sheet for the motors of the GoPiGo? I need basic information about them for my thesis and there is no information available at all.

3 Likes

Most of the hardware is remarkably well documented on the Dexter Industries GitHub page.

I don't remember if there is a BOM for the motor assemblies, but they are, essentially, standard robot gear-motors that you can buy anywhere with a magnet wheel and a hall-sensor PCB attached.

The encoders are made of:

  1. A disk magnet that contains either six or sixteen poles.
  2. Two hall sensors located 90° apart so that the robot can sense the direction of rotation.
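For what it's worth, here is a minimal sketch of how two hall sensors 90° apart let you decode direction - a standard quadrature state table. (The signal values are hypothetical; this is not the actual GoPiGo3 firmware.)

```python
# Standard quadrature decoding from two hall-sensor signals (A, B).
# (previous A, previous B, current A, current B) -> signed step:
# +1 = one tick clockwise, -1 = one tick counter-clockwise.
TRANSITIONS = {
    (0, 0, 0, 1): +1, (0, 1, 1, 1): +1, (1, 1, 1, 0): +1, (1, 0, 0, 0): +1,
    (0, 0, 1, 0): -1, (1, 0, 1, 1): -1, (1, 1, 0, 1): -1, (0, 1, 0, 0): -1,
}

def decode(samples):
    """Accumulate signed encoder ticks from a stream of (A, B) samples."""
    position = 0
    prev = samples[0]
    for cur in samples[1:]:
        position += TRANSITIONS.get(prev + cur, 0)  # unknown transition -> 0
        prev = cur
    return position

# One full quadrature cycle clockwise -> +4 ticks:
print(decode([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]))  # 4
```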

GitHub is your friend.
Anyone else would end up looking there to answer this question.

Also, they have schematics for just about everything there.

What prevents you from referencing the entire robot instead of every individual component?

2 Likes

Alright guys, time for an update… So I was trying to get this lane detection algorithm from GitHub working, but there are two main problems. Firstly, the Raspberry Pi 3 is not powerful enough to run this algorithm in real time, and secondly, the positioning of my camera is pretty bad. If the GoPiGo is in a turn I only "see" the outer lane and not the inner one… So I am not really sure if I can make this algorithm dumber and easier to use, or if I should use something else…

Up until the histogram it worked quite well… But from the lane search onward, everything just failed hard…

This is the current code:

https://www.codepile.net/pile/Wm8M3jXJ

2 Likes

Do you have the camera set up to use the entire field-of-view? Depending on the video mode you set up, it's really easy to get a much more limited FOV than you want.

Messing with the camera and trying to figure out how to make the latest rpi_camera updates work with GoPiGo O/S, I discovered two things:

  1. The video mode has to be one that allows the full camera sensor to be used instead of a subset of it.
  2. You want to set a 4:3 aspect ratio, as that appears to be the aspect ratio of the standard Raspberry Pi camera's sensor.
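For example, with the legacy picamera library this is roughly what forcing a full-sensor, 4:3 capture looks like. (A sketch: sensor mode 4 is the full-field-of-view 4:3 mode on both the v1 and v2 camera modules, but check the mode table in the picamera docs for your sensor.)

```python
from picamera import PiCamera

# Sensor mode 4 = full-sensor (full FOV), 4:3, binned mode.
camera = PiCamera(sensor_mode=4)
camera.resolution = (640, 480)   # downscaled output, but the whole FOV
camera.framerate = 30
camera.capture('full_fov.jpg')
```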

You can get fancier cameras with higher resolution and a wider FOV, but I don't know if you have time for that.

Another option is to get an add-on lens kit, like they sell for smartphones. For example, if you add a fish-eye lens adapter it will increase the FOV, but will add spherical distortion. If you can compensate for spherical distortion in software, that might be a way to do it.
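A sketch of what the software compensation might look like with OpenCV's fisheye module. (K and D below are placeholder values; the real ones would come from calibrating the lens, e.g. with cv2.fisheye.calibrate on checkerboard shots.)

```python
import cv2
import numpy as np

K = np.array([[320.0, 0.0, 320.0],    # camera matrix (placeholder)
              [0.0, 320.0, 240.0],
              [0.0, 0.0, 1.0]])
D = np.array([0.1, -0.05, 0.0, 0.0])  # fisheye distortion coeffs (placeholder)

frame = np.zeros((480, 640, 3), np.uint8)  # stand-in for a captured frame
undistorted = cv2.fisheye.undistortImage(frame, K, D, Knew=K)
cv2.imwrite('undistorted.jpg', undistorted)
```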

Is your Raspberry Pi throttling because of voltage or heat? @cyclicalobsessive has some nifty scripts that show exactly that kind of problem.
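(Not his scripts, but the same idea in a few lines - decoding the Raspberry Pi's `vcgencmd get_throttled` flags:)

```python
import subprocess

# "throttled=0x50005" -> integer bit field
out = subprocess.check_output(['vcgencmd', 'get_throttled']).decode()
bits = int(out.strip().split('=')[1], 16)

FLAGS = {
    0: 'under-voltage now', 1: 'ARM frequency capped now', 2: 'throttled now',
    16: 'under-voltage occurred', 17: 'frequency capping occurred',
    18: 'throttling occurred',
}
for bit, meaning in FLAGS.items():
    if bits & (1 << bit):
        print(meaning)
```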

Raspberry Pi-4 boards are not that expensive, comparatively. Assuming that the software can handle it, you want to go with the largest amount of RAM you can get, as RAM = speed. Another thought I just had: make sure you assign enough RAM to the GPU so it can help you.

Also, don't skimp on the SD card. Spend the extra dinero and get the best and fastest cards you can - A1 rated as a minimum.

Another speed boost is to forgo the SD card and go directly to a small USB SSD drive like the 500GB Seagate Expansion/One-Touch SSDs, as they blast past SD cards in performance. Here in Russia they are the equivalent of about $80 US. The 1TB version is about $120 - $150, but you probably don't need that much space.

Another tip is to turn on TRIM, as that will help the SD card/SSD last longer and run faster. Chances are, if you have a really good, name brand, SD card, TRIM will work right out of the box without any additional setup. If you use a SSD, you will probably have to enable discard/trim in software. You can enable automatic TRIM on the SD card/SSD by starting the fstrim.timer service. You can find more information here:

https://www.jeffgeerling.com/blog/2020/enabling-trim-on-external-ssd-on-raspberry-pi

What say ye?

2 Likes

Very good question… Not really sure how to check that. All I know is that I am streaming the video feed with "cv2.VideoCapture(-1)" :sweat_smile:
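Something like this should at least report what the driver actually negotiated (a sketch using the standard OpenCV property calls):

```python
import cv2

cap = cv2.VideoCapture(-1)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)    # request 640x480
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

print('negotiated size:', cap.get(cv2.CAP_PROP_FRAME_WIDTH), 'x',
      cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

ok, frame = cap.read()
if ok:
    print('frame shape:', frame.shape)    # (480, 640, 3) if the request stuck
cap.release()
```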

Yes, I am using 640x480 as a compromise between image quality and complexity.

Yeah, there is no time to get more hardware. This hardware needs to be enough. Everything else needs to be software.

Not sure, but I tested the code and it was extremely slow… Like a couple of seconds until the camera recognized a new scene.

The main problem was, and probably is, the delivery bottleneck for Pis right now. So I don't think I could get a Pi 4, even if I wanted to.

2 Likes

You'll have to do what I did: do a web search on raspberrypi.org's documentation pages for the camera stuff. Get yourself a cup of tea and some cookies when you study it - it took me a while to figure it out.

@cyclicalobsessive has a couple of interesting routines that tell you if it's throttling or not, and what the battery voltage is.

Do you have the link to his GitHub repo? I think it's named "slowrunner".

The two routines you are looking for are called "print_voltages.py" and "throttled.sh" and they would be in the "Carl" project - or something like that.

I don't know how proper it would be for me to distribute his code without his permission, so I'm hoping he hops on soon and gives you his blessing to use his code.

==================

Seriously, we can theorize about cameras, FOV, Raspberry Pis that are way under-voltage or smokin' hot - but until we get some real data about where the actual bottlenecks are, it's all smoke and mirrors.

One thing you can do is "htop" - it gives you a nice, colorized representation of your system's status (processor load, memory used, swap used, etc.), and like regular top, it shows you who the hogs are and what they're spinning on. Processor? Disk I/O? Thrashing swap?

You canā€™t clear the bottleneck until you know what it is.

1 Like

Some questions and thoughts:

  • To optimize, you need to know how long each major operation is taking - so you need to profile it (see the timing sketch after this list).
  • Does it still recognize lanes if you don't warp the incoming image?
  • How does decreasing the number of areas from 9 to 8, 7, 6, 5, 4, and 3 affect speed and recognition performance?
  • What if you use 9 lines, or 9 areas of smaller vertical extent - starting at 2 pixels and working up to the full area?
  • What is the performance when you turn off image display and image markup?
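A minimal timing sketch for the first bullet (the stage names and sleeps are placeholders for the real pipeline functions):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, totals):
    """Add the wall-clock time of the enclosed block to totals[label]."""
    start = time.perf_counter()
    yield
    totals[label] = totals.get(label, 0.0) + (time.perf_counter() - start)

totals = {}
for _ in range(100):                # pretend: 100 frames
    with timed('warp', totals):
        time.sleep(0.002)           # placeholder for the warp step
    with timed('histograms', totals):
        time.sleep(0.005)           # placeholder for the 9-window search

for label, seconds in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f'{label}: {seconds * 10:.1f} ms/frame')   # seconds / 100 frames * 1000
```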

Your program is compute bound - only using one core of the Raspberry Pi. My single vs multiprocess lane finding experiment on the Pi3B showed a 2.5x speed improvement (20FPS vs 8FPS) when allowing asynchronous computation. If the 9 histograms are the bottleneck, perhaps multi-threading (or multi-processing) that routine could use the three other idle cores.
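A minimal sketch of that idea with a pool (the window slicing here is hypothetical - it stands in for however find_lane_in() slices the warped image):

```python
import numpy as np
from multiprocessing import Pool

def window_histogram(window):
    """Column-sum histogram of one horizontal slice of the binary image."""
    return np.sum(window, axis=0)

if __name__ == '__main__':
    binary_warped = (np.random.rand(480, 640) > 0.9).astype(np.uint8)
    windows = np.array_split(binary_warped, 9, axis=0)  # the 9 slices

    with Pool(processes=3) as pool:                     # leave one core free
        histograms = pool.map(window_histogram, windows)

    print(len(histograms), histograms[0].shape)         # 9 slices, 640 columns
```

Note that a single histogram is cheap, so the win only shows up when the expensive per-window search (or whole frames) runs asynchronously - otherwise the pickling and hand-off overhead eats the gain, as the 320x240 numbers below suggest.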

MY CONCLUSIONS

  • Using multiprocessing for a 640x480 image results in a 2.5x higher frame rate (20 vs 8 FPS).
  • Using multiprocessing for a 320x240 image results in (only) a 20-30% higher frame rate (28 vs ~23 FPS).
  • Utilizing the processing result of the single-threaded version is much easier (multiprocessing needs interprocess result messaging).
  • The find_lane_in(frame) method needs a computation-only mode (no image result) to further minimize computation time and improve performance for robot navigation.

2 Likes

I need to adjust the functions after the histogram first, because it doesn't recognize anything right now. These functions are pretty hard to understand, so this will need quite some time, I guess…

I think the problem with that is that I have a continuous lane line on the right side, but the lane line in the middle isn't continuous… So if I just take lines in which I am searching for the lanes… then there will be lines that don't find the lane on the left.

Yeah, I have that in mind, but I don't have any experience with multi-threading at all. If I have an algorithm working that finds the lanes but is just too slow… then I can implement the multi-threading approach.

3 Likes

The more I think about it… the more I am sure that I need to look for a different algorithm. Like I said, in turns the camera doesn't see the whole lane and I think this algorithm is not made for that…

3 Likes

I am trying to go back to the basics… and improve the visibility of the lanes…
At the moment I am using the canny and threshold functions and combining the images to get a good result on the lanes…
Note: I lowered the threshold so that the combined image is easier to read.

I am thinking about other color- or filter-based preprocessing, but I'm not sure which one is best.

So the next step is to extract the lane information and visualize it in the original image…
After that I am warping the image, and then the clusterf**k starts again and I need to calculate the curvature information.

3 Likes

Little update about the whole project. The lane detection is in a decent state so far. I figured out that the problem of lane detection in turns is the merging of the lines from the Hough transformation. In turns this merging doesn't work, because the result isn't going to be a straight line… So I decided to skip the merging part and voilà, the lanes are found… Not pretty, but it's fine… I'll focus on the writing part in the next weeks. But I still have a couple of things I need to fix:

  • Lane detection: The result of the lane detection needs to become some sort of information for the steering. I was thinking about the separating idea that I had a couple of weeks ago… Maybe I can divide the ROI into a couple of sections and use a histogram on them to detect the middle of each particular section. After that I can connect the midpoints with a line and I have something like a trajectory. The problem here is that, like I said earlier, when I drive through a turn the camera loses one lane. So I thought of a workaround: at the bottom of the screen I know the width of the lane, both in pixels of the picture and in real life on the track. I made a small function that calculates the width of the lane for a certain height. So if the camera loses one lane I can use this "default" width and estimate the middle (a sketch of this idea follows the list).

  • Obstacle detection: I tried to use a Haar cascade for detecting cars. It works quite fine. Of course it's not robust at all and pretty inaccurate, but this will do. On the PC I tried a DNN with trained data to detect cars and it's slow as f**k, but still fun to try. So the goal here is to write a small program that lets the GoPiGo drive forward while capturing the camera data, and use the Haar cascade to detect a model car in front. Then I have two options. First, I draw the bounding box, and if the bottom edge of the bounding box is below a certain threshold I stop. Second, I try to build a distance estimation for a single cam. I found two GitHub pages about this topic. And if the distance is below a certain threshold… again… stop.

  • Lane keeping: IIIIIF I can get the steering information from the lane detection, I can calculate the steering angle for the GoPiGo. I tried the GoPiGo function steer, which seems promising. I haven't tried the spin_right, spin_left or target_reached functions. Maybe those could work as well.
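A minimal sketch of the "default width" fallback from the lane-detection item above (all constants are hypothetical calibration values):

```python
IMG_HEIGHT = 480
WIDTH_AT_BOTTOM = 420.0   # measured lane width in pixels at the bottom row
WIDTH_AT_TOP = 60.0       # apparent lane width near the top of the ROI

def expected_lane_width(y):
    """Expected lane width in pixels at image row y (0 = top)."""
    t = y / (IMG_HEIGHT - 1)
    return WIDTH_AT_TOP + t * (WIDTH_AT_BOTTOM - WIDTH_AT_TOP)

def estimate_middle(y, left_x=None, right_x=None):
    """Lane middle at row y; fall back to the default width if one side is lost."""
    if left_x is not None and right_x is not None:
        return (left_x + right_x) / 2
    if left_x is not None:
        return left_x + expected_lane_width(y) / 2
    if right_x is not None:
        return right_x - expected_lane_width(y) / 2
    return None

print(estimate_middle(479, left_x=100, right_x=520))   # 310.0: both lanes seen
print(estimate_middle(479, right_x=520))               # 310.0: left lane lost
```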

Sooo… Conclusion… I am finally out of the writer's block, and I'll try to write as fast and as well as I can so that I still have time to complete the tasks above. If all goes to plan, in about one month the thesis will be written, the GoPiGo will be driving by itself, and I will be happy, exhausted, and a freaking engineer!!

3 Likes

Congratulations. You put a lot of work into this.
/K

2 Likes

Hello everyone! Quick update… I am trying to extract some steering information out of the image… But I just can't seem to get it down.
The code so far does this (the current code is linked at the bottom!):

  • Gets the frame and undistorts the image with the calculated camera matrix and distortion coefficients
  • Gets the ROI by using only the bottom 60% of the image
  • Image processing with the grayscale, blur, canny and threshold functions
  • Combines canny and threshold to get a "better" result
  • Applies the Hough transformation to connect the separated lane parts together and displays it
  • Warps the perspective to a bird's-eye view
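A condensed sketch of those steps (the OpenCV calls are the real ones; the matrices, thresholds and warp points are placeholders, not my calibration):

```python
import cv2
import numpy as np

mtx = np.eye(3)                    # camera matrix (placeholder)
dist = np.zeros(5)                 # distortion coefficients (placeholder)

frame = np.zeros((480, 640, 3), np.uint8)   # stand-in for the camera frame
frame = cv2.undistort(frame, mtx, dist)

h, w = frame.shape[:2]
roi = frame[int(h * 0.4):, :]      # bottom 60% of the image

gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blur, 50, 150)
_, binary = cv2.threshold(blur, 180, 255, cv2.THRESH_BINARY)
combined = cv2.bitwise_or(edges, binary)    # the "better" combined result

lines = cv2.HoughLinesP(combined, 1, np.pi / 180, threshold=30,
                        minLineLength=20, maxLineGap=50)

src = np.float32([[100, roi.shape[0]], [540, roi.shape[0]], [400, 0], [240, 0]])
dst = np.float32([[100, roi.shape[0]], [540, roi.shape[0]], [540, 0], [100, 0]])
M = cv2.getPerspectiveTransform(src, dst)
birdseye = cv2.warpPerspective(combined, M, (w, roi.shape[0]))
```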

Now the tricky part… My approach was the following:

  1. Mark the middle of the picture's width with a small line
  2. Use the warped image and cut it into a few pieces
  3. Apply a histogram to the pieces and calculate a steering angle from it.
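A sketch of what steps 2 and 3 might look like (the slice count, look-ahead distance and test image are placeholders; the geometry is simplified):

```python
import numpy as np

warped = (np.random.rand(480, 640) > 0.95).astype(np.uint8)  # stand-in image
mid_x = warped.shape[1] // 2                         # step 1: image middle

centres = []
for piece in np.array_split(warped, 6, axis=0):      # step 2: cut into pieces
    hist = piece.sum(axis=0)                         # step 3: column histogram
    if hist.sum() > 0:                               # weighted centre of mass
        centres.append(np.average(np.arange(len(hist)), weights=hist))

if centres:
    offset = np.mean(centres) - mid_x                # pixels right of middle
    look_ahead = 300.0                               # pixels ahead (placeholder)
    angle = np.degrees(np.arctan2(offset, look_ahead))
    print(f'steering angle: {angle:+.1f} deg')       # + = right, - = left
```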

EDIT: After thinking about it last night, I realized that I calculated the angles wrong… Maybe I can get this approach working after all… The problem with curved lanes is still there, though…

And this approach is not working at all so far. Here are a few output images:

First, the result of the Hough transformation… Some would say this is lane "detection". Works fine… Not perfect, but fine I guess.
(1) [image: lane_image]

This is the warped image of the canny/threshold combination… The lanes are displayed pretty well. The huge problem is that if I get near a turn, like in picture 3, the result of the warping is very bad… and this is killing the histogram approach… Any ideas how to make this work differently?

(2) Wheel is around 30cm away from the turn

(3) Wheel is around 15cm away from the turn

(4) This is a piece of the bird's-eye image with the histogram of it… The averaging works fine… Maybe with an offset the middle can be found.
[image: Hist]

The current code:
https://www.codepile.net/pile/Y6WmZzgP

2 Likes

Small update, big impact:

Obstacle Detection: I implemented a distance estimation based on comparison with a reference image. It's not super accurate and the distance fluctuates the whole time, but still… the distance is estimated. I reused my code for recording a drive-by video to get the camera footage and drive at the same time. At a certain distance limit the GoPiGo stops. So, I count that as a win!
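A minimal sketch of the reference-image idea (classic triangle similarity; all the numbers are placeholders, not my calibration):

```python
KNOWN_WIDTH_CM = 7.0        # real width of the model car
REF_DISTANCE_CM = 30.0      # distance at which the reference photo was taken
REF_PIXEL_WIDTH = 140.0     # bounding-box width in the reference photo

# Effective focal length, calibrated once from the reference image:
FOCAL = (REF_PIXEL_WIDTH * REF_DISTANCE_CM) / KNOWN_WIDTH_CM

def estimate_distance_cm(pixel_width):
    """Distance to the car given its current bounding-box width in pixels."""
    return (KNOWN_WIDTH_CM * FOCAL) / pixel_width

print(estimate_distance_cm(140.0))   # 30.0 - at the reference distance
print(estimate_distance_cm(280.0))   # 15.0 - twice as wide = half as far

STOP_LIMIT_CM = 20.0
if estimate_distance_cm(280.0) < STOP_LIMIT_CM:
    print('stop!')                   # below the limit, the GoPiGo stops
```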

Lane Detection: I fixed the bad calculation of the angle this morning and kept going. I have coded something like a trajectory on the bird's-eye-view image and transported it back into the driving perspective… Also not pretty, but well… it's a trajectory :smiley:. After fixing a few more bugs I managed to get a steering output that is calculated from the histogram averages, like I showed yesterday. After that the lane detection part ended and I created a new file called lane keeping :smiley:

Lane Keeping: I used the steering output to calculate a movement… Not that easy, to be honest… It's very flawed so far, BUT it's the first time the GoPiGo is driving based on the camera and the calculations. I managed to drive a "straight" line and even a left turn… Of course driving repeatedly into the oncoming lane, but hey… nobody is perfect, right? Somehow the right turns don't seem to work properly… I also noticed that I probably made a mistake with the steering, because I don't consider the middle of the footage as a base… Still… my boy is driving for the first time… The calculations are a little off, but he did drive the lanes he was supposed to… at least most of the time… One thing I noticed and predicted was that the turns are executed a little too early… Not really sure how to fix that… I thought about some sort of dead-time controller… Does anybody have any experience with that?
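For reference, the kind of angle-to-wheel-power mapping I have in mind looks roughly like this (a sketch; the gain is a made-up placeholder, not a tuned value):

```python
from easygopigo3 import EasyGoPiGo3

gpg = EasyGoPiGo3()
gpg.set_speed(150)

def drive(angle_deg, gain=2.0):
    """Positive angle = steer right: slow the right wheel, and vice versa."""
    correction = max(-100.0, min(100.0, gain * angle_deg))
    if correction >= 0:
        gpg.steer(100, 100 - correction)   # right turn
    else:
        gpg.steer(100 + correction, 100)   # left turn

drive(10.0)   # gentle right; a per-frame loop would call this with each angle
gpg.stop()
```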

By the way… @cyclicalobsessive, you talked about the advantages of multiprocessing. I am interested in this kind of optimization now. Any tips on how to implement that bad boy quickly?

2 Likes
  1. There are two architectures to understand: threading vs. multiprocessing.
  2. There are two concepts to learn to implement: data transfer/messaging and synchronization.
  3. Analysis of data flow and synchronization will determine what must be single-threaded and what can be distributed, based on input/output and computational demands.

There are many sites that introduce and contrast the two architectures, such as this one.

I have some simple examples of each architecture with one of the several possible data transfer/messaging and synchronization methods.

multi-threaded:

multi-processed:
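In the meantime, a generic minimal sketch of the queue-based handoff pattern (not the examples referenced above): one worker process computes asynchronously and sends results back over a queue. For the threaded variant, swap multiprocessing.Process/Queue for threading.Thread and queue.Queue.

```python
from multiprocessing import Process, Queue

def worker(in_q, out_q):
    """Consume frames until the sentinel arrives; send back one result each."""
    while True:
        frame = in_q.get()
        if frame is None:          # sentinel: shut down
            break
        out_q.put(frame * 2)       # stand-in for find_lane_in(frame)

if __name__ == '__main__':
    in_q, out_q = Queue(), Queue()
    p = Process(target=worker, args=(in_q, out_q))
    p.start()

    for frame in range(5):         # stand-in for the capture loop
        in_q.put(frame)
    in_q.put(None)

    print([out_q.get() for _ in range(5)])   # [0, 2, 4, 6, 8]
    p.join()
```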

3 Likes

@superbam

It sounds like you're kicking ass and taking names!

This isn't a trivial project, and you're doing a lot of fundamental research on the GoPiGo - which is always valuable.

A serious, big, BIG :+1:

3 Likes

Thank you very much! I am trying my best to get this project up and running. There has been a lot of research and countless hours of coding and writing so far. The next four weeks are for the completion of it all. The deadline is the seventh of February, with a little presentation before that. And if all goes to plan I should get a reply from the admissions office for my master's application for next March. I'll keep you guys posted! If I complete this thing, it won't have been possible without you guys! Thank you… again!

3 Likes