Autonomous driving GoPiGo3 - bachelor project

Hello everyone! Today I am starting this thread about my bachelor's project. My goal with the GoPiGo is to implement lane detection, lane keeping, and obstacle avoidance.

Hardware: GoPiGo3 with Raspberry Pi 3B+, Raspberry Pi Cam V2 and 32GB microSD card
Software: Buster OS, VNC Viewer and VS Code

If I encounter bugs or problems, I will post the link to the thread here as well! Help and suggestions are always welcome!


First Problem is already solved. I was using a venv on my pi which caused a lot of trouble with the gopigo firmware. I “solved” it in this thread:


Thanks for documenting your process here @superbam. I’m sure it will be illuminating. I learned a lot just lurking on the venv thread.

Best of luck with your project.


Yes, that is pretty much the essence of it :smiley: . After resetting the microSD with Buster, everything seems fine so far. I noticed that I still sometimes get this warning with OpenCV, like before:

[ WARN:0@0.816] global /home/pi/opencv/modules/videoio/src/cap_gstreamer.cpp (2076) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src0 reported: Failed to allocate required memory.
[ WARN:0@0.818] global /home/pi/opencv/modules/videoio/src/cap_gstreamer.cpp (1053) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0@0.818] global /home/pi/opencv/modules/videoio/src/cap_gstreamer.cpp (616) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created

but without the numpy error… So the cam shows me the output, but the warnings are still displayed. I suspect that with the Raspberry Pi cam I need to adjust some parameters or something.


Far be it from me to contradict, and I don’t want you to think I am telling you what to do, but is OpenCV a requirement?

GoPiGo O/S has TensorFlow Lite pre-installed and pre-configured, and there is example code that might be interesting.

I remember that there is a “face recognition” example, and there may be a lane-following example too.

Maybe you can use this to help you complete your project on time?


No, OpenCV isn’t required, but most projects that detect lanes use it. I have no experience with TensorFlow, but if that’s the way to go, I would try it out.


Unfortunately I have zero, repeat zero, experience with any of this stuff, and I especially cannot advise you as to why you should choose one over the other.

Do you have any real experience with OpenCV?  Then that’s what I would do.

Otherwise, if it were me, I would - at the very least - look at a pre-implemented solution.

Since I have heard that the latest GoPiGo O/S has TensorFlow Lite pre-installed, it was my humble opinion that it might help.

You may wish to do a web search on OpenCV vs TensorFlow Lite.

@cyclicalobsessive knows far more than I do, and he will be able to advise you better.


Very little, to be honest :smiley: . I started using Python with this project. Yeah, using a pre-implemented solution would be very nice. But finding that solution and making it work with my hardware is the task here.


Wow, wars could break out before the end of the day!

My take on the “state of image processing”:

OpenCV is a mature toolkit containing a curated variety of basic feature and object detectors to be applied to images, with examples for every feature/object detector (it also allows custom detectors).

TensorFlow (Lite) is a tool with a number of ad-hoc example detectors, with the idea that every use involves extending an existing detector or creating a totally new, application-specific detector net.

TensorFlow is hot, OpenCV has been hot for longer.

I did find this interesting TensorFlow-lite on RaspberryPi Lane Follower:


I told you @cyclicalobsessive was hot!

How so?

AFAIK, he’s on a relatively tight schedule and anything we can do to help him to NOT re-invent the wheel is a plus.

If GoPiGo OS can help, great!  If not, at least it was a thought.


Am I reading that article wrong, or does TensorFlow Lite require OpenCV to do lane detection?  :man_facepalming:

There are a number of interesting links there that I might follow sometime soon.

(P.S. maybe Carl/Dave should see that video?)


Who knows - it talks about using TFLite but needing OpenCV to process YouTube videos, so maybe OpenCV is used to extract the test videos?


While watching an rsync of GoPiGo OS go by, I noticed it appears to implement OpenCV along with TensorFlow lite.

At least there’s a whole LOT of stuff under /opt/opencv. . .



Later. . . .


It’s STILL spooling by!

OpenCV seems to be a much larger installation than TensorFlow.  :astonished:


Oh wow, that example really is great. I am wondering if it could really work on a Raspberry Pi 3B+. In the comment section of the YouTube video he says that it’s not that fast and that a GPU is probably needed to run it in real time. Although I would love to implement it… I doubt a machine learning algorithm is the right way to go with the limited computing power of the hardware :slightly_frowning_face:.
I’ll be honest with you guys… I feel kind of overwhelmed by this task… Too many pre-implemented solutions that don’t really help me… Also… I need a lot of time to adapt a pre-implemented solution to my needs. Well, but… no point in quitting, I guess…


Your situation is concerning to me for a couple reasons:

  • A craftsman or engineer needs to be familiar with their toolbox to estimate the time required and probability of success of a project. There used to be a project management triangle idiom floating around, “Good, fast, cheap. Choose two.”, to describe the relationship between time, cost (people and materials), and scope. An undergrad research project typically constrains time and cost, so scope is the only variable, and must be underestimated to allow for unforeseen/unplanned challenges.
  • “Engineering Success” might require
    • a detailed problem statement,
    • identification of challenges
    • enumeration and selection of specific challenge mitigations
    • solution design,
    • implementation
    • post analysis
    • project documentation
    • ?project demonstration?
  • My experience attempting to just reproduce a “Found Internet Example” on my RaspberryPi with my Raspberry Pi Operating System version, with my installed Python version, with my OpenCV version, with my ad-hoc Python knowledge, alone with only “the net” to ask for help has often stressed me to the point of needing a break, blocked until someone answers, or actually giving up on the project. You probably won’t have these freedoms, and it does not sound as if you have found an example implementation to “simply” reproduce.
  • Unless you are swimming with OpenCV’s available image processing techniques and the benefits and limitations of each, extending any found example to your specific purpose will be shooting in the dark hoping to score a bullseye.

The challenges of

  • setting up an operating system,
  • setting up a software development process,
  • installing the robot API,
  • verifying the robot API installation,
  • installing OpenCV,
  • verifying the OpenCV installation,
  • choosing what example to attempt to reproduce
  • getting someone else’s program running properly
  • understanding the purpose / technology of every line of that program

are not insignificant! Is it possible to scope your project to do that, document the process, the challenges that arise, the solutions to the challenges and what you specifically learned?



Two months to accomplish this, all by yourself?

I’m assuming you’re not doing this in a vacuum, and you still have your “Advanced Differential Vector Analysis” math class, along with your “Applied String-Theory” (Physics 410), and the rest of the nonsense that fill an 18 credit-hour course load - right?

And this?

Surely you have a team to help you - I can’t imagine a project scoped this broadly, with a two-month time-line, and being required to do it by yourself?!

Is it possible that you have misinterpreted the scope of the project?

We’ll do what we can to help, but. . . .


Alright, I need to explain a couple of things first. First of all, yes, this is an engineering project for an engineering degree. Yes, time is moving very fast and is limited. Yes, the project right now seems pretty grim, and yes, I feel like I am struggling with pretty much every aspect of it.
The one thing that I can decrease is the quality of the result. This task is attempted with a Raspberry Pi 3 and a cheap cam. So everyone reading the (hopefully) finished bachelor’s thesis should know that the hardware is very limited, and so is the result. Yesterday I saw a self-driving car that drove for a second and stopped to scan the area… This is a possible goal for my car as well. The computing power of the Pi 3 is probably not great enough to get this thing running in real time. But this is absolutely fine. My professor told me that he would be satisfied if the car is placed on the track (that I printed) and detects the lane and certain objects. That is satisfying for him, but for me a car that can’t drive is pretty dull. So yes… I am putting a s**t ton of pressure on myself, and I feel it every day nowadays :sweat_smile:.
The task regarding the thesis itself is to write about the project and the implementation, and to give an overview of autonomous driving… primarily with the use of a camera. This writing task alone is a lot of work, I know. But I don’t really have an option or a plan B. This is the task, and this is what I am trying to achieve.
About the “doing this in a vacuum” theory: it is pretty much in a vacuum. I have written all my exams; the thesis is the only thing left and stands between me and my degree. Two months (well, 1.5 months now) is all just for that.


I am unaware of any complete lane following example on the GoPiGo to “reproduce”.

I did not integrate my OpenCV lane detection code to drive the GoPiGo. I did test the code running on the GoPiGo with white paper lane boundaries. That integration would require:

  • assume start position in lane “stopped at an intersection”
  • simplifying assumption: one or more lane lines extend across the intersection
  • extract the lane line info
  • if two lanes detected, compute a target point half way between them
  • if only one lane detected, compute a target point “half a lane width” away from the detected line
  • [translate the target point in the image frame to the robot frame] - maybe not
  • compute direction change to the target point
  • decide what forward velocity allows vision lane center detection loop to smoothly control motion
  • issue robot API to implement the direction change (non-blocking execution)
    • several API options available: steer(), orbit()
  • If have $4 Grove Ultrasonic Ranger or $30 DI TimeOfFlight Infrared Distance Sensor:
    • add “smooth emergency accident avoidance stopping” (no obstacle avoidance)
    • place Cardboard Car Model in lane stopped at stop sign at next intersection to end test run
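The target-point logic in the steps above can be sketched in plain Python. Everything here is illustrative: the base-point pixel coordinates, the nominal lane width, and the simple proportional gain mapping lateral offset to a direction change are all assumptions (in a real run the gain would be tuned and the result fed to the robot API, e.g. steer() or orbit()):

```python
LANE_WIDTH_PX = 160    # assumed lane width at the bottom of the image, in pixels
IMAGE_WIDTH_PX = 320   # assumed camera frame width

def target_x(left_x=None, right_x=None):
    """Target point: midway between two lines, or half a lane from one."""
    if left_x is not None and right_x is not None:
        return (left_x + right_x) / 2
    if left_x is not None:
        return left_x + LANE_WIDTH_PX / 2
    if right_x is not None:
        return right_x - LANE_WIDTH_PX / 2
    return None  # no lane detected: the caller should stop the robot

def steering_deg(tx, gain=0.15):
    """Proportional direction change from the lateral pixel offset."""
    offset = tx - IMAGE_WIDTH_PX / 2
    return gain * offset

print(target_x(80, 240))                # both lines seen -> 160.0 (image centre)
print(steering_deg(target_x(80, 240)))  # centred in lane -> 0.0
print(target_x(left_x=100))             # only the left line -> 180.0
```

The proportional mapping stands in for the “compute direction change” step; a full controller would also handle the image-frame-to-robot-frame translation noted above.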

I know what you’re saying as I tend to do that myself - shoot high and figure that even if I fail, I will have accomplished something.

In this case, (IMHO), time is the critical resource and I would work toward the professor’s stated goal - detect the lane and obstacles - not re-invent the Tesla self-driving car.

If you can accomplish more, great!  In the meantime, don’t risk your degree by trying to reach so far you finish nothing. . .


Absolutely true, yes… Well, the first part is the lane detection anyway. Most lane detection solutions are for straight lines only. Even if I dumb down the result, at least the minimum solution should include turns. And for obstacle detection, I need to figure out which obstacle is the easiest to detect. Maybe with a simple cascade.
Of course, the implementation of a driving car with a lane keeping algorithm is pretty hard. But working step by step is the way to go here.


Ok, now you are saying you want to go beyond the examples and design with OpenCV tools.

Perhaps you could establish a set of “stretch goals” that are added after a simplification is achieved if time permits?

Really, is it so bad to start with straight-line detection, with a straight lane with an obstacle stop, and a measurement of how much lane curvature the solution tolerates (comparing that to the “US Interstate Highway Curve Radius Design Standard”), then, if time permits, solve for greater curvature or propose a “future investigation”?
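Measuring “how much lane curvature the solution tolerates” can be done with basic geometry: fit a circle through three detected lane points and report its radius (the circumradius formula). A self-contained sketch, with made-up sample points:

```python
import math

def curve_radius(p1, p2, p3):
    """Radius of the circle through three points (the circumradius).

    Returns math.inf for (near-)collinear points, i.e. a straight lane.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p1, p3)
    # Twice the triangle's area, via the cross product of two edge vectors.
    cross = (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)
    if abs(cross) < 1e-9:
        return math.inf
    return (a * b * c) / (2 * abs(cross))

print(curve_radius((0, 0), (10, 0), (20, 0)))  # collinear: straight lane
print(curve_radius((0, 0), (1, 1), (2, 0)))    # three points on a unit circle
```

Running it over points sampled along the detected lane line, in real-world units, gives a number you can compare directly against a published curve-radius standard.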

(The test design should model vehicle/lane proportions that road designers follow also.)