Autonomous driving GoPiGo3 - bachelor project

Wow!

Two months to accomplish this, all by yourself?

I’m assuming you’re not doing this in a vacuum, and you still have your “Advanced Differential Vector Analysis” math class, along with your “Applied String-Theory” (Physics 410), and the rest of the nonsense that fills an 18 credit-hour course load - right?

And this?

Surely you have a team to help you - I can’t imagine a project scoped this broadly, with a two-month timeline, and being required to do it by yourself?!

Is it possible that you have misinterpreted the scope of the project?

We’ll do what we can to help, but. . . .

1 Like

Alright, I need to explain a couple of things first. Yes, this is an engineering project for an engineering degree. Yes, time is moving very fast and is limited. Yes, the project right now seems pretty grim, and yes, I feel like I am struggling with pretty much every aspect of it.
The one thing that I can decrease is the quality of the result. This task is attempted with a Raspberry Pi 3 and a cheap cam. So everyone reading the (hopefully) finished bachelor’s thesis should know that the hardware is very limited and so is the result. Yesterday I saw a self-driving car that drove for a second and then stopped to scan the area… That is a possible goal for my car as well. The computing power of the Pi 3 is probably not great enough to get this thing running in real time. But this is absolutely fine. My professor told me that he would be satisfied if the car is placed on the track (that I printed) and detects the lane and certain objects. That is satisfying for him, but for me a car that can’t drive is pretty dull. So yes… I am putting a s**t ton of pressure on myself, and I feel it every day nowadays :sweat_smile:.
The task regarding the thesis itself is to write about the project and the implementation, and to give an overview of autonomous driving… primarily with the use of a camera. This writing task alone is a lot of work, I know. But I don’t really have an option or a plan B. This is the task and this is what I’m trying to achieve.
About the “doing this in a vacuum” theory: it is pretty much in a vacuum. I have written all my exams; the thesis is the only thing left standing between me and my degree. Two months (well, 1.5 months now) is all just for that.

1 Like

I am unaware of any complete lane following example on the GoPiGo to “reproduce”.

I did not integrate my OpenCV lane detection code to drive the GoPiGo. I did test the code running on the GoPiGo with white paper lane boundaries. That integration would require:

  • assume a start position in the lane, “stopped at an intersection”
  • simplifying assumption: one or more lane lines extend across the intersection
  • extract the lane line info
  • if two lane lines are detected, compute a target point halfway between them
  • if only one lane line is detected, compute a target point “half a lane width” away from the detected line
  • [translate the target point from the image frame to the robot frame] - maybe not needed
  • compute the direction change to the target point
  • decide what forward velocity allows the vision lane-center detection loop to smoothly control motion
  • issue a robot API call to implement the direction change (non-blocking execution)
    • several API options available: steer(), orbit()
  • if you have a $4 Grove Ultrasonic Ranger or a $30 DI Time-of-Flight Infrared Distance Sensor:
    • add “smooth emergency accident-avoidance stopping” (no obstacle avoidance)
    • place a cardboard car model in the lane, stopped at a stop sign at the next intersection, to end the test run
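To make the “compute direction change” and “issue robot API” steps above concrete, here is an untested sketch of how a detected lane-center point could be mapped to a `steer()` call. The function name, the gain value, and the clamping are my assumptions; only the `EasyGoPiGo3.steer(left_pct, right_pct)` API itself is from the GoPiGo3 library, and it would need tuning on the real robot:

```python
# Sketch: turn a detected lane-center x-coordinate into a steer() call.
# The gain value is a guess and would need tuning on the real robot.

def steer_percentages(target_x, image_width, gain=0.5):
    """Map the lane-center pixel column to (left, right) wheel percentages.

    target_x at the image center -> drive straight (100, 100).
    target_x left of center -> slow the left wheel to turn left,
    and vice versa.  Values are clamped to the 0..100 range steer() expects.
    """
    # error in [-1, 1]: negative = target is left of image center
    error = (target_x - image_width / 2) / (image_width / 2)
    left = 100.0
    right = 100.0
    if error > 0:          # target is to the right: slow the right wheel
        right -= 100.0 * gain * error
    else:                  # target is to the left: slow the left wheel
        left -= 100.0 * gain * (-error)
    return max(0.0, left), max(0.0, right)

# On the robot (not runnable without hardware):
# from easygopigo3 import EasyGoPiGo3
# egpg = EasyGoPiGo3()
# left, right = steer_percentages(target_x, 640)
# egpg.steer(left, right)   # non-blocking: returns immediately
```

The vision loop would call this every frame, so the steering corrections stay small and smooth as long as the frame rate holds up.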
2 Likes

I know what you’re saying as I tend to do that myself - shoot high and figure that even if I fail, I will have accomplished something.

In this case, (IMHO), time is the critical resource and I would work toward the professor’s stated goal - detect the lane and obstacles - not re-invent the Tesla self-driving car.

If you can accomplish more, great!  In the meantime, don’t risk your degree by trying to reach so far you finish nothing. . .

1 Like

Absolutely true, yes… Well, the first part is the lane detection anyway. Most lane detection solutions are for straight lines only. Even if I dumb down the result, the minimum solution should at least include turns. And for obstacle detection, I need to figure out which obstacle is the easiest to detect. Maybe with a simple cascade classifier.
Of course, the implementation of a driving car with a lane-keeping algorithm is pretty hard. But working step by step is the way to go here.

3 Likes

Ok, now you are saying you want to go beyond the examples and design with OpenCV tools.

Perhaps you could establish a set of “stretch goals” that are added after a simplification is achieved if time permits?

Really, is it so bad to start with straight-line detection - a straight lane with an obstacle stop - plus a measurement of how much lane curvature the solution tolerates (comparing that to the “US Interstate Highway Curve Radius Design Standard”), and then, if time permits, solve for greater curvature or propose a “future investigation”?

(The test design should model vehicle/lane proportions that road designers follow also.)

2 Likes

I entirely agree.  You need to get the required assignment done within the professor’s stated goals.  If the professor is happy with straight lanes, give him straight lanes.  Concentrate on accomplishing that - though you may think this is “dumbing down” the result - remember this is a Bachelor’s thesis, not a Masters or Ph.D. - you don’t have to re-invent an entirely new mathematics from the ground up.  :wink:

Get the assignment done.

Then - as @cyclicalobsessive said, you can indicate in your report additional research that could be done to extend the project and improve on the results.  That, (indicating additional research to accomplish additional goals), is something professors just love to see in reports, as it indicates serious research and thought on your part.

Go get 'em tiger!  Just don’t get distracted. . .

1 Like

Yeah, that’s all true… Of course I have to get the straight-lane detection going first and see how long that is going to take… After that I can think about turns. But it’s also very true that if I can’t figure out a task, I can still write about it in my thesis, which fills my pages - so win-win I guess… in a way at least :smiley:

3 Likes

In a way, this is like the military - give them exactly what they ask for.  If you try to get fancy and clever, you just give them additional ways to hammer you into the ground, or you run the risk of balling-up the entire project.

Many, (MANY), years ago, I worked as a technical person for a company that did high-reliability electronics and avionics for airplanes and submarines - and the spec’s were brutal, as they should be - because people’s lives literally depended on it.

Inspections by agencies like the FAA, the Department of Defense, the Department of the Navy, (and so on), were tough.  I learned the best way to “pass” an inspection like this was to “show them what they want to see”.  In other words, show them that you know the spec’s, you know the rules, and you know what you’re doing.  No more and no less.

Know the limits of your authority and the specification - that way when they try to “bag” you with something you are not required to do - you can tell them just that; “Sorry sir, but that’s not a requirement for us because we’re not certified to Cat-B yet.”

Give the professor what he expects to see and talk about your wish-list in the report.

That will use your limited time most effectively.

2 Likes

I hear you. Thankfully my professor is very easy to talk to. So if he says that this would satisfy him, then this is enough.

3 Likes

There is a whole “cottage industry” right now around what suite of sensors is needed for obstacle detection in “real cars” in “real life situations”. Every sensor has benefits and limitations.

You should pick the simplest obstacle detection to start with in your project. Choosing whether that is image processing for “a new object crossing the central horizontal line” or a physical distance sensor will be very important to how quickly you get something up and running. The physical distance sensor would be my choice, because I have not tried the image processing solution before, and image analysis will require more processor time than simply polling a distance sensor.
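To show what I mean by “simply polling a distance sensor”, here is an untested sketch. The threshold value and function name are made up; `init_distance_sensor()`, `read_mm()`, `forward()`, and `stop()` are from the easygopigo3 library, but I have only separated the decision logic out so it can be tested off-robot:

```python
# Sketch: poll a distance sensor and stop before hitting an obstacle.
# The decision logic is a plain function so it can be tested off-robot;
# the commented robot calls are the hardware-dependent part.

STOP_MM = 150      # assumed safety threshold - would need tuning


def should_stop(distance_mm, threshold_mm=STOP_MM):
    """True if the reading is valid (non-zero) and inside the threshold."""
    return 0 < distance_mm <= threshold_mm

# On the robot (untested sketch):
# from easygopigo3 import EasyGoPiGo3
# egpg = EasyGoPiGo3()
# sensor = egpg.init_distance_sensor()   # DI Time-of-Flight sensor
# egpg.forward()
# while True:
#     if should_stop(sensor.read_mm()):
#         egpg.stop()
#         break
```

A “smooth” stop rather than a panic stop could use two thresholds - slow down at the first, stop at the second - but the single-threshold version is the place to start.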

BTW, I was notified today that a Luxonis OAK-D-Lite smart vision sensor is ready to be shipped to me. This fits with the concept of reserving the GoPiGo3’s Raspberry Pi for sensor fusion, decisions, and control. Performing image processing on the only processor on the GoPiGo can only be for learning. Cars today have a network of smart sensors and distributed processors, so a GoPiGo3 with no smart sensors can only begin to simulate a partial solution to the whole problem.

2 Likes

We end this day on a “high” note. I got the lane detection algorithm working that I tried a couple of weeks ago, on a picture from the Raspberry Pi cam:
Before:
[image: the track]
After:
[image: result with averaged lines]
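For anyone following along, the “averaged lines” step usually means converting each detected Hough segment to slope/intercept form and averaging the left-leaning and right-leaning groups into one line each. A minimal sketch of that averaging (pure Python, no OpenCV; the slope-sign split assumes the usual forward-facing camera view with image y growing downward):

```python
# Sketch: collapse many short Hough line segments into one averaged
# left lane line and one averaged right lane line.
# Convention assumed: image y grows downward, so the left lane line
# has negative slope and the right lane line has positive slope.

def average_lane_lines(segments):
    """segments: list of (x1, y1, x2, y2) tuples.

    Returns (left, right), each a (slope, intercept) tuple, or None
    for a side with no segments.
    """
    left, right = [], []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:                # skip vertical segments (infinite slope)
            continue
        slope = (y2 - y1) / (x2 - x1)
        intercept = y1 - slope * x1
        (left if slope < 0 else right).append((slope, intercept))

    def mean(group):
        if not group:
            return None
        avg_slope = sum(m for m, _ in group) / len(group)
        avg_intercept = sum(b for _, b in group) / len(group)
        return (avg_slope, avg_intercept)

    return mean(left), mean(right)
```

In the full pipeline the segments would come from `cv2.HoughLinesP()` on a Canny edge image of the region of interest, and the two averaged lines are what gets drawn back onto the frame.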

3 Likes

Big Kudos!

That is the basic technology demonstration. Love it.

3 Likes

Thanks! Let’s keep the progress going :slight_smile: ! Again… thanks to both of you for your time and help!

3 Likes

Do you have a proposed software architecture diagram yet?
Is there a vehicle speed controller based on inputs from other processes?

2 Likes

Bravo!

EXCELLENT news!

I think you have, if not the lion’s share, then at least the tiger’s share of the work done.  Great job!

I have to laugh at that. . . .

One of the major limitations is dollars, (or Kroner, or whatever you call your currency there), as some of these solutions cost far more than the robot itself.

The one @cyclicalobsessive is talking about “supposedly”, (I want to see it working before I believe it), does all the fancy image processing in the camera head itself - and draws power like an automobile’s starter motor! - leaving your robot the job of analyzing the results and deciding what to do with them.

That might be a “stretch goal” you could mention in your report - some of the new technologies that would make the job easier.

1 Like

The only software architecture is the one that I made in week one, which is based on the architecture of autonomous driving in general:

No, there is no speed controller. Or let’s say there is no specification for it. In my mind, I execute the code via SSH and the GoPiGo drives the course.

3 Likes

Thank you! Always nice to hear words of encouragement. Yes, that’s a fair point you’re making! Before deciding on the GoPiGo I had the OpenMV H7 cam in mind, with a LEGO-based car. That cam is basically a microcontroller with a camera, built for machine learning. Incredible piece of hardware!

3 Likes

“the code” needs a design to meet “actual requirements”:

  • should be non-blocking and non-sequential, to be “always in control”
  • modularized to limit coupling and maximize cohesion (this is another “cottage industry”)
  • explicit data paths
  • no use of side effects!
  • how do modules get configuration data?
  • how do modules access common data (centralized or distributed data keeping)?
  • multi-threaded or multi-processing (and why)?
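One way (of many) to satisfy the “non-blocking, always in control” and “explicit data paths” points is a sensor thread that publishes its latest reading into a thread-safe slot the control loop polls without ever waiting on I/O. This is only an illustrative sketch - the class and function names are mine, not from any project code:

```python
# Sketch: keep the control loop non-blocking.  A sensor thread publishes
# its latest reading into a thread-safe slot; the control loop reads the
# slot each cycle and never blocks on sensor I/O.

import threading
import time


class LatestValue:
    """Explicit data path: the writer overwrites, the reader never blocks."""

    def __init__(self, initial=None):
        self._lock = threading.Lock()
        self._value = initial

    def set(self, value):
        with self._lock:
            self._value = value

    def get(self):
        with self._lock:
            return self._value


def sensor_loop(slot, read_fn, period_s=0.05, stop_event=None):
    """Poll a sensor function at a fixed period and publish into the slot."""
    while stop_event is None or not stop_event.is_set():
        slot.set(read_fn())
        time.sleep(period_s)
```

The control loop then does `distance = slot.get()` every cycle: if the sensor stalls, the loop keeps running on the last published value instead of freezing, which is exactly the “always in control” property.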
1 Like

Here is a big picture architecture for one of my robots - Carl:

Central data is stored in CarlData.json.
Separate Python programs implement the major “Behaviors”.
(There are multiple instances of the EasyGoPiGo3() class, and some collisions are possible since there is no “static class data”.)
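For a flavor of the “central data in a JSON file” pattern, here is my own simplified sketch, not Carl’s actual code - real concurrent writers would also need file locking, which is exactly where the collision caveat above comes from:

```python
# Sketch: central robot state in a JSON file that separate behavior
# programs read and update.  Simplified: no file locking, so two
# writers doing read-modify-write at once could still lose an update.

import json
import os


def load_data(path):
    """Return the shared state dict, or an empty one if the file is missing."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)


def save_key(path, key, value):
    """Read-modify-write one key of the shared state."""
    data = load_data(path)
    data[key] = value
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(data, f, indent=2)
    os.replace(tmp, path)  # atomic rename: readers never see a partial file
```

A battery-monitor behavior might call something like `save_key("CarlData.json", "lastBatteryVoltage", 9.87)`, and any other behavior picks the value up on its next `load_data()`.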

2 Likes