GoPiGo3 Escapes To Find Lanes On The Highway

While watching the progress of the adorable OrionRobots father/daughter team working on OpenCV line following for their 2020 PiWars entry (https://www.youtube.com/watch?v=DKCsQOEiRTc), I realized I had not posted

Carl’s “Lane Finding Escape”

The Video of Carl’s Escape:

There is also an extensive write-up of my tests of the algorithm, using single-threaded and multiprocess OpenCV processing of video frames, with all the code, here:

MultiProcessing Find Lane In PiCamera Frames

Configuration

  • PiCamera v1.3 mounted on robot Carl (Dexter Industries GoPiGo3)
  • Raspberry Pi 3B Processor 1.2GHz 4-core 1GB memory
  • Raspbian For Robots (Dexter Industries Release of Raspbian Stretch)
    • Linux Carl 4.14.98-v7+ #1200 SMP Tue Feb 12 20:27:48 GMT 2019 armv7l GNU/Linux

Demonstrate find_lane_in(image) using multiprocessing with PiCamera

One process owns the camera and fills a Queue with 320x240 images (uncomment the alternate line for VGA 640x480).
Four processes each grab images from the Queue and run find_lane_in(image).
(They do nothing with the result unless the write-result-to-timestamped-file line is uncommented.)
(They will write each input frame to a file if that line is uncommented.)
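The producer/consumer scheme above can be sketched roughly as follows. This is my simplified illustration, not the actual test code: the camera is simulated with blank NumPy frames (on Carl the producer loop would capture from picamera), and the workers just count frames where find_lane_in(frame) would run.

```python
import multiprocessing as mp
import numpy as np

NUM_WORKERS = 4  # matches the four consumer processes described above

def camera_process(frame_queue, n_frames=20):
    """Owns the (simulated) camera and fills the Queue with 320x240 frames."""
    for _ in range(n_frames):
        frame = np.zeros((240, 320, 3), dtype=np.uint8)  # stand-in frame
        frame_queue.put(frame)
    for _ in range(NUM_WORKERS):   # one shutdown sentinel per worker
        frame_queue.put(None)

def worker_process(frame_queue, counter):
    """Grabs frames from the Queue; find_lane_in(frame) would run here."""
    while True:
        frame = frame_queue.get()
        if frame is None:          # sentinel: no more frames coming
            break
        # find_lane_in(frame) would be called here; we just count frames
        with counter.get_lock():
            counter.value += 1

if __name__ == "__main__":
    queue = mp.Queue(maxsize=8)
    processed = mp.Value("i", 0)
    workers = [mp.Process(target=worker_process, args=(queue, processed))
               for _ in range(NUM_WORKERS)]
    for w in workers:
        w.start()
    camera_process(queue, n_frames=20)
    for w in workers:
        w.join()
    print(processed.value)  # all 20 frames handled across the four workers
```

The bounded Queue (maxsize=8) keeps the camera process from racing far ahead of the workers when find_lane_in is slower than the capture rate.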

lanes.find_lane_in(image) performs the following:

  1. create a grayscale image copy
  2. blur the grayscale image
  3. apply Canny edge detect to blurred grayscale image
    return edge mask
  4. crop edge mask to triangular region of interest
  5. use the Hough transform (binned r,theta normals to length/gap-qualified line segments) to find lines
  6. average the left and right lane line segments down to a single left lane line and a single right lane line
  7. create lane lines overlay
  8. combine lane lines overlay over original image
    returns image with lane lines drawn in bottom 40%

(the write-edge-detect-image line can be uncommented to save the edge mask)


INPUT FRAME(S)

input_image


EDGE DETECTION (Gray, Blur, Canny, Triangular Region Of Interest Mask)

edge_detect


FIRST FRAME RESULT (No wait for camera to adjust exposure)

first_result


NORMAL FRAME RESULT

result


MY CONCLUSIONS

Using MultiProcessing for 640x480 images results in a 2.5x higher frame rate (20 vs 8 fps)

Using MultiProcessing for 320x240 images results in only a 20-30% higher frame rate (28 vs ~23 fps)

Utilizing the processing result is much easier in the single-threaded version;
the multiprocessing version needs interprocess result messaging.

The find_lane_in(frame) method needs a computation-only mode (no image result)
to further minimize computation time and improve performance for robot navigation
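One way that computation-only mode could look: a flag that skips the overlay-drawing steps (7-8) and returns just the numeric lane endpoints. Here detect_lanes and draw_overlay are hypothetical stand-ins for steps 1-6 and 7-8, not functions from my code.

```python
import numpy as np

def detect_lanes(image):
    """Hypothetical stand-in for steps 1-6 (edges + Hough + averaging).
    Returns fixed endpoints here; the real version would compute them."""
    h, w = image.shape[:2]
    left = (0, h, w // 2, int(h * 0.6))    # (x1, y1, x2, y2)
    right = (w, h, w // 2, int(h * 0.6))
    return left, right

def draw_overlay(image, lanes):
    """Hypothetical stand-in for steps 7-8 (draw lines, blend with image)."""
    return image.copy()

def find_lane_in(image, compute_only=False):
    """Proposed signature: detection always runs; drawing is optional."""
    lanes = detect_lanes(image)
    if compute_only:
        return lanes                 # numeric endpoints only, for navigation
    return draw_overlay(image, lanes)
```

A navigation loop would call find_lane_in(frame, compute_only=True) and steer from the endpoints, paying for the image construction only when a human wants to watch.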

Definitions

  • Frame Processing Time consists of

    • dequeue an image (or, if single-threaded, capture one),
    • find_lane_in(image),
    • print statistics to stdout (redirected to file)
  • Inter-frame Time

    • Time until next find_lane_in(image) results available
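A sketch of how those two times could be measured in the single-threaded case (the measure harness and its process callback are my illustration, not the benchmark code):

```python
import time

def measure(frames, process):
    """Record, per frame, the two times defined above (in milliseconds):
    Frame Processing Time runs from capture/dequeue through the work;
    Inter-frame Time is the gap between successive results."""
    stats = []
    last_done = time.perf_counter()
    for frame in frames:
        start = time.perf_counter()   # frame dequeued or captured here
        process(frame)                # stand-in for find_lane_in(image)
        done = time.perf_counter()
        stats.append(((done - start) * 1e3, (done - last_done) * 1e3))
        last_done = done
    return stats
```

Since Inter-frame Time also covers the loop overhead between results, it is always at least as large as Frame Processing Time in the single-threaded case; multiprocessing is what lets it drop below.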

RESULTS

Multiprocessing of find_lane_in(640x480 image)

  • In Order Average Inter-frame Time 48 ms or 20 fps
  • By Process Average Frame Processing Time 193 ms
  • “cpu load 80%”
  • 307200 pixels per frame

Multiprocessing of find_lane_in(320x240 image)

  • In Order Average Inter-frame Time 35 ms or 28 fps
  • By Process Average Frame Processing Time 142 ms
  • “cpu load 57%”
  • 76800 pixels per frame

Single-Thread.py Results of find_lane_in(640x480 image)

  • Average Inter-frame Time (and Frame Processing Time) 125 ms for 7-9 fps
  • “cpu load 35%”
  • 3 fps with imshow

Single-Thread.py Results of find_lane_in(320x240 image)

  • Average Inter-frame Time (and Frame Processing Time) ~40 ms or 21-26 fps
  • “cpu load 50%”
  • 7 fps with imshow

I made the mistake of leaving this out where Charlie could see it - and he’s jealous!