Oak-D-Lite Sensor Running DepthAI and MobileNet Object Recognition on GoPiGo3 over Legacy PiOS

Continuing in the “Post Your Messiest Desk” tradition of late, I present:

  • “Dave GoPiGo3 Platform”
  • hosting “GoPiGo3 installed over the December 2021 Legacy PiOS” system
  • sporting a $79 Oak-D-Lite sensor
  • running DepthAI and mobilenet-ssd_openvino object recognition
  • on the Oak-D-Lite’s Intel MyriadX processor

Since the DepthAI demo launches two video preview windows on the Legacy PiOS desktop, which I am viewing via VNC, the Raspberry Pi load average is quite heavy at 3.5, causing the RPi to run at 72°C. The Oak-D-Lite is drawing about 700 mA from its separate 5V supply.

In this first test, the sensor is displaying depth at every pixel in one window, and the color camera image in the other. Both have markup for the objects recognized:

  • tvmonitor at 74% confidence
  • chair at 86% confidence
  • sofa (in another room over 4 meters away) at 51% confidence

2 Likes

Very cool. So the OpenCV processing is running on the Oak?
It looks like the coordinate system is based on Z being depth, with X and Y aligned with the 2-D image axes. You said the sofa was over 4 m away, but the Oak pegs it at 3.66 m. How accurate have you been finding the Oak for distance?

Happy New Year.
/K

1 Like

I’m guessing that closer in, the distances will be more accurate. The depth cameras are only 75 mm apart, so the image disparity to the sofa is probably very slight.
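
As a back-of-the-envelope check (the 640x400 resolution and 73° HFOV for the mono cameras are my guesses, not datasheet numbers), the usual stereo relation Z = focal_px * baseline / disparity_px shows why:

```python
import math

# Rough stereo-depth sanity check (assumed numbers, not datasheet values):
#   depth Z = focal_px * baseline / disparity_px
BASELINE_M = 0.075      # Oak-D-Lite stereo baseline, 75 mm
IMG_WIDTH_PX = 640      # assumed mono camera width
HFOV_DEG = 73.0         # assumed mono camera horizontal field of view

focal_px = IMG_WIDTH_PX / (2 * math.tan(math.radians(HFOV_DEG / 2)))  # ~432 px

def disparity_px(depth_m):
    """Pixels of disparity produced by an object at depth_m."""
    return focal_px * BASELINE_M / depth_m

def depth_error_m(depth_m, disparity_error_px=1.0):
    """Depth over-estimate caused by matching one pixel short at depth_m."""
    return focal_px * BASELINE_M / (disparity_px(depth_m) - disparity_error_px) - depth_m

for z in (0.5, 1.0, 2.0, 4.0):
    print(f"{z:4.1f} m -> {disparity_px(z):5.1f} px disparity, "
          f"+{depth_error_m(z) * 100:5.1f} cm error per 1 px mismatch")
```

With those assumed numbers, the sofa at 4 m only produces about 8 px of disparity, so a single-pixel matching error already shifts the reported depth by over half a meter, while at half a meter the same error costs well under a centimeter - so yes, closer in should be much more accurate.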

1 Like

So, how is this supposed to save processor load and angst on your part?

Need a fan? :wink:

P.S.

You need to locate the sensor behind Dave’s eyes. . . .

Great point.

I’m a big fan of Dave’s, so that’s one…
:laughing: :robot:

/K

1 Like

You’re gonna get a SMACK!
:rofl:

2 Likes

Robots don’t need no stinkin’ windows … and (in my rogue opinion) don’t need full time 3D depth measurements, nor full time object recognition - it’s not like the sofa is going to move from one look to the next!

The plan is to keep all the heavy data processing on the sensor, so the bot only needs to tell the sensor to power up, do a look-see, and report back something like “The nearest obstacle in the field is a chair 20 cm away”.
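
Roughly the shape I have in mind for that look-see query - a minimal sketch using depthai 2.x and its MobileNet spatial-detection node; the blob path is a placeholder and exact node/option names may differ between depthai releases:

```python
import depthai as dai

# 20-class VOC label map used by mobilenet-ssd (index 0 is background)
LABELS = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
          "bus", "car", "cat", "chair", "cow", "diningtable", "dog",
          "horse", "motorbike", "person", "pottedplant", "sheep", "sofa",
          "train", "tvmonitor"]

def look_see(blob_path="mobilenet-ssd_openvino.blob"):
    """Power up the Oak, grab one set of spatial detections, report the nearest."""
    pipeline = dai.Pipeline()

    cam = pipeline.create(dai.node.ColorCamera)
    cam.setPreviewSize(300, 300)                 # MobileNet-SSD input size
    cam.setInterleaved(False)

    mono_left = pipeline.create(dai.node.MonoCamera)
    mono_right = pipeline.create(dai.node.MonoCamera)
    mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
    mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

    stereo = pipeline.create(dai.node.StereoDepth)
    stereo.setDepthAlign(dai.CameraBoardSocket.RGB)  # align depth to the color image
    mono_left.out.link(stereo.left)
    mono_right.out.link(stereo.right)

    nn = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
    nn.setBlobPath(blob_path)                    # placeholder path to the compiled blob
    nn.setConfidenceThreshold(0.5)
    cam.preview.link(nn.input)
    stereo.depth.link(nn.inputDepth)

    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName("detections")
    nn.out.link(xout.input)

    with dai.Device(pipeline) as device:         # "power up"
        q = device.getOutputQueue("detections", maxSize=4, blocking=True)
        detections = q.get().detections          # one "look-see"
        usable = [d for d in detections if d.spatialCoordinates.z > 0]
        if not usable:
            return "Nothing recognized in the field of view"
        nearest = min(usable, key=lambda d: d.spatialCoordinates.z)
        return (f"The nearest obstacle in the field is a {LABELS[nearest.label]} "
                f"{nearest.spatialCoordinates.z / 10:.0f} cm away")

if __name__ == "__main__":
    print(look_see())
```

The point is that the Oak does all the detection and depth work; the RPi just opens the device, pulls one detections packet, and turns it into a sentence.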

One other planned application, “follow me”, would need to run the sensor full time during the follow function, but it would only report “human is 3 feet away and X degrees [left | right] of center”, so the RPi only needs to do speed and differential motor control.
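
The off-center angle for that report can come straight from the detection bounding box and the color camera’s horizontal FOV - the 69° HFOV below is my assumption, the rest is just pinhole geometry:

```python
import math

HFOV_DEG = 69.0   # assumed horizontal field of view of the color camera

def follow_report(xmin, xmax, distance_m):
    """Turn a normalized bounding box (0.0-1.0, as depthai detections report)
    and a distance into a 'follow me' status string."""
    center = (xmin + xmax) / 2.0     # 0.0 = left edge of image, 1.0 = right edge
    # pinhole projection: offset in the image plane -> bearing angle
    degrees_off = math.degrees(
        math.atan((center - 0.5) * 2.0 * math.tan(math.radians(HFOV_DEG / 2))))
    side = "right" if degrees_off > 0 else "left"
    return (f"human is {distance_m / 0.3048:.1f} feet away and "
            f"{abs(degrees_off):.0f} degrees {side} of center")

print(follow_report(xmin=0.55, xmax=0.75, distance_m=0.91))
# -> human is 3.0 feet away and 12 degrees right of center
```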

The longer-term goal is to build “Carl’s home object knowledge base”, which will involve:

  • wander or wall follow for a bit
  • perform a 360 image set capture
  • review each image with Carl’s custom “known object neural net classifier”
  • verify all recognized object locations with Carl’s home object knowledge base
  • review each image with an “unknown object neural net classifier” (round | square | non-flat)
  • queue the unknown objects for human classification, naming, and object purpose
  • add the new known object/images to the “known object classifier” (probably off-board, but onboard if possible, given a lot of time)

… then, on the next wander with the updated “known object classifier”:

  • notice a new object not in Carl’s home object knowledge base (perhaps with multiple hits around the home)
  • add the object location, masked object image, color, size, confidence, date-time added, etc. to Carl’s home object KB
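
For what it’s worth, each entry in the home object KB could be as simple as a dataclass along these lines - field names and types are only my first guess at the list above, not a settled design:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class HomeObject:
    """One record in Carl's home object knowledge base (sketch only)."""
    label: str              # e.g. "sofa" from the known-object classifier
    location: tuple         # (x_m, y_m) estimate in Carl's map frame
    masked_image_path: str  # cropped/masked image saved for later review
    dominant_color: str     # e.g. "dark brown"
    size_m: tuple           # (width_m, height_m) estimate
    confidence: float       # classifier confidence at capture time
    date_added: datetime = field(default_factory=datetime.now)
    sightings: int = 1      # bumped on each re-detection during a wander

knowledge_base = [
    HomeObject("sofa", (3.2, 1.4), "objects/sofa_001.png",
               "dark brown", (1.9, 0.8), 0.51),
]
```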

The experimentation is taking place on the Dave platform. Carl will need to be rebuilt on the Legacy PiOS and fully checked out, or perhaps I will have to learn to do GoPiGo3 stuff in a virtual environment on Carl. Carl is stable and jealous but avoiding the “robot bleeding edge”.

2 Likes

Cute.

This experiment has caused serious personality disorder syndrome. I wanted to learn about this sensor on the GoPiGo in its simplest form factor. I only had complicated Carl with a fixed OS version, and the ridiculously immature “ROSbot Dave” on Ubuntu, so with a quick change I hijacked Dave’s body for the previously disembodied “GoPiGo3 on Legacy PiOS” card. Now there are two robots sharing Dave’s body, but only one of them is “Dave.”

2 Likes

Yes, the Oak-D-Lite has an Intel Myriad X processor capable of 4 trillion image operations per second, and it can run multiple OpenCV operation streams simultaneously.

There are three cameras - one color and two grey-scale. The two grey-scale camera images can be directed to a “depth AI” algorithm to extract distance for every matched pixel. Images can be directed to any of the common 2D feature or object CNN classifiers like YOLO, to edge and corner detectors, to color-area detectors, or to custom-built classifiers downloaded to the sensor. There are also streams that can combine depth information with color and edge information for 3D object classifiers, at slower frame rates of course. It is also possible to feed sequential frames to some classifiers for object motion measurements.
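
For anyone curious how those streams get wired up, here is a minimal depthai 2.x sketch of the two-stream setup from the first test - grey-scale pair into a StereoDepth node, color preview out as a second stream (stream names are arbitrary, and displaying the frames on the host needs OpenCV):

```python
import depthai as dai

pipeline = dai.Pipeline()

# Color camera -> "rgb" stream to the host
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(640, 400)
cam.setInterleaved(False)
xout_rgb = pipeline.create(dai.node.XLinkOut)
xout_rgb.setStreamName("rgb")
cam.preview.link(xout_rgb.input)

# Grey-scale pair -> StereoDepth -> "depth" stream to the host
mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
stereo = pipeline.create(dai.node.StereoDepth)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)
xout_depth = pipeline.create(dai.node.XLinkOut)
xout_depth.setStreamName("depth")
stereo.depth.link(xout_depth.input)

with dai.Device(pipeline) as device:
    q_rgb = device.getOutputQueue("rgb", maxSize=4, blocking=False)
    q_depth = device.getOutputQueue("depth", maxSize=4, blocking=False)
    rgb_frame = q_rgb.get().getCvFrame()    # BGR numpy array (needs OpenCV on host)
    depth_frame = q_depth.get().getFrame()  # uint16 depth in millimeters
    print("color:", rgb_frame.shape, "depth:", depth_frame.shape)
```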

This thing has more OpenCV recipes than a developer’s algorithms bookshelf.

1 Like

Ah - got it. From an experimentation POV it’s definitely an advantage to be able to pop in a new micro-SD. Perhaps you should name this bot “Not Dave” to distinguish them. If you have a third experiment going, then that would be “Not Dave Either” (or maybe “Still Not Dave”). Lots of possibilities :wink:

OpenCV is something I’m starting to check out. Sounds like this would be a good way to go.

/K

2 Likes

I did the Practical Python and OpenCV course from pyimagesearch and learned a lot about the Raspberry Pi, Python, and the fundamental transformations and recognizers of OpenCV. I bought the “Basic Bundle” since I didn’t want any virtual machines or videos - just the book, the code, and the supplemental material with tests. Highly recommended, even though the OpenCV folks have free introductory tutorials that parallel the course material.

2 Likes

I’m looking for the “Practical Python, Flask, Werkzeug, JavaScript and web-services” course. . . . though I think I’m making progress just hacking away at it with a big machete.  :wink:

2 Likes