"Easy Pi Camera Sensor" with Examples Now Available

Easy PiCamera Sensor Class For GoPiGo3 Robots

NOTE: No connection to DexterIndustries or ModRobotics. Do Not Ask Them For Support!

Python3 class to treat the PiCamera as a unified family of sensors useful in robot programs, including:

  • Left, Front, Right Light Intensity (0-100)
  • Motion Detector (Left, Right, Up, Down)
  • Color Detector With Color Table ReLearning (Black, Brown, Red, Orange, Yellow, Green, Blue, Violet, White)
  • Brightest-area horizontal angle from center, and its intensity value (0-100)
  • 320x240 RGB image save to JPEG file or retrieve as numpy array

Refresh rate is roughly 10 per second.

To Bring Down To Your GoPiGo

wget https://github.com/slowrunner/Carl/raw/master/Projects/EasyPiCamSensor/EasyPiCamSensor.tgz 
tar -xzvf EasyPiCamSensor.tgz

Requirements

Requirements ( All come in the stock ModRobotics Raspbian_For_Robots ):

  • Python3
  • (Does not use/require OpenCV)
     

Other Requirements:

  • GoPiGo3 for some example programs
  • (Sensor does not require GoPiGo3)
  • Not multi-processing-safe (camera does not allow multiple streams)
     

Note: The tgz contains a version of easygopigo3.py with a working steer(lft_pct,rt_pct) method

  • For use with the Braitenberg Vehicle examples
  • The current ModRobotics easygopigo3.py steer() method has a problem (see the steer() usage sketch below).
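
A minimal usage sketch of steer() (assumes a GoPiGo3 robot is attached and the bundled easygopigo3.py is on the Python path; the percent-of-set-speed behavior described in the comments is my understanding, not ModRobotics documentation):

#!/usr/bin/env python3
# Minimal sketch of steer() usage (assumes a GoPiGo3 robot and that the
# bundled easygopigo3.py is importable).  steer(left_pct, right_pct) runs
# each wheel at the given percentage of the current set speed, so unequal
# percentages produce a curve - the behavior the Braitenberg examples rely on.
import time
import easygopigo3

egpg = easygopigo3.EasyGoPiGo3()
egpg.set_speed(150)      # wheel speed (deg/sec) that 100 percent corresponds to

egpg.steer(30, 100)      # left wheel slower than right -> curve to the left
time.sleep(2)
egpg.steer(100, 100)     # equal percentages -> drive straight
time.sleep(1)
egpg.stop()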

ARCHITECTURE

  The EasyPiCamSensor class encapsulates  
  - PiCamSensor class:  
    * Creates a thread to run PiCamSensor.update() roughly 10 times per second  
      - PiCamSensor.update() computes average light intensity for the left half, the whole, and the right half of a camera image,  
        estimates what color is present in the central portion of the image by matching its RGB values to a table of colors,  
        estimates that color again by matching HSV values to the same table,  
        and computes the horizontal angle from centerline to a thresholded area of brightest pixels  
        (a toy sketch of this nearest-color matching follows the diagram below)  
    * Starts a PiCamera class object which will capture video frames at 10fps  
    * Creates MyGestureDetector for the PiCamera object to analyze motion using three consecutive frames  
      - When MyGestureDetector.analyze() finds new motion it latches the direction of motion and the time it occurred  
        (reading the latched motion clears the latch so it is ready to hold the next motion event)

   EasyPiCamSensor <-- PiCamSensor <-- PiCamera  
                                          ^  
                                          |  
                                   <-- MyGestureDetector  
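
To make the color-matching step concrete, here is a toy sketch of the nearest-color idea described above. This is an illustration only; the table values and helper are invented for the example and are not the class's actual code, which keeps both RGB and HSV references in config_easypicamsensor.json:

# Toy illustration of nearest-color matching - NOT the library's actual code.
# The reference table values below are invented for the example.
COLOR_TABLE_RGB = {
    "Black": (0, 0, 0),
    "Red":   (200, 30, 30),
    "Green": (30, 160, 60),
    "Blue":  (40, 60, 190),
    "White": (245, 245, 245),
}

def nearest_color(rgb, table=COLOR_TABLE_RGB):
    """Return (name, distance) of the table entry closest to the rgb triple."""
    best_name, best_dist = None, float("inf")
    for name, ref in table.items():
        # Euclidean distance in RGB space
        dist = sum((a - b) ** 2 for a, b in zip(rgb, ref)) ** 0.5
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name, best_dist

print(nearest_color((210, 40, 35)))    # -> ('Red', <small distance>)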

API

  • epcs = easypicamsensor.EasyPiCamSensor()
    • Create and start Easy Pi Camera Sensor object
    • Read values from config_easypicamsensor.json if it exists
  • light() # return average intensity across entire sensor (0.0 pitch black to 100.0 blinding light)
  • light_left_right() # return average intensity across left half and right half of sensor
  • color() # returns an estimate of the color of the central area of the sensor using the "RGB" method
  • color_values_dist_method(method="BEST") # returns nearest color with distance and method ("RGB" or "HSV") used
  • motion_dt_x_y() # returns time of first motion left|right and/or up|down since last method call
  • motion_dt_x_y_npimage() # returns details and image of first motion left|right and/or up|down since last method call
  • max_ang_val() # returns the horizontal angle from centerline (+/- half FOV, left negative) of the brightest area and its max intensity (0-100)
  • save_image_to_file(npimage=None,fn="capture.jpg") # saves passed or last frame to file encoded as JPEG
  • get_image() # returns RGB numpy image array
  • learn_colors(tts_prompts=False) # learn one or more colors with optional TTS prompting
  • print_colors() # print the current color table
  • known_color(color_name) # returns True if color_name is in the current color table
  • delete_color(color_name) # removes a color from the EasyPiCamSensor object's in-memory color table
    (must then call save_colors() to make it permanently gone)
  • save_colors(path="config_easypicamsensor.json") # save color table to file [default: config_easypicamsensor.json]
    (Probably a good idea to save to a .json.test file to protect the existing config file, so use responsibly.)
  • read_colors() # reads color table from config_easypicamsensor.json file
  • save_config(dataname,datavalue,path="config_easypicamsensor.json") # save a value or variable in the config file
    e.g.: epcs.save_config("vflip",True) for later retrieval with vflip = epcs.get_config("vflip")
    epcs.save_config("my_color_array",my_color_array) for later retrieval with my_color_array = epcs.get_config("my_color_array")
  • get_config(dataname=None,path="config_easypicamsensor.json") # retrieve a value or the entire config dictionary from the config file if it exists
  • get_all_data() # returns dict with all “by-frame” data
  • print_all_data() # convenience prints dict returned by get_all_data()
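
A minimal usage sketch tying a few of the calls above together (method names are taken from this list; the exact return shapes and the warm-up delay are assumptions):

#!/usr/bin/env python3
# Minimal usage sketch of the EasyPiCamSensor API listed above.
# Return shapes for the multi-value calls are assumptions based on the method names.
import time
import easypicamsensor

epcs = easypicamsensor.EasyPiCamSensor()   # starts camera, update thread, reads config if present
time.sleep(5)                              # give the camera a moment to warm up

print("overall light :", epcs.light())                # 0.0 (dark) .. 100.0 (blinding)
print("left / right  :", epcs.light_left_right())
print("color (RGB)   :", epcs.color())

ang, val = epcs.max_ang_val()              # assumed (angle_from_center, intensity)
print("brightest area: {:.1f} deg at intensity {:.1f}".format(ang, val))

dt, x, y = epcs.motion_dt_x_y()            # assumed (datetime, left|right, up|down); reading clears the latch
print("latched motion:", dt, x, y)

epcs.save_image_to_file(fn="capture.jpg")  # save the most recent frame as JPEG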

EasyPiCamSensor Example Programs:

  • read_sensor.py
    • Reads all “by-frame” data from sensor 10 times per second and pretty prints with headings every 15 readings
pi@Carl:~/Carl/Projects/EasyPiCamSensor $ ./read_sensor.py 
2020-12-31 22:23:59 read_sensor: Starting
2020-12-31 22:23:59 read_sensor: Warming Up The Camera
config_easypicamsensor.json or colors_rgb_hsv not found
Using DEFAULT_COLORS_RGB_HSV.
2020-12-31 22:24:04 read_sensor: Starting Loop
xmove ymov     latch_move_time     l_x   l_y       frame_time         rgb  (   values  )  dist    hsv  (       values       )   dist  left   whole  right (maxAng   val )
 none none                         none none 2020-12-31 22:24:06.83  Brown ( 89, 66, 41)  18.71 Orange ( 30.51, 56.53, 35.05)  10.49  23.80  18.01  12.23 ( 16.49, 99.22)
right   up 2020-12-31 22:24:06.92 right   up 2020-12-31 22:24:06.93  Black ( 17, 12,  7)  21.95 Orange ( 30.71, 59.80,  6.87)  10.29  17.63  13.70   9.76 (  7.12, 98.82)
 none   up 2020-12-31 22:24:07.02  none   up 2020-12-31 22:24:07.03  Brown ( 78, 57, 35)  31.91 Orange ( 29.45, 56.74, 30.87)  11.55  13.78  15.58  17.38 ( 16.49, 99.22)
 left   up 2020-12-31 22:24:07.12  left   up 2020-12-31 22:24:07.14  Brown ( 88, 65, 39)  19.65 Orange ( 30.55, 56.93, 34.85)  10.45  22.96  20.50  18.03 ( 16.49, 99.22)
  • i_see_color.py [-h] [-v]
    • Uses EasyPiCamSensor.color_values_dist_method() and optionally [-v] espeakng TTS to report estimate color seen
    • User selects BEST, RGB or HSV color matching method (BEST returns RGB for some colors, HSV for others)
    • Target_Colors.pdf provides color samples that match the default sensor color table (Print on matte photo paper for best results)

Target Color Samples

  • i_see_light.py
    • Comments when someone turns a room light on or off
  • i_see_motion.py [-h] [-v]
    • Reports first motion and date/time since last report
    • Recognizes left or right, up or down motion
    • Option [-v] adds TTS reports
  • i_see_colors_in_motion.py [-h] [-v]
    • Prints and optionally [-v] speaks last motion and color
    • Saves image of motion to motion_capture-YYYY-MM-DD_HH_MM_SS.jpg
    • Note: Image may not catch a fast moving object

Color and Motion Detect With Image Save

  • face_the_light.py [-h] [-v]
    • Turns GoPiGo3 robot to face the brightest area in room
    • Option [-v] narrates with Text-To-Speech
  • braitenberg2B.py [-h] [-v] [-g N.n] [-s]
    • implements Braitenberg Vehicle 2B “loves light”
    • Uses left and right light intensity as stimulus for the opposite side wheel (see the sketch after this list)
    • Implementation adds obstacle inhibition of forward motion for vehicle protection
    • Option [-v] narrates with Text-To-Speech
    • Option [-g N.n] introduces stimulus amplification with given gain. [Default 1.0]
    • Option [-s] pops window on desktop every few seconds showing robot view
      (This option only works from command shell on robot’s desktop)
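
A stripped-down sketch of the cross-wiring that braitenberg2B.py implements (the bundled example also adds obstacle inhibition, the gain and TTS options; the return order of light_left_right() is an assumption):

#!/usr/bin/env python3
# Stripped-down sketch of the Braitenberg 2B cross-wiring described above:
# left light intensity drives the right wheel and vice versa, so the robot
# turns toward the brighter side.  Not the bundled braitenberg2B.py, which
# also adds obstacle inhibition, a gain option, and TTS narration.
import time
import easypicamsensor
import easygopigo3      # the version bundled in the tgz (working steer())

GAIN = 1.0              # stimulus amplification, like the example's -g option

epcs = easypicamsensor.EasyPiCamSensor()
egpg = easygopigo3.EasyGoPiGo3()
egpg.set_speed(150)
time.sleep(5)           # camera warm-up

try:
    while True:
        left_light, right_light = epcs.light_left_right()  # each 0..100 (order assumed)
        left_pct = min(100.0, GAIN * right_light)           # right light -> left wheel
        right_pct = min(100.0, GAIN * left_light)            # left light -> right wheel
        egpg.steer(left_pct, right_pct)
        time.sleep(0.1)                                      # roughly the sensor refresh rate
except KeyboardInterrupt:
    egpg.stop()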

Braitenberg Vehicle 2B using EasyPiCamSensor.light_left_right()

Video: GoPiGo3 “Carl” w/EasyPiCamSensor Braitenberg Vehicle 2B

  • simple_braitenberg2A.py
    • implements Braitenberg Vehicle 2A “loves the dark”
    • Uses left and right light intensity as stimulus for the same side wheel
    • Implementation adds obstacle inhibition of forward motion for vehicle protection
    • (Does not use TTS)
  • teach_me_colors.py
    • Allows adding or re-learning one or more colors (see the sketch after this list)
    • Outputs to config_easypicamsensor.json.new
    • (Copy to config_easypicamsensor.json for use)
  • delete_a_color.py
    • Allows testing and easily deleting a poor performing color
    • Outputs to config_easypicamsensor.json.new file
    • (Copy to config_easypicamsensor.json for use)
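
For reference, the underlying calls these two utilities wrap look roughly like this (a sketch using the API methods listed earlier; the exact prompting flow inside learn_colors() belongs to the library and is not shown):

#!/usr/bin/env python3
# Sketch of the color (re)learning workflow that teach_me_colors.py wraps,
# using the API methods listed earlier.  Saving to a .json.new side file
# (as the utilities do) protects the working config until the new table tests well.
import easypicamsensor

epcs = easypicamsensor.EasyPiCamSensor()

epcs.print_colors()                       # show the current color table
epcs.learn_colors(tts_prompts=False)      # point the camera at each sample when prompted

epcs.save_colors(path="config_easypicamsensor.json.new")
# After testing: copy config_easypicamsensor.json.new to config_easypicamsensor.json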

Credits

DISCLAIMER

First, Thank you for trying my EasyPiCamSensor.

There are certainly more elegant ways of doing this.
This is the best I could do with my limited understanding of Python and the Pi Camera.

This code comes with no warranty of correct function.

If you think it should do something it doesn’t,
or shouldn’t do something it does,
you should know this was a learning experience for me.
It is not a product. I am not interested in maintaining it for you!

No connection to DexterIndustries or ModRobotics. Do Not Ask Them For Support!

If you know how to make it better, create a pull request.
Perhaps I will learn how to merge other people’s code.

2 Likes

@cyclicalobsessive - looks like quite a library. I’m still fooling around with ROS, but will have to check it out at least for ideas.
Thanks for making this available.
/K

2 Likes

Wow!

What a piece of work!!

You do realize that this likely earns you the “programming God” badge, right? :wink:

Nice job!

1 Like

I’ll accept comments or compliments only after you download it and run all the demo programs!

1 Like

You do realize that me messing with that is a bit far out, (make that read, “not near the top of my current list”), right now.

However, as dull a boy as I may be, I’m bright enough to know when considerable work and effort has been put into a project.  And this is one of those times.

Nice piece of work, no doubt.

I’m still working on getting to that level of rarefied excellence where I can start using your projects to help me do useful things.

Until then. . . . .

That’s honest, even if executing the two commands to install it on Charlie takes less than 30 seconds, and starting ./i_see_light.py and turning the room lights on and off takes another 15, but hey, I understand what you’re saying. It would be more fun if Charlie had a MonkMakes speaker mounted under him.

1 Like

Yup!

Right now I’m working on figuring out how to multi-boot Charlie via dip-switches, and continuing to test further GoPiGo O/S releases.

Spoiler:
It appears that the reason for the flakiness I’m experiencing in a multi-boot environment is that the /boot partition is a fooler.

  • It LOOKS totally generic and bland, capable of doing whatever you want.

  • However, the actual content of the /boot partition is extremely specific to the type, kind, and update level of the O/S you’re trying to run - even though everything has the exact same name - and often the exact same file-sizes - as everything in any other /boot partition.

  • Even something as simple as “apt-get update” can wreak havoc on a carefully crafted /boot partition when the kernel and firmware files get updated.

I am continuing to research this, and it is beginning to look like it has more aspects than a cat has hair. (Unless you have a hairless cat. . .)

And yes, a MonkMakes speaker would be nice.  Assuming I could actually get the blasted beastie over here! (growl and hiss)

I’d be thrilled with a regular speaker plugged into a USB port, or even the fancy HAT that comes with the Google AIY kit, if I could get it to work - but everything wants to be an SPI interface. . . .

1 Like

Did you check out how the NOOBS multi-boot image works?

1 Like

Righty-Oh!

That, and BerryBoot, PINN, and a few others.

All of these assume that:

  • You ultimately have some kind of console device attached to the 'Pi that can be used to select the O/S you wish to use.  Normally not an unreasonable expectation, however in a stand-alone 'bot, maybe not.

  • The O/S images are compressed into a special tar-gz, (“NOOBS”) format that includes a lot of special meta-data about what OS does what, with what resources, where and when.

  • I’d really rather use images that are in what I call the “standard” download image format where the structure of the image - both /boot and /rootfs - are contained within a binary image file.  This way any downloadable image that can be flashed to an existing SD card is a valid candidate for multi-booting, not just a pre-selected, pre-packaged subset of available images provided by the maintainer of the boot manager.

Right now I’m looking at a tutorial on multi-booting the Pi-4 which is heavily command-line driven and that - once I figure out that tangled mess - might be just what I’m looking for.  Combining a non-native English speaker with the assumption that you’ve done this before and already know what he’s talking about makes for a soup that’s a bit chewy.  Ultimately what I will probably end up doing is running through the steps myself, making a total balls-up of it, figuring out what he really intended to say, and then - somehow - translating that into simple steps that people like you and I can do repeatably.

I dug out my old(er) Pi-3 to plug into things and experiment with as Charlie’s getting heartily tired of being disassembled.

Are you trying to solve a problem for “if you build it, they will come”? It sounds like an elegant solution to a unique problem.

Personally, I want everything to be stock, so if I have a problem there will be lots of folk who have hit my problem and already have a solution.

1 Like

It could be that I’m just too stupid and stubborn to know when to give up.

Seriously now, it seems to me that in a robotic environment there will be opportunities to make configuration choices that might not want to depend on a monitor/console arrangement for selection.

As I mentioned in the forum posting where everyone asked “Are you SURE there isn’t some input selection available?” (as in VNC, SSH, a serial console, etc.), I said to assume the Pi is like an old-time RS-232 modem where all you get is a set of dip-switches on the back panel to select the target baud-rate and framing until after it boots.

This particular project does not assume any kind of console connection at all - rather that the 'bot is a dumb-lump of hardware until after power is applied and the selections are made before you plug it in.

Then again, maybe I’m just too stubborn to know when to quit!