HUSKYLENS for GoPiGo3

I ordered up one of the HUSKYLENS boards, and figured out the cable swaps needed to connect it to the GoPiGo3:

[Photo: HuskyLens wired to the GoPiGo3]

WIKI:
https://wiki.dfrobot.com/HUSKYLENS_V1.0_SKU_SEN0305_SEN0336

HUSKYLENS INFO

  • I2C addr: 0x32 (50 decimal)

  • Processor: Kendryte K210

  • Image Sensor: OV2640 (2.0 megapixel camera)

  • Supply Voltage: 3.3~5.0V

  • Current Consumption (TYP): 320mA@3.3V, 230mA@5.0V
    (face recognition mode; 80% backlight brightness; fill light off)

  • Connection Interface: UART, I2C

  • Display: 2.0-inch IPS screen with 320×240 resolution

  • Built-in Algorithms:

    • Face Recognition,
    • Object Tracking,
    • Object Recognition,
    • Line Tracking,
    • Color Recognition,
    • Tag Recognition
  • Dimensions: 52 mm × 44.5 mm / 2.05 × 1.75 in

  • HUSKYLENS Connector (Grove connector, but with a different pin assignment)
    (left to right, tabs down)

    • T: connect to SDA
    • R: connect to SCL
    • -: Gnd
    • +: +5v (or 3.3v)
  • GoPiGo3 (True) Grove Connectors: (right to left, tabs up)

    • SCL
    • SDA
    • 5v
    • Gnd
  • HUSKYLENS to GoPiGo3 Cable - Mod one end:

    • swap Pin 1 and Pin 2
    • swap Pin 3 and Pin 4

sudo i2cdetect -y 1
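If the cable mod is right, the HuskyLens should show up at address 32 in the i2cdetect grid. A quick Python probe can confirm the same thing - a minimal sketch, assuming the stock python3-smbus package:

import smbus

bus = smbus.SMBus(1)       # /dev/i2c-1, the same bus i2cdetect scans
try:
    bus.write_quick(0x32)  # zero-byte transaction; we only care about the ACK
    print("HuskyLens is answering at 0x32")
except OSError:
    print("No ACK at 0x32 - recheck the cable swaps")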

Bring Software to Raspbian For Robots RPi

mkdir -p Carl/Examples/HuskyLens
cd Carl/Examples/HuskyLens
git clone https://github.com/HuskyLens/HUSKYLENSPython.git
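With the repo cloned, a first connectivity test might look like the sketch below. I'm assuming huskylib.py sits where Python can import it and that the constructor takes the protocol, a serial port (unused for I2C), and the address - check the repo's own examples for the exact signature.

from huskylib import HuskyLensLibrary    # assumes huskylib.py is importable

husky = HuskyLensLibrary("I2C", "", address=0x32)  # assumed constructor
print(husky.command_request_knock())     # expect "Knock Recieved" on success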

API Returns:

command_request()
=> Return all data

command_request_blocks()
=> Return all blocks on the screen

command_request_arrows()
=> Return all arrows on the screen

command_request_learned()
=> Return all learned objects on screen

command_request_blocks_learned()
=> Return all learned blocks on screen

command_request_arrows_learned()
=> Return all learned arrows on screen

command_request_by_id(idVal)
*idVal is an integer
=> Return the object with id of idVal

command_request_blocks_by_id(idVal)
*idVal is an integer
=> Return the block with id of idVal

command_request_arrows_by_id(idVal)
*idVal is an integer
=> Return the arrow with id of idVal

command_request_algorthim(ALG_NAME)
* ALG_NAME is one of the following strings:
“ALGORITHM_OBJECT_TRACKING”
“ALGORITHM_FACE_RECOGNITION”
“ALGORITHM_OBJECT_RECOGNITION”
“ALGORITHM_LINE_TRACKING”
“ALGORITHM_COLOR_RECOGNITION”
“ALGORITHM_TAG_RECOGNITION”
“ALGORITHM_OBJECT_CLASSIFICATION”

command_request_knock()
=> Returns “Knock Recieved” on success
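Putting the calls above together, a polling loop for one algorithm might look like this sketch (the exact shape of the returned block data is an assumption - print it raw first):

import time
from huskylib import HuskyLensLibrary

husky = HuskyLensLibrary("I2C", "", address=0x32)  # assumed constructor
husky.command_request_algorthim("ALGORITHM_FACE_RECOGNITION")

while True:
    for block in husky.command_request_blocks_learned():
        print(block)   # x/y/width/height/ID layout depends on the library version
    time.sleep(0.5)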


Once you get that in your mitts, it’s all over as far as the software is concerned.  (I can help design the mount, though.)

You’re going to have to get Carl a bigger hat!

No, in fact just the opposite. I will be forced to implement a corresponding OpenCV feature set for comparison (a first sketch of that is below). The flexibility of OpenCV trumps the HuskyLens’s speed, and the zero idle power of OpenCV is supremely huge compared to the HuskyLens’s always-on requirement.

AND from what I have read, regrettably after the fact, it doesn’t actually work. I’m already regretting giving in to the “add hardware” weakness. I’ve been preaching “use a minimum sensor suite to the max”, and then I order up this thing. Maybe I’ll use it on a ROS-powered, lidar-equipped GoPiGo4 bot some day.
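For comparison, the OpenCV side of that feature set might start with something like this - a sketch of Haar-cascade face detection only (recognition proper takes more work), assuming the opencv-python package and a camera at index 0:

import cv2

# Haar cascade bundled with opencv-python
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)    # Pi camera or USB cam at index 0
ret, frame = cap.read()      # grab a single frame
cap.release()                # zero idle power: camera runs only when asked

if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        print("face at x=%d y=%d w=%d h=%d" % (x, y, w, h))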


My tongue-in-cheek reply went right past you. . .

What I was trying to say is that, (coding wise), it seems like you pull flying monkeys out of your ears on command, write classes and methods with both feet - in your sleep, no less - and generally write, (and understand), more code in ten minutes than I do in three weeks!

Once you get that beastie home with you, plug it in, and look at the API, you’ll have code written in no time - and my money’s on you coming up with a better handler than the manufacturer did!


Update:

  • HuskyLens arrived
  • My cable plan worked
  • The device works
  • Nearly every function requires a series of non-intuitive manual steps.
  • The Python interface is minimally implemented, allowing for algorithm selection and reading results only.
  • All configuration and learning are available only via the screen, the multi-function button, and the select button.

I’m building a “GoPiGo Interface and Examples” set, but I’m disappointed that so much manual configuration is needed to make the device useful to the bot.


Yup.  Not surprising.  I’ve noticed that in a hardware dev environment, software support is the ugly step-child.  There’s usually just enough software written to verify functionality to spec, and that’s it.

The rest?  To Boldly Go Where No Hardware Dev Has Gone Before?  That, my friend, is “left as an exercise for the student.”

This is also true in a software dev environment.  The MDN/HTML-5 Gamepad methods and documentation are, (I’m being very polite here), “sparse”.  The thought that someone may want to use a gamepad for something other than Half-Life, or World Of Tanks, didn’t occur even in their nightmares.  The result?  If you want to use a “joystick”, (gamepad), for anything other than raster animated sprites, “Son. . . You’re on your own!” (Blazing Saddles)


First developer - just get “it” done (while we figure out what “it” is).
Second dev - if it only did this one little thing differently…
Third dev - useless code confusing me; I’ll just get rid of this little thing I don’t understand.
Fourth dev - “why didn’t they make this totally configurable so everyone can use it?”
Fifth dev - useless, I can rewrite it in an hour… “well, that was optimistic”


At $44 I’m totally amazed at everything it does, and it’s even quick about it.


Never said it wasn’t good, just that you have to “grow your own” code with it - which is very common with “bleeding edge” :wink: hardware.  But then again, that’s half the fun!

If they have a “Huskylens” site or forum, maybe you can go back there and share your results?  (Along with a shameless plug for the GoPiGo. :wink:)