command_request_blocks()
=> Return all blocks on the screen
command_request_arrows()
=> Return all arrows on the screen
command_request_learned()
=> Return all learned objects on screen
command_request_blocks_learned()
=> Return all learned blocks on screen
command_request_arrows_learned()
=> Return all learned arrows on screen
command_request_by_id(idVal)
*idVal is an integer
=> Return the object with id of idVal
command_request_blocks_by_id(idVal)
*idVal is an integer
=> Return the block with id of idVal
command_request_arrows_by_id(idVal)
*idVal is an integer
=> Return the arrow with id of idVal
command_request_algorthim(ALG_NAME)
* ALG_NAME is a string whose value can be one of the following:
"ALGORITHM_OBJECT_TRACKING"
"ALGORITHM_FACE_RECOGNITION"
"ALGORITHM_OBJECT_RECOGNITION"
"ALGORITHM_LINE_TRACKING"
"ALGORITHM_COLOR_RECOGNITION"
"ALGORITHM_TAG_RECOGNITION"
"ALGORITHM_OBJECT_CLASSIFICATION"
command_request_knock()
=> Returns "Knock Recieved" on success
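To make the listing concrete, here is a minimal usage sketch. It is untested; the import path, class name, and constructor argument are my assumptions, and only the command_request_* methods and ALGORITHM_* strings come from the list above.

    # Minimal usage sketch (untested).  The import path, class name, and constructor
    # argument are assumptions; only the command_request_* methods and the
    # ALGORITHM_* strings come from the listing above.
    from huskylensPythonLibrary import HuskyLensLibrary   # assumed module/class names

    husky = HuskyLensLibrary("I2C")      # assumed constructor; a serial hookup may differ

    print(husky.command_request_knock())                  # sanity-check the connection

    # Select an algorithm, then poll whatever the camera currently sees
    husky.command_request_algorthim("ALGORITHM_OBJECT_TRACKING")
    all_blocks = husky.command_request_blocks()           # every block on the screen
    learned = husky.command_request_blocks_learned()      # only the learned ones
    print(all_blocks, learned)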
No, in fact just the opposite. I will be forced to implement a corresponding OpenCV feature set for comparison. The flexibility of OpenCV trumps the HuskyLens’s speed, and OpenCV’s zero idle power draw is supremely huge compared to the HuskyLens’s always-on requirement.
AND from what I have read, regrettably after the fact, it doesn’t actually work. I’m already regretting giving in to the “add hardware” weakness. I’ve been preaching “use a minimum sensor suite to the max”, and then I order up this thing. Maybe I’ll use it on a ROS-powered, lidar-equipped GoPiGo4 bot some day.
What I was trying to say is that, (coding wise), it seems like you pull flying monkeys out of your ears on command, write classes and methods with both feet - in your sleep, no less - and generally write, (and understand), more code in ten minutes than I do in three weeks!
Once you get that beastie home with you, plug it in, and look at the API, you’ll have code written in no time - and my money’s on you coming up with a better handler than the manufacturer did!
Nearly every function requires a series of non-intuitive manual steps.
The Python interface is minimally implemented, allowing only for algorithm selection and reading results.
All configuration and learning are available only via the screen, the multi-function button, and the select button.
I’m building a “GoPiGo Interface and Examples”, but I’m disappointed that so much manual configuration is needed to make the device useful to the bot.
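As a rough idea of the kind of wrapper that interface needs, something like the sketch below would steer the bot toward a learned object. It is hypothetical and untested; the block layout, the constructor arguments, and the import paths are assumptions, and only the command_request_* calls and the algorithm string come from the API above.

    # Hypothetical wrapper sketch (untested) for steering a GoPiGo3 toward a learned
    # object.  Assumptions: blocks come back as [x_center, y_center, width, height, ID]
    # on the 320x240 HuskyLens screen, and the constructors/imports shown here.
    import time
    from easygopigo3 import EasyGoPiGo3
    from huskylensPythonLibrary import HuskyLensLibrary   # assumed module/class names

    SCREEN_CENTER_X = 160      # HuskyLens screen is 320 px wide

    def follow_learned_object(run_seconds=10):
        husky = HuskyLensLibrary("I2C")                    # assumed constructor
        gpg = EasyGoPiGo3()
        husky.command_request_algorthim("ALGORITHM_OBJECT_TRACKING")
        stop_at = time.time() + run_seconds
        while time.time() < stop_at:
            blocks = husky.command_request_blocks_learned()
            if blocks:
                x = blocks[0][0]                           # assumed: x center of first block
                offset = (x - SCREEN_CENTER_X) / SCREEN_CENTER_X   # -1.0 (left) .. +1.0 (right)
                gpg.steer(50 + 50 * offset, 50 - 50 * offset)      # bias the wheels toward it
            else:
                gpg.stop()
            time.sleep(0.1)
        gpg.stop()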
Yup.  Not surprising.  I’ve noticed that in a hardware dev environment, software support is the ugly step-child.  There’s usually just enough software written to verify functionality to spec, and that’s it.
The rest?  To Boldly Go Where No Hardware Dev Has Gone Before?  That, my friend, is “left as an exercise for the student.”
This is also true in a software dev environment.  The MDN/HTML-5 Gamepad methods and documentation are, (I’m being very polite here), “sparse”.  The thought that someone may want to use a gamepad for something other than Half-Life, or World Of Tanks, didn’t occur even in their nightmares.  The result?  If you want to use a “joystick”, (gamepad), for anything other than raster animated sprites, “Son. . . You’re on your own!” (Blazing Saddles)
First developer - just get “it” done (while we figure out what “it” is.)
Second dev - if it only did this one little thing different
Third dev - useless code confusing me, I’ll just get rid of this little thing I don’t understand.
Fourth dev - “why didn’t they make this totally configurable so everyone can use it?”
Fifth … useless. I can rewrite it in an hour…”well, that was optimistic”
Never said it wasn’t good, just that you just had to “grow your own” code with it - which is very common with “bleeding edge” hardware.  But then again, that’s half the fun!
If they have a “Huskylens” site or forum, maybe you can go back there and share your results?  (Along with a shameless plug for the GoPiGo.)