I was following the Dexter Industries tutorial to connect my GoPiGo2 to the Google Cloud Vision API.
I'm running Raspbian OS version 9, last updated on 26 June 2018, with GoPiGo firmware version 1.6.
Every time I tried to use the Google API I got the message "Illegal instruction" back.
When I use Google Cloud Shell with the same service account credentials, I don't have any issue.
Any help is really appreciated
I'm currently using the Google Vision API on Windows and in Google Cloud Shell. The only place where I was not able to set it up is the Raspberry Pi/GoPiGo2 platform, which I have exhumed after some years now that my kids are a little bit bigger. Of course, I already did some research and tests before submitting a topic to this forum. I wonder if somebody at Dexter may help address this issue, and overall clarify whether the GoPiGo2 platform was ever tested to work with the Google APIs?
Thanks
While we wait for DI to jump in, do you mind some questions?
What do you mean by “to work with google api”?
Can you post the exact traceback console message that had the illegal instruction error?
Was the error in one of the three DI example programs from the tutorial you linked?
Were you able to follow the tutorial's Google Cloud web steps exactly? (It seemed to me like everything was different. I eventually figured it out, but I didn't take my usual copious notes, so I won't be able to help others navigate the path. I ended up setting up another project, enabling the Vision API, adding a credentialed user, setting up a new billing account, linking the user to that billing account, and downloading the auth file, but only after searching every menu option and hitting some new "permission denied" errors with hints to "browse to …".)
There weren't any GoPiGo-specific imports or calls in the camera_vision_label.py that I was able to run on my GoPiGo3-based Raspbian For Robots (Linux Carl 4.14.98-v7+ #1200 SMP Tue Feb 12 20:27:48 GMT 2019 armv7l GNU/Linux).
By applying what was explained there on both Google Cloud Shell and Raspberry Pi OS/GoPiGo2 (you can see the result on both terminals in the picture), I have come to understand that the issue is probably with my Raspberry Pi's processor version (ARMv6), whose instruction set the Google API doesn't support.
I hope to get confirmation from DI as well. If so, I will plan to upgrade my Raspberry Pi board with a new one, probably a Pi3B+ if my GoPiGo2 is compatible.
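If it helps, one quick way to confirm the processor generation from Python is shown below. This is just an illustrative sketch; the note about compiled dependencies is my assumption, since the "Illegal instruction" crash with the Google client libraries on ARMv6 boards is commonly attributed to prebuilt wheels of a compiled dependency (e.g. grpcio) being built for ARMv7.

```python
import platform

# "armv6l" = Pi 1 / Pi Zero; "armv7l" = Pi 2 / Pi 3 under 32-bit Raspbian
machine = platform.machine()
print(machine)

if machine == "armv6l":
    # Assumption: a compiled dependency wheel built for ARMv7 would
    # crash with "Illegal instruction" on this older CPU.
    print("ARMv6 detected - prebuilt ARMv7 wheels will not run here")
```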
From a power usage standpoint on a mobile robot, I would recommend getting the Pi3B non-plus version. Your robot will run between 35% longer and twice as long on a charge versus the plus version (probably 60% longer).
Additionally (I believe - need DI confirmation), be sure to stick with the stretch-based Raspbian For Robots version, because it still has GoPiGo2 support. I think I read that DI was dropping "native" support for the GoPiGo2 in the buster-based R4R.
The Google Vision API "illegal instruction" on import may need to be reported to Google as an issue on GitHub. The Python Vision API was just bumped to official release in February, and Google does not list any Raspberry Pi incompatibilities.
Attached is my OS version. Unfortunately, I didn't get a chance to look and compare with the Pi Zero. However, I remember reading on the DI blog a while ago about a difference in Cloud API support based on platform version, but I was not able to find the article this time.
I will definitely try to solve this issue one way or another, since I have a project that I want to accomplish.
Perhaps the post was about using easygopigo in place of easygopigo3 when converting GoPiGo3 examples for the GoPiGo2, and that some things will not be available. There really should not be any GoPiGo2 hardware or software incompatibility with any of the Google APIs.
That’s an interesting find, @strangersliver.
I wasn't aware that Google dropped support for ARMv6. That's kinda annoying, really.
Your GoPiGo2 can definitely be upgraded to a Pi3B or Pi3B+. It is currently untested with a Pi4, and while I can't think of any reason why it wouldn't work, I can't recommend it: the GoPiGo2 board has a capacitor right in the middle, so you wouldn't be able to fit a heat sink on the Pi4. And the heat sink is needed. Stick with the Pi3 family.
@strangersliver, Please keep us informed of your progress (and plans).
One of the goals I have for my robot is “capture, label, catalog objects in its world”:
find center of room
face south wall of room
perform "capture, label, catalog":
    perform known object labeling
    catalog known objects with location found, with:
        reference to cropped photo of object
        reference to "room view" where object is located
    identify suspected unknown objects in photo
    catalog suspected unknown objects with:
        reference to cropped photo of unknown object
        reference to "room view" where unknown object is located
if not done one complete rotation:
    rotate one photo-view width clockwise
    repeat from "capture, label, catalog"
return to charging dock
ask human for help to "classify suspected unknown objects"
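The rotation-and-capture part of that plan could be sketched roughly as below. All the names here are hypothetical placeholders, not a real GoPiGo or Vision API: `capture_label_catalog` stands in for the camera capture plus a Vision API label call, and the field-of-view constant is an assumed approximation for the Pi camera.

```python
import math

PHOTO_FOV_DEG = 62  # assumed horizontal field of view of the Pi camera

def capture_label_catalog(heading_deg, catalog):
    """Stand-in for: capture a photo, label objects, record them."""
    # A real version would call the camera and the Vision API here,
    # e.g. vision_client.label_detection(image=...) on the captured photo.
    labels = ["<stub label>"]
    for label in labels:
        catalog.append({"label": label, "heading": heading_deg})

def survey_room():
    """One full rotation: capture one photo-view width at a time."""
    catalog = []
    views = math.ceil(360 / PHOTO_FOV_DEG)  # photos needed for a full circle
    for i in range(views):
        heading = (i * PHOTO_FOV_DEG) % 360
        # A real robot would rotate here, one photo-view width clockwise
        capture_label_catalog(heading, catalog)
    return catalog

catalog = survey_room()
print(f"{len(catalog)} view(s) cataloged")
```

With a 62-degree view, six photos cover the full rotation; the remainder of the plan (docking, asking a human to classify unknowns) would wrap around this loop.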
I believe that precise localization may not be needed for my robot (or easily done), and that the bot could use this catalog to know roughly where it is through a "capture photo, label objects in view, compare with known world objects, and derive robot location and orientation" ability.
(I don't know of another use for the acquired knowledge at this point, but my knowing that the robot is "aware" of the existence of these objects would allow me to think that the robot could infer an awareness of itself as knowing something unique that no other robot knows. If that makes any sense.)
This is one of my “bucket list of abilities for my robot”, so I am curious about your plans and progress. Visit often please.