Hello. I am working on building a GoPiGo similar to the one featured on Google’s Cloud Vision API announcement and have a few questions.
First, there is a speaker on the robot in the video, but Dexter is not selling one. Any idea which speaker they used in the demo, and how it is connected and powered? Also, the camera seems different from the Dexter camera (a bigger lens body; I imagine a number of cameras are supported by the Raspberry Pi). Any idea which one it is? Lastly, the camera is mounted in an interesting way: off to the side and angled upward (presumably it is more important to capture images above the GoPiGo's "eye level"). Any idea how that camera is mounted, given that the acrylic mounts seem to sit at 90 degrees to the GPG body? Any clues would be greatly appreciated.
I didn’t even know there were other cameras before you asked! Dexter sells the official Raspberry Pi camera. The two I linked to are from companies I have never even heard of.
If you look at the video at 0:41, you'll see the camera is not really mounted in any special way. It's just the cable that's folded behind it, and there may be tape to hold it in place. Nothing tricky. You also get a good view of the speaker. It's most likely a USB speaker of some sort.
Help in locating the sample code: I signed up for Google Cloud Vision, and the only samples are for basic face detection, landmark detection, and general cloud how-tos. There is nothing for this specific robot: control, speech, parsing, processing responses. Has anyone actually seen the code linked somewhere? All I can find is the news article stating that it used just a hundred or so lines of Python.
Hey todd_s, we’ve been in touch with Google about this, but we still don’t have an open-source example for using this. We want to get it asap though, and we’ll keep asking for it!!
This video is 30 minutes long, but they show you how they did it. Some of the code for their example programs is on GitHub. I have not been able to find an open-source archive with everything they used in their demo.
I was able to successfully sign up for the Cloud APIs (voice), get the keys, install the code (on my Mac), and run the streaming voice recognition example.
I will try to do the same for the video part next.
Will let everyone know how that goes.
Once all of that is working, into the GoPiGo it goes.
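In case it helps anyone following along, here is roughly what my speech test boils down to. This is just a minimal sketch assuming the google-cloud-speech Python client and a prerecorded 16 kHz mono WAV file rather than the live streaming example, with your service-account key already set via GOOGLE_APPLICATION_CREDENTIALS; the file name and function name are mine, not from Google's demo.

```python
# Minimal Cloud Speech test (non-streaming sketch), assuming the
# google-cloud-speech client library and a 16 kHz mono WAV file.
from google.cloud import speech


def transcribe(path="test.wav"):
    client = speech.SpeechClient()  # picks up GOOGLE_APPLICATION_CREDENTIALS

    with open(path, "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )

    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        print(result.alternatives[0].transcript)


if __name__ == "__main__":
    transcribe()
```

The streaming example Google ships works the same way conceptually, it just feeds audio chunks from the microphone instead of a file.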
I have no expertise at all here; just a note to say that Kazunori Sato's Twitter feed @kazunori_279 mentions a few other GoPiGo + object recognition projects, including this one from Uday Sandhar, which combines Amazon Alexa voice commands and Google Vision.
There is also this one described on O'Reilly. It's not a GoPiGo (similar chassis, but it uses an Adafruit motor HAT), and the author doesn't post all of his code either.
These look like challenging projects and don't seem to include complete code, but the parts lists and Python library suggestions might be of interest. I would really love to see an easier-to-follow, step-by-step project with all the needed code.
P.S. – on YouTube they're also demoing a GoPiGo with the Google Cloud Speech API. Also awesome:
I am having trouble with the Google Cloud Vision and Speech APIs on the Raspberry Pi 3. I have downloaded all the commands and put the Google Cloud Vision code into my project, like in the video you shared. The problem is with camera-vision-logo.py: it won't take a photo and tell me the logo's name. I am using a fish-eye camera on the Raspberry Pi 3; the fish-eye camera itself works fine, I tested it and it takes pictures. Please make a YouTube video or email me at Drumclog21@gmail.com!
A couple of things to check so we can figure out what the issue is.
When you run camera-vision-logo.py, is a photo taken? You will find the photo in the same folder; it's called image.jpg.
If it’s there, could you upload it so we can have a look at the fish-eye result?
The code as is takes photos only through the official Raspberry Pi camera. You will have to edit the code and use whatever commands are appropriate for your camera. The rest of the code should work just fine.
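If it helps, here is a rough sketch of what that edit could look like, assuming your fish-eye camera shows up as a regular USB/V4L2 video device and that you have the opencv-python and google-cloud-vision packages installed. The function names are just for illustration, not from the original script:

```python
# Rough sketch: grab a frame from a USB/V4L2 camera with OpenCV instead of
# the official Pi camera, save it as image.jpg, then ask Cloud Vision for logos.
import cv2
from google.cloud import vision


def capture_image(path="image.jpg"):
    cap = cv2.VideoCapture(0)  # 0 = first video device (the fish-eye camera)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("Could not read a frame from the camera")
    cv2.imwrite(path, frame)
    return path


def detect_logos(path):
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.logo_detection(image=image)
    for logo in response.logo_annotations:
        print(logo.description, logo.score)


if __name__ == "__main__":
    detect_logos(capture_image())
```

If OpenCV can't see the camera, check that it shows up as /dev/video0 first; everything after the capture step is the same as with the Pi camera.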