Thought projects: GoPiGo with the Pi-5 and GoNanoGo?

While I’m laid up, I’ve been thinking about what I want to do next:

The first and most obvious is porting the GoPiGo libraries to the Pi-5, using the latest Raspberry Pi OS as a base.

  • Doable?  Very likely.
  • Difficulty?  Probably no more painful than a knee replacement.  (i.e., a significant challenge, but most likely achievable.)
  • Desirability?  Very much so, as the Pi-5 is The Next Big Thing, and it’s a dead certainty that people are going to want to use it.
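For what it's worth, one known wrinkle for a Pi-5 port: the Pi-5 moved its I/O to the new RP1 chip, so AFAIK the old RPi.GPIO library doesn't work on it and a port would need to branch to a newer backend. A minimal sketch (hypothetical, not from the actual GPG code) of detecting the model so a library could pick the right backend:

```python
def pi_model(devicetree_model: str) -> str:
    """Classify a /proc/device-tree/model string into a coarse model tag."""
    if "Raspberry Pi 5" in devicetree_model:
        return "pi5"        # needs an RP1-aware GPIO backend (e.g. lgpio/gpiod)
    if "Raspberry Pi" in devicetree_model:
        return "pi-legacy"  # older Pis, where RPi.GPIO still works
    return "unknown"

def detect_pi_model(path: str = "/proc/device-tree/model") -> str:
    """Read the model string on a real Pi; returns 'unknown' elsewhere."""
    try:
        with open(path, "rb") as f:
            return pi_model(f.read().decode(errors="ignore"))
    except OSError:
        return "unknown"

print(pi_model("Raspberry Pi 5 Model B Rev 1.0"))  # -> pi5
```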

Second, (a project that has been on my radar since day-one, waiting for me to have enough confidence and understanding to try it), porting the GoPiGo libraries and hardware to the Jetson Nano.

  • Doable?  Possibly - my guess is about 50/50, depending on how difficult it is to port the GPG libraries to the Nano.
    • The Nano is already “mostly” pin-compatible with many Raspberry Pi HATs, and some, (like the Waveshare e-Ink HATs), already have Jetson Nano libraries for them.
  • Difficulty?  Quoting a graduate chemist on Derek Lowe’s blog Things I Won’t Work With, (discussing insanely reactive oxidizing agents):  "It makes n-butyl lithium look like dishwater!"
    And so it is here: it’s more like doing that knee-replacement surgery on yourself, without anesthesia.  The major question is how many parts of the GPG libraries are strictly Raspberry-Pi specific, and whether equivalent versions exist for the Nano.
  • Desirability?  Absolutely, as it would put the GoPiGo robot in a whole different league regarding AI, ROS, SLAM, and a whole bunch of other things.
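The first triage step for that "major question" could be purely mechanical: scan the GPG sources for imports that only exist on the Pi. A toy sketch (the module list below is illustrative, not a complete inventory of Pi-only packages):

```python
import re

# Modules that (as far as I know) are Raspberry-Pi specific and would
# need a Jetson equivalent or a shim.  Illustrative, not exhaustive.
PI_SPECIFIC = {"RPi", "pigpio", "spidev", "smbus", "smbus2"}

IMPORT_RE = re.compile(r"^\s*(?:import|from)\s+([A-Za-z_][\w.]*)", re.MULTILINE)

def pi_specific_imports(source: str) -> set:
    """Return the Pi-only modules a chunk of Python source imports."""
    found = set()
    for name in IMPORT_RE.findall(source):
        if name in PI_SPECIFIC or name.split(".")[0] in PI_SPECIFIC:
            found.add(name)
    return found

sample = "import spidev\nfrom RPi import GPIO\nimport math\n"
print(pi_specific_imports(sample))
```

Running that over each file in the library would give a rough "porting surface area" before committing to the surgery.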

Of course, there’s the GoPiGo-3z, (a port of the GPG to the Raspberry Pi Zero-W), which shouldn’t be too difficult, just so long as you don’t want a beast ROS platform that doesn’t go around SLAMming itself into walls. :rofl:

In fact, one of the things I want to do with Charline is to make her more processor-agnostic, so I can swap out the Pi for different hardware without having to totally disassemble her to do it.

Wish me luck!
:man_facepalming:  :exploding_head:  :deaf_person:  :crazy_face:


I’m a little confused by this as a possible goal. I documented doing exactly this. The only TODO left is to build a script to automate it, and to support the myriad GoPiGo3 users who would need to purchase the required hardware to mate and mount a Pi5 on the GoPiGo3. Well… there is also that test to see if it will even boot from the GoPiGo3 power supply, and whether it will run the example projects or blink out when the processor is asked to do a little thinking. The Pi5 checks the supply capacity at boot and will not continue if it does not see at least 3A, but maybe that is only for USB-C PD power negotiation; powering through the GPIO connector probably bypasses that limitation.
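If it does boot, there is a quick way to see whether the supply is keeping up: decode the Pi's throttle flags after a stress run (assuming Raspberry Pi OS, which ships the real `vcgencmd` tool; the bit positions below are from the Raspberry Pi documentation, the wrapper itself is just a sketch):

```python
# Throttle-status bits reported by `vcgencmd get_throttled`
UNDER_VOLTAGE_NOW      = 1 << 0   # supply sagging right now
THROTTLED_NOW          = 1 << 2   # CPU currently throttled
UNDER_VOLTAGE_OCCURRED = 1 << 16  # supply sagged at some point since boot
THROTTLED_OCCURRED     = 1 << 18  # throttling happened at some point

def decode_throttled(output: str) -> dict:
    """Parse e.g. 'throttled=0x50000' into named flags."""
    value = int(output.strip().split("=")[1], 16)
    return {
        "under_voltage_now":      bool(value & UNDER_VOLTAGE_NOW),
        "throttled_now":          bool(value & THROTTLED_NOW),
        "under_voltage_occurred": bool(value & UNDER_VOLTAGE_OCCURRED),
        "throttled_occurred":     bool(value & THROTTLED_OCCURRED),
    }

# On a real Pi:
#   import subprocess
#   decode_throttled(subprocess.check_output(["vcgencmd", "get_throttled"], text=True))
print(decode_throttled("throttled=0x50000"))
```

If `under_voltage_occurred` comes back True after the example projects run, the GoPiGo3 supply is marginal for that board.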

P.S. I just went back to look, and it turns out I did create the script to install the GoPiGo3 software on both 32-bit and 64-bit PiOS Bookworm.
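(For anyone rolling their own version of such a script: the 32-bit vs 64-bit branch can key off the machine type. A hypothetical sketch of that one decision, not the actual installer:)

```python
import platform

def pios_flavor(machine: str) -> str:
    """Map a `uname -m` style machine name to a PiOS Bookworm flavor tag."""
    if machine == "aarch64":
        return "bookworm-64"
    if machine in ("armv7l", "armv6l"):
        return "bookworm-32"
    return "unsupported"

if __name__ == "__main__":
    # On the target Pi this picks the branch automatically.
    print(pios_flavor(platform.machine()))
```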


I think this is the most exciting extension of GoPiGo3 use cases!

GoPiGo OS on Pi0W would bring the entry cost down significantly, and be a better match for the majority of users than even the Pi3 or Pi4.



You’ve been doing so much, so quickly, that you really need a forum subsection of your own to keep this all straight!

I like the Nano and Zero projects for the same reasons you said:

  1. The Zero would make the robot more accessible, since there are more Zeros around than Pi-4s, AFAIK.  And really now, unless you’re a power user like us, do you REALLY need a Pi-5 cluster running your robot?  I can see the Pi-Z being a better fit for classrooms using GoPiGo OS - that is, unless they’re doing SLAM or TensorFlow. . .

  2. I think a Nano-based GoPiGo would be a huge benefit for those who DO want to run with the ROS big dogs and do all the AI-type heavy lifting - especially since it’s not much more expensive, (AFAIK), than a fully tricked-out Pi-5.  (Depending on where you buy it, it’s about $150 in the U.S. now.[1] [2])  Of course, the fact that it has a pretty well-developed ROS 2 ecosystem doesn’t hurt it at all.
    There are a few “gotchas” though:

    • It doesn’t fit the GoPiGo chassis without some significant modifications.  And since the GPG was designed with the Pi’s footprint in mind, it’s going to be a bit of a kludge.  Definitely not for the “glamorous robot” crowd.  Then again, I’m not expecting any of my 'bots to be featured on the front cover of Elle anytime soon.

    • The GPIO port, though pin-compatible, is turned 180° from the Raspberry Pi’s orientation, so you will need to make a custom ribbon cable to hook them together if you want them to fit within the same chassis footprint.

    • And the big one:  The thing eats power like a starving pack of rabid badgers.  (The main power barrel connector is marked 5V at 4A!)  This is enough to warrant a beefier battery along with a Charlie-esque 5V supplemental supply.
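A back-of-the-envelope runtime check makes the "beefier battery" point concrete. (All the pack numbers below are assumptions for illustration - e.g. an 8-cell NiMH pack - not measurements from an actual GoPiGo:)

```python
def runtime_hours(battery_wh: float, load_w: float, efficiency: float = 0.85) -> float:
    """Hours of runtime for a given load, derated for converter losses."""
    return battery_wh * efficiency / load_w

nano_max_w = 5.0 * 4.0       # 20 W at the barrel connector's marked maximum
pack_wh = 8 * 1.2 * 2.0      # assumed 8x NiMH AA: 1.2 V x 2.0 Ah = 19.2 Wh

print(f"{runtime_hours(pack_wh, nano_max_w):.2f} h")  # -> 0.82 h
```

Well under an hour at full tilt, before the motors draw anything - hence the supplemental supply.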

Both of these are sufficiently interesting extensions of the GoPiGo-3 ecosystem to make them worthwhile projects.


  1. Amazon:  $150 USD
    SparkFun:  $149 USD
    Okdo:  $159 USD
    Seeed:  $149 USD

  2. These prices are for the second-generation Nano which has expanded features:

    • Two camera ports, instead of one, so it can fully utilize the built-in libraries for stereo vision/depth.

    • It uses the newest-generation Nano processor board which, (though spec-compatible with the older one), has an edge connector compatible with upgraded processor boards like the Xavier or Tegra, (the old one used the same connector, but a different pinout), giving you a HUGE boost in computing power without having to redesign the entire universe.

    • A few other goodies that I don’t remember.

So, we’ll see what happens.


There is no peace in Nano ROS land - they are struggling seriously. Yes, some versions of the Nano are finding success running basic ROS 2 packages in Docker, but folks are having difficulty getting those ROS 2 packages to use the Nano GPU, which is the Nano’s primary raison d’être.

As for people who want to run their GoPiGo3 “with the ROS big dogs” - remember, the ROS big dogs don’t care how much puny processing is on the “mobile data acquisition platform”; they are running water-cooled desktops with multiple NVIDIA parallel-processing cards drawing serious wall power.


Well. . . It sounded like a good idea. . . .


AND there is the issue that feeding the Pi5 the volume of data (stereo depth cameras) needed to justify the Pi5 over the Pi4 requires 4.5W more power than the typical ROSbot (motors, processor, 2D LIDAR) draws.

I may end up moving my Oak-D-Lite back to my Pi4 GoPiGo3 ROSbot Dave.

The Create3 is seriously underpowered, processing-wise, to survive as a VSLAM platform with the current ROS 2 Humble release. I saw hints of this before biting, but thought “for sure they have, or will shortly, solve this issue”. The manufacturer is blaming the “ROS Middleware Distribution Layer”, over which they have no control, and there is a replacement layer in a future ROS version that is hoped to take the pressure off the four Create3 processors.

I’ve been blocked trying to learn networking internals, protocols, and configuration deeply enough to segment ROS message traffic, with no one seeming willing to help on my time frame. (I want a solution now. What, you all are volunteers writing “Open Source” code? What big company do I work for that you should be interested in my problem?)
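Apologies if you've long since been past this, but for anyone else hitting the same wall: the coarsest knob for segmenting ROS 2 traffic is the DDS domain ID, set via `ROS_DOMAIN_ID` before launching nodes (nodes in different domains simply don't see each other). A tiny sketch of building that environment safely, assuming the goal is topic isolation rather than bandwidth shaping:

```python
import os

def ros_domain_env(domain_id: int) -> dict:
    """Return a copy of the environment with ROS_DOMAIN_ID set.

    The ROS 2 docs recommend IDs in 0-101 on Linux to avoid
    ephemeral-port collisions with the DDS port mapping.
    """
    if not 0 <= domain_id <= 101:
        raise ValueError("domain id outside the recommended 0-101 range")
    env = dict(os.environ)
    env["ROS_DOMAIN_ID"] = str(domain_id)
    return env

# e.g. subprocess.Popen(["ros2", "launch", ...], env=ros_domain_env(42))
print(ros_domain_env(42)["ROS_DOMAIN_ID"])  # -> 42
```

It won't untangle multicast-heavy middleware traffic on its own, but it keeps one robot's topics from flooding another's discovery.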

I wish my brain could be managed, but then I also wish it didn’t need to be managed.


That’s one of the reasons I’m researching the Nano for the GPG 'bot.

IMHO, if you are going to use a funnel to pour power into the beastie, you might as well get the FLOPs that go with having two handfuls of CUDA floating-point rendering pipelines as a part of the package.

Again, AFAIK, four processor cores, even with something like hyperthreading, give you at most 8 data pipelines; 8 cores, at most 16.  Yes, the Pi gets help from the Broadcom GPU, but to what extent?  And where are the API libraries/documentation that allow programmers to effectively manage that resource?

With the 128 well-documented[1] CUDA cores in the Nano, you have the equivalent of the floating-point co-processor from Hell.  Add to that, if you have the 2nd-generation Nano you can upgrade to better processor boards like the Xavier et al.

That’s where I think the meat is.


  1. NVIDIA has been pushing the idea of using specially designed graphic rendering pipelines, (CUDA cores), in special non-video GPU chip(s) that are designed to allow for extremely high performance floating point math, especially trig and rendering mathematics.
    In fact they sell special “video” cards that don’t have video hardware, to provide teraflop super-computer performance to PCs.  These are used in medical imaging, genetics, drug research, and in pure science applications.
    As a result, they’ve fallen all over themselves to make API data/libraries and special SDK materials easily available and NVIDIA experts are, literally, crawling all over their forums.

The Nano does claim to draw only 2.5A when powered off the USB-C connector or the GPIO, so perhaps it would survive on the GoPiGo3. The LIDAR and/or stereo depth camera would still need that separate battery->supply 5V at 2A->{LIDAR, Oak-D-Lite} arrangement I installed on Dave.

One of my “Install GoPiGo3 API for Ubuntu 2x.04” scripts might have to be modified to succeed on the Nano.


That depends on the base operating mode.

The Jetson Nano configurator that runs at first start allows two run modes:

  • A high-power mode that draws maximum power and requires the barrel connector for power.
  • A low-power mode, (5W), that runs the device considerably underclocked, though I don’t know about powering it via the GPIO, etc.
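On the Nano those modes are selected with NVIDIA’s `nvpmodel` tool (on the original Nano, mode 0 is MAXN/10W and mode 1 is the 5W mode). The command is real; the thin wrapper below is just a sketch, and the mode numbers are an assumption tied to the original Nano:

```python
# Named power modes for the original Jetson Nano (assumption: other
# Jetson models use different mode tables).
NANO_MODES = {"maxn": 0, "5w": 1}

def nvpmodel_cmd(mode: str) -> list:
    """Build the nvpmodel command line for a named power mode."""
    return ["sudo", "nvpmodel", "-m", str(NANO_MODES[mode])]

# On the Nano itself you'd run it, e.g.:
#   import subprocess
#   subprocess.run(nvpmodel_cmd("5w"), check=True)
print(nvpmodel_cmd("5w"))  # -> ['sudo', 'nvpmodel', '-m', '1']
```

So the low-power mode can at least be scripted into a robot's startup, rather than relying on the first-boot configurator.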

My game plan is to use a parallel power feed like I do on Charlie to ensure sufficient power is available.

See also:
