URDF Introduction: ROS2 GoPiGo3 gpgMin.urdf

In ROS there are several standard XML format files to describe a robot
(in terms ROS applications can understand):

  • URDF: Unified Robot Description Format
  • xacro: XML Macro Language (allows complex elements to be summarized into URDF files)
  • SDF: Simulation Description Format (primarily used by the ROS simulation tools)

The URDF primarily describes:

  • the virtual “robot frame” established by the “base_link” element
  • the parts of a robot that make up the “base_link”
    • where these parts are located in relation to the “base_link”
    • the shape and size of the parts belonging to the base_link
  • the parts of a robot that have their own frame
    (with shape, orientation, reference point of the part)
    • Left Wheel
    • Right Wheel
    • LiDAR: “laser_frame”
    • Servo
    • Distance/Ultrasonic Sensor
    • Bumper
  • Joints describe
    • relation between “part frames” and the “robot frame”
    • degrees of freedom of a part - rotation about some axis
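
The link and joint relationships above can be sketched as a minimal URDF fragment. This is illustrative only - the link names follow the list above, but the dimensions and offsets are made up, not taken from gpgMin.urdf:

```xml
<?xml version="1.0"?>
<robot name="gopigo_minimal_sketch">
  <!-- the "robot frame" -->
  <link name="base_link">
    <visual>
      <geometry>
        <box size="0.2 0.1 0.05"/>  <!-- made-up chassis dimensions -->
      </geometry>
    </visual>
  </link>

  <!-- a part with its own frame -->
  <link name="left_wheel">
    <visual>
      <geometry>
        <cylinder radius="0.033" length="0.025"/>  <!-- made-up wheel size -->
      </geometry>
    </visual>
  </link>

  <!-- the joint relates the part frame to the robot frame and gives the
       wheel its one degree of freedom: continuous rotation about the Y axis -->
  <joint name="left_wheel_joint" type="continuous">
    <parent link="base_link"/>
    <child link="left_wheel"/>
    <origin xyz="0 0.06 0" rpy="0 0 0"/>  <!-- made-up offset from base_link -->
    <axis xyz="0 1 0"/>
  </joint>
</robot>
```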

These are a few of the XML elements from the Minimal ROS2 GoPiGo3 URDF file gpgMin.urdf:

Even if a GoPiGo3 robot does not have a LiDAR, ROS will know about the parts that do exist, to keep track of where the robot (base_link / “robot frame”) is in the world frame

  • when the “odometry function” announces a change (in a /odom topic message)
    (odometry comes only from encoders in the minimalist ROS2 GoPiGo3)

Yes, part of a ROS robot builder’s responsibilities is to maintain a current and accurate robot description file. There are some tools to help, but I have not learned how to use them. I hand-built the sample gpgMin.urdf in a text editor (I did not even have XML element parsing assistance).

The gpgMin.urdf file includes an example LiDAR mounting position that would need to be corrected for the robot builder’s actual mounting of the LiDAR to the core GoPiGo3 robot.

If/When the time comes that you want to build a charlie.urdf, you can incorporate elements from the example finmark.urdf and my dave.urdf for some parts.

Neither Finmark nor Dave has a bumper. The bumper would be an advanced concept that will need a custom ROS2 node to watch the physical bumper and publish a /bumper topic.

These are the three URDF examples included in the ROS2 GoPiGo3 uSDcard image:


This is the URDF file of Bernardo R Japon, author of “Hands On ROS for Robotics Programming”, showing the use of xacro to the fullest:


Definitely nicer than my Finmark URDF - but I have the LiDAR mounted differently, so it wasn’t really an option to use as is. But I should look at some of the visualizations again.


Does it really matter if the URDF shows every nut, bolt, and hole in the chassis?


No, the added visual information is only used by visualization applications, and it taxes those apps to display all the complexity.

Of course, if you have a wind tunnel cooled GPU and liquid cooled CPU running Ubuntu Desktop, you can have a very pretty 30 Hz display of your Raspberry Pi powered ROS2 GoPiGo3 (that is working hard to figure out how the “world moved” roughly every 5 seconds)

A robot’s worst nightmare is not knowing where it is while executing a move command from a human. A robot’s top dream is to know where the juice is. The robot builder’s top dream is to know what the robot was thinking before it crashed.

(I promise you, Dave does not care that I didn’t give him pants in the URDF.)


Not quite that bad - I’ve run Gazebo sims of Finmark reasonably fast on an older laptop running Ubuntu. Of note, @jimrh: even models that have nice mesh files for the visuals still tend to use simpler, blockier element descriptions (dimensions, and also mass and center-of-gravity) for the physics simulation. Since everything is positioned via transformations (which tend to be relative to a fixed spot on the robot), the visual elements can be made to move correctly, even though they’re not driving the physics calculations.


I noticed that @cyclicalobsessive has modeled a huge domed cylinder on Dave to approximate the character mounted on top.

Is that necessary for center-of-gravity calculations?  Or is it there to make the robot more recognisable?  And if it is necessary for CG or “can I fit there?” calculations, where is the carrying handle?

Is it necessary to have a representation of the LIDAR, accurate to fractions of a millimeter, or is the fact that a LIDAR exists at point “X, Y” sufficient?

Maybe I’m just lazy, but it seems like a lot of work just to have a 'bot drive around.

Me?  I’m happy if Charlie doesn’t fall off the table and break something!


The “Dave visual” in the URDF is only to make the visualization of Dave distinct from Finmark and gpgMin models.

Dave has more platforms than Finmark, which distinguishes it visually. Both Dave and Finmark have more sensors and “sensor frames” modeled than the gpgMin.

The representation is immaterial.

For Dave, I specified the location to the centimeter, which seems to be accurate within a few millimeters as best I can measure.

It is impossible to measure with a ruler the distance from the unmarked center of the LiDAR to the unmarked center of the wheel-base, both of which are at different elevations from the floor.

Additionally, the reality of the bowed axis of wheel rotation (from load) and slightly out-of-round wheels causes a wobbling of the virtual center of the wheel-base and a varying wheel-base width. So when the robot broadcasts “I think I am at x,y,z (meters to 17 significant digits)” in the /odom topic 30 times a second, consumers of the information need to handle significant variation from one thought to the next.

      position:
        x: 0.09589677444463708
        y: 9.189333067601711e-05
        z: 0.0
      orientation:
        x: 0.0
        y: 0.0
        z: 0.002727331845487278
        w: 0.9999962808235862

The encoders provide sub-millimeter straight-line accuracy in the x estimate, but do not directly measure heading changes. This introduces considerable inaccuracy in the heading estimate and the dependent y estimate.
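
For reference, the heading buried in that /odom sample can be recovered with basic math. For a planar robot whose quaternion has only z and w components set, yaw = 2·atan2(z, w) - a pure-Python sketch using the orientation values above:

```python
import math

def yaw_from_quaternion(z: float, w: float) -> float:
    """Yaw (radians) from a planar-rotation quaternion (x = y = 0)."""
    return 2.0 * math.atan2(z, w)

# The orientation values from the /odom sample above:
yaw = yaw_from_quaternion(0.002727331845487278, 0.9999962808235862)
print(math.degrees(yaw))  # roughly 0.31 degrees of heading
```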

Turtlebot has a node parameter to tell the bot to use the IMU for heading and the encoders for travel (as best I can tell from the docs - I haven’t dug into their code). Eventually I want to add something like this to my ROS2 GoPiGo3 node to improve the published /odom estimates. BUT I think ROS provides lots of different pre-coded strategies for differential-drive mobile robots. Since I have no ROS expert to guide that investigation, it has been on the back burner since I first created my ROS2 GoPiGo3 node two years ago, based on the work of others:

#   ROS2 Migration of a ROS1 gopigo3 node
#   See: https://github.com/ros-gopigo3/gopigo3-pi-code/blob/master/pkg_mygopigo/src/gopigo3_driver.py
#   me: ROS2 Migration 2021
#   Christian Rauch: original author of ROS1 gopigo3_node.py, 2018
#   Quint van Djik and John Cole: edits
#   Bernardo R Japon: edits for "Hands On ROS for Robotics Programming", Packt Publishing
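
The IMU-for-heading idea mentioned above could be prototyped with a simple complementary filter: mostly trust the IMU for heading, lightly corrected by the encoder estimate. A pure-Python sketch under my own assumptions (the alpha value is made up, and this is not Turtlebot’s implementation):

```python
import math

def fuse_heading(encoder_heading: float, imu_heading: float,
                 alpha: float = 0.98) -> float:
    """Blend the encoder-derived heading toward the IMU heading.

    alpha near 1.0 means "mostly trust the IMU for heading."
    The difference is wrapped to [-pi, pi] so the blend works
    across the +/-180 degree seam.
    """
    diff = math.atan2(math.sin(imu_heading - encoder_heading),
                      math.cos(imu_heading - encoder_heading))
    return encoder_heading + alpha * diff

# encoders say 0.00 rad, IMU says 0.10 rad -> fused heading lands near the IMU value
print(round(fuse_heading(0.0, 0.10), 3))  # 0.098
```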

Last night I took Dave on a slow mapping tour. He did very well in the kitchen, but everywhere else he was wildly optimistic about how accurately he recognized the world around him.


@jimrh I made a few edits after you read my initial reply.


Is it possible to limit the precision to reasonable values?


An interesting philosophic area - what effect would clamping published odometry estimates to the precision we are reasonably certain of have on ROS2 consumer nodes, and would the additional computational load be worth it if doing so offers some advantage?
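
The mechanical part is cheap to try on the consumer side: round the pose fields to a stated precision before using or republishing them. A sketch - the three-digit (millimeter) choice is my assumption, not a ROS convention:

```python
def clamp_pose(x: float, y: float, z: float, digits: int = 3):
    """Round pose components to a chosen precision (3 digits ~ millimeters)."""
    return tuple(round(v, digits) for v in (x, y, z))

# The position values from the /odom sample above:
print(clamp_pose(0.09589677444463708, 9.189333067601711e-05, 0.0))
# (0.096, 0.0, 0.0)
```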

ROS is filled with stupidly long numbers when displaying for human consumption. The “time stamp” for that /odom topic:

    sec: 1668095743
    nanosec: 658198310

Can’t you specify the number of digits you want?

When I was working on my joystick project, the resolution of the axes was insane - I don’t need 20 digits (or whatever) on a controller that uses non-precision carbon pots. So I implemented a routine that did 4/5 rounding, away from zero, to the number of digits I felt was reasonable - in my case, two decimal digits of precision.

Everything that returned a floating-point number for speed control took the raw numbers and filtered them to two decimal digits before sending them to the motor routines.

I also implemented a “dead zone” of ±0.10 around the zero center readings to eliminate chatter and “ground noise on the radar”.

Since the joystick sends data every Vsync interval (every call to get_animation_frame), I also had to implement a filter that would only transmit data to the 'bot if there was changed data to send.

This can be gnarly, but after I cleaned up the data like that, the robot was much easier to control.
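
The three cleanups described above - the dead zone, 4/5 rounding away from zero to two digits, and send-only-on-change - could be sketched like this (hypothetical helpers, not the actual project code):

```python
from decimal import Decimal, ROUND_HALF_UP

_last_sent = None  # remembers the last value actually transmitted

def clean_axis(raw: float, dead_zone: float = 0.10, digits: int = 2) -> float:
    """Zero out readings inside the dead zone, then round half away from zero."""
    if abs(raw) <= dead_zone:
        return 0.0
    quantum = Decimal(10) ** -digits  # Decimal('0.01') for two decimal digits
    return float(Decimal(str(raw)).quantize(quantum, rounding=ROUND_HALF_UP))

def maybe_send(value: float):
    """Only transmit when the cleaned value actually changed since last frame."""
    global _last_sent
    if value != _last_sent:
        _last_sent = value
        return value   # in the real code: hand off to the motor routines
    return None        # suppressed: no change to send

print(clean_axis(0.05))       # 0.0  (inside the dead zone)
print(clean_axis(0.676543))   # 0.68 (4/5 rounded to two digits)
print(maybe_send(0.68))       # 0.68 (first send goes through)
print(maybe_send(0.68))       # None (unchanged, suppressed)
```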


There is also a question of what is reasonable - I went looking to compare the YDLiDAR X4, which Keith and I have, to the RPLIDAR A1 that the Turtlebot4 sports, to divine the precision and accuracy of these devices.

Uh what? You expect a straight answer? At what reflectivity, at what angle of incidence, at what range, at what scan rate, at what std deviation at $100?

The Turtlebot4 LiDAR:

Range resolution:
- ≤1% of the range (≤12 m)
- ≤2% of the range (12 m – 16 m)

Accuracy:
- 1% of the range (≤3 m)
- 2% of the range (3 m – 5 m)
- 2.5% of the range (5 m – 25 m)

So 1% at 1 meter is 1cm accuracy with 1cm precision.

And the YDLiDAR X4:

Systematic error: 2 cm at range ≤ 1 m
Relative error: 3.5% at 1 m < range ≤ 6 m

Again, I don’t know what that means.

When I put my bot as accurately as possible 0.50 meters from a flat, non-black, roughly perpendicular wall, I am seeing readings that seem accurate to less than 1cm.


Dave would be much easier to control if I had a real joystick instead of a 1|0 gamepad.


How about repeatability?

If Dave moves forward 10, or maybe 30 cm and then backs up the same distance, does the measured distance correspond to the distance measured with your “standard” ruler?


Is this with the LIDAR or the TOF sensor?

What happens if Dave moves forward 2 meters, turns around, and moves back to approx. 0.5 meters?


That is one of my “Robot learns about itself” to-do list items. The bot would use a corner and its “given as fact” self-dimensions to determine wheel diameter, wheel-base, encoder accuracy/precision, and LiDAR accuracy/precision, and then drive around to determine localization accuracy/precision and mapping accuracy/precision.

And then somebody would have to give me an honorary Ph.D.


Those are the LiDAR specs.

The DI Distance Sensor may be slow and a CPU hog. I have put it on the back shelf of my brain for a while.


The problem with that is there is no standard to compare to.

People learn this because:

  • They have arms and legs that can be used for exploration.
  • They fall or hit things if they guess wrong.

Your robot doesn’t have known length appendages, its kinesthetic sense is poor (AFAIK), and it has little sense of “impact”.


As far as I am concerned, you already have that, but my accreditation lapsed long ago.


Practical examples of precision vs. accuracy in the most recent XKCD



Not sure how that would work unless you included an extra bit of data (e.g. distance to at least one of the walls). There are camera calibration algorithms based on grids of known size. A grid like that could work to provide distance in a self-calibration scenario like that.