Localization is still terrible

I haven’t posted much, but I’ve been working on my localization. I updated my URDF so that it’s more accurate; it looks OK in RVIZ, and I’ve checked all the measurements.

Unfortunately I had to dismantle my “roboland” in anticipation of upcoming holidays (and moving). I have a map of part of the ground floor of my house. When I run navigation and look at the localization in RVIZ, the robot is jumping around by a meter or more on the display. Giving it an initial position estimate doesn’t really help. This can’t just be my /odom issues, since the robot hasn’t really even moved. Maybe it’s the lidar (with its many missing data points), or maybe my den just doesn’t have enough distinct features for good localization.
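One way to put a number on the “missing data points” suspicion is to count how many returns in each scan are unusable. A minimal sketch in plain Python (no ROS needed), assuming the ranges list and valid window from a sensor_msgs/LaserScan message; the range_min/range_max values here are made up:

```python
import math

def missing_fraction(ranges, range_min=0.1, range_max=10.0):
    """Fraction of lidar returns that are unusable (inf, NaN, or out of range).

    ranges: list of floats, as in a sensor_msgs/LaserScan message.
    range_min/range_max: the scan's valid window (hypothetical values here).
    """
    bad = sum(1 for r in ranges
              if math.isinf(r) or math.isnan(r) or r < range_min or r > range_max)
    return bad / len(ranges)

# Example: 4 of 8 beams returned nothing useful
scan = [0.0, 1.2, float('inf'), 3.4, float('nan'), 2.2, 11.0, 1.8]
print(missing_fraction(scan))  # → 0.5
```

If a large fraction of each scan is dropping out, the localizer has that much less to match against the map.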
Very frustrating. Didn’t help that yesterday my SD card was corrupted and I spent a good bit of the afternoon getting back to where I’d been.
Sorry - more of a rant today.


And I have just the “emoji” (actually an animated gif) for that:


That’s how I feel a lot of the time too.


Very frustrating, but I’m sure there will be some solid learning come out of solving this.

Does the localization come with a confidence value? Is there a confidence threshold parameter?

I’m finishing up my ROS odometer (and spin counter) in a day or so and hope to move on to setting up SLAM. I really want to see what Dave’s results are.


Yeah - I’m sure you are correct. But sometimes I’d be happier if things “just worked” even if I remained a little more ignorant :wink:

Great question - I haven’t dug in that deep. I’m sure there is one internally, but I’m not quite sure what I would do with it. I suppose my troubleshooting might be different if any given estimate had a high confidence, but I’m not sure. I suspect it’s not just the odom, since the jumping happens even when the robot is sitting still. In that case the localization should narrow in over time (especially if given a hint), but it doesn’t, which is why I’m suspecting the lidar as an issue.
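Digging a little since then: if AMCL is doing the localization, its estimate does come with a covariance; /amcl_pose is a geometry_msgs/PoseWithCovarianceStamped whose 36-element row-major 6x6 covariance can serve as a rough confidence. I’m not aware of a built-in confidence-threshold parameter, but the variances are easy to watch. A sketch in plain Python (no ROS needed), with a made-up covariance for illustration:

```python
import math

def pose_uncertainty(covariance):
    """Summarize a flat 6x6 row-major pose covariance (as published by AMCL
    on /amcl_pose) into rough 1-sigma uncertainties.

    covariance: flat list of 36 floats.
    Returns (sigma_x_m, sigma_y_m, sigma_yaw_deg).
    """
    sigma_x = math.sqrt(covariance[0])     # var(x) at row 0, col 0
    sigma_y = math.sqrt(covariance[7])     # var(y) at row 1, col 1
    sigma_yaw = math.degrees(math.sqrt(covariance[35]))  # var(yaw) at row 5, col 5
    return sigma_x, sigma_y, sigma_yaw

# Hypothetical covariance: 0.25 m^2 variance in x and y, ~0.0685 rad^2 in yaw
cov = [0.0] * 36
cov[0] = 0.25
cov[7] = 0.25
cov[35] = 0.0685
print(pose_uncertainty(cov))  # roughly (0.5 m, 0.5 m, 15 deg)
```

When the estimate is jumping around, I’d expect these sigmas to stay large instead of shrinking after an initial pose hint.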


I don’t know why I hadn’t noticed before (probably because it’s always been this way with my GoPiGo3), but when I’m doing navigation, the entire map is never displayed.

With my recent need to restore from an old backup (see initial post in thread), I hadn’t updated my YDLidar driver (https://forum.dexterindustries.com/t/hands-on-ros-chapter-9-more-on-navigation/8390). That led to RVIZ not getting the scan topic, but also led to the full map showing.

I realized that the launch file in the book (which I’ve copied for subsequent navigation launch files) included these lines:

  <node pkg="map_server" name="map_server" type="map_server" args="$(arg map_file)"/>
  <node pkg="gmapping" type="slam_gmapping" name="gopigo3_slam_gmapping"/>

Both lines launch nodes that publish maps, but we don’t need SLAM since we’ve already made a map. When I comment out the second (gmapping) line, I get this:

Note that localization still isn’t great. But if I give it a nudge with a 2D pose estimate (or a slow spin), localization gets much better.

So that’s good!!

Unfortunately if I give it a nav goal, the robot makes its way there, but the localization quickly deteriorates (i.e. the laser scan no longer aligns with the map, like in the second image). But I’m seeing much less “jitter”, which is also good. Overall it feels like I’m making a little progress.



It sounds like you’ve made a LOT of progress!
:+1:  :+1:  :+1:

It would be interesting to see where the average of all the “x” and “y” values falls with respect to the actual location of the 'bot itself.

What is the black rectangular object in the middle of the scatter plots?  Is that the actual 'bot itself?

If that is true, then doesn’t the 'bot already know exactly where it is since it’s the black rectangle itself?
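On the averaging idea: the particle filter’s pose estimate is essentially that, a weighted mean of the cloud. A sketch, assuming hypothetical (x, y, weight) tuples (the /particlecloud topic AMCL actually publishes is a plain PoseArray without weights, so this is purely illustrative):

```python
def weighted_mean_pose(particles):
    """Weighted mean of a particle cloud.

    particles: iterable of (x, y, weight) tuples; weights need not sum to 1.
    Returns (mean_x, mean_y).
    """
    total_w = sum(w for _, _, w in particles)
    mean_x = sum(x * w for x, _, w in particles) / total_w
    mean_y = sum(y * w for _, y, w in particles) / total_w
    return mean_x, mean_y

# Example: three particles clustered near (2, 3)
cloud = [(1.9, 3.1, 0.5), (2.1, 2.9, 0.3), (2.0, 3.0, 0.2)]
print(weighted_mean_pose(cloud))  # close to (1.98, 3.02)
```

If the cloud is tight, that mean sits nearly on top of the robot; when it’s smeared out like in these screenshots, the mean can land well away from where the robot actually is.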


The black rectangle with the green cloud around it is the robot.

As for localization, you have to look at how closely the red lines (which indicate the lidar returns) align with the map. In the middle picture you’ll see they don’t align well, so the robot doesn’t really know where it is.



Keith, have you tried localization when wall following?

From the looks of the point cloud, the walls are too far away and there is nothing to sync to, or very few points. I wonder whether, if you instructed the bot to wall follow, the localization would maintain a better estimate. Perhaps along a long, straight, featureless wall it will again have trouble knowing how far along the wall it is, but anytime it nears an opening or an obstacle along the wall, it should have lots of points to match up.


There is definitely something weird going on with the pose being about 30 degrees off until you rotate the bot, but perhaps that is expected at startup.

Then the question would be what happens if you always drive with a slight amount of angular motion. Does the localization estimate hold up with non-straight travel, but get lost during straight travel?

After you have some “this is what happens when a, b, c” documented, I would put the results on the Robotics Stack Exchange as a question, so the folks who really know how localization with LIDAR should work can perhaps tell you what is going on.


. . . and then come back here and share it with the rest of us!  :wink:


Doesn’t seem that unusual at startup compared with other robots I’ve tried in different classes.

Good suggestions. I realize that in my actual den there’s a lot of irregularity/soft surfaces at LIDAR level. Once I move (don’t recall if I’ve mentioned that I’m moving soon) I’d like to re-create my “roboworld” so that things are easier to troubleshoot. And I should spend more time on the Robotics Stack Exchange in general.

Absolutely!! :robot:



I have a tendency to shy away from asking for help, to the point of wasting a lot of time trying to educate myself the hard way, or re-inventing solutions to well-known problems (known, seemingly, by everyone but me).

The problem in asking questions these days is getting the attention of the folks who know. answers.ros.org and robotics.stackexchange.com have some folks who seem amazingly addicted to helping others.