Thinking far ahead, I was wondering if discovery was the only way to develop a map.
ROS.org on Twitter suggested a “Setting up ROS Nav Stack” tutorial. It turns out the writer has other applicable tutorials, among them sensor fusion of odometry with a BNO055 IMU for a two-wheel differential bot, and this tutorial:
Interesting. I wonder how things like the chairs (which appear as solid outlines on the blueprints) will impact navigation, since at floor level the robot will just see the legs. From a navigation standpoint it won’t go there anyway. But I wonder how much it interferes with localization, since the lidar returns wouldn’t be consistent with the map.
One way to find out I guess.
<edit - I had written this reply shortly after you initially posted. But somehow I didn’t actually post it>
/K
At least in the example it does look like they have movable “for reference” items on the blueprint, and I think the conversion ends up making these appear as solid obstacles.
I know it is possible to pull the .pgm file into an image editor to remove or add obstacles and just tidy things up in general. I’ve done that in the past.
/K
Addendum: I was experimenting with that this evening. You can actually use IrfanView (which has long been my favorite image viewer on Windows) to edit .pgm files. I hadn’t realized it has a “Paint” menu for very simple bitmap manipulation. IrfanView is a free download for non-commercial use and has been around for a very long time, so I’ve trusted that it doesn’t contain malware. https://www.irfanview.com/
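The same kind of map cleanup can also be scripted. Here’s a minimal sketch (not from this thread) that edits a P2 (ASCII) .pgm occupancy grid; note that map_server typically writes the binary P5 variant, so treat the writer below as illustrative, and the file name, region coordinates, and `paint_region` helper are all made up for the example. The pixel-value convention is the usual map_server one: 254 ≈ free, 0 ≈ occupied, 205 ≈ unknown.

```python
def paint_region(grid, r0, c0, r1, c1, value):
    """Overwrite a rectangular block of cells, e.g. to erase phantom
    obstacles (stray lidar returns, chair legs) or add a keep-out wall."""
    for r in range(r0, r1 + 1):
        for c in range(c0, c1 + 1):
            grid[r][c] = value
    return grid

def write_pgm_ascii(path, grid, maxval=255):
    """Write the grid as a P2 (ASCII) PGM file."""
    rows, cols = len(grid), len(grid[0])
    with open(path, "w") as f:
        f.write(f"P2\n{cols} {rows}\n{maxval}\n")
        for row in grid:
            f.write(" ".join(str(v) for v in row) + "\n")

# Tiny demo map: 5x5 of unknown (205) with one spurious "obstacle" cell.
grid = [[205] * 5 for _ in range(5)]
grid[2][2] = 0                        # stray occupied cell
paint_region(grid, 2, 2, 2, 2, 254)   # erase it: mark the cell free
write_pgm_ascii("edited_map.pgm", grid)
print(grid[2][2])  # -> 254
```

For a real map you’d read the existing .pgm first and paint larger rectangles, but the idea is the same as the IrfanView approach: it’s just a grayscale bitmap.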