I think you mean “naturally aspirated” as opposed to nitrous injection or turbo/supercharging.
As much as you don’t like the Pi-4, that’s one of the reasons I bought it: it lets me do local development with the Visual Studio Code remote server on the Pi talking to my laptop, without crushing the device’s performance.
Trying that on a Pi-3 was painful.
You may want to try some things on a Pi-4 to ease the development burden, then move to the Pi-3 and tune for the lower-powered board.
It would be worth trying. I’m just not sure how well it will handle the ongoing localization and planning. I’m guessing it won’t be able to handle those too well on its own, at least not in a reasonable time frame. It might be able to generate a global plan from a static map, but I don’t think it could keep up with local planning as the robot moves. And as far as I understand, playing back a bag file is still essentially real time, just deferred.

One thing you might be able to do is record the LIDAR readings to a bag file, then generate a map later on a PC.

I’m trying to think how you’d create a “safe box” - I guess you could put make-believe walls on the map to define boundaries, though I’m not sure how that would mess with localization. There are probably other ways I just don’t know about.
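To illustrate the make-believe-walls idea, here’s a minimal sketch in plain Python (no ROS) of marking cells occupied in an occupancy-grid-style array to fence off a rectangular “safe box.” The grid values follow the ROS OccupancyGrid convention (0 = free, 100 = occupied); in practice you’d edit the map’s PGM file or publish a modified map, and the helper names here are my own invention.

```python
# Sketch: fence off a "safe box" by drawing fake walls into a grid.
# ROS occupancy grids use 0 = free, 100 = occupied, -1 = unknown.
FREE, OCCUPIED = 0, 100

def make_grid(width, height, value=FREE):
    """Build a height x width grid of cells, all set to `value`."""
    return [[value] * width for _ in range(height)]

def draw_wall(grid, x0, y0, x1, y1):
    """Mark an axis-aligned line of cells as occupied."""
    if x0 == x1:
        for y in range(min(y0, y1), max(y0, y1) + 1):
            grid[y][x0] = OCCUPIED
    elif y0 == y1:
        for x in range(min(x0, x1), max(x0, x1) + 1):
            grid[y0][x] = OCCUPIED
    else:
        raise ValueError("only axis-aligned walls in this sketch")

def safe_box(grid, x0, y0, x1, y1):
    """Surround a rectangle with four fake walls."""
    draw_wall(grid, x0, y0, x1, y0)  # top
    draw_wall(grid, x0, y1, x1, y1)  # bottom
    draw_wall(grid, x0, y0, x0, y1)  # left
    draw_wall(grid, x1, y0, x1, y1)  # right

grid = make_grid(20, 20)
safe_box(grid, 2, 2, 17, 17)
```

The planner would then treat the fake walls as obstacles; the open question from above (how this interacts with localization against real LIDAR returns) still stands.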
move_base does have a service called /make_plan that you can call - it will generate the path but not start moving the robot. That’s the closest thing I know of, and you do have to have move_base up and running (which in turn requires the global and local planners).
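A rough sketch of calling that service from rospy, in case it helps. Note the fully qualified service name is usually /move_base/make_plan, and the frame name and coordinates below are just placeholders for your setup:

```python
#!/usr/bin/env python
# Sketch: ask move_base for a plan without moving the robot.
import rospy
from nav_msgs.srv import GetPlan
from geometry_msgs.msg import PoseStamped

def make_pose(x, y, frame="map"):
    """Build a PoseStamped at (x, y) facing along +x."""
    p = PoseStamped()
    p.header.frame_id = frame
    p.header.stamp = rospy.Time.now()
    p.pose.position.x = x
    p.pose.position.y = y
    p.pose.orientation.w = 1.0
    return p

if __name__ == "__main__":
    rospy.init_node("plan_checker")
    rospy.wait_for_service("/move_base/make_plan")
    make_plan = rospy.ServiceProxy("/move_base/make_plan", GetPlan)
    # tolerance is how far from the goal (in meters) the plan may end.
    resp = make_plan(start=make_pose(0.0, 0.0),
                     goal=make_pose(2.0, 1.0),
                     tolerance=0.1)
    rospy.loginfo("planned path has %d poses", len(resp.plan.poses))
```

The response is a nav_msgs/Path; an empty poses list generally means no plan could be found.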
Not that I know of. What I’ve done is keep the goals in a Python data structure of some type, then send them one by one (as poses) to the move_base action server; once each action completes I send the next goal. I want to figure out how to keep the goals in a YAML file (or rather, how to read in a YAML file that holds the goal poses), but that’s a future project.
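For what it’s worth, the YAML part of that future project is pretty small with PyYAML. The file layout and field names below are my own invention, not any ROS standard, and the YAML is inlined here just to keep the sketch self-contained:

```python
# Sketch: read goal poses from YAML, then send them one at a time.
import yaml

GOALS_YAML = """
goals:
  - {x: 1.0, y: 0.5, yaw: 0.0}
  - {x: 2.0, y: 1.5, yaw: 1.57}
"""

goals = yaml.safe_load(GOALS_YAML)["goals"]
for g in goals:
    # In a real node you'd build a MoveBaseGoal from g, send it to the
    # move_base action server, and wait for the result before sending
    # the next one.
    print(g["x"], g["y"], g["yaw"])
```

In practice you’d replace the inline string with `yaml.safe_load(open("goals.yaml"))` and convert each yaw to a quaternion when building the goal pose.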
/K