It depends on three distance sensors (front, left, and right), and I don’t think I can replace them with a single sensor mounted on a servo motor. So how can I connect multiple distance sensors, and how can I use each of them individually in code?
The $4 Grove Ultrasonic Ranger could serve for left and right, with a ModRobotics ToF IR distance sensor for the forward sensor (as long as you do not also need the DI IMU).
It may be possible to use three ToF IR sensors by plugging one into AD1, one into AD2, and one into the I2C port, but I cannot say definitively. And why spend so much money when two $4 ultrasonic sensors can be used…ok, different range, different FOV, different surface responses.
. . . . but don’t those all ultimately use the same i2c channel - or at most two?
Since this guy wants to use three distance sensors, (I am assuming the standard “googly-eyes” distance sensor), there will be i2c address collisions. I am also assuming that whatever he uses, he will want them to be identical so that the responses will be symmetrical.
An i2c mux solves that problem.
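For concreteness, steering a shared bus through a TCA9548A-style mux might look like the sketch below. The mux choice and its 0x70 default address are my assumptions (breakouts can strap it elsewhere), and I have not run this on a GoPiGo3:

```python
# Sketch only: selecting one of up to eight identical sensors behind a
# TCA9548A I2C multiplexer. Each sensor keeps its factory address; only
# one downstream channel is connected to the bus at a time.

MUX_ADDR = 0x70  # TCA9548A default address (assumption; check your board)

def mux_channel_byte(channel):
    """The TCA9548A enables channel n when bit n of its control
    register is set, so a one-hot byte selects one downstream bus."""
    if not 0 <= channel <= 7:
        raise ValueError("TCA9548A has channels 0-7")
    return 1 << channel

def select_channel(bus, channel):
    """bus: an smbus/smbus2 SMBus instance on the Pi's hardware I2C."""
    bus.write_byte(MUX_ADDR, mux_channel_byte(channel))
```

After a `select_channel` call, the sensor on that channel is read exactly as if it were alone on the bus.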
Or. . . .
He can look around for ToF sensors that have programmable i2c addresses, but how well the GoPiGo can support them is another huge unknown. (Can you say “Seeed Studios pH sensor”? Ahh! I knew you could!)
It would be interesting to see what this worthy individual comes up with as that could do much and go far with respect to your own wall-following interests.
The software I2C implemented on AD1 and AD2 does not have an I2C address conflict with the hardware I2C, but whether two distance sensors can coexist on the software I2C remains unknown.
Using two Grove US sensors for left and right is much cheaper and solves the whole conflict issue, if non-symmetrical sensors will work. I believe it would also be possible to program two Grove US sensors on AD1 and AD2, and wire another one directly to unused RPi GPIO pins.
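A sketch of the two-US-plus-one-ToF layout using the easygopigo3 API (`init_ultrasonic_sensor`, `init_distance_sensor`, and `read_mm` are all in DI’s library, but I have not run this exact combination):

```python
# Assumed layout: Grove Ultrasonic Rangers on AD1/AD2, DI Distance
# Sensor on the hardware I2C port, each polled individually.

def init_sensors(gpg):
    """gpg: an easygopigo3.EasyGoPiGo3 instance (requires the robot)."""
    return {
        "left": gpg.init_ultrasonic_sensor(port="AD1"),
        "right": gpg.init_ultrasonic_sensor(port="AD2"),
        "front": gpg.init_distance_sensor(),  # hardware I2C port
    }

def read_all_mm(sensors):
    """Poll each sensor in turn; returns name -> distance in mm."""
    return {name: s.read_mm() for name, s in sensors.items()}
```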
I commend you for combining local obstacle avoidance with a non-“precision location” approach (precision location being so commonly used in ROS bots).
I have supported a Brooks subsumption architecture with only local awareness for all my robots to date. Humans navigate very well without precision location, and I believe it is an inefficient use of mobile robot processing resources to demand/attempt precision location.
The paper is describing the use of three Ultrasonic sensors. My prior post lists the Raspberry Pi pins available with a GoPiGo3 bot for the third ultrasonic sensor.
Obviously not, if all three measurements must be time-synchronized on a moving bot; but if “stop, measure, go (with only forward obstacle detection)” is allowed, then yes, a single sensor on a servo could simulate a continuously moving bot with three full-time sensors.
Another way around the GoPiGo3 I2C address bus conflict issue would be to use a different vendor’s I2C distance sensor. I have read there are some ToF IR Distance sensors that do allow selecting a second I2C address - combining one of these on the Hardware I2C bus would also solve the conflict.
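For the re-addressing idea, here is what it might look like for a VL53L0X-style sensor (register 0x8A is the I2C_SLAVE_DEVICE_ADDRESS register per ST’s API; whether a particular vendor breakout exposes this, and the XSHUT wiring, are assumptions on my part):

```python
# Hedged sketch: give one VL53L0X-style ToF sensor a new address so
# two can share the hardware I2C bus. Hold the second sensor in reset
# (XSHUT low) while re-addressing the first, then release it.

VL53L0X_ADDR_REG = 0x8A  # I2C_SLAVE_DEVICE_ADDRESS register (ST API)
DEFAULT_ADDR = 0x29      # VL53L0X power-on 7-bit address

def readdress(bus, new_addr, old_addr=DEFAULT_ADDR):
    """bus: an smbus/smbus2 SMBus instance on the Pi's hardware I2C."""
    bus.write_byte_data(old_addr, VL53L0X_ADDR_REG, new_addr & 0x7F)
```

The new address is volatile, so this has to be done on every power-up before the second sensor is enabled.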
But I really think the Ultrasonic solution is better in your application. There are very few ultrasound-invisible surfaces, where there seem to be many (black) IR-invisible surfaces in my robots’ environment.
Perhaps the biggest issue with the stop-measure-go approach is that people in the environment do not expect a moving object to stop suddenly. If you are planning to mix moving people into a moving-robot environment, I would suggest 360-degree image-based obstacle detection and obstacle tracking would be needed.
First, human processing power has most, if not all, systems beat to a frazzle.
Second, humans don’t have 360° obstacle avoidance either; the typical human field of vision covers less than 180°. They make up for it with stereoscopic hearing that can place sounds accurately within a 360° sphere, (within reasonable tolerances). If you’ve ever accidentally backed into someone, or turned into someone you didn’t know was there, you know the limits of hearing as an obstacle avoidance system.
Third, humans have an “oops!” factor, also known as “fault tolerance”, built into the social fabric. People are clumsy in general, and the occasional clumsy mistake is easily excused.
Building all that infrastructure into a robot is not a simple task.
Exactly, don’t expect the totality of human capability without human processing. And I (and the whole BEAM robot folk, and Brooks, and more) believe robots only need to be aware enough to achieve the goals we set for them, if we are realistic about the goals we set for the robots and the environment we place them in.
Precision location in an unpredictable “fuzzy” world just seems like an oxymoron. To bring it down to your and my robots, we allow for the fuzziness of our ToF distance sensor having a 25 degree FOV by stopping, turning at least 15 degrees, and then proceeding. If we were to have three sensors with slightly overlapping fields (or two with overlapping angled fields) gathering readings along a known path, we could use probability to hypothesize the location and size of obstacles and actually plan a course to “prove” the hypothesis.
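A toy version of that “use probability to hypothesize obstacles” idea (my own simplification, not from the paper): treat each bearing as a cell and do a log-odds Bayesian update over every bearing inside a sensor’s 25° FOV; the `p_hit`/`p_miss` values are illustrative placeholders.

```python
import math

def logodds(p):
    return math.log(p / (1.0 - p))

def update_bearings(belief, sensor_bearing, hit, fov=25.0,
                    p_hit=0.7, p_miss=0.3):
    """belief: dict of bearing_deg -> P(obstacle). Applies one Bayesian
    update to every bearing covered by the sensor's field of view."""
    p = p_hit if hit else p_miss
    for b in belief:
        if abs(b - sensor_bearing) <= fov / 2.0:
            lo = logodds(belief[b]) + logodds(p)
            belief[b] = 1.0 / (1.0 + math.exp(-lo))  # back to probability
    return belief
```

Readings from overlapping sensors along a known path would reinforce or weaken each hypothesis, which is exactly what lets you plan a course to “prove” it.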
Back a long time ago, I was asked to design a path planner that used a priori information and heuristics, and that replanned the path when significant information became known along the way. The initial plan used approximate perpendicular angles from clearings to forest and distance from roads to suggest the best places and approach angles to look for Russian mobile missile launchers; then, after a possible sighting while flying the plan, it would plan a confirm (and optionally neutralize) path for a cruise missile carrying multiple munitions. All of this was probability based, with the assumption that precision location information would not be available or would not be reliable.
The planning processor was programmed in Lisp, and was roughly comparable to the Raspberry Pi 3B in MIPS, perhaps less.
So my point is: the OP’s direction of using fuzzy logic with three sensors on a mobile robot, rather than a LIDAR delivering 720 discrete 1 cm-precision measurements feeding precision-location-in-the-world estimates, has my interest and support.
Note: The $79 LIDAR would be cheaper than three ModRobotics ToF Distance sensors, hence the suggestion to use three $4 Ultrasonic sensors.
That depends on the robot’s requirements and desired scope of action.
That’s not what that means.
“Fuzzy logic” is a method of working with ambiguous or incomplete details to produce relatively precise results.
For example, if you search for “pithon 3.9”, the search engine’s fuzzy logic knows that you likely meant “Python 3.9”
Fuzzy logic in location would take the, (perhaps imprecise), data from a LIDAR, (for example), and provide a first approximation of definite boundaries. As the robot moves around in its world and better data is collected, the robot updates and improves its approximation.
Read the paper, Jim. At no point is localization a goal, nor does a position-in-the-world estimate become precise. The bot works with seven degrees of distance from a goal, and five degrees of distance from obstacles. At no time does the bot attempt precise localization of itself, the environment, or obstacles. It does not do fuzzy localization; it does fuzzy control - similar in effect to the fuzzy thought processes with fuzzy sensor data that humans use to navigate in the home.
We don’t improve our estimation of where a wall is as we move. We are either on a collision course, close to a collision course, or not on a collision course. We don’t care to know we are at x,y with known velocity in the house with a wall at y=mx+c.
My point about ROS dependence on localization is that collecting 720 distance points to find a most probable location among thousands of fixed reference points and fixed lines, just to decide how much to turn toward a goal, is computationally intensive when all the bot really needs to know is whether to “turn a little left”, “turn a little right”, or “proceed with abandon until the cat plops down in front of me”.
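That little-left/little-right decision can be sketched in a few lines (my own crude fuzzy-style rule, not the paper’s controller; the 400 mm clearance threshold is a placeholder to calibrate per robot):

```python
# Toy fuzzy steering: three distance readings in, one word out.
# No map, no pose estimate, no localization.

def steer(left_mm, front_mm, right_mm, clear_mm=400):
    """Degree of 'blocked' is how far below clear_mm a reading falls,
    normalized to 0..1; then turn away from the more-blocked side."""
    def blocked(d):
        return max(0.0, min(1.0, (clear_mm - d) / clear_mm))
    if blocked(front_mm) < 0.1:
        return "ahead"
    return "left" if blocked(right_mm) > blocked(left_mm) else "right"
```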
My wife countered, “but Carl needs to know where he is to get back to his dock.” I conceded “yes, but not through knowing his precise location”, whereupon she told me to stop standing there frozen in thought.
There is another concept possible with the GoPiGo3 for multiple analog sensors. If the sensor returns an analog voltage proportional to distance, the AD1 and AD2 ports could each have two sensors wired to them. The ports have power, ground, data1, and data2 inputs, but only data1 or data2 can be read at a given moment. It would be possible to take a reading on data1, then a reading on data2, switching slowly enough to let the A2D circuit settle, though that settling time is quite short.
I don’t think this concept is possible with ultrasonic sensors though. The Ultrasonic sensor example code tells the GoPiGo3 red board to use both GROVE_1_1 and GROVE_1_2 or both GROVE_2_1 and GROVE_2_2 even though the Grove Ultrasonic Ranger uses a “single wire ultrasonic driver”.
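The two-readings-per-port idea might look like this against the low-level gopigo3 API (`set_grove_type`, `set_grove_mode`, and the `GROVE_1_1`-style pin constants are from gopigo3.py, but I have not tried four sensors this way; the inverse-voltage model and its `k` constant are placeholders, not datasheet values):

```python
# Hedged sketch: four analog IR rangers, two per AD port, read one
# pin at a time through the GoPiGo3's A2D.

def setup_four_analog(gpg):
    """gpg: a gopigo3.GoPiGo3 instance (requires the robot)."""
    gpg.set_grove_type(gpg.GROVE_1 + gpg.GROVE_2, gpg.GROVE_TYPE.CUSTOM)
    pins = (gpg.GROVE_1_1, gpg.GROVE_1_2, gpg.GROVE_2_1, gpg.GROVE_2_2)
    for pin in pins:
        gpg.set_grove_mode(pin, gpg.GROVE_INPUT_ANALOG)
    return pins

def volts_to_cm(volts, k=27.0):
    """Inverse-voltage model typical of analog IR rangers; k is an
    illustrative constant -- calibrate against your actual sensor."""
    return k / max(volts, 0.01)  # clamp to avoid division by zero
```

The read loop would then call `gpg.get_grove_analog(pin)` on each pin in turn, pausing between pins for the A2D to settle.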