May Be In Love With The $4 Grove Ultrasonic Ranger

Creating a “First Programs For GoPiGo3 Using VSCode” may have exposed an unexpected secret!

The $4 Grove Ultrasonic Ranger appears to have much less reading variance than the $30 Time Of Flight Infrared Distance Sensor.

Variance at 100mm:

  • US: 0 mm for max error 0%
  • IR: -5 to +2mm for max error -5%

Variance at 1000mm:

  • US: -1 to +3mm for max error 0.3%
  • IR: -66 to +2mm for max error -6.6%

At 100mm: (screenshot of readings)

At 1000mm: (screenshot of readings)
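
For reference, the max-error percentages above are just the worst single-reading deviation from the true distance, expressed as a percent.  A minimal sketch of that calculation in Python (the readings below are illustration values chosen to match the US numbers at 1000mm, not the actual logged data):

# Sketch: worst-case single-reading error as a percent of the true distance.
# Illustration values only.
true_mm = 1000
readings_mm = [999, 1001, 1003, 1000, 999]

errors_mm = [r - true_mm for r in readings_mm]
worst_mm = max(errors_mm, key=abs)     # largest deviation in either direction
print("Variance: {} to +{} mm, max error {:.1f}%".format(
    min(errors_mm), max(errors_mm), worst_mm / true_mm * 100.0))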

Add to this the fact that the ultrasonic sensor “sees” black stuff from much greater incident angles, and it has a tremendous advantage as an obstacle-detection sensor.

2 Likes

Variance at 2032-2033mm:

  • IR: -73mm (-3.6%) to +130mm (+6.4%)
    (!!! Ignoring the 25% of readings that came back as 2995mm “no obstacle in max range”; see the filtering sketch below !!!)
  • US: 0-1mm (0.05%)
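
For reference, a minimal sketch of how those “no obstacle in max range” readings could be filtered out before computing statistics.  The 2995mm value is just what this test reported for out-of-range, not a documented sensor constant, and the readings array is made up for illustration:

# Sketch: drop "no obstacle in max range" readings (seen here as ~2995 mm)
# before computing min/max/average.  Illustration values only.
import numpy as np

NO_OBSTACLE_MM = 2995    # out-of-range value observed in this test

readings = np.array([2030, 2995, 2033, 2031, 2995, 2032, 2995, 2034])
valid = readings[readings < NO_OBSTACLE_MM]

print("Kept {} of {} readings".format(len(valid), len(readings)))
print("Average: {:.0f} mm  Min: {} mm  Max: {} mm".format(
    valid.mean(), valid.min(), valid.max()))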

Very Interesting Sensor This Grove Ultrasonic Ranger

1 Like

That magnitude of variation is insane!

At least when using Bloxter, I have never, NEVER seen variations like that - it’s usually right on the button and quite stable.

Is there something wrong with your IR sensor?  Maybe the LIDAR is messing with it?

1 Like

Disconnected the LIDAR, IMU, Oak-D-Lite, and even the Grove Ultrasonic, but the variation remains.

Looking back at Carl’s “Spec Sheet” item for the Distance Sensor, I see that “he” also saw a single-reading variation of 1-5%, with a 6-reading-average variation of 1-3%.
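
For anyone curious, here is a rough sketch of what a “6 reading average” could look like, using the same EasyDistanceSensor API that appears later in this thread.  The helper function and the delay value are my own illustration, not Carl’s actual code:

# Sketch: average several consecutive readings to smooth out single-reading
# variation.  read_mm_averaged() is a hypothetical helper, not part of the
# DI sensor library.
from di_sensors.easy_distance_sensor import EasyDistanceSensor
from time import sleep

def read_mm_averaged(ds, n=6, delay=0.01):
    readings = []
    for _ in range(n):
        readings.append(ds.read_mm())
        sleep(delay)
    return sum(readings) / n

ds = EasyDistanceSensor(use_mutex=True)
print("6-reading average: {:.0f} mm".format(read_mm_averaged(ds)))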

2 Likes

That’s more consistent with the readings from the ultrasonic sensor.  Looking at Dave’s readings (below):

The IR sensor on Dave appears to be giving you:

  • A lot of bogus readings
  • A variation that is all over the place compared with even an inexpensive TOF sensor.

A 6-or-so percent variation is nuts! Using Carl as the baseline, I think Dave’s sensor is either bogus or there’s something interfering with the readings.

I’ve seen TOF sensors from Sparkfun that appear to be using the same sensor that claim all kinds of microscopic accuracy. :wink:

Have you tried that same test on Carl, using the same software?

2 Likes

I have not actually tried measuring the accuracy of the readings, but if I place an object in front of the stationary robot, the readings don’t shift; even as I vary the distance, they remain stable over a wide range of distances.

Another thought:
Is your sensor square to the floor and the sides of the robot?  How shiny are the surfaces that it’s pointing to?

1 Like

Luckily, I bought two Grove US sensors, so with a little “Rube Goldberg”-ing:

Carl Variation at 100mm:

  • IR: -1mm (-1%) to +5mm (+5%)
  • US: 0mm

Carl Variation at 1m:

  • IR: -53mm (-5%) to +70mm (+7%) with no “nothing in max range” readings
  • US: 1mm (0.1%)

So that is two different ToF IR Distance sensors, two different Grove US sensors, on two different GoPiGo3 robots, showing comparable “single reading” variance.

2 Likes

How shiny are these surfaces?

In my tests, all the surfaces are either matte, flat, or dull cloth (a greenish duffel bag).

Of course, I’m not being so picky - my distance sensor isn’t so much for measuring absolute distance (23.070053 cm), but for giving an “am I too close?” reading - so I haven’t been making a laboratory project out of it.

What was that you said about unicorns? :wink:

1 Like

Pretty much a best-case scenario - cardboard box, square to the floor and square to the sensor, 90-degree incidence angle (as best my eyes can do without bringing out the T-square).

2 Likes

Another variable is incident sunlight - are you in a sunny room?

Here, for a large chunk of the year, we’re on artificial light.  And no, I haven’t taken Charlie for any walks. . .

Also, with the joystick controlled robot experiments, I have been ignoring the distance sensor.

Here’s a thought:
How about you drop me your code and I’ll slap it on Charlie and see what happens when I point him at a cardboard box.

1 Like

Variation is not affected by room lighting: it is the same with artificial room light only, or with artificial room light plus a small amount of indirect natural light leaking through closed venetian blinds.

I’ve commented out the US sensor:

#!/usr/bin/python3

# FILE: robot2.py

# PURPOSE: Test reading distance sensor and ultrasonic sensor

from easygopigo3 import EasyGoPiGo3
import time
import logging

logging.basicConfig(level=logging.INFO, format='%(asctime)s %(funcName)s: %(message)s')

DIODE_DROP = 0.7
ULTRASONIC_CORRECTION_AT_100mm = 17.0 # mm
ToF_CORRECTION_AT_100mm = -5.0 # mm 

def main():
    egpg = EasyGoPiGo3(use_mutex=True)
    egpg.ds = egpg.init_distance_sensor()
    # egpg.us = egpg.init_ultrasonic_sensor(port="AD2")

    while True:
        try:
            vBatt = egpg.volt()+DIODE_DROP
            dist_ds_mm = egpg.ds.read_mm()+ToF_CORRECTION_AT_100mm
            time.sleep(0.01)
            dist_us_mm = 9999 # egpg.us.read_mm()+ULTRASONIC_CORRECTION_AT_100mm
            logging.info(": vBatt:{:>5.2f}v  ds:{:>5.0f}mm  us:{:>5.0f}mm".format(vBatt,dist_ds_mm,dist_us_mm))
            time.sleep(0.075)

        except KeyboardInterrupt:
            print("\nExiting...")
            break

if __name__ == "__main__":
    main()
1 Like

Also, this is an interesting approach:

#!/usr/bin/python3
#
# distSensorError.py

"""
Continuously measure distance in millimeters, printing the average and individual readings
"""


import numpy as np


from di_sensors.easy_distance_sensor import EasyDistanceSensor
from time import sleep

ds = EasyDistanceSensor(use_mutex=True)
distReadings = []

while True:
    distReadings += [ds.read_mm()]
    if (len(distReadings)>9 ):  del distReadings[0]
    print("\nDistance Readings:",distReadings)
    print("Average Reading: %.0f mm" % np.average(distReadings))
    print("Minimum Reading: %.0f mm" % np.min(distReadings))
    print("Maximum Reading: %.0f mm" % np.max(distReadings))
    print("Std Dev Reading: %.0f mm" % np.std(distReadings))
    print("Three SD as a percent of reading: %.1f %%" % (3.0 * np.std(distReadings) / np.average(distReadings) *100.0))
    sleep(1)

Running on Carl:

Distance Readings: [988, 983, 978, 972, 987, 985, 967, 976, 966]
Average Reading: 978 mm
Minimum Reading: 966 mm
Maximum Reading: 988 mm
Std Dev Reading: 8 mm
Three SD as a percent of reading: 2.4 %
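
As a quick sanity check of that last line, the same figure can be recomputed from the readings printed above:

# Re-derive "Three SD as a percent of reading" from the printed readings.
import numpy as np

readings = [988, 983, 978, 972, 987, 985, 967, 976, 966]
print("{:.1f} %".format(3.0 * np.std(readings) / np.average(readings) * 100.0))   # prints 2.4 %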
2 Likes

I modified the code to run for five minutes (300 seconds) and then stop.

I did this to give enough samples to be (at least somewhat) statistically valid.

Results for a measured distance of 60 cm.

First try:

Distance Readings: [635, 633, 643, 633, 640, 634, 634, 640, 639]
Average Reading: 637 mm
Minimum Reading: 633 mm
Maximum Reading: 643 mm
Std Dev Reading: 4 mm
Three SD as a percent of reading: 1.7 %
count is 300 seconds
5 min worth of readings. . .

Second try:

Distance Readings: [635, 633, 643, 633, 640, 634, 634, 640, 639]
Average Reading: 637 mm
Minimum Reading: 633 mm
Maximum Reading: 643 mm
Std Dev Reading: 4 mm
Three SD as a percent of reading: 1.7 %
count is  300
5 min worth of readings. . .

The sets of readings come out to be exactly the same - interesting!

2 Likes

Maybe Seeed came up with a different US sensor, because the one they had when we launched our Time of Flight was ABYSMAL.

3 Likes

These readings, both @cyclicalobsessive’s and mine, appear to have a tolerance of about ±5 mm.

I, myself, don’t consider that excessive as I am not using the distance sensor to accurately map a room, but simply to let me know, more or less, how far away I am from something.

If I need better accuracy than that, there are small millimeter-band robotic-sized radar units out there that give sub-millimeter accuracy.  Of course, they cost like one of Tom Coyle’s Nitro-burning Mega-Bots. :wink:

For me, this is way good enough.

What really gets me laughing is the way the title, and the logo, fit together at the top of the screen:

2 Likes

You need to change the list size to 300, and not print the list.  Running the existing program just repeatedly tests the last 10 readings.

2 Likes

Oh, NOW you tell me. . . .

I’ll do that tomorrow.

1 Like

Indeed, they say this is version 2.0 of the sensor.

2 Likes

Here’s what I’m running (now):

#!/usr/bin/python3
#
# distSensorError.py

"""
Continuously measure distance in millimeters, printing the average and individual readings
"""
import numpy as np

from di_sensors.easy_distance_sensor import EasyDistanceSensor
from time import sleep

ds = EasyDistanceSensor(use_mutex=True)
distReadings = []

count = 0
while count < 300:
    distReadings += [ds.read_mm()]
#    if (len(distReadings)>9 ):  del distReadings[0]
#    print("\nDistance Readings:",distReadings)
    print("Average Reading: %.0f mm" % np.average(distReadings))
    print("Minimum Reading: %.0f mm" % np.min(distReadings))
    print("Maximum Reading: %.0f mm" % np.max(distReadings))
    print("Std Dev Reading: %.0f mm" % np.std(distReadings))
    print("Three SD as a percent of reading: %.1f %%" % (3.0 * np.std(distReadings) / np.average(distReadings) *100.0))
    count += 1
    print("count is", count, "seconds\n")
    sleep(1)

print("5 min worth of readings. . .")

Readings test - first try at (approx.) 60 cm.

Average Reading: 625 mm
Minimum Reading: 613 mm
Maximum Reading: 638 mm
Std Dev Reading: 5 mm
Three SD as a percent of reading: 2.2 %
count is 300 seconds

5 min worth of readings. . .

Second try:

Average Reading: 624 mm
Minimum Reading: 612 mm
Maximum Reading: 636 mm
Std Dev Reading: 5 mm
Three SD as a percent of reading: 2.2 %
count is 300 seconds

5 min worth of readings. . .

BTW, it takes about 30-40 seconds before the readings settle down to essentially what you see here (which makes sense, since the statistics are computed over all of the readings taken so far, so they converge as the sample grows).

What I see here is a tolerance band of one stinkin’ centimeter!  (10 mm), or expressed differently, ±5 mm.

I can live with a 1 cm tolerance band. :wink:

You should try this on Carl and Dave and see what you get after five minutes.

1 Like

Here is a 300 list size program:

#!/usr/bin/python3
#
# distSensorError.py

"""
Continuously measure distance in millimeters, printing the average and individual readings
"""


import numpy as np

from di_sensors.easy_distance_sensor import EasyDistanceSensor
from time import sleep

SAMPLES = 300   # size of the rolling window of readings

ds = EasyDistanceSensor(use_mutex=True)
distReadings = []

while True:
    distReadings += [ds.read_mm()]
    if len(distReadings) > (SAMPLES - 1):
        del distReadings[0]            # keep only the last SAMPLES readings
    print("\nDistance Readings:", len(distReadings))
    ave = np.average(distReadings)
    minReading = np.min(distReadings)  # renamed so the built-in min() is not shadowed
    maxReading = np.max(distReadings)
    minError = (minReading - ave) / ave * 100.0
    maxError = (maxReading - ave) / ave * 100.0
    print("Average Reading: %.0f mm" % ave)
    print("Minimum Reading: {:.0f} mm  Min Error: {:.2f}%".format(minReading, minError))
    print("Maximum Reading: {:.0f} mm  Max Error: {:.2f}%".format(maxReading, maxError))
    stdDevReading = np.std(distReadings)
    stdDevError = stdDevReading / ave * 100.0
    print("Std Dev Reading: {:.0f} mm  StdDevError: {:.2f}%".format(stdDevReading, stdDevError))
    print("Three SD as a percent of reading:  {:.1f} %".format(3.0 * stdDevError))
    sleep(1)

IR Sensor On Carl at 846mm

Distance Readings: 299
Average Reading: 846 mm
Minimum Reading: 819 mm  Min Error: -3.17%
Maximum Reading: 868 mm  Max Error: 2.63%
Std Dev Reading: 8 mm  StdDevError: 0.94%
Three SD as a percent of reading: 2.9 %

IR Sensor on Dave at 838mm:

Distance Readings: 299
Average Reading: 838 mm
Minimum Reading: 813 mm  Min Error: -2.97%
Maximum Reading: 869 mm  Max Error: 3.72%
Std Dev Reading: 10 mm  StdDevError:  1.19%
Three SD as a percent of reading: 3.5 %

Second set of 300:

Distance Readings: 299
Average Reading: 837 mm
Minimum Reading: 798 mm  Min Error: -4.66%
Maximum Reading: 878 mm  Max Error: 4.90%
Std Dev Reading: 12 mm  StdDevError: 1.42%
Three SD as a percent of reading: 4.3 %


The StdDevError and Three SD error are computed over 300 readings.
Three SD says, roughly, “the worst readings will probably be within +/- this percent of the real reading”.
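
To illustrate that interpretation, here is a small sketch (with made-up readings, not the actual test data) showing how many readings fall inside the +/- three-standard-deviation band:

# Sketch: count how many readings fall within +/- 3 standard deviations of
# the average.  Illustration values only.
import numpy as np

readings = np.array([846, 842, 851, 838, 849, 845, 844, 852, 841, 847])
ave = np.average(readings)
sd = np.std(readings)

within = np.abs(readings - ave) <= 3.0 * sd
print("{} of {} readings within +/- 3 SD ({:.1f}% of the average)".format(
    within.sum(), len(readings), 3.0 * sd / ave * 100.0))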

The Min/Max errors are for single readings.

Note: If anything changes during the test (like accidentally walking between the bot and the target), the test is invalidated and must be restarted.

1 Like