# Why Use Distance Sensor Average

I understand that the distance sensor's offset can vary with temperature, and that the measurement variation depends on distance, reflectance, angle to the reflector, and ambient lighting. What I don't understand is why the average of 100 readings would differ much from the average of the next 100 readings.

And why would the error in the average of two 50-reading averages be lower than the error in the average of a single set of 100 readings?

(Statistics was one of my toughest classes…)
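One note on the second question: the average of two 50-reading averages is algebraically identical to the average of all 100 readings, so any difference you see comes from the readings themselves, not the grouping. A quick sketch (using arbitrary made-up numbers) confirms the identity:

```python
import math
import statistics

# Any 100 numbers will do; these are arbitrary stand-ins for readings in mm.
readings = [760.0 + (i % 7) for i in range(100)]

avg_100 = statistics.mean(readings)
avg_of_two_50s = statistics.mean([statistics.mean(readings[:50]),
                                  statistics.mean(readings[50:])])

# (sum1/50 + sum2/50) / 2 == (sum1 + sum2) / 100, up to float rounding.
assert math.isclose(avg_100, avg_of_two_50s)
```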

For N=2

13:41:59
Adjusted For Error Average Distance: 764 mm

13:42:00
Adjusted For Error Average Distance: 746 mm

Average Average: 766 mm
Minimum Average: 757 mm
Maximum Average: 774 mm
Std Dev Average: 9 mm
Three SD averages vs ave reading: 3.4 %

For N=10

13:44:02
Adjusted For Error Average Distance: 754 mm

13:44:04
Adjusted For Error Average Distance: 759 mm

Average Average: 767 mm
Minimum Average: 764 mm
Maximum Average: 770 mm
Std Dev Average: 3 mm
Three SD averages vs ave reading: 1.1 %

For N=50

13:49:40
Adjusted For Error Average Distance: 758 mm

13:49:50
Adjusted For Error Average Distance: 759 mm

Average Average: 769 mm
Minimum Average: 769 mm
Maximum Average: 770 mm
Std Dev Average: 0 mm
Three SD averages vs ave reading: 0.2 %

For N=100

Start: 13:44:15
Adjusted For Error Average Distance: 758 mm

13:44:34
Adjusted For Error Average Distance: 760 mm

Average Average: 769 mm
Minimum Average: 769 mm
Maximum Average: 770 mm
Std Dev Average: 1 mm
Three SD averages vs ave reading: 0.3 %
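The statistics in the logs above can be reproduced with a short sketch like the following. The `read_distance_mm()` function is a hypothetical stand-in for the actual sensor read, faked here with Gaussian noise around an assumed true distance:

```python
import random
import statistics

def read_distance_mm():
    # Hypothetical sensor read: Gaussian noise around an assumed
    # true distance of 769 mm with a 10 mm standard deviation.
    return random.gauss(769.0, 10.0)

def summarize_averages(n, sets=20):
    # Take `sets` batches of n readings each and summarize the batch averages,
    # in the same format as the logs above.
    averages = [statistics.mean(read_distance_mm() for _ in range(n))
                for _ in range(sets)]
    mean_avg = statistics.mean(averages)
    sd_avg = statistics.stdev(averages)
    print(f"For N={n}")
    print(f"Average Average: {mean_avg:.0f} mm")
    print(f"Minimum Average: {min(averages):.0f} mm")
    print(f"Maximum Average: {max(averages):.0f} mm")
    print(f"Std Dev Average: {sd_avg:.0f} mm")
    print(f"Three SD averages vs ave reading: {3 * sd_avg / mean_avg * 100:.1f} %\n")

for n in (2, 10, 50, 100):
    summarize_averages(n)
```

With real hardware you would replace `read_distance_mm()` with the actual sensor call; the summary logic is unchanged.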

Because the error in the average is typically +/- 3 * stddev(readings) / sqrt(n), whereas the error in the individual readings is +/- 3 * stddev(readings).

That is why you use an average: it reduces the error by the square root of the number of readings.
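A quick simulation (a sketch, assuming Gaussian per-reading noise with an arbitrary 10 mm standard deviation) shows the sqrt(n) behavior directly:

```python
import math
import random
import statistics

random.seed(42)
SIGMA = 10.0    # assumed per-reading noise, mm
TRUE_MM = 769.0 # assumed true distance, mm

for n in (1, 9, 100):
    # Standard deviation of the batch mean, measured over many repeated batches.
    means = [statistics.mean(random.gauss(TRUE_MM, SIGMA) for _ in range(n))
             for _ in range(2000)]
    empirical = statistics.stdev(means)
    predicted = SIGMA / math.sqrt(n)
    print(f"n={n:3d}: empirical sd of mean {empirical:5.2f} mm, "
          f"predicted sigma/sqrt(n) {predicted:5.2f} mm")
```

The empirical standard deviation of the batch means tracks sigma/sqrt(n): roughly 10 mm for single readings, about a third of that for batches of 9, and a tenth of it for batches of 100.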

So in the case of my distance sensor aimed at my painted walls, which shows single-reading errors of 1.5% to 6%, taking 9 readings and using the average will (usually) reduce the error by sqrt(9) = 3, to 0.5% to 2% or better.