Robot precision when turning on itself

What is the robot's precision when turning on itself?

I’ll explain:

I would like to make an exploration robot on the GoPiGo platform.
The main algorithm drives the GoPiGo around a map grid; it can only rotate 90 degrees left or right, and go forward.
With the motor trim adjusted, the GoPiGo keeps its direction fairly accurately with the fwd() command: only after 6-7 runs of 1 m each does it accumulate an error of 5-6 cm from the fwd() direction.
The main problem is with right_rot() and left_rot(), where the GoPiGo is randomly off by about 10 degrees.
In my mind, an encoder with 18 counts per revolution has an error of up to 1/18 of the wheel circumference.
So that is about (6.5 * 3.14)/18 ≈ 1.13 cm of linear error.
The circle drawn by the rotation of the robot is set by the distance between the wheels, which is 12 cm. So in degrees, the 1.13 cm of error on a circle of 12 cm diameter is (360/(12 * 3.14)) * 1.13 ≈ 9.55 * 1.13 ≈ 10.8 degrees of error.
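As a quick sanity check, here is the same calculation in Python (using the 6.5 cm wheel diameter, 18-count encoder, and 12 cm wheelbase stated above):

```python
import math

wheel_diameter = 6.5    # cm, GoPiGo wheel (as above)
counts_per_rev = 18     # encoder counts per wheel revolution
wheelbase = 12.0        # cm, distance between the wheels

# Worst case: a wheel stops one encoder count early or late
linear_error = wheel_diameter * math.pi / counts_per_rev

# That arc, taken on the turning circle whose diameter equals the
# wheelbase, corresponds to this many degrees of heading error
angular_error = linear_error * 360.0 / (wheelbase * math.pi)

print(f"linear error:  {linear_error:.2f} cm")   # ~1.13 cm
print(f"angular error: {angular_error:.1f} deg") # ~10.8 deg
```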

What accuracy do you get with it?
How can it be improved?

Thanks!

It is quite hard to steer a robot via wheel rotations alone. When you turn one wheel forward and one wheel backward, even the very smallest error (less than 1 degree) in how far each wheel turns translates into an error in the direction the robot is facing, multiplied by a factor of about 4. Normally what you need to do is increase turning accuracy by using a gyro. When driving in a straight line, the gyro will also help you monitor how accurately the robot holds its direction.

For both turning and driving in a straight line, you need a program that loops around reading the gyro heading and then adjusts the motor speeds to correct the direction and stay on the desired gyro heading :slight_smile:

This is what NASA does to keep its spacecraft tracking in the correct direction.
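A minimal sketch of such a loop, assuming the legacy gopigo Python library for the motor calls; read_heading() is a hypothetical placeholder for whatever your gyro driver provides:

```python
import time
from gopigo import fwd, stop, set_left_speed, set_right_speed

def read_heading():
    # Placeholder: return the current gyro heading in degrees.
    # Replace with a call into your actual gyro driver.
    return 0.0

def drive_straight(target_heading, duration, base_speed=150, kp=2.0):
    # Proportional controller: trim each wheel's speed in proportion
    # to the heading error, roughly 20 times per second.
    fwd()
    end = time.monotonic() + duration
    while time.monotonic() < end:
        error = target_heading - read_heading()
        set_left_speed(int(base_speed - kp * error))
        set_right_speed(int(base_speed + kp * error))
        time.sleep(0.05)
    stop()

drive_straight(target_heading=0.0, duration=5.0)
```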

Thanks.
So it is impossible to use only the encoders to get sufficient precision for matching against a map… Maybe the compass module can help me? What precision does it have? In your opinion, is it possible to get a precision of about 1 degree on a 90-degree rotation with the compass?
If I reach sufficient precision, I will include the GoPiGo in my bachelor thesis.

In my experience, trying to get accurate navigation using wheel encoders hasn’t produced good results. A gyro gets the best results, and that’s what is commonly used in robotics. A compass will only get a good reading when the robot stands still for a short time so the compass can settle. Turning and driving on the compass alone doesn’t usually produce good results either, as the compass tends to swing around a bit while moving.

Gyros also suffer from what’s known as gyro drift (google it), so the best systems use the gyro while moving; then, when the robot is still and the compass has settled, they use the compass to re-calibrate the gyro.
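A sketch of that re-calibration idea (illustrative only; read_compass() is a hypothetical placeholder for your magnetometer driver, and the 0/360-degree wrap-around is ignored for simplicity):

```python
import time

gyro_heading = 0.0    # heading integrated from the gyro, in degrees

def read_compass():
    # Placeholder: return the (noisy) compass heading in degrees.
    return 0.0

def recalibrate_when_still(settle_time=2.0, samples=20):
    # Wait for the compass to settle while the robot is stationary,
    # then average a few readings and reset the gyro heading to it.
    global gyro_heading
    time.sleep(settle_time)
    readings = [read_compass() for _ in range(samples)]
    gyro_heading = sum(readings) / len(readings)
```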

Thanks,
do you know if this example https://www.dexterindustries.com/GoPiGo/projects/python-examples-for-the-raspberry-pi/raspberry-pi-compass-guided-robot/ has sufficient accuracy? I would like to see a video of that compass-guided robot, to check whether it is sufficiently accurate over a couple of square meters…

Hi @gabriele.piscitelli,

Wheel encoders and a compass may not be sufficient.

You’re asking what the precision of this 2-sensor system (the compass & the encoders) is.
Well, precision is relative and depends on the task you’re trying to achieve.

If you only turn 3 or 4 times within a 200-square-meter area, then we can say the robot drifts from its destination point by just a small amount.
Things may be totally different if you have the robot turn 20-30 times within a 5-square-meter area.

So, it’s totally dependent on the application.


Just as @Shane.gingell has already explained, your best bet would be to pair your compass with a gyro and an accelerometer. With these 3 kinds of sensors you’re going to get the best precision you can get.

Gyros and accelerometers both drift over time and need to be compensated. Their advantage is that they are very precise over short periods of time.
A compass will not drift over time, but it picks up lots of “noise”, so you can think of this sensor as a “recalibrator” for the gyro and the accelerometer.


You can’t go wrong with an MPU-9250. It’s cheap and it’s very accurate. Here’s a link.
To connect it to your GoPiGo, adapt the MPU-9250 to use a Grove connector - you can buy one from Seeed - here’s a link.
Then connect it with a Grove cable to the GoPiGo's I2C port.

With this setting, you’ll have access to the MPU-9250 from within your Raspberry Pi.
Then download the advertised library from SparkFun and use a Kalman filter to get drift-free readings.
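To give an idea of what such a filter does, here is a minimal 1-D Kalman-style heading filter that fuses the gyro rate (prediction) with the compass (correction). This is a sketch only: read_gyro_z() and read_compass() are hypothetical placeholders for your MPU-9250 library calls, the noise constants are guesses, and the 0/360-degree wrap-around is ignored:

```python
import time

def read_gyro_z():
    # Placeholder: Z-axis rate in degrees/second from the IMU driver.
    return 0.0

def read_compass():
    # Placeholder: noisy compass heading in degrees.
    return 0.0

class HeadingFilter:
    def __init__(self, q=0.05, r=4.0):
        self.angle = read_compass()  # initial heading estimate, degrees
        self.p = 1.0                 # variance of the estimate
        self.q = q                   # process noise (gyro drift)
        self.r = r                   # measurement noise (compass jitter)

    def update(self, dt):
        # Predict: integrate the gyro rate over the elapsed time.
        self.angle += read_gyro_z() * dt
        self.p += self.q * dt
        # Correct: blend in the noisy compass reading.
        k = self.p / (self.p + self.r)   # Kalman gain
        self.angle += k * (read_compass() - self.angle)
        self.p *= 1.0 - k
        return self.angle

f = HeadingFilter()
last = time.monotonic()
while True:
    now = time.monotonic()
    heading = f.update(now - last)
    last = now
    time.sleep(0.02)
```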

That’s all.


Please let me know if there’s anything new.

Thank you!

Thanks @RobertLucian

I previously used the MPU-9250 with a Raspberry Pi Zero on another chassis, like a tank, but without good results. I did not apply any filter to the raw data from the MPU-9250 and used only the [FaBo library](http://fabo.io/202.html) to extract the raw data. I attributed the error in the angle reading to the non-real-time OS of the Raspberry Pi, because the raw data is in °/sec and has to be multiplied by the time delta to obtain the measured angle.

If I understand correctly, the I2C on the GoPiGo is connected directly to the Raspberry Pi. So in my mind, the measured angle is affected by an error due to the non-deterministic time delta.
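For what it’s worth, the usual workaround on a non-real-time OS is to measure the actual elapsed time between samples instead of assuming a fixed period, so scheduling jitter mostly drops out of the integration. A sketch (read_gyro_z() is a hypothetical placeholder for the FaBo/MPU-9250 driver call):

```python
import time

def read_gyro_z():
    # Placeholder: Z-axis rate in degrees/second from the IMU driver.
    return 0.0

angle = 0.0
last = time.monotonic()
while True:
    rate = read_gyro_z()
    now = time.monotonic()
    angle += rate * (now - last)   # (deg/s) * measured seconds = degrees
    last = now
    time.sleep(0.01)               # aim for roughly 100 Hz sampling
```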

In my mind, the best way would be to do something like the GoPiGo encoder API.
Would it be possible to manage the MPU-9250 directly with the ATmega328 microcontroller and have two APIs like this?

set_target_MPU_angle(90)
right()

Another idea would be to use the ATmega328 as a clock generator: it could produce the time delta more accurately and send an interrupt to the Raspberry Pi.

Thanks a lot for support!

Hi @gabriele.piscitelli,

Managing the MPU-9250 from the GoPiGo board would be inefficient.
It’s possible, but it’s going to add lots of overhead and issues on the I2C line if you choose to go this way.

The best thing you can do (and the fastest) is to have 2 threads:

  1. One thread deals with gathering and processing data from the IMU with the Raspberry Pi.

  2. The other one adjusts the GoPiGo robot depending on what we measure with the IMU.

This is a rough idea of how the code should work.
But in principle, that’s how it should be structured.
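A minimal sketch of that two-thread structure, assuming the legacy gopigo Python library for the motor calls; read_gyro_z() is a hypothetical placeholder for the IMU driver:

```python
import threading
import time
from gopigo import set_left_speed, set_right_speed

def read_gyro_z():
    # Placeholder: Z-axis rate in degrees/second from the IMU driver.
    return 0.0

heading = 0.0               # shared heading estimate, in degrees
lock = threading.Lock()

def imu_worker():
    # Thread 1: gather and integrate IMU data.
    global heading
    last = time.monotonic()
    while True:
        rate = read_gyro_z()
        now = time.monotonic()
        with lock:
            heading += rate * (now - last)
        last = now
        time.sleep(0.01)

def drive_worker(target, base_speed=150, kp=2.0):
    # Thread 2: adjust the GoPiGo based on the measured heading.
    while True:
        with lock:
            error = target - heading
        set_left_speed(int(base_speed - kp * error))
        set_right_speed(int(base_speed + kp * error))
        time.sleep(0.05)

threading.Thread(target=imu_worker, daemon=True).start()
threading.Thread(target=drive_worker, args=(90.0,), daemon=True).start()

while True:                 # keep the main thread alive
    time.sleep(1.0)
```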

Thank you!

Hi… I am a new user here. As far as I know, when you turn one wheel forward and one wheel backward, even the smallest error in how far each wheel turns translates into an error in the direction the robot is facing, multiplied by a factor of about 4. Normally what you need to do is increase turning accuracy by using a gyro.



It is error-prone. If you gear it down it’s more accurate, but even then… it’s still awful!

Thanks to all,

@LulaNord @Shane.gingell
I have a curiosity: what is that multiplier factor of 4 about? How is it calculated?

thanks!

Excuse me for being negative for a sec. To fix the imprecise encoder information, the proposal is to add a much more precise gyro for short-term accuracy; to fix the gyro drift, the suggestion is to add a compass; and to fix the compass, to add some “dwell time”. It sounds “traditional” and limiting.

Humans navigate very well without accurate or precise location, using only vision (with touch for safety-limiting the drive, and for local precision).

The Pi camera can substitute for a gyro, for encoders, for a distance sensor, for a light sensor, for a line-following sensor, for a motion detector, and more. It keeps the hardware simple and fixed at birth, while allowing new “learned” behaviors up to the limits of the imagination of the robot’s parent.

Thesis research was mentioned. Adding a PiCam and a ROS interface layer would allow a DI robot to be a resource at the forefront of robotics learning.

Hi @cyclicalobsessive,

First of all, the compass doesn’t drift, so the chain ends with that sensor.
It’s a good idea. Yes, it’s complex, but it works really well.

Second of all, it might also be a good idea to use the camera for driving the robot around and skip the encoders / IMU.
And the needed algorithms are already included in ROS’s software framework.

A problem may arise when it’s completely dark and the camera can’t sense anything.
I think an IR camera would do the trick, but then we’d end up with 2 cameras on-board.
Then again, it depends on the application - there aren’t that many users who drive a GoPiGo in the dark.

Have you been working with the ROS framework so far?

Thank you!

@RobertLucian asked:

Have you been working with the ROS framework so far?

I decided ROS is too much for me at present. With the GoPiGo + PiCam + us_dist() + servo() robot, I want to recreate some of my RugWarrior bots and Braitenberg vehicle bots, which will educate me on OpenCV and on creating “sensors” from PiCam frame analysis.

Eventually, I expect to outgrow one RPi3 and will integrate ROS to allow my bot to be a mobile sensor platform with one or more additional RPis for specific functions - but that will not be for several years, I expect.

Alan

Hi @cyclicalobsessive,

Thanks for the input.
This seems to be a project that’s going to take a while: I like this kind of project.

I also saw you’ve started a new topic about your GoRWPiGo.
That is one nice project and I’m curious to see how it evolves.

Please keep us updated with any of your advancements.

Thank you!

Hopefully a few things here to help you out.

First, the human brain is super complex and capable of some sheer amazing things, and it is quite often a goal of robotics to get a robot to do what a human does. Teams of some of the world’s most intelligent engineers quite often take decades to get robots to complete simple human tasks.

I have built computer-vision robots where the robot drives around based on the video feed from the Pi Cam, and you will find this is way more complex than using a combo of wheel encoders, gyro, and compass. Those 3 sensors produce only a few bytes of info to process, whereas each frame from the Pi Cam has 3 bytes of info for every pixel. So even at a low resolution like 320x200, that’s 64,000 pixels x 3 bytes per pixel for a total of 192,000 bytes of info, and extracting the info you need (where obstacles are and where you need to go) is some super complex programming.
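In code form, the same frame-size arithmetic:

```python
width, height = 320, 200        # low-resolution frame
bytes_per_pixel = 3             # one byte each for R, G, B

pixels = width * height                  # 64,000 pixels
frame_bytes = pixels * bytes_per_pixel   # 192,000 bytes per frame
print(pixels, frame_bytes)
```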

The Google car uses over 20 sensors to achieve autonomous driving.

If you look on the internet, SLAM (simultaneous localization and mapping) is the most commonly used high-performance navigation technique; it uses lidar (a rotating laser that works like radar) plus a gyro and a compass. Google it.

The easiest by far to get started with is using wheel encoders to measure distance traveled and a gyro to maintain direction and to turn accurately. I coach a high school robotics team in a number of competitions, and this is the common method used by all the teams.
