Choosing An IMU? Fusion Data? Quaternions all over the floor!

I am a neophyte to the whole subject of localization using inertial measurements (3 accelerometers, 3 gyros, [and 3 magnetometers]). For years I have seen books, papers, and university-level courses on how hard the problem is, with the result that I have generally felt the whole topic to be beyond me. Additionally, since none of my prior robots could use heading, location, or “known objects” to any advantage, I have not been inclined to invest in the hardware to learn what can and cannot be accomplished by a “non-research” robot.

Obviously, one way to start would be the DI IMU.

The DI IMU has 9DOF measurement, with 16-bit gyros, 14-bit accelerometers, ?-bit magnetometers, and includes a 100Hz “IMU fusion processor,” which I would hope makes it easy to define a [0x, 0y, 0z, 0heading], drive a little, stop, and then obtain the latest [x, y, z, heading] “pose.” (It seems to be much more capable than that, but that is the level of my thinking at this point.)

But even before that, I would need to purchase some IMU, and I don’t know how to compare the DI IMU with other apparently similar units, such as the Seeedstudio Grove IMU 9DOF v2.0, available at slightly less than half the price of the DI IMU.

The Seeedstudio unit is based on the MPU-9150 chip, which purports to have 16-bit accelerometers, 16-bit gyros, and 13-bit magnetometers, claims to have a “fusion processor,” and claims higher maximum sample rates (256Hz filtered). I can’t tell whether that “fusion processor” is merely managing a configurable FIFO queue of individual sensor readings, or is also performing some advanced calculation or filtering.

The DI IMU would appear to have an “it’s our product” support advantage.

Are there other advantages of the DI IMU?

Other DI IMU Questions:

  1. I cannot figure out from the sample program for the DI IMU sensor how to write a “define zero, drive some, print pose” program, or the more interesting “define zero, drive some, display an [x,y] vs. time map.” Any hints where to start? (A sketch of what I am aiming for follows this list.)
  2. Don’t the motors/encoders make magnetometer readings worthless?
  3. To get path data, I should use the BNO055 IMU Operating mode and fetch Fusion data, correct? And what is “Fusion data”?
  4. Are the quaternion and Euler angles the “fusion data”?
  5. Is the fusion simply averaging between the sensors, or something better?
  6. Can you point me to something to learn how to convert “fusion data” to [x, y, headingXY]? (The sketch below is my rough attempt.)
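
To make questions 1 and 6 concrete, here is the kind of thing I am hoping to write: a rough, untested sketch that zeroes the heading, drives a little, and logs a dead-reckoned [x, y] track by combining encoder distance with the IMU’s fused heading. It assumes the DI_Sensors and GoPiGo3 Python libraries, and the method names are from my reading of the docs, so they may well be wrong.

```python
import math
import time

from di_sensors.inertial_measurement_unit import InertialMeasurementUnit
from easygopigo3 import EasyGoPiGo3

gpg = EasyGoPiGo3()
imu = InertialMeasurementUnit(bus="GPG3_AD1")   # IMU plugged into the AD1 port

def distance_mm():
    # Average of both wheel encoders (in degrees), converted to mm travelled.
    left = gpg.get_motor_encoder(gpg.MOTOR_LEFT)
    right = gpg.get_motor_encoder(gpg.MOTOR_RIGHT)
    return (left + right) / 2.0 / 360.0 * gpg.WHEEL_CIRCUMFERENCE

# "Define zero": remember the starting fused heading and encoder distance.
heading0 = imu.read_euler()[0]        # read_euler() -> (heading, roll, pitch)
last_mm = distance_mm()
x, y = 0.0, 0.0

gpg.forward()                         # "drive a little"
for i in range(20):                   # ~2 seconds of 10 Hz samples
    time.sleep(0.1)
    heading = math.radians(imu.read_euler()[0] - heading0)
    now_mm = distance_mm()
    step = now_mm - last_mm           # distance covered this sample
    last_mm = now_mm
    x += step * math.cos(heading)     # integrate the step along the heading
    y += step * math.sin(heading)
    print("t=%.1fs  x=%.0fmm  y=%.0fmm  heading=%.0fdeg"
          % ((i + 1) * 0.1, x, y, math.degrees(heading)))
gpg.stop()
```

Dumping those printed lines to a file would give the [x, y] vs. time data for a plot.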

And perhaps the most important question: Is localization with any IMU, indeed, too complex for a home robot?

I found this example from a university discussion on SLAM that makes me think this is hopeless. I can clearly see the math is beyond me, and it looks like the raw data out of the IMU is nowhere near useful without my robot having a Ph.D. in math:

Found a phenomenal 90-page introduction and analysis (university level, but the first and last sentences of some paragraphs are understandable):

Manon Kok, Jeroen D. Hol and Thomas B. Schön (2017), “Using Inertial Sensors for Position and Orientation Estimation”, Foundations and Trends in Signal Processing: Vol. 11: No. 1-2, pp 1-153. http://dx.doi.org/10.1561/2000000094

My conclusions from the paper:

  1. IMU heading error quickly becomes 10-30 degrees (a back-of-envelope check follows this list)
  2. By itself, no IMU can give useful path and heading.
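
For conclusion 1, a back-of-envelope check shows why: an uncompensated gyro bias integrates into heading error linearly with time. The bias figure below is purely an assumption on my part, though plausible for consumer MEMS gyros.

```python
# Hypothetical numbers, only to illustrate the drift mechanism:
bias_deg_per_s = 0.1                 # assumed uncompensated gyro bias
for t in (10, 60, 300):              # seconds of dead reckoning
    print("after %3d s: %4.0f deg heading error" % (t, bias_deg_per_s * t))
# -> 1, 6, 30 degrees: the 10-30 degree range arrives within minutes
```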

And my conclusions from this mental excursion:

  1. Our body also suffers from the same rapid “eyes closed” error accumulation
  2. Brooks is correct that environmental references are stronger than symbolic representations (world coordinates), for the robot mobility domain.
  3. Vision will be the most powerful of the sensors my robot already has (encoders, TOF distance sensor, microphone, and camera)
  4. Robotics is (still) hard!

Another concern: the BNO055 seems to require I2C clock stretching, which may mean using DI software I2C, and my GoPiGo3 robot / DI distance sensor combination did not tolerate that well.

Is it possible to run the DI IMU on software I2C while simultaneously running the DI distance sensor on hardware I2C?

(The August 2019 MagPi features an article using a BNO055 IMU, and an Adafruit gyroscope comparison rated the sister BMI055 highly; together these renewed my curiosity about the DI IMU.)

Yes, I think so, although we know there is a bug with it stemming from the Linux kernel. Either way, our official position is that we support that configuration.

As far as the math required for all of this goes: yes, you need lots of it. I like to think that one doesn’t have to despair about it, and can instead just focus on learning it without dwelling on how much there is. That’s my 2 cents.


No. There’s an inherent “noise” in measuring the GoPiGo3’s trajectory from the encoders alone. The encoders are really precise over short periods of time while the magnetometer is precise over long ones (and each is imprecise where the other is precise), so the magnetometer complements the encoders rather than being made worthless by them.

If I remember correctly, yes, the quaternion and Euler angles are the “fusion data.” One quick way to confirm is to check whether they stay stable while being read.
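
For anyone wondering how to get a heading out of that quaternion, the standard ZYX (yaw) conversion is short. A sketch, assuming the components come in (x, y, z, w) order; verify the ordering your library returns:

```python
import math

def quaternion_to_heading(x, y, z, w):
    """Yaw (rotation about z) of a unit quaternion, ZYX Euler convention."""
    return math.degrees(math.atan2(2.0 * (w * z + x * y),
                                   1.0 - 2.0 * (y * y + z * z)))

# A 90-degree turn about z is the quaternion (0, 0, sin 45, cos 45):
print(quaternion_to_heading(0.0, 0.0, 0.7071, 0.7071))   # ~90.0
```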

Far from it. The usual algorithms employed are Kalman filters, complementary filters, band-pass filters, or a combination of these. The rate at which you can fetch the fused data on the DI IMU is 100Hz.
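
To give a feel for the simplest of those, here is a minimal complementary-filter step for heading; the 0.98 weight is an arbitrary illustration, not the DI IMU’s actual tuning:

```python
def complementary_heading(heading_deg, gyro_z_dps, mag_heading_deg, dt, alpha=0.98):
    """One filter step: trust the integrated gyro short-term (alpha) and
    let the slow-but-stable magnetometer pull the estimate long-term.
    Note: 0/360-degree wrap-around is deliberately ignored in this sketch."""
    gyro_estimate = heading_deg + gyro_z_dps * dt    # dead-reckoned step
    return alpha * gyro_estimate + (1.0 - alpha) * mag_heading_deg
```

Called at, say, 100Hz, the gyro term dominates moment to moment while the magnetometer term slowly bleeds out the accumulated gyro drift.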

In this type of task, it’s generally not enough to rely on inertial measurements alone. You also need a type of sensor that gives you details about your position in a 3D environment: think of motion-capture devices or GPS. You could do it without that, but it gets way more complex. See your own conclusion: “Brooks is correct that environmental references are stronger than symbolic representations (world coordinates), for the robot mobility domain.”

That’s what Elon Musk thinks too.

And I’m still looking in awe at what Boston Dynamics has recently done.

Or you could just run the IMU from an AD port on the GoPiGo3. It’s much simpler and hassle-free. The software-implemented I2C should do its job too, minus that bug.
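
Concretely, something like this should let the two run side by side. This is a sketch assuming the DI_Sensors Python class names from memory, so double-check them against the library:

```python
from di_sensors.inertial_measurement_unit import InertialMeasurementUnit
from di_sensors.easy_distance_sensor import EasyDistanceSensor

imu = InertialMeasurementUnit(bus="GPG3_AD1")   # AD1 port: bypasses the Pi's I2C
ds = EasyDistanceSensor()                       # hardware I2C on the Pi

heading, roll, pitch = imu.read_euler()
print("heading %.0f deg, obstacle at %d mm" % (heading, ds.read_mm()))
```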
