Choosing An IMU? Fusion Data? Quaternions all over the floor!

I am a neophyte to the whole subject of localization using inertial measurements (3 accelerometers, 3 gyros, [and 3 magnetometers]). For years I have seen books, papers, and university-level courses on how hard the problem is, with the result that I have generally felt the whole topic to be beyond me. Additionally, since none of my prior robots have been able to use heading, location, or “known objects” to any advantage, I have not been inclined to invest in the hardware to learn what can and cannot be accomplished by a “non-research” robot.

Obviously, one way to start would be the [u]DI IMU[/u].

The DI IMU has 9DOF measurement, with 16-bit gyros, 14-bit accelerometers, ?-bit magnetometers, and includes a 100 Hz “IMU fusion processor” which I would hope could make it easy to define a [0x, 0y, 0z, 0heading], drive a little, stop, and then obtain the latest [x, y, z, heading] “pose.” (It seems to be much more capable than that, but that is the level of my thinking at this point.)

But even before that, I would need to purchase some IMU, and I don’t know how to compare the DI IMU with other apparently similar units, such as the [u]Seeedstudio Grove IMU 9DOF v2.0[/u], available at slightly less than half the price of the DI IMU.

The Seeedstudio unit is based on the MPU-9150 chip, which purports to have 16-bit accelerometers, 16-bit gyros, and 13-bit magnetometers, claims to have a “fusion processor,” and claims higher maximum sample rates (256 Hz filtered). I can’t tell whether all the “fusion processor” actually does is manage a configurable FIFO queue of individual sensor readings, or whether it also performs some advanced calculations or filtering.

The DI IMU would appear to have an “it’s our product” support advantage.

Are there other advantages of the DI IMU?

Other DI IMU Questions:

  1. I cannot figure out from the [u]sample program[/u] for the DI IMU sensor how to program a “define zero, drive some, print pose”, and the more interesting: “define zero, drive some, display an [x,y] vs time map.” Any hints where to start? (A rough sketch of what I’m imagining is below this list.)
  2. Don’t the motors/encoders make magnetometer readings worthless?
  3. To get path data, I should use the BNO055 “IMU” operating mode and fetch fusion data, correct? And what is “fusion data,” exactly?
  4. Are the quaternion and Euler angles the “fusion data”?
  5. Is the fusion simply averaging between the sensors, or something better?
  6. Can you point me to something to learn how to convert “fusion data” to [x, y, headingXY]?
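
For what it’s worth, here is the kind of flow I am imagining for question 1, just to make it concrete. I am guessing at the di_sensors / easygopigo3 calls (InertialMeasurementUnit, read_euler(), drive_cm()) from the sample program, so treat it as a sketch of intent, not working code:

```python
# Sketch only: "define zero, drive some, print pose" as I imagine it.
# Assumes InertialMeasurementUnit.read_euler() returns (heading, roll, pitch)
# in degrees and that EasyGoPiGo3.drive_cm() exists -- both are guesses.
import math
from di_sensors.inertial_measurement_unit import InertialMeasurementUnit
from easygopigo3 import EasyGoPiGo3

imu = InertialMeasurementUnit(bus="GPG3_AD1")   # IMU on the AD1 port (assumption)
egpg = EasyGoPiGo3()

zero_heading, _, _ = imu.read_euler()            # "define zero"

egpg.drive_cm(30)                                # "drive some"

heading, _, _ = imu.read_euler()
rel_heading = (heading - zero_heading) % 360.0   # heading relative to the start

# Position from the IMU alone is the part I don't understand; here I cheat
# and combine the commanded distance with the IMU heading for a first [x, y].
distance_cm = 30.0
x = distance_cm * math.cos(math.radians(rel_heading))
y = distance_cm * math.sin(math.radians(rel_heading))

print("pose estimate: x={:.1f} cm  y={:.1f} cm  heading={:.1f} deg".format(
    x, y, rel_heading))
```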

And perhaps the most important question: Is localization with any IMU, indeed, too complex for a home robot?

I found this example from a [u]university discussion on SLAM[/u] that makes me think this is hopeless. I can clearly see the math is beyond me, and it looks like the raw data out of the IMU is nowhere near useful without my robot having a Ph.D. in math:
[Image: Extended Kalman Filter example from the SLAM discussion]


Found a phenomenal 90-page introduction and analysis (university level, but the first and last sentences of some paragraphs are understandable):

Manon Kok, Jeroen D. Hol and Thomas B. Schön (2017), ”Using Inertial Sensors for Position and Orientation Estimation”, Foundations and Trends in Signal Processing: Vol. 11: No. 1-2, pp. 1-153. http://dx.doi.org/10.1561/2000000094

My conclusions from the paper:

  1. IMU heading error quickly becomes 10-30 degrees
  2. By itself, no IMU can give a useful path and heading. (A back-of-the-envelope calculation below shows why.)
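
To convince myself of conclusion 2, I did a back-of-the-envelope calculation: a small residual gyro bias integrates into a heading error that grows linearly with time, and a residual accelerometer bias double-integrates into a position error that grows with time squared. The numbers below are my own illustrative guesses, not values from the paper:

```python
# Back-of-the-envelope drift with made-up (but MEMS-plausible) residual biases.
gyro_bias_dps = 0.05      # leftover gyro bias after calibration, deg/s (illustrative)
accel_bias_ms2 = 0.02     # leftover accelerometer bias, m/s^2 (illustrative)

for t in (10, 60, 300):   # seconds of dead reckoning
    heading_err_deg = gyro_bias_dps * t            # one integration: grows with t
    position_err_m = 0.5 * accel_bias_ms2 * t**2   # two integrations: grows with t^2
    print(f"after {t:3d} s: heading error ~{heading_err_deg:4.1f} deg, "
          f"position error ~{position_err_m:6.1f} m")
```

Even with those optimistic numbers, the position estimate is off by hundreds of meters after five minutes, which matches the paper’s point that an IMU alone cannot give a useful path.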

And my conclusions from this mental excursion:

  1. Our bodies also suffer from the same rapid “eyes closed” error accumulation.
  2. Brooks is correct that environmental references are stronger than symbolic representations (world coordinates), for the robot mobility domain.
  3. Vision will be the most powerful sensor my robot already has, among the encoders, TOF distance sensor, microphone, and camera.
  4. Robotics is (still) hard!

Another concern: the BNO055 seems to require I2C clock stretching, which may mean using DI’s software I2C, something my GoPiGo3 robot / DI distance sensor combination did not tolerate well.

Is it possible to run the DI IMU over software I2C while simultaneously running the DI distance sensor over hardware I2C?

(The August 2019 MagPi features an article using a BNO055 IMU, and an Adafruit gyroscope comparison rated the sister BMI055 highly; the combination renewed my curiosity about the DI IMU.)


Yes, I think so, although we know there is a bug with it stemming from the Linux kernel. Either way, our official position is that we support it.

As far as the math required for these things, yes, you need lots of it. I like to think that one doesn’t have to despair about it; just focus on learning it without dwelling on how much there is. That’s my 2 cents.


No. There’s inherent “noise” in measuring the trajectory of the GoPiGo3 from the encoders. The encoders are really precise over short periods of time while the magnetometer is precise over long ones, and each is weak where the other is strong.
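
A rough sketch of how that trade-off gets exploited in practice is a complementary filter: follow the encoder-derived heading change step by step, and let the magnetometer slowly pull the estimate back toward an absolute reference. The blend factor and function arguments below are made up for illustration:

```python
# Complementary-filter idea (illustrative): encoders are trusted for the
# short-term change, the magnetometer for the long-term absolute heading.
ALPHA = 0.98   # fraction of trust given to the encoders each step (made up)

def fuse_heading(previous_estimate, encoder_delta_heading, magnetometer_heading):
    """Return an updated heading estimate in degrees, wrapped to [0, 360)."""
    short_term = previous_estimate + encoder_delta_heading   # precise now, drifts later
    # Correct toward the magnetometer using the smallest signed angle difference,
    # so 359 degrees and 1 degree average near 0, not 180.
    error = ((magnetometer_heading - short_term + 180.0) % 360.0) - 180.0
    return (short_term + (1.0 - ALPHA) * error) % 360.0
```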

If I remember correctly, yes. One quick way to determine that is to see whether the magnetometer readings stay stable while they are being read.

Far from it. The usual algorithms employed are Kalman filters, complementary filters, band-pass filters, or a combination of all of these. The rate at which you can fetch the fused data on the DI IMU is 100 Hz.
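
To give a feel for what a Kalman filter does beyond averaging, here is a toy one-dimensional version for heading: the gyro rate drives the prediction, the magnetometer corrects it, and the filter continuously re-weights the two based on how uncertain it currently is. The noise values are illustrative, not tuned, and angle wrap-around is ignored for brevity:

```python
# Toy scalar Kalman filter for heading (illustrative values, wrap-around ignored).
class HeadingKalman:
    def __init__(self, q=0.01, r=4.0):
        self.heading = 0.0   # current estimate, degrees
        self.p = 1.0         # variance of the estimate
        self.q = q           # process noise: how much the gyro prediction is distrusted
        self.r = r           # measurement noise: how much the magnetometer is distrusted

    def predict(self, gyro_rate_dps, dt):
        """Propagate the estimate using the gyro; uncertainty grows."""
        self.heading += gyro_rate_dps * dt
        self.p += self.q

    def correct(self, mag_heading_deg):
        """Blend in a magnetometer reading; uncertainty shrinks."""
        k = self.p / (self.p + self.r)   # Kalman gain: 0 = ignore mag, 1 = trust it fully
        self.heading += k * (mag_heading_deg - self.heading)
        self.p *= (1.0 - k)
```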

For this type of task, relying on inertial measurements alone is generally not enough. You also need a sensor that gives you information about your position in the 3D environment, such as a motion-capture system or GPS. You could do it without that, except that it gets far more complex. See your own comment: “Brooks is correct that environmental references are stronger than symbolic representations (world coordinates), for the robot mobility domain.”

That’s what Elon Musk thinks too.

And I’m still looking in awe at what Boston Dynamics has recently done.

Or you could just use the analog port on the GoPiGo3. It’s much simpler and hassle-free, although the software-implemented I2C should do its job too, minus that bug.
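
In code, running the two sensors on different buses would look something like this; the bus name "GPG3_AD1" and the EasyDistanceSensor default are from memory, so check them against the DI_Sensors documentation:

```python
# Sketch: IMU on the GoPiGo3 AD1 port, distance sensor on the Pi's hardware I2C.
# Bus names are from memory -- verify against the DI_Sensors docs.
from di_sensors.inertial_measurement_unit import InertialMeasurementUnit
from di_sensors.easy_distance_sensor import EasyDistanceSensor

imu = InertialMeasurementUnit(bus="GPG3_AD1")   # AD port, tolerant of clock stretching
ds = EasyDistanceSensor()                       # defaults to the Pi's hardware I2C

heading, roll, pitch = imu.read_euler()
print("heading: {:.1f} deg, distance: {} mm".format(heading, ds.read_mm()))
```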


At the risk of resurrecting a dead horse, I found an interesting article on OZZMAKER about making a balancing robot and how to read IMU data to get the best accuracy and precision.

Viz.:

The paragraph that really begins to tell the tale is in the section titled “Theory” and goes like this:

The basics behind a balancing robot are based on the inverted pendulum concept. The goal is to have a control algorithm called Proportional-Integral-Derivative (PID) keep the robot balanced by trying to keep the wheels under the center of gravity. E.g., if the robot leans forward, the wheels spin forward, trying to correct the lean.

One axis of an accelerometer and one axis of a gyroscope are used to measure the current angle and the rate of rotation. A well-timed loop is needed to keep track of everything.
Calculations are then done to provide power via PWM to the motors, in the right direction, to keep the robot upright.

The read is actually less chewy than I would have expected, and the math is remarkably trivial.
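
To show just how trivial, here is my reading of the core loop boiled down. The filter constant, the PID gains, and the read_accel_angle() / read_gyro_rate() / set_motor_pwm() helpers are all placeholders I made up to stand in for the real IMU and motor calls:

```python
# My boiled-down reading of the balancing loop: complementary filter + PID.
# All constants and the three helper functions are placeholders.
import time

def read_accel_angle():
    """Placeholder: lean angle (deg) from one accelerometer axis."""
    return 0.0

def read_gyro_rate():
    """Placeholder: rotation rate (deg/s) from one gyro axis."""
    return 0.0

def set_motor_pwm(power):
    """Placeholder: drive both motors; the sign sets the direction."""
    print("motor power: {:+.1f}".format(power))

KP, KI, KD = 20.0, 0.5, 1.0   # PID gains (must be tuned on the real robot)
ALPHA = 0.98                  # complementary filter constant
DT = 0.01                     # 10 ms loop, i.e. 100 Hz

angle = 0.0                   # estimated lean angle, degrees
integral = 0.0
last_error = 0.0

while True:
    # One gyro axis gives a smooth but drifting rate; one accelerometer axis
    # gives a noisy but absolute angle.  Blend them (complementary filter).
    angle = ALPHA * (angle + read_gyro_rate() * DT) + (1 - ALPHA) * read_accel_angle()

    # PID on the lean angle; the setpoint is 0 degrees (upright).
    error = 0.0 - angle
    integral += error * DT
    derivative = (error - last_error) / DT
    last_error = error

    set_motor_pwm(KP * error + KI * integral + KD * derivative)
    time.sleep(DT)
```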

I suspect that with the appropriate modifications, (he’s using a different IMU and only using one axis from both the accelerometer and gyro, but the theory and calculations should be very similar if not exactly the same), this could be used for accurate navigation.

It might be worth a read.

What say ye?

Jim “JR”

Re: Balancing Robots

Yes, I have read that and other IMU examples, and came to the conclusion that deriving instantaneous (short interval) rotational data is easy, but maintaining accurate (enough) “pose” over time will require fusing IMU and vision data, and maintaining accurate (enough) “localization” requires either IMU and vision, or IMU and LIDAR.

At the least, not finding ANY easy-to-understand-and-reproduce IMU+encoder tracking articles suggests to me that it must be too hard for my feeble brain.


At least YOU have the software chops to program Carl to do things Charlie is still dreaming about.

I, unlike all you “College” trained guys, am what they call a “mustang” in the Navy - I came “up through the ranks”.

My expertise is primarily hardware based. I became involved with software in the late ’70s and early ’80s, when hardware started moving into software-based devices. I cut my teeth on 6502 and 8080/8085 assembly language.

I have several books on “Teach Yourself [c, c++, c#, Fortran, COBOL, Python, Java (several flavors, with and without milk), etc.]”

Every time I sit down with one of them I either end up with a splitting headache or bored silly.

Python is a classic example of a splitting headache: it has a clearly defined “Zen” and a style guide you ignore at your peril. Just when you think you’ve gotten your arms around it, there are a zillion bonzoid rules that totally ignore the established plan.

Classes and methods are a prime example:
I’ve seen so many flames on so many fora about the DRY rule (Don’t Repeat Yourself), and then you get to classes and you spend more time writing “self” than you do coding!

As for me, I don’t worry about who does or doesn’t have “soul”. If I’m having fun trying to get Charlie to do something, then he’s got all the “soul” he needs. And if there’s ever any doubt, I’ll fit a sound card and have him play Dionne Warwick! :wink:


Thank you for the compliment, but (cue two guys trying to prove who missed that class…):

Me: Mechanical Engineer “trained” and 1st job was designing heat sinks for a computer. Taught self 8080 microprocessor architecture and assembly language. Now: Can program assignment, test, and goto in any language, but cannot program anything with Greek letters.

It seems smart robots think in Greek.
