Get_motor_status() and get_motor_encoder(): weird timings?

I ran across a very puzzling increase in the motor and encoder status function timings as the delay between calls goes up.

The “normal average” time for two get_motor_status() calls (left + right) seems to be around 110us, and the “normal average” time for two get_motor_encoder() calls seems to be around 100us.

If I insert successively longer delays between each set of two calls, the time for the (left+right) calls goes up to around 300us.

Any ideas what is going on with these weird timings?

(BTW, I also tested repeated runs with the delay fixed at 0.010 sec; no increase in timings.)

Here is my test code: timeMotorStatus.py and timeGetMotorEncoder.py
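For context, here is a minimal sketch of the kind of timing loop those scripts run. This is an illustration only, not the actual file contents: the EasyGoPiGo3 / get_motor_encoder() API calls are real, but the loop structure, iteration count, and delay list are my assumptions.

"""
#!/usr/bin/env python3
# Illustrative sketch only - not the actual timeGetMotorEncoder.py.
# Loop structure, iteration count, and delay list are assumptions.
import time
import easygopigo3

egpg = easygopigo3.EasyGoPiGo3()

def avg_encoder_pair_us(delay_s, iterations=100):
    # Average wall-clock time of one left+right encoder read pair,
    # sleeping delay_s between pairs.
    total = 0.0
    for _ in range(iterations):
        start = time.perf_counter()
        egpg.get_motor_encoder(egpg.MOTOR_LEFT)
        egpg.get_motor_encoder(egpg.MOTOR_RIGHT)
        total += time.perf_counter() - start
        time.sleep(delay_s)
    return total / iterations * 1e6   # microseconds

for delay_ms in (1, 5, 10, 100, 500, 1000, 2000):
    print("Delay between readings: {}ms".format(delay_ms))
    print("2x get_motor_encoder() Average Time: {:.0f} us".format(
        avg_encoder_pair_us(delay_ms / 1000.0)))
"""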

Timings for get_motor_encoder()

“”"
pi@Carl : ~/Carl/systests/motors $ ./timeGetMotorEncoder.py

Timing get_motor_encoder().

Timing: 0.000113

Timing: 0.000098

Timing: 0.000096

Timing: 0.000091

Timing: 0.000097

Timing: 0.000092

Timing: 0.000090

Timing: 0.000092

Timing: 0.000103

Timing: 0.000092

Delay between readings: 1ms

2x get_motor_encoder() Average Time: 96 us

Delay between readings: 5ms

2x get_motor_encoder() Average Time: 99 us

Delay between readings: 10ms

2x get_motor_encoder() Average Time: 106 us

Delay between readings: 100ms

2x get_motor_encoder() Average Time: 140 us

Delay between readings: 500ms

2x get_motor_encoder() Average Time: 298 us

Delay between readings: 1000ms

2x get_motor_encoder() Average Time: 301 us

Delay between readings: 2000ms

2x get_motor_encoder() Average Time: 302 us

“”"

Timings for get_motor_status()

“”"
Timing get_motor_status().

Timing: 0.000128

Timing: 0.000115

Timing: 0.000109

Timing: 0.000114

Timing: 0.000113

Timing: 0.000108

Timing: 0.000108

Timing: 0.000108

Timing: 0.000114

Timing: 0.000108

Delay between readings: 5ms

2x get_motor_status() Average Time: 112 us

Delay between readings: 10ms

2x get_motor_status() Average Time: 116 us

Delay between readings: 100ms

2x get_motor_status() Average Time: 140 us

Delay between readings: 500ms

2x get_motor_status() Average Time: 287 us

Delay between readings: 1000ms

2x get_motor_status() Average Time: 291 us

Delay between readings: 2000ms

2x get_motor_status() Average Time: 317 us

Raspbian is not a realtime operating system. When you add delays, the OS may assume a lower priority for that thread and not give it the same attention. Is this repeatable, or perhaps just coincidence? It seems unlikely, but it could also be due to CPU load changes from other processes on the RPi.

To investigate further, you could connect a logic analyzer to visualize the SPI transactions, “see” the actual bitrate, and measure the delays between transactions.

My guess is that the communication bitrate stays the same and that it is the time between transactions that changes. In that case, the “extra” delay would come from the RPi OS, probably because of a thread priority setting.
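One way to test the thread-priority theory (my suggestion, not something the GoPiGo3 drivers do) is to put the timing script into a real-time scheduling class and see whether the delay-dependent increase goes away:

"""
# Hedged sketch: move this process into the SCHED_FIFO real-time class so
# sleeping between SPI transactions cannot deprioritize it.
# Requires root (run the test script with sudo).
import os

try:
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(10))
except PermissionError:
    print("Need root to change scheduling policy (try running with sudo).")
"""

The same experiment can be done without editing the script, e.g. chrt -f 10 ./timeGetMotorEncoder.py (run as root).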

The GPG3 firmware does not dictate the speed or timing of SPI transactions; that is entirely up to the RPi. The bitrate does determine how quickly the GPG3 FW needs to process data, so if you attempted to increase it, the transactions might be too fast for the GPG3 FW to respond to, leading to communication errors.
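For anyone curious where that bitrate lives: on the RPi side the gopigo3 Python driver opens the bus through spidev and sets a fixed clock via max_speed_hz. A minimal sketch of that mechanism is below; the bus/device numbers, the 500 kHz value, and the payload bytes are placeholders for illustration, so check gopigo3.py for the values actually used.

"""
# Sketch of how an SPI clock rate is set with spidev on the RPi.
# Bus/device numbers, the 500 kHz value, and the payload bytes are
# placeholders - check gopigo3.py for the real values.
import spidev

spi = spidev.SpiDev()
spi.open(0, 1)               # bus 0, chip-select 1 (assumed)
spi.max_speed_hz = 500000    # raising this risks outrunning the GPG3 FW
reply = spi.xfer2([0x08, 0x00, 0x00, 0x00, 0x00, 0x00])  # placeholder transfer
spi.close()
"""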

Everywhere you listed timings as “ns” (nano-seconds), I think you actually mean “us” (micro-seconds). 1ns is 0.000000001 second. 1us is 0.000001 second.


Thanks - right on - I’ll update.

The result is somewhat repeatable. If I defer all printing until the end and call the functions 20,000 times, the first 1,000 or so calls take about 110us each and the remainder take about 220us.
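The “print at the end” variant looks roughly like this sketch (structure and names are my reconstruction, not the exact script):

"""
# Sketch of the batch test described above (a reconstruction, not the
# exact script): collect 20,000 samples first, print statistics last so
# printing cannot affect the timing of the reads themselves.
import time
import easygopigo3

egpg = easygopigo3.EasyGoPiGo3()

N = 20000
samples = []
for _ in range(N):
    start = time.perf_counter()
    egpg.get_motor_encoder(egpg.MOTOR_LEFT)
    egpg.get_motor_encoder(egpg.MOTOR_RIGHT)
    samples.append(time.perf_counter() - start)

first_us = sum(samples[:1000]) / 1000 * 1e6
rest_us = sum(samples[1000:]) / (N - 1000) * 1e6
print("First 1000 pairs: {:.0f} us  Remaining {}: {:.0f} us".format(
    first_us, N - 1000, rest_us))
"""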

This still isn’t a limitation for reading the current angle around the time of a range call; it’s just an interesting artifact of trying to characterize the GoPiGo3 board’s performance so I can optimize my robot’s capabilities.