Really? You are trying to solve “world peace” for all the non-Dexter uses? Good luck.
Previously, when attempting to solve the generalized SPI mutex problem, you balked at having to put generalized mutex code somewhere accessible to all SPI users. So now where are you going to put it? In a venv?
Stick with solving your problem and be happy if the DI mutex solves your problem on your robot. Do you really want to invest all the time and effort in making a generic solution which, in actuality, is one specific solution for a virtual crowd of “users” you don’t know and who don’t know you?
Question:
If you “enclose” a bit of class method code within a mutex block as above, does all the code referenced by that code get included?
Assumptions:
I am assuming that nothing is multi-threaded.
Actually, I don’t know if ANYTHING within the GoPiGo library code is multi-threaded.
Everything necessary has already been included.
I am also assuming that the pseudo-code I’ve written “works”. (Though you can correct obvious usage errors.)
Viz.:
[class instances]
spi = SPI_Communication_Library()
spi_mutex = SPI_Mutex()
something_else = Some_Other_SPI_Class()

[. . . . some code goes here . . . .]

# Do an SPI communication transaction
try:
    spi_mutex.acquire()
    spi.comm(data, address, direction)   # direction is "r" or "w"
finally:
    spi_mutex.release()
Where within “spi.comm” there is a reference to “something_else.method”
Viz.:
# (the comm() method of the SPI communication class, i.e. what's called as spi.comm above)
def comm(self, data, address, direction):
    [some code]
    # Some_Other_SPI_Class is an additional class that's
    # essential for SPI communication.
    # (maybe it bit-bangs the bytes?)
    something_else.method(parm1, parm2, parm_n)
    [more code]
    return result
Do enclosed classes/methods “inherit” the mutex’s protection?
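To make the question concrete, here is a tiny stand-alone illustration using Python’s threading.Lock as a stand-in for whatever SPI mutex ends up being used (all the names are made up for the example). The question is whether the nested call is covered while the lock is held:

    import threading

    lock = threading.Lock()          # stand-in for the eventual SPI mutex

    def inner_helper():
        # called from inside the protected block -- is this covered too?
        print("helper ran while lock.locked() ==", lock.locked())

    def outer_transaction():
        try:
            lock.acquire()
            inner_helper()           # the nested call executes while the lock is held
        finally:
            lock.release()

    outer_transaction()              # prints: helper ran while lock.locked() == True

My understanding is that the lock protects whatever runs between acquire() and release() on the acquiring thread, nested calls included, rather than attaching to any particular class or method, but I’d like confirmation.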
When I questioned Waveshare about mutexes to see if they’ve already done anything like this (and if they’ve already proved it doesn’t work, why reinvent the wheel?), they requested a pull request to their repo if I got a mutex working.
So, I figured I’d give it a try.
All I’m going to do is provide the mutex library in the same way that Waveshare provides its own libraries. If the user wants to put this stuff in a venv, that’s the moose’s problem.
“spi_transfer_array” is the method called by everything else in the class(es) for any SPI communication:
def spi_transfer_array(self, data_out):
    """
    Conduct a SPI transaction

    Keyword arguments:
    data_out -- a list of bytes to send. The length of the list will determine how many bytes are transferred.

    Returns a list of the bytes read.
    """
    result = GPG_SPI.xfer2(data_out)
    return result
However, spi_transfer_array references another object, “GPG_SPI” (it’s GPG_SPI.xfer2 that does the actual transfer), which is instantiated near the top of the library.
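From memory it is essentially the standard spidev setup; the exact chip-enable, speed, and mode values below are my recollection/assumption rather than a quote from the library:

    import spidev

    GPG_SPI = spidev.SpiDev()
    GPG_SPI.open(0, 1)                # SPI bus 0, chip-enable 1 (CE1), if I remember right
    GPG_SPI.max_speed_hz = 500000     # illustrative value
    GPG_SPI.mode = 0b00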
So you have to create a setup to create the egg with the mutex code, or you have to fork the entire waveshare library and add your mutex to their codebase and egg setup.
And BTW, setuptools is deprecated and I don’t know what the new library creation tool is. My modified-for-Bookworm GoPiGo3 API still uses setup.py, and I ignore the deprecation warning.
They don’t have eggs, just a lot of discrete chickens (.py files) for their libraries. All I would need to do is push mine to their repo.
In the gopigo library there is a “master” routine, spi_transfer_array, that sends the SPI messages, as noted in the code snippet above.
The active part of it is a direct call to spidev which is instantiated as GPG_SPI.
My thought is to “wrap” the call to GPG_SPI in a mutex “try” block.
I suspect that spidev opens the SPI channel, asserts the correct chip select line, bit-bangs the data, releases the chip select line, and returns whatever value(s) are returned.
If my assumption is correct, then “wrapping” the GPG_SPI call with the mutex should be sufficient.
This assumes that there are no other paths to spidev, (or hard-coded SPI routines), outside of the ones I’ve found. If there are other, undocumented, SPI paths I’m doomed!
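Roughly what I have in mind is sketched below. The SpiMutex class and the lock-file path are placeholders of my own invention (a file-based lock so that two separate processes, e.g. the e-Paper daemon and a GoPiGo3 program, would both be serialized); nothing like this exists in the DI or Waveshare code yet:

    import fcntl

    class SpiMutex(object):
        """Hypothetical cross-process SPI mutex built on an advisory file lock."""
        def __init__(self, lock_path="/tmp/spi_bus.lock"):   # path is a placeholder
            self._fd = open(lock_path, "w")

        def acquire(self):
            fcntl.flock(self._fd, fcntl.LOCK_EX)    # block until nobody else holds the bus

        def release(self):
            fcntl.flock(self._fd, fcntl.LOCK_UN)

    spi_mutex = SpiMutex()

    # The wrapped transfer routine would then look something like this
    # (same signature as the GoPiGo3 method; GPG_SPI is the library's spidev instance):
    def spi_transfer_array(self, data_out):
        try:
            spi_mutex.acquire()
            return GPG_SPI.xfer2(data_out)     # chip select, clock the bytes, read the reply
        finally:
            spi_mutex.release()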
Right, wrap all operations involving the GPG_SPI object AND the WaveShare object.
DI uses ifMutexAcquire() and ifMutexRelease() because the Easy Sensors can be instantiated to use the I2C mutex or not (not using it is the default), since 99% of DI product programs are single-threaded; the pattern is sketched below. The fact that the Waveshare device has a daemon-based API rather than a single-threaded API means you don’t have that luxury. Also, the I2C mutex has to cover three I2C interfaces: HW I2C, SW1, and SW2 I2C.
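For reference, the optional-mutex idea is roughly the following (paraphrased, not the actual DI library code; threading.Lock here is a stand-in for DI’s real cross-process mutex):

    import threading

    i2c_mutex = threading.Lock()     # stand-in; the real DI mutex works across processes

    def ifMutexAcquire(use_mutex=False):
        # only take the mutex if the caller asked for it
        if use_mutex:
            i2c_mutex.acquire()

    def ifMutexRelease(use_mutex=False):
        if use_mutex:
            i2c_mutex.release()

A single-threaded program passes use_mutex=False and pays no locking cost; anything multi-threaded, or sharing the bus with another process, passes True around every transaction.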
Did you look at how this guy structured a “pythonic” interface to the waveshare display?
I don’t see a mutex. He is checking a GPIO “display busy” pin to avoid overlapping the SPI bus. The GoPiGo3 could do the same idea by always waiting for any display requests to finish (display not busy) before issuing any GoPiGo3 SPI commands.
Using a mutex is the more “software engineering principled” approach, so perhaps you want to fork his repo, add your mutex (with acquire/release calls in the appropriate places) and a setup.py, and have a cleaner interface.
If my understanding is correct, it’s not a “mutex” per se, but rather “hardware flow control” to make sure that they don’t try to send additional commands until the display has finished the present one.
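If that reading is right, the flow control amounts to polling the panel’s BUSY line before touching the bus again, something along these lines (the pin number and polarity are guesses for illustration; Waveshare’s epdconfig module handles the real pin, and busy polarity varies between panels):

    import time
    import RPi.GPIO as GPIO

    BUSY_PIN = 24            # assumption: BCM pin wired to the panel's BUSY line

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(BUSY_PIN, GPIO.IN)

    def wait_for_display_idle(timeout=20.0):
        """Spin until the e-Paper reports idle, so the next SPI transfer won't collide."""
        deadline = time.time() + timeout
        while GPIO.input(BUSY_PIN):          # assumption: high == busy on this panel
            if time.time() > deadline:
                raise RuntimeError("e-Paper stayed busy longer than expected")
            time.sleep(0.01)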
Based on my examination with a ’scope, the sequence is more like this:
1. Everything static: paper_busy is not asserted, SPI quiet.
2. A command is sent by the Waveshare SPI routine.
3. Chip Select is asserted.
4. The SPI comm takes place.
5. Chip Select is released.
6. paper_busy is asserted.
The SPI bus has been released at this point because some e-Paper commands can take upwards of 15 seconds to complete, particularly the “init” and “clear” commands. If more than one thing is happening, the code spins on the paper_busy status, not the SPI bus. Once paper_busy is released, the code goes back to step 2 and continues until there is nothing left to do, whereupon it sends a “sleep/power-down” command that turns off 99% of the electronics and releases the electric charges used to create the display. This is why the “init” part of the wake-up routine takes so long: it has to init all the electronics and pre-charge the e-Paper display layers.
During all of this, the SPI bus is not busy, so waiting on paper_busy wouldn’t be the best choice IMHO.
I didn’t see an official example of using the display - only how to use the display as a login window session.
Do you even need the waveshare daemon running if you are not trying to use the OS windowing system on the device? Like I was saying before, if the GoPiGo3 program is the only user, then it can wait until the non-window-API returns “done”, before using the SPI bus for GoPiGo3 stuff.
And here’s the support page where all the software and libraries are downloaded. Pay particular attention to the “Python” section under “Raspberry Pi”: https://www.waveshare.com/wiki/2.7inch_e-Paper_HAT
This thing is so slow that there’s no way it could be used as a primary display. Especially since its primary use-case is as a static display for store shelves or electronic badges, etc.
Maybe you have this confused with the Vellman VMP-400?
There appears to be a difference between the older 26 pin GoPiGo boards and the newer 40 pin boards, at least with respect to how they interact with the Waveshare e-Paper displays.
Charlie, a 26 pin robot, behaves nicely with the Waveshare displays whereas Charline doesn’t.
I am going to have to do additional research on this.
Like I said, I need to do additional research on this to discover what’s going on here.
I think this is EXTREMELY interesting and since there’s a difference in behavior, this could break the issue wide open!
There could be a side-effect caused when Waveshare’s display(s) have access to all 40 pins. As far as I can tell (as of now), that’s the big difference. I will have to put a test cover and a 26 pin extension header on Charlie so I can connect and remove displays.
Test cover:
A top plate with a cutout that allows a tall pin header to poke through so I can connect things while still having a top cover in place.
I have two now: one that fits Charlie’s 26 pin header (which I have had for a while), and a “new” one that fits Charline’s 40 pin header.