This is the link I was hoping you would find:
It was the culmination of this one:
I’ve enjoyed using git and GitHub. There is a learning curve, to be sure.
No - this is really not how you want to use git (or GitHub). Branches really are for development and testing of the same basic project - not for separate projects.
You can create one master “project” and then create directories for sub-projects under it. I’ve done that. It works easily enough, and then you only have to clone one “project.”
I haven’t had much issue with dependencies, but I suspect @cyclicalobsessive is correct - that is something that lends itself to Python virtual environments. I’ve used them before, but not in the context of development with git.
/K
What does a Python “virtual” environment buy me that running Python directly doesn’t?
In each virtual environment you can have different sets of libraries (or versions of libraries). This lets the programmer work on different projects without having to worry about version conflicts.
I have no idea how it might work to use a different GitHub project branch with different virtual environments. I suppose it’s possible.
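A minimal sketch of that idea, using the standard venv module (the environment names are made up for illustration; the 1.2.0 version pin just matches the released egg):

import venv  # standard-library virtual environment support (Python 3)

# One environment per library version under test:
venv.create("env-gopigo-released", with_pip=True)
venv.create("env-gopigo-modified", with_pip=True)

# Then, from a shell, activate one and install the version you want, e.g.
#   source env-gopigo-released/bin/activate
#   pip install gopigo3==1.2.0
# Each environment keeps its own site-packages, so versions never collide.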
/K
My idea, (and concern), is how a particular library revision will affect the entire robot as opposed to one or two specific programs.
IMHO, it doesn’t matter if program “x” works with a modified library, if it’s going to crash the rest of the 'bot.
Of course I want to test with specific apps, but I want to know how it affects the entire robot too.
Fair point. I was thinking you’d load the entire driver library into each virtual environment. If a particular library does impact the entire robot, you just remove the entire environment.
The downside to virtual environments is that they require much more disk space, since a lot of things are duplicated for each environment (although maybe some of that is handled intelligently in the background - I don’t know).
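(For what it’s worth, a venv can also be created to share the system-wide site-packages and store only what you install on top of them, which cuts the duplication down considerably - a sketch, assuming Python 3:)

import venv

# Sees the system-wide packages; stores only what you pip-install on top.
venv.create("env-shared", with_pip=True, system_site_packages=True)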
/K
I have enough disk space available that this is not a constraint.
Unfortunately, my limited, (read “nonexistent”), experience with Python in general and with Python virtual environments in particular, makes me suspicious.
For example, what happens if I try to instantiate multiple different versions of gopigo/easygopigo in different virtual environments, and then try to do something like move the 'bot?
Since there are multiple instances of different versions of the same classes, who wins? Which witch is which? Who’s on first?
It seems like a recipe for mass confusion.
Don’t understand the question actually.
“experience with GitHub”:
“Good” Python programmers use virtual environments, local and site packages, and even test and public PyPi packages. Me? I do everything on top of the currently released site packages, and have a system of diff-and-release to a /plib/ folder. Every project starts with:
import sys
sys.path.insert(1, "/home/pi/rosbot-on-gopigo3/plib")  # my /plib copies get found before the site packages
or sometimes, for quick tests, I simply copy the needed files, since the Python path always checks “./” first.
It’ll be the one in the environment that is currently active. That’s the whole goal - to let you change environments so that you can have different behaviors.
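You can always ask Python which environment is answering - a quick check (Python 3.3 or later):

import sys  # report which interpreter/environment is active

print("interpreter:", sys.executable)
print("environment:", sys.prefix)
print("in a venv?  ", sys.prefix != sys.base_prefix)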
/K
The Python path is the traffic cop. If a desired version doesn’t sit on the path ahead of an undesired version, it’s because you weren’t “thinking path” all the time.
Python will tell you where it got every module:
>>> import gopigo3
>>> print("gopigo3 used:", gopigo3.__file__)
('gopigo3 used:', '/usr/local/lib/python2.7/dist-packages/gopigo3-1.2.0-py2.7.egg/gopigo3.pyc')
This is the path on Carl for python2 before I prepend my “/home/pi/Carl/plib/”
>>> import sys
>>> print(sys.path)
['', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-arm-linux-gnueabihf', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/home/pi/.local/lib/python2.7/site-packages', '/usr/local/lib/python2.7/dist-packages', '/usr/local/lib/python2.7/dist-packages/wiringpi-2.60.0-py2.7-linux-armv7l.egg', '/usr/local/lib/python2.7/dist-packages/smbus_cffi-0.5.1-py2.7-linux-armv7l.egg', '/usr/local/lib/python2.7/dist-packages/python_periphery-2.1.0-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/cffi-1.14.3-py2.7-linux-armv7l.egg', '/usr/local/lib/python2.7/dist-packages/pycparser-2.20-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/scratchpy-0.1.0-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/DI_Sensors-1.0.0-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/Line_Follower-1.0.0-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/Dexter_AutoDetection_and_I2C_Mutex-0.0.0-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/gopigo3-1.2.0-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/brickpi3-0.0.0-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/grovepi-1.4.1-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/pivotpi-0.0.0-py2.7.egg', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/python2.7/dist-packages/wx-3.0-gtk3']
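Putting the two halves together, the whole routine is just prepend, import, verify (a minimal sketch, assuming Python 3 and an installed gopigo3; Carl’s /plib path is the one from above):

import sys
sys.path.insert(1, "/home/pi/Carl/plib")  # my copies now win over the site packages

import gopigo3
print("gopigo3 used:", gopigo3.__file__)  # should now report the /plib copy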
" “Good” Python programmers ". . . .
. . . . usually work on programs that stand alone while they do something, and 99.99999% of “good” Python programs have absolutely nothing to do with robotics or complex interrelated systems.
This is where the challenge of robotics raises its ugly head - everything is interrelated and codependent in some way.
Unlike a program for a pinochle game, robotic functions can, and do, interact in ways that may not be easy to predict.
This is why I am uneasy about rushing into something I know so little about.
Don’t rush… don’t skip the research phase, don’t skip the design phase, don’t lose patience with yourself.
I can’t say I’m “thinking path” any of the time.
Mostly I type in the appropriate magic words and hope they work!
My guess is that you and I are handicapped by having learned programming in an era of no libraries.
For me, it started with the “standard C++ libraries” and the standard Ada libraries. These libraries were collections of “programming related” tasks.
The first explosion in available libraries in my programming life was the Microsoft Foundation Classes.
This was a three-foot-wide bookshelf of libraries that was my first “application related” reusable code.
I never got comfortable with more than a tiny fraction of those classes.
(I had 20 years of Java in there with IBM - same story - it’s all in the libraries.)
When I started playing with the Raspberry Pi, I decided to “learn Python.”
I quickly discovered that is actually more impossible for me than learning to use the MFC library,
because only “Gurus” write in Python - everyone else uses Python to call packages the Gurus publish
on GitHub, PyPi, and the Linux repos.
Python is a pretty amazing language but the real power is in the packages.
ROS is a fairly simple architectural concept with the real power in the packages.
Neither is as complex as you might think - the power lies in the complexity available. What might be overly complex is finding the right version of the right package to reuse.
Research:
So, you are suggesting that I do some fundamental research into Python virtual environments?
Anything else I should study before I begin the design phase?
That’s part of the problem here:
With a hardware problem, I usually have a clear idea of the scope of my knowledge and which specific parts need amplification.
With a major software design, there are so many unknowns that everything is ultimately in question, and I’m not sure where to start researching, since I can’t research everything!
I suggested it because it is the “proper” way. Like I wrote, I don’t do that.
My real opinion is that you are perhaps thinking a bit grandiosely?
The simplest approach is no virtenv, no packages, no path worries - just copy and rename “my_” this and that. I do this and it is nearly invisible:
import my_easygopigo3 as easygopigo3
It can use the site gopigo3.py if my change is only in the top package.
If the my_ packages get nested, in my_this.py I “import my_that as that”
Eventually this bites when DI changes something I based a “my_” module on.
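For the nested case, a sketch (my_this.py, my_that.py, and some_helper are placeholder names from the description above, not real modules):

# my_this.py - the copy-and-rename pattern, one level deep
import my_that as that  # local modified copy shadows the site-installed "that"

def do_something():
    # callers of my_this never know they are using the modified copy
    return that.some_helper()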
I, literally, learned programming where the only thing you had was the processor’s op-code table, a hand-drawn memory/IO map of the system and the hardware address of the peripherals.
That, along with data sheets for each IC, the ability to think in hex, and a sharp pencil with a good eraser while you hand-coded your program and manually typed it into an EPROM burner.
Then a gift from the Gods - an ancient computer with, (gasp!), an assembler! and the ability to transfer Intel Hex files directly to the EPROM burner via a serial cable. (No more hand-keying hex into the burner!!)
Then I graduated to systems with built-in function calls where you’d put something on the system’s base page and jump through a special “magic interrupt” vector to make something happen, like read from a tape or diskette.
All this talk about “packages”, “libraries”, “classes” and “methods” has me wondering what time-warp I’ve just zoned into!
Peripherals? I only had 16 toggle switches, an “Address/Data” toggle, and a “write 8 and increment addr” button. EPROM and EEPROM came eventually, but at first it was “don’t pull that plug”.
(There was a guru working on a paper-tape reader for the pirated tape of Microsoft BASIC he had acquired.)
Actually, my first programming was punching FORTRAN statements onto cards on a room-filling IBM thing, but I didn’t actually understand anything about what I was doing. 1968.
Possibly true.
I normally think about one single project/problem at a time - AKA the KISS rule.
However, there are several ideas I have for several base classes that might interact, so I thought a more global approach might be good.
Though now I’m thinking that “one project, one problem” might still be a good idea, (with the occasional back-track to see if I broke something else), as a global solution may cause more problems than it solves.
Despite that, I still need to know more about all this stuff and my reading list is getting longer than the “honey-do” list!
I really need to be two of me.
One for my wife to order around and another to get real work done!
Yes. Peripherals. All the stuff that wasn’t the 8080 CPU and the 8224 clock generator chip.
You know, silly things like a UART chip for serial I/O to a VT-100 dumb terminal, memory, the latches for the individual 7-segment display chips, and the address of the keyboard’s matrix. Or the /enable line for the +26v programming boost voltage used to write to the array of special $150-plus-apiece EEPROM chips this program was trying to test, (and God Himself help you if you got the timing wrong and fried a grand-plus of chips all at once!), and such like.
If you’re thinking things like disk drives or monitors, (or even paper-tape), you’re getting ahead of me.
I started with a pencil and a notebook as my development environment, plus a 2708 EPROM, a programmer, and a UV eraser cabinet. Not to mention a tolerant boss!
There was nothing else.
I had to read the data sheets, decode the register addresses, figure out how to initialize the beasties, and so on. One EPROM at a time and pray it doesn’t blow up something expensive.
Flip-switches and a pre-defined system would have been a godsend.
(I did program a GE-4020 mainframe, in octal, as a freshman, and spent more time in front of ancient 1950s-relic IBM-026 keypunch machines than I care to remember. I considered the (relatively) modern IBM-029 keypunch, with the full Selectric keyboard and drum-card capabilities, a religious experience!)