Scope of topic:
In what is perhaps a gross violation of forum etiquette, (ONE question per topic), my intention for this topic is to ask questions, and hopefully gather answers, about the “Remote Camera Robot” project files and functionality as an ongoing discussion.
Rather than spam the fora with a whole slew of postings that each ask one question, I am hoping to ask everything about this project in one place. That way, someone else who is interested in this general topic can peruse this one thread and see everything that’s happening.
Ultimately, I want to be able to control Charlie as if he were one of those fancy “bomb handling” robots, remotely using a joystick.
Assumptions: (Please correct me if I am wrong)
1. The “remote_robot” Python file is, in essence, a “wrapper” for the actual functionality of the software.
a. The “remote_robot” software starts up things like the web server, starts streaming the camera, and loads up the “nipple.js” software on the client system and gets it running.
b. It takes the processed mouse inputs from the browser and translates them into corresponding robot movements.
c. It shuts down the software and closes the browser on command.
2. The real “heavy lifting” is done by the “nipple.js” file, which runs on the client side within the browser.
a. The “nipple.js” program resides on the client-side browser and accepts mouse and keyboard inputs.
b. It translates mouse movements into “force”, (the magnitude of the displacement from center), and “direction”, (the angle from some arbitrary “0” radial along which the mouse has moved). (See the sketch just after these assumptions.)
Note that the mouse motion is a “drag”, (left-drag), of the center dot in an arbitrary direction from the center resting position.
c. It sends these values, (along with other stuff I haven’t figured out yet), back to the 'bot for translation into actual robot movements.
3. Ideally, if I wanted to add functionality to the program, I would add it to the “nipple” file and add corresponding code to the robot file to handle the new messages. (i.e. I could use joystick Z-axis, (twist), motion to allow me to swivel Charlie’s head.)
These assumptions are based on the relative size and complexity of the documents - the “robot.py” file is five pages long, with the lion’s share being constructors and destructors, whereas the “nipple.js” file is 22 A4 pages long and looks like something out of NASA.
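If I’m reading assumption 2b correctly, the math boils down to something like the sketch below. (This is my own minimal guess at the idea, not code from nipple.js; the names and the normalization are assumptions on my part.)

```javascript
// My guess at the force/direction translation (not nipple.js source).
// dx, dy = drag offset in pixels from the joystick's center dot,
// maxDist = the radius at which "force" should read 1.0.
function dragToPolar(dx, dy, maxDist) {
    const distance = Math.sqrt(dx * dx + dy * dy);
    const force = distance / maxDist;                       // 0.0 at center, 1.0 at the rim
    const radians = Math.atan2(-dy, dx);                    // screen y grows downward, so flip it
    const degrees = (radians * 180 / Math.PI + 360) % 360;  // 0-359, measured from the "east" radial
    return { force: force, direction: degrees };
}
```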
At a high level, am I understanding this correctly?
================================================
First question:
Are these files functionally documented anywhere? That way I can avoid asking questions that may already have answers. (i.e. Is there a “man page” for “nipple.js”, or the equivalent?)
I have read from several sources that mathematical computations involving constants should be done once at the beginning of the program and stored as variables. This way you don’t repeat a potentially expensive calculation over and over again.
Example:
nipple.js, lines 70 and 74:
(70) return a * (180 / Math.PI);
(74) return a * (Math.PI / 180);
The two conversion coefficients never change; therefore they are candidates for pre-calculation.
This way, every trip through the conversion is a single multiplication by a stored constant instead of two floating-point operations.
If you only do this once or twice during the execution of the program, that’s one thing, but if you’re doing it over and over again, this can save a dramatic amount of time in an already constrained time-budget. As the old saying goes, (usually attributed to Senator Everett Dirksen), “A billion here, a billion there, and pretty soon you’re talking real money!”
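Something like this is what I have in mind. (The constant names are my own invention, not anything nipple.js actually defines.)

```javascript
// Hoist the constant factors out of the hot path (my sketch, not nipple.js code).
const RAD_TO_DEG = 180 / Math.PI;
const DEG_TO_RAD = Math.PI / 180;

function degrees(a) {
    return a * RAD_TO_DEG;   // was: a * (180 / Math.PI)
}

function radians(a) {
    return a * DEG_TO_RAD;   // was: a * (Math.PI / 180)
}
```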
Line 5: var isTouch = !!('ontouchstart' in window);
I’ve been researching the “!!” and I am getting conflicting answers. One StackOverflow answer compares it to “~~”, (double-tilde), in that it truncates a floating-point number to an integer. Another claims that it is a “double negation”, which converts the value to a Boolean.
Looking at the statement itself, it makes more sense, (to me), if it’s creating a Boolean value.
Viz.:
Does the browser support touch events, (i.e. is there a touchscreen)? Then “isTouch” becomes true.
Else, “isTouch” becomes false.
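A quick console test bears out the “double negation” reading. (This is my own throwaway snippet. Also worth noting: the in operator already returns a Boolean, so the !! on line 5 looks like belt-and-suspenders.)

```javascript
// The double-bang coerces any value to its Boolean equivalent:
console.log(!!1);          // true
console.log(!!0);          // false
console.log(!!"");         // false  (empty string is falsy)
console.log(!!"hi");       // true
console.log(!!{});         // true   (any object is truthy)
console.log(!!undefined);  // false

// Compare the double-tilde, which truncates toward zero instead:
console.log(~~3.7);        // 3
console.log(~~-3.7);       // -3
```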
I saw a post on another “programming” site that asked if using library code was “cheating”.
Answer:
You just got out of college, didn’t you? (Professors discourage use of library functionality for whatever reason.)
No, it’s not cheating.
Looking at the code of a well written and established library can be an education in itself.
This is one of the reasons I am interested in snooping around in the libraries. I really don’t like things that are “magic” - drop something in and the answer pops out. I want to understand what is happening “under the hood”. Not only can I use these libraries more effectively, I can learn some clever programming techniques.
Aww heck, easy’s no fun! I already know how to do “Hello World!”
Seriously, it’s interesting and something I’d like to do - create a remote control robot with vision. Kinda like what the cops and military have, but MUCH less expensive!
Remember when I was talking about “rewriting” parts of the EasyGoPiGo libraries? I’m thinking that, since most everything there is a class, once I understand it better I can simply subclass it and override what I want to change. This way I get to have my cake and eat it too.
I could get excited for you if you were planning a “remote suggestion autonomous decision controlled robot”. That is actually what DARPA has been encouraging.
A “remote controlled three wheel car with streaming video” may offer lots of learning opportunities, but it doesn’t fit my concept of “robot”. Sorry, this is your thread - I’ll go back to lurking.
I have to admit - at least in theory - I have to agree. IMHO, a “robot”, by definition, implies some kind of autonomous action or control.
However, as you have noted, I have decided to take on one of the “chewiest” projects around; it involves both server-side Python and client-side JavaScript running in a browser instance. IMHO, trying to program Charlie to “bring me a beer” on command is a bit of a stretch at this point.
I want to be able to:
1. Figure out how this works; that is, try to get my arms all the way around it so that I feel like I can discuss it with a degree of confidence and competence.
2. Figure out how to control the sensitivity of motion to make it easier to control.
3. Figure out how to read a USB joystick on the client computer - from within a browser instance - and pass messages to Charlie to make him do things on command. (See the sketch just below.)
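For item 3, my current guess is that the browser’s standard Gamepad API is the way in. This is just a minimal sketch of the idea; the axis indices and the message-forwarding are assumptions on my part, not anything from the project’s code.

```javascript
// Minimal Gamepad API sketch (my assumption for how item 3 might work).
window.addEventListener("gamepadconnected", (e) => {
    console.log("Found controller:", e.gamepad.id);
});

function pollJoystick() {
    const pad = navigator.getGamepads()[0];  // first connected controller
    if (pad) {
        const x = pad.axes[0];               // left/right, -1.0 .. +1.0
        const y = pad.axes[1];               // forward/back
        const twist = pad.axes[2];           // Z-axis twist, if the stick has one (index varies by device)
        // ...then forward x/y/twist to the 'bot, the same way
        // nipple.js sends its force/direction messages.
    }
    requestAnimationFrame(pollJoystick);
}
requestAnimationFrame(pollJoystick);
```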
Once I can do this, I can add some “intelligence” to the process:
Object avoidance:
a. Use the distance sensor to avoid getting too close to something.
b. Use the bumper to sense if he’s hit something outside his field of view.
(Advanced goal)
Autonomous exploration of an area:
Move Charlie into an area of my choosing, and let him wander independently. I can take control if necessary to extricate him from a nasty situation, or bring him back. I may need extra camera sensors for things like “looking behind him” or such.
(REALLY advanced goal)
Shamelessly (ahem!) “borrow” some of your code to allow him greater range of free motion when exploring an area.
(Really, really incredibly advanced goal:)
Using some schematics and code for DIY virtual walls I saw online, create a small IR “beacon”, (like a robot vacuum’s “virtual wall” but 360°), that I can have Charlie carry with him and put down when he’s going into an enclosed space as a way for him to automagically know where the “exit” is.
And so on.
However, I have to learn how to roll over onto my tummy before I can learn to crawl.
I’m not exactly sure, but if that means what I think it means, that is the ultimate goal.
Ideally, I could move Charlie into an area manually and then tell him to “do something” whereupon he’d do what I asked, or give a good reason why not.
I haven’t even begun to list the super-duper-stretch goals that this would entail - though I can easily imagine them - like putting a “pipper” on an object in his view and having him go explore it.
But that is so very far into the future that I hope to survive that long! (knock-knock!)
Or, I could paint the blocks with some kind of fluorescent paint, put a “UV” LED on Charlie, and let him look for bright blocks. . . . I still think a distinctive beacon would be a good idea. Let’s not put Charlie in a “hall of mirrors”, OK?
Twenty years ago, a very smart Rug Warrior Pro owner who lived about an hour south of me actually built an LED “ID broadcasting” beacon for his bot, and graced me with one as well. Our robots had an IR remote sensor which was used to send commands to the bot, and this fellow created a beacon that broadcast one of the commands the detector could recognize. The code to “recognize the beacon” never made it into the 32K bytes of Interactive C code I was carefully cultivating.
Beacon technology can be quite interesting, though. The iRobot Roomba uses that same infrared remote detector chip. The Roomba dock broadcasts two independent, synchronized, directional infrared codes, so that the bot can know whether it is to the left or the right of the dock. When the bot is centered between both LEDs, the synchronized patterns of the left and right IR LEDs combine to make a valid “centered” code.
Sounds like a “localizer” beacon to me! Though in my case, I’d either try to have Charlie find the beacon by dead reckoning, or arrange a “VOR”-type series of radials that Charlie could home in on. I’m hoping dead reckoning works, because trying to create an IR VOR beacon would - without a doubt - be a non-trivial challenge.
As far as being “very smart”, I’m sure he was. Though, looking at the schematics for DIY virtual walls, it’s not really that difficult if you don’t mind doing some embedded-controller programming in Arduino, a Wiring analog, or CircuitPython. You’ve already done far more complex programming on Carl - this would be a cinch!
There are three steps, (one step if you’re using a Roomba):
1. Attach an IR receiver to your Pi, and download the requisite package to analyze the received signal.
2. Point it toward an existing “wall” and record the characteristics of the signal. (i.e. Pulse rate, duration, etc.)
3. Program a small, inexpensive micro-controller to modulate an IR transmitter to the same characteristics.
If you have a Roomba of some known vintage, all you do is build up from the schematic and download the pre-written code.
An Adafruit Gemma M0 would work wonders. It’s small, sips current, and can drive a LONG string of addressable NeoPixels using DMA - so modulating one LED would be child’s play.
Hmmmm. . . .
Instead of wasting a nice micro-controller, maybe a 556 dual timer IC would work well enough? One for the pulse string, and one for the silent period.
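Back-of-the-envelope, (and assuming the receiver wants the usual 38 kHz IR carrier - that part is my assumption), the standard 555 astable formula is f ≈ 1.44 / ((R1 + 2·R2) × C). With C = 1 nF, R1 = 1.8 kΩ, and R2 = 18 kΩ, that gives R1 + 2·R2 = 37.8 kΩ, so f ≈ 1.44 / (37,800 × 10⁻⁹) ≈ 38.1 kHz - right in the ballpark. The first timer would free-run at the carrier frequency, and the second would gate it on and off to create the pulse string and the silent period.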
This has a “test window” with, (relatively), simple code that automatically discovered my Saitek X-52 HOTAS joystick: every button, every analog controller, and both the joystick with its Z-axis twist and the throttle controller.
I’m going to be all over that code like a wet rag!