Sense, Think, Act starts with sensors

The sophistication of the sensors becoming available to a Raspberry Pi 4-based robot like the GoPiGo3 is impressive - e.g., this $160 3D LIDAR:

5 V / 0.5 A USB-C for power, data, and control.

1 Like

Wow - that’s really impressive. Amazing how these technologies just get smaller and cheaper.
/K

2 Likes

Indeed exciting and inviting, but the complexity of “Learning to program a mobile robot” has exploded from

  • learning to trigger and read ultrasonic range sensors and LED obstacle sensors from 2000 to 2017,
  • and learning to configure and read the Dexter Industries “Smart Sensors”
    • the I2C-based Time-of-Flight Distance Sensor and Inertial Measurement Unit, with
  • single-threaded execution,

to now:

  • multi-threaded, multi-processing programs with messages, mutexes, and callback groups
  • that configure, control, and read multiple sophisticated sensors
    • USB: LIDAR, Stereo Depth Cameras,
    • I2C: Voltage and Current Sensor

What can be accomplished with basic sensors in the “Sense, Think, Act” paradigm, or even more so in “Synthetic Psychology”, is fun to explore, but the sophistication and technologies required for “Intelligent Behavior” are really daunting.
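
For contrast, the “2000 to 2017” end of that spectrum really was just a handful of single-threaded lines. This is only a rough sketch using the easygopigo3 distance sensor API (the speed and threshold values are just examples):

from easygopigo3 import EasyGoPiGo3

egpg = EasyGoPiGo3()
ds = egpg.init_distance_sensor()   # Dexter I2C Time-of-Flight distance sensor
egpg.set_speed(150)                # degrees per second

while True:                        # one thread: sense, think, act
    if ds.read_mm() < 200:         # Sense/Think: obstacle closer than 20 cm?
        egpg.stop()                # Act: stop
        break
    egpg.forward()                 # Act: keep rolling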

2 Likes

That depends on the depth of your interest and your wallet.  (And accessibility in this screw-ball political environment!)

Me?  I’m having plenty of fun without it!  At least not yet. . .
:wink:


It should be noted that the first time either Carl or Finmark SLAMmed with their LIDAR sensors, the 2D maps looked like something a pyrotechnic factory would make.  Or, perhaps, Dadaist art worthy of the Guggenheim?  No offense to you guys, it was the best you could get at the time without a Top Secret security clearance, but it left much to be desired.

Until now. . . .

Now with this sensor, you get a 3D rendition of what’s there that actually looks like what’s there - in a vaguely Impressionistic style perhaps - but absolutely recognizable.  And for less than a buck-seventy?  Wow!  It makes you wonder what the military has, if this is unclassified and on the open market. . .

I’m really tempted to get one, but like the Dexter IMU I bought, I suspect it would be sitting in my “Robot Parts” box, collecting dust after 30 minutes of playtime.  I guess I’ll wait that one out too.

2 Likes

Indeed there is plenty of fun and plenty to learn with the basic sensors and programming using the “not at all basic” GoPiGo3 API. The programming complexity explodes to the point of questioning the definition of fun and one’s motivation for having a robot at all.

With an interest in using stereo depth sensing, I have chosen to climb the ROS 2 complexity hill. Something as simple as two lines calling the GoPiGo3 drive_cm(17) and set_speed(50) becomes an intricately coupled, multi-threaded, asynchronous flow through pages of code, for which even the pseudocode takes a page:

GoPiGo3 API:

from easygopigo3 import EasyGoPiGo3

egpg = EasyGoPiGo3(use_mutex=True)
egpg.set_speed(50)   # wheel speed in degrees per second (DPS)
egpg.drive_cm(17)    # drive forward 17 cm (blocks until done)

ROS 2 Pseudocode:

#!/usr/bin/env python3

# FILE: drive_node.py

"""
    Offers /drive_distance service

    dave_interfaces.srv.DriveDistance.srv
        # Request
        # distance to drive (meters), positive or negative
        float32 distance
        # positive speed m/s
        float32 speed
        ---
        # Result
        # status: goal reached: 0, stall occurred: 1, time expired: 2
        int8 status

    CLI:   ros2 service call /drive_distance dave_interfaces/srv/DriveDistance "{distance: 0.017, speed: 0.05}"

    Design:  Uses multi-threaded execution and a ReentrantCallbackGroup
             to allow the drive main callback, the drive service callback,
             and the motor status callback to be executing simultaneously:

             When /drive_distance service request arrives drive_distance_cb 
               - copies request msg local
               - sets drive_state to drive_distance_init
             drive_main_cb (coded state machine)
               - States: init, ready, drive_distance_init, drive_distance_active
                 - drive_distance_init:
                   - records current motor encoder positions (previously set in motor_status_cb)
                   - records current time
                   - publishes /cmd_vel twist to start motion
                 - drive_distance_active:
                   - watches for stop conditions:
                     - requested distance is reached (encoder positions updated by motor_status_cb in a separate thread)
                     - [motor has stalled]
                     - (distance/speed + tolerance) time has passed
                   - publishes an all-stop /cmd_vel
                   - returns drive_distance done and result [goal reached: 0, stall occurred: 1, time expired: 2]

             When /motor_status arrives motor_status_cb executes:
               - saves a copy of the current encoder positions
               - saves motor stall status

"""

All this because I chose to sink $350 into a really smart sensor (which itself takes pages of special parallel-processing code to drive), and because I hope to use the enormous libraries of ROS 2 code that other folks have written for more than a basic “map my house floor plan” demonstration - which was not so easy even using that “already written” library code.

With every step forward I make, it appears my “aware, learning, autonomous, interactive, self-contained mobile home robot” goal is still exponentially more complex than my patience and motivation.

GoPi5Go-Dave’s life is in the hands of a tired, old guy.

2 Likes

The problem is robots can’t just look at that multi-color splash of points and know “what’s there”.

It takes a page of code to set up the sensor, and another page to understand the most basic output of the sensor (“bottle x,y,z meters from camera”), and another hundred pages for the robot to sense what is behind the bottle, next to the bottle, behind the robot and next to the robot.
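
Even that “most basic output” step is its own little program. Something like this (purely illustrative - the detection fields here are assumptions, not any particular camera’s API) just to turn the numbers into a sentence:

from dataclasses import dataclass

@dataclass
class SpatialDetection:          # assumed shape: a label plus camera-frame meters
    label: str
    x: float                     # meters right (+) / left (-) of camera center
    y: float                     # meters up (+) / down (-)
    z: float                     # meters straight out from the camera

def describe(detections):
    """Turn raw (label, x, y, z) detections into "what's there" sentences."""
    for det in sorted(detections, key=lambda d: d.z):       # nearest first
        side = "left" if det.x < 0 else "right"
        print(f"{det.label}: {det.z:.2f} m ahead, {abs(det.x):.2f} m to the {side}")

describe([SpatialDetection("bottle", -0.11, 0.02, 0.82),
          SpatialDetection("chair",  0.45, -0.10, 1.60)])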

2 Likes

That’s because the goal of an “aware, learning, autonomous, interactive, self-contained mobile home robot” is a lot like a fractal - the closer you get the more detail there is.  And no matter how close you get, the detail never diminishes.

In other words, your goal is a “robot” that emulates all the fine details of a human brain.  Or even the brain of a jellyfish for that matter.

As you have discovered, imitating intelligence, even artificially generated intelligence, is something that requires more effort (and deeper pockets!) than you or I have.

IMHO, your problem is one of scope - knowing when “enough is enough”.

You have this strange idea that your Raspberry Pi powered GoPiGo robot should be able to randomly wander up to you at some point, ask if you’re going out somewhere later on, and ask you to pick up some more batteries if you do.

Not gonna happen, not without some serious modifications to what is meant by all of this.

Your requirements need to be SPECIFIC, ACHIEVABLE, and TESTABLE within the constraints of your projected lifespan, budget, amount of antacid on hand, and desire to have actually completed something.

In other words, you’re trying to send men to the moon with a black-powder rocket.  (i.e. Trying to bite off more than you can chew.)

My recommendation:

  1. Take very careful stock of what equipment you have on-hand, and the potential limits of what it can, and cannot, do, based on your skill, knowledge, and level of patience.

  2. Define a SMALL, SHORT-TERM GOAL that can be used as a building block for a larger goal if you want.  Or a small short-term goal for its own sake.  (My favorite!)

  3. Work to accomplish that small, short-term goal.

  4. When done, jump back to #1, take stock of the situation as it might have been changed by accomplishing the goal you just met, and define another small short-term goal.

If you keep trying to implement your (apparent) perception of “intelligence” within a computer algorithm, it’s going to be a tough nut to chew.

========================================

How about a “What’s that?” goal?

  1. Be able to place something in front of [name of robot] and have the robot speak what it thinks it is.  Voluminous logging is optional. :wink:

  2. Next, try a more complex “what’s that?” goal.
    Allow the robot to wander for [X] minutes (I would start at five minutes or less), periodically stopping when it’s near enough to something to see if it can identify anything, and then continuing to look.  Ignore the ability to re-dock.  At the end of the given time have the robot report everything it found.

  3. Next, do the same thing but have the robot exclude things it’s already seen before.  (No, you DON’T have five cats!)

  4. Next, do the same thing but make the robot sensitive to different instances of the same object.  (i.e. You have two cats, but they don’t look the same, or you have more than one kitchen chair, or knowing the difference between you and your wife, etc.)

========================================

Or. . . .

  1. Place [name of robot] in some random place in the room its dock is in, such that the dock is in clear sight, and have it find its own dock and move to within [X] [feet|inches] of it.  Ignore self-docking as a goal.

  2. Include the ability to dock in the previous goal.
    (Hint:  A relatively thick black line extending directly out from the dock for [X] feet/inches to help it find the center line.)
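
To be concrete about that hint, something like this rough sketch (OpenCV on a forward camera frame, with made-up thresholds) is all the “smarts” finding the center line should take:

import cv2
import numpy as np

def line_steering_offset(frame_bgr, band_height=60, black_thresh=60):
    """Return a steering offset in -1.0 .. +1.0 toward the black line, or None if no line is seen."""
    h, w = frame_bgr.shape[:2]
    band = cv2.cvtColor(frame_bgr[h - band_height:h, :], cv2.COLOR_BGR2GRAY)
    dark = band < black_thresh                  # dark pixels are line candidates
    if dark.sum() < 50:                         # too few dark pixels to trust
        return None
    xs = np.nonzero(dark)[1]                    # column indices of the dark pixels
    return (xs.mean() - w / 2) / (w / 2)        # negative means the line is left of center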

What say ye?

2 Likes

Oh, that is good. The Pi5 is twice as powerful… Johnny Five didn’t understand: “more data” is not the goal… more batteries - YES!

2 Likes

Yeah - did that… but the robot didn’t find too many bicycles and elephants in my house. Unless I start a cottage industry on my “Sim Computer with massive GPU” to create and update custom “objects that have been seen by a home robot” models, Dave only gets:

# Tiny YOLO v3/v4 label texts (n = 80 classes)
labelMap = [  
    "person",         "bicycle",    "car",           "motorbike",     "aeroplane",   "bus",           "train",
    "truck",          "boat",       "traffic light", "fire hydrant",  "stop sign",   "parking meter", "bench",
    "bird",           "cat",        "dog",           "horse",         "sheep",       "cow",           "elephant",
    "bear",           "zebra",      "giraffe",       "backpack",      "umbrella",    "handbag",       "tie",
    "suitcase",       "frisbee",    "skis",          "snowboard",     "sports ball", "kite",          "baseball bat",
    "baseball glove", "skateboard", "surfboard",     "tennis racket", "bottle",      "wine glass",    "cup",
    "fork",           "knife",      "spoon",         "bowl",          "banana",      "apple",         "sandwich",
    "orange",         "broccoli",   "carrot",        "hot dog",       "pizza",       "donut",         "cake",
    "chair",          "sofa",       "pottedplant",   "bed",           "diningtable", "toilet",        "tvmonitor",
    "laptop",         "mouse",      "remote",        "keyboard",      "cell phone",  "microwave",     "oven",
    "toaster",        "sink",       "refrigerator",  "book",          "clock",       "vase",          "scissors",
    "teddy bear",     "hair drier", "toothbrush"
]
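
Every detection then comes back as nothing more than an index into that list - roughly like this (“det” is a stand-in for whatever detection object the pipeline returns; its .label and .confidence fields are assumptions):

def label_for(det, label_map=labelMap):
    """Map a detection's class index to its text, e.g. 15 -> "cat" - and nothing outside these 80."""
    if 0 <= det.label < len(label_map):
        return f"{label_map[det.label]} ({det.confidence:.0%})"
    return f"unknown class {det.label}"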

2 Likes

Then how about the “go look for something” goal?

2 Likes

Yes, any and all of them are desired. Why does it take a month of learning, a month of programming, another month of debugging, and another of bitchin’ about how hard that simple-to-express, non-generalized function was to actually do?

2 Likes

I’m not gonna touch THAT ONE even with someone else’s ten-foot pole!
:rofl:

2 Likes