Raspbian For Robots: Buster and Raspberry Pi 4

Originally published at: https://www.dexterindustries.com/raspbian-buster-and-the-raspberry-pi-4/

We’re releasing an update to Raspbian for Robots, and this one is based on Buster! This update makes Raspbian for Robots Raspberry Pi 4-compatible straight out of the download. We’ve taken the opportunity to make some changes to how we pre-install all the tools that make Raspbian for Robots an ideal OS for running the…

So this is going to be a Real, Honest to Dex, Official Release - and not a beta?

Oooh!

Charlie wants one!  When do you think it’ll be ready?

Thanks!

P.S. Have you been able to take a look at the dhcpcd.conf file issue in /boot?  It’s really annoying to have the updates to dhcpcd5 fail.

DexterOS 2.5.0 (25Mar2020) - the link will download a zip file.

or visit the “Get DexterOS” page: https://www.dexterindustries.com/dexteros/get-dexteros-operating-system-for-raspberry-pi-robotics/

And to clarify - that is only the DexterOS available now, and we are still looking forward to the non-experimental Buster based “Raspbian For Robots” soon.

Sorry to get your hopes up. This is exactly the same post as what was posted in December. There was a glitch on the forums as some of you may have noticed, and the post had disappeared. It simply got re-posted.

Raspbian for Robots is a bit on the back burner while we are planning out features for both R4R and DexterOS. We’ll get back to it as soon as we can.

Understood R4R is “coming soon”, but now I’m confused about yesterday’s DexterOS 2.5.0 image - actually not confused, just curious. Jim is the DexterOS guy.

OK, I’ll take that as a compliment.   :wink:

This DexterOS image has been live for a while now - at least a couple of weeks - and yes, I already downloaded it.  I haven’t had a chance to mess with it, because I’ve been working on fitting an SSD to Charlie.  Just as a goof, of course - the additional almost-2A load certainly won’t increase his “playtime” very much.  I read an article about it, and was curious if it could be done with R4R.

It worked, and I plan on posting pictures of Charlie with the “backpack”, but it’s not permanent.

I’m also working on getting a “real” development environment established so that I don’t lose work if/when I re-flash Charlie’s image.

BTW, on that same topic, what’s the best way to “un-flash” :wink: Charlie’s image to a binary file that Etcher can use?  I know I can use “dd” to make a copy, but how do I verify that it’s absolutely complete and correct?  What method does Dexter use to create its images?

I prefer to use “ddrescue”, (gddrescue), to make images because, unlike dd, it goes all the way to the end of the device, even if the copy-block size doesn’t divide evenly into the device size.  When dd hits the end of the device, if the device doesn’t end on a copy-block boundary, it just stops, leaving whatever was left at the end of the device un-copied.
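To answer the verification question above: whichever tool makes the image, you can confirm it is byte-for-byte complete with checksums. Here’s a minimal sketch using a regular file as a stand-in for a device (the file names are hypothetical; on a real card you’d read from something like /dev/sdX instead). The stand-in is deliberately not a multiple of 1 MiB, so the final read is a partial block:

```shell
#!/bin/sh
set -e

# Stand-in for a raw device: a file whose size is NOT a multiple of 1M,
# so the final read is a partial block.
dd if=/dev/urandom of=source.img bs=1K count=2049 2>/dev/null

# Image it the same way you would a device.
dd if=source.img of=copy.img bs=1M 2>/dev/null

# Verify: the checksums must match, or the image is not complete/correct.
src=$(sha256sum source.img | awk '{print $1}')
dst=$(sha256sum copy.img | awk '{print $1}')
[ "$src" = "$dst" ] && echo "images match"
```

The same sha256sum comparison works against a real device node, which is the simplest way to prove a dd (or ddrescue) image really captured everything.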

We use a combination of gparted and dd, since you asked.

Using dd, how do you make sure you get to the absolute end of the media? Set the block size to 512 bytes?

Likewise, how do you ensure you copied the first 512 bytes correctly?

It has been my experience with dd - and hard drives - that cluster-size boundaries, partition boundaries, and raw media boundaries often do not match.

In the past I have tried copying media using dd, (using what I believed to be the correct cluster size of 4096 bytes, or something else other than 512 bytes), and found that the physical media/partition didn’t end on an even multiple of that block size.  Because the end of the media rarely falls on an even multiple of the copy-block size, the final read would fail.  This caused essential data located at the very end of the drive to be missing or corrupt.

Also, (and no, I don’t know why), if I copy a hard drive from the very first sector to the very last, getting a raw file that is absolutely identical to the number of bytes on the media, the MBR never  copies correctly and the image will fail.  I work around this by making both a complete copy, AND  a copy of the first 512 bytes, (the MBR), and then restore the MBR after I have restored the rest of the image.
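The two-step workaround described above - full image plus a separate 512-byte MBR copy, restored in that order - can be sketched like this. A regular file stands in for the drive, and all file names are hypothetical:

```shell
#!/bin/sh
set -e

# Stand-in for the source drive.
dd if=/dev/urandom of=drive.img bs=512 count=100 2>/dev/null

# 1. Full raw copy of the whole device.
dd if=drive.img of=full.img bs=1M 2>/dev/null

# 2. Separate copy of just the first 512 bytes (the MBR).
dd if=drive.img of=mbr.bin bs=512 count=1 2>/dev/null

# Restore order: write the full image first, then re-write the saved MBR
# on top of it.  conv=notrunc keeps the rest of the target intact.
# (A no-op in this demo, since full.img already carries the MBR, but it
# shows the ordering described above.)
dd if=full.img of=restored.img bs=1M 2>/dev/null
dd if=mbr.bin of=restored.img bs=512 count=1 conv=notrunc 2>/dev/null

cmp drive.img restored.img && echo "restore OK"
```

The key detail is conv=notrunc on the MBR write-back, so dd overwrites only the first sector instead of truncating the freshly restored image.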

On the other hand, it has been my experience that ddrescue, (or dd_rescue), reads how big the media is before copying it.  When it reaches the end of the drive it subdivides the block size it reads to make sure it gets all the way to the very last byte.  It also copies the MBR correctly.  Though I admit a bit of paranoia, and copy the MBR separately anyway.   :wink:

I don’t know why this is true, and I have not been able to find an answer either.

We never noticed this issue but we do set the block size to 1M, and the number of blocks to copy.
bs=1M count=5500 (where 5500 varies depending on what we’re dding)
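That count= approach can be sanity-checked on a regular file. This is just a sketch - the file names and the 5 MiB figure are made up, standing in for a card and whatever count= value fits the image being made:

```shell
#!/bin/sh
set -e

# A stand-in "card" larger than the region we care about.
dd if=/dev/zero of=card.img bs=1M count=8 2>/dev/null

# Copy only the first 5 MiB, the way a bs=1M count=N invocation
# limits the image to the used region of the card.
dd if=card.img of=trimmed.img bs=1M count=5 2>/dev/null

# The result is exactly bs * count bytes.
stat -c %s trimmed.img   # 5242880
```

The obvious caveat (raised further down the thread) is that count= only works safely when you know the data really does live entirely below that boundary.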

I set the block size to 1M, but copy the whole thing. (I thought it was just empty space up at the end anyway.)

And that makes a dangerous assumption:  That the data is organized on the device in a perfectly linear fashion within the partition and/or device.  Unfortunately this is seldom true.

When you write to the filesystem on a device - any device - the O/S, driver, controller, and hardware controller on the device keep track of the mapping between “logical” sectors within the filesystem and “physical” sectors on the device.  In other words, clusters “A”, “B”, and “C” within the filesystem are absolutely NOT  guaranteed to correspond to physical sectors “x”, “y”, and “z”. In fact, they’re not even guaranteed to be in the same place five minutes from now.

As far as I know from reading the man pages, “dd” reads and writes at the raw device layer, bypassing the filesystem logic. In other words, if you ask dd to copy sector “1”, you will  get the first  user-visible sector on the disk.  Unless you specify an offset, reading 1 MB of data will retrieve the first 2,048 physical 512-byte sectors on the disk.  Are these the first 2,048 logical  sectors?  Maybe, but then again, maybe not.

======================================================

As an aside, the new “advanced format” disk drives should be copied with a block size that’s an even multiple of 4096 bytes - the hard-coded physical sector size on these devices.  Since these devices operate on fixed-length 4096-byte blocks, if you read or write something that’s not aligned on a 4096 byte boundary, the drive has to read, (and possibly re-write), a minimum of two  sectors to make sure the data fits, even if the data is smaller.

The “Advanced Format” spec that the hard drive manufacturers use now specifies a “requirement” that all reads and writes be aligned on a 4096 byte boundary. With the exception of the first million bytes, (which can be “natively” read in 512 byte blocks for compatibility reasons), every sector on the device is the size of an NTFS cluster - 4096 bytes.

And just in case you have been puzzled why the first meg of any large drive is skipped when partitioning - that’s the reason.  Partitioning tools align the partitions to a 4096 byte boundary by leaving the first meg empty for whatever boot and partition mapping is needed.

Of course, you can read or write any amount of data you want, but if you specify a data length that’s not  a multiple of 4096, misaligned accesses can, and probably will, take substantially longer than properly aligned ones.

This is why, when using a utility like “dd”, your copy block size should be an even multiple of 4096 bytes - it will make the copy WAAAAY faster.
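The alignment arithmetic is easy to check before running dd. A quick sketch (the 1M block size and the example length are arbitrary stand-ins, not values from any particular drive):

```shell
#!/bin/sh
# Check that a chosen dd block size is a multiple of the 4096-byte
# physical sector size of an Advanced Format drive.
BS=$((1024 * 1024))   # bs=1M
SECTOR=4096

[ $((BS % SECTOR)) -eq 0 ] && echo "bs=$BS is 4096-aligned"

# Round an arbitrary length up to the next 4096-byte boundary
# before computing a count= value.
LEN=10000000
ALIGNED=$(( (LEN + SECTOR - 1) / SECTOR * SECTOR ))
echo "aligned length: $ALIGNED"   # 10002432
```

Any power-of-two block size of 4K or larger (4K, 64K, 1M, …) passes the first check, which is why bs=1M works well in practice.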

Another dangerous assumption - especially with NTFS formatted volumes and GPT formatted drives.  NTFS places backup copies of the MFT and volume bitmap at the end of the partition.  GPT places a backup copy of the partition table and configuration dataset at the physical end of the drive.

If the end of the partition and/or drive gets corrupted/truncated, an NTFS volume will complain bitterly and a GPT partitioned drive won’t even mount.  Many operating systems and drive repair tools attempt to “fix” truncated NTFS volumes and GPT partitioned drives by copying the main copy of the data to where the backup “should” be; then fixing up the bitmaps and partition sector counts - which, (usually), corrupts the drive beyond all hope of repair.

I don’t know enough about the absolute structure of ext2-3-4 formatted volumes, but I do know that it places inodes and superblock copies all over Hell and Half of Texas. So the assumption that the “end” of a partition is “empty” is scary.

Even if the “end” of the partition shows “empty” on tools like GParted, that does not mean that the actual, physical, arrangement of the data is all concentrated at the beginning.

In the past, I have tried to reduce the size of a raw backup by using tools like GParted to reduce the partition size as much as possible, and “concentrate” the data all at the beginning.

If I am doing this to add another partition to the structure of the drive, (done at the logical level), it is virtually guaranteed to work.  When I have done this to try to reduce the physical size of a bit-for-bit copy, it’s an even money bet - sometimes worse - that the volume will come back intact when I go to re-write it.

I get gooseflesh when I read something like these two comments. . .

We use gparted before using dd to clean up the card. We have yet to encounter any issues related to SD card copying. Maybe we’ve been lucky? Or maybe the tools have gotten smarter?

Maybe both? (laughing!)