
Zach and Lunch

So it took a trip to Stack Exchange to get a solution to the problem I was having with using two different programs to unload a pipe. In a comment on a question I posted, it turns out I was close. On further investigation I found that dd wasn’t reading 1 MiB of data from stdin as I had hoped. Instead, it was reading 65516 bytes. That number seemed a bit odd, as it is not quite 64 KiB, which is 65536. In fact, it is exactly 20 less than 65536. Since we were transmitting the block number as a text field, we reserved 20 bytes for the value. This is because a 64-bit number can have at most 20 digits (although it is unlikely we would ever use that many).
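As a quick sanity check of that field width (a shell one-liner, not from the original scripts):

```shell
# The largest unsigned 64-bit value, 2^64 - 1, is 20 digits long,
# which is why 20 bytes are reserved for the text block number.
printf '%s' 18446744073709551615 | wc -c    # prints 20
```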

I’m not sure why dd was only working with 64 KiB when it was asked to use 1 MiB, but I found a parameter, iflag=fullblock, that solves the problem. The manual says it all, if you take the time to read everything:

Note if the input may return short reads as could be the case when reading from a pipe for example, ‘iflag=fullblock’ will ensure that ‘count=’ corresponds to complete input blocks rather than the traditional POSIX specified behavior of counting input read operations.

This was exactly the case I ran into. With this fix, I can now run only native commands on the remote machine for block synchronization.
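A minimal way to see the difference (file names are illustrative): push data through a pipe and ask dd for a fixed count of 1 MiB blocks.

```shell
# Send 4 MiB through a pipe. Without iflag=fullblock, dd may count each
# short pipe read as an input "block" and stop well before 4 MiB; with
# it, count= refers to complete 1 MiB input blocks.
head -c 4194304 /dev/zero \
  | dd bs=1M count=4 iflag=fullblock of=/tmp/out.bin status=none

stat -c %s /tmp/out.bin    # 4194304 bytes, the full amount
```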

January 22, 2022

Lessons Learned Using dd Command

Typically I use dd to mirror disks without too much thought. Today I ran into why that isn’t always a good idea. I picked up a 3 TB drive for backups. I have a 3 TB LUKS-encrypted drive I set up when I was first building the Snow-Dragon. I don’t need this space since the Data-Dragon houses around 40 TiB of storage, so the drive has sat largely idle for most of its existence. When I started looking at doing off-site backups, I asked how I would back this drive up should I decide to utilize the space in the future. That started me investigating a device backup that uses block hashes to track changes.
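The idea, sketched here with illustrative names and a small image file standing in for the real device, is to hash the device one block at a time; only blocks whose hash differs from the previous run need to be backed up again.

```shell
# Stand-in for the real device: a small image file.
truncate -s 4M /tmp/disk.img

# Hash the device one 1 MiB block at a time; names and sizes here are
# illustrative, not the original script's.
DEVICE=/tmp/disk.img
BLOCKS=4

for n in $(seq 0 $((BLOCKS - 1))); do
  dd if="$DEVICE" bs=1M skip="$n" count=1 2>/dev/null \
    | sha256sum | cut -d' ' -f1
done > /tmp/hashes.new
```

Diffing this list against the list saved by the previous run yields exactly the blocks that changed and need to be re-sent.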

With the basics of that system working, it was time to try it in an actual backup scenario. The first thing that needs to be done is duplicate the drive. After giving the disk a test, I started a disk copy using dd. It stopped after 16 GiB and complained the disk was full.

Before the test began, I had issued the following command to check on the drive:

root@snow-dragon:~# fdisk -l /dev/sde
Disk /dev/sde: 2.75 TiB, 3000592977920 bytes, 732566645 sectors
Disk model:   FA GoFlex Desk
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Then the copy command:

root@snow-dragon:~# dd if=/dev/md1 of=/dev/sde bs=32K status=progress
16689266688 bytes (17 GB, 16 GiB) copied, 106 s, 157 MB/s
dd: error writing '/dev/sde': No space left on device
511089+0 records in
511088+0 records out
16747343872 bytes (17 GB, 16 GiB) copied, 106.669 s, 157 MB/s

Then after the copy command, the query resulted in this:

root@snow-dragon:~# fdisk -l /dev/sde
Disk /dev/sde: 15.61 GiB, 16747343872 bytes, 32709656 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

What happened? After some investigation I found that my source disk is organized into 512-byte sectors, and the new disk into 4096-byte sectors. The copy changed the reported size of the disk. I fought to reverse this and nothing seemed to work until a system reboot, after which the disk returned to reporting its true size. So I’m not sure what exactly fixed it.

So I went about partitioning the disk and copying the partition using dd rather than the entire disk.

New 3 TB drive arrived today. It has been quite cold lately with the temperature being around -9 °F (-23 °C) this morning so the first thing I needed to do was allow the drive to warm up. Then it was time for a disk test.

For testing I was going to use badblocks. However, I read this article, which suggests using a LUKS-based approach instead. It was painfully slow, but it did work.

The drive vibrates something terrible. I’ve run into this a number of times with newer drives. While I’m not happy about the vibration, this is just a backup drive, and since I can’t demonstrate an actual error I am unlikely to be able to get a replacement.

   Pictured are the results of my frame test.  As I had hoped, the LEDs provide enough light to get the painting to fluoresce.  There is some edge glow, but it isn't too bad.  The LEDs are bright enough that even a dim room light does not take away from the effect.  That means the test is successful.  I plan on assisting Kim with a future blacklight painting project, and this time I want to produce a very good painting surface and a nice self-illuminating frame.
   I had ordered a small 3 TB hard drive last week which has apparently been lost in shipping.  This is a first.
   Pictured is the dryer opened up.  It sometimes sounds like a truck and I wanted to see if there was an obvious reason why.  I suspected it might be the drive motor's centrifugal switch not engaging.  AC motors sometimes use a starting capacitor to get the motor moving, and then switch it off after the motor has reached some speed.  However, the centrifugal switch seems just fine.  I ran the motor without any load and it vibrates pretty heavily.  I suspect there is a bearing going out.  Time to look into either replacing the motor or the entire dryer.
   My post about needing to use modprobe to load the network block device driver was a mistake on my part.  I thought modprobe enabled the driver forever, but that is not the case.  I had to edit /etc/modules and add nbd.  That should solve the problem in the future.
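For reference, the persistent fix amounts to one line in /etc/modules (the standard location on Debian-family systems; the comment line is illustrative):

```
# /etc/modules: kernel modules to load at boot time, one per line.
nbd
```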
   Pictured is the frame from yesterday with its first coat of paint.  I started by using a metallic paint on the inside of the frame to reflect more of the black light.  I'm not sure if it will do much good, but that's why I tried it.  Then I needed to sand the edges where each segment of the frame met.  I did a poor alignment job during the glue-up, so a fair bit of material needed to be removed to line them up.  Something else to keep in mind for next time.
   Yesterday I started the process of making a large frame for the blacklight paintings done on New Year's Eve.  This is actually a test of a concept.  The frame extends out past the painting so that ultraviolet (blacklight) LEDs can be mounted on the inside edges.  The hope is that the frame will produce enough light to cause the painting to fluoresce.  I ordered some of the highest-density UV LED strip lights I could find, and they should arrive before the frame is complete.
   The first part of the process was to edge glue some 1"x4" boards which I did yesterday.  Today I cut the boards with the needed 45 degree angles to create the frame and glued them together.  For clamping I am using ratcheting tie down straps.  There are a couple of mistakes already, but none I care too much about.  The point of this operation is for testing, not presentation.


Read an article about an e-mail provider getting hit by a ransomware attack and taking people’s e-mail down. These days most people keep their e-mail on the e-mail server—something I’ve never done. While I have run my own e-mail server for over 20 years, I have never considered storing my e-mail on it. I’ve used Thunderbird since its release in 2004 and have always kept my e-mail saved locally. In fact, in the early 2000s, one of my tasks before going to a job site with my laptop was to synchronize my e-mail and make sure I closed my e-mail program on my main computer. When I got back home, the reverse process took place. I never saw this as a problem.

The trend for over a decade has been to move data into cloud storage, something I have vehemently resisted. One problem is that if you lose your Internet connection, you don’t have your data. More than that, however, you don’t have your data—someone else does. That means they have control over it. Paid storage solutions seem like a great way to put your data in shackles. You are at the mercy of the storage provider. They can decide to charge more, sell the company to someone else (who will probably decide to charge more), or fold. In addition, you have no idea what those people are doing with your data. Sure, there are privacy policies, but are they legally binding? And even if they are, what about bad actors? I personally would never consider putting any data online unless it was encrypted, and only with open, third-party encryption software.

So yesterday I was fairly excited because I thought I had created a method to do my block hash incremental backup over SSH. Turns out I made one mistake in that I was not actually sending the data—just the block number. Since my test used SSH on the local machine, the source file existed and the copy still produced matching files. However, this would have failed on an actual remote device.

The problem I was unable to solve was how to send two items over a single SSH pipe. The first was the block number that would initiate the dd command, and the second was the block data for dd to copy to that location. No matter what combination I tried, I could not get a setup where dd worked.

The only other solution was to write a small program for the receiving side that would simply take a block number followed by a block of data. This works, but it is less than ideal, mainly because it requires a program to be installed on the remote computer. Still, I have a functional setup for now and plan to test it out.
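A sketch of what that receiving program can look like, written here as a small shell script rather than the actual program (the target path, the 1 MiB block size, and the 20-byte space-padded text block number are assumptions based on the scheme described above):

```shell
# Receive pairs of (20-byte text block number, one block of data) on
# stdin, writing each block at its offset in the target.
TARGET=/tmp/backup.img         # stand-in for the real device
BLOCK_SIZE=$((1024 * 1024))    # 1 MiB blocks

while number=$(dd bs=20 count=1 iflag=fullblock 2>/dev/null) \
      && [ -n "$number" ]; do
  # Both dd invocations share this script's stdin, so this read picks
  # up exactly where the block-number read left off.
  dd of="$TARGET" bs="$BLOCK_SIZE" count=1 seek=$((number)) \
     conv=notrunc iflag=fullblock 2>/dev/null
done
```

On the sending side, each changed block would be emitted as a printf '%20d' of its block number followed by the block’s data, with the whole stream piped through ssh into this script.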