Yesterday's drive test is again inconclusive. One drive that reported no errors in the previous test now registered 63 million, and the drive that had errors reported none. Again, the errors were on a new drive with only 330 hours (about 14 days) of run time. The drive that did not appear previously also had no errors. When in doubt, test again. I'm also ordering new SATA cables, as wiggling the cables clearly helped and I don't trust the ones currently installed. Should have results sometime tomorrow. Might as well run the test over and over; random inconsistencies might become less random with repetition.
On Monday I did my first bike ride in a very long time. Despite the quarantine there are no bans on solitary outdoor activities such as cycling (see section 11.c). Temperatures were in the 50s with a mild wind from the north-west. I did the Martinsville-Waunakee loop. This ride usually takes about 2 hours, 10 minutes and burns about 1,500 Calories. This time it took me 2 hours and 29 minutes and I burned 2,113 Calories. The increased calorie burn is likely the result of being out of shape from not riding for so long.
The other day my new hard drive controller arrived, allowing the Data Dragon to address all 14 hard drives. There are thirteen 4 TB drives and a 32 GB SSD. I had suspected one drive I removed was no good because it failed to appear to the machine, but decided to give it a full test to find out. The test finished today, and the results are puzzling. I didn't notice at first, but only 12 of the thirteen 4 TB drives actually showed up. One of the others, which had been functional, did not appear. In addition, one of the new drives I picked up from Pluvius registered several million failures. I don't trust the results of this test.
   I decided to wiggle drive cables and start the test again.  This time, all 14 drives registered as being online.  The test takes around 20 hours so we'll find out then what happens.

April 01, 2020

Bad traffic from Amazon

Noticed a huge amount of traffic crawling through my site, all coming from the same subnet. I’ve noticed this kind of thing in the past and it is usually from China, but this time it was from a subnet owned by Amazon. Someone is running a script that is downloading everything from my site. Unlike most websites, DrQue.net is not sitting in a data center with giant Internet pipes, and I need to share that bandwidth. So I temporarily blocked a large block of IP space: 54.174.52.0/22. Initially I just blocked the class C starting at 54.174.55.0, but then I started seeing requests from 54.174.54.* and 54.174.53.* so I blocked those too.

sudo iptables -A INPUT -s 54.174.52.0/22 -j DROP

Now I just have to remember to remove that rule sometime in the future.
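
When the time comes, the same rule specification with -D instead of -A removes it:

sudo iptables -D INPUT -s 54.174.52.0/22 -j DROP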

March 31, 2020

Custom Python Message Of the Day (MOTD)

Data-Dragon's MOTD

I typically replace the motd (message of the day) on all my main Linux computers. Getting exactly what you want from motd isn’t always straightforward. Originally, the message of the day was stored in a file called /etc/motd. On Debian there is a set of scripts that generate all kinds of extra crap, located in /etc/update-motd.d/. Generally I empty this directory out except for one file, and edit that file to be the message of the day. On Proxmox, the message of the day is just the kernel name followed by the Debian legal message. The legal message comes from /etc/motd.

On the Data-Dragon, I truncated /etc/motd (it is just an empty file now). Then I replaced the contents of /etc/update-motd.d/10-uname with a Python script that generates a custom color pattern.
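
The scripts in /etc/update-motd.d/ just need to be executable and write to standard output, so a Python script with a shebang works fine. The actual Data-Dragon script isn’t reproduced here, but a minimal sketch of the idea looks something like this (the color pattern is only an example):

#!/usr/bin/env python3
# Minimal MOTD sketch: print the hostname inside a simple ANSI color band.
# Illustrative only; the real Data-Dragon script draws a different pattern.
import socket

COLORS = [31, 33, 32, 36, 34, 35]  # Red, yellow, green, cyan, blue, magenta.

band = "".join("\033[1;{}m=\033[0m".format(COLORS[i % len(COLORS)]) for i in range(40))

print(band)
print("  Welcome to \033[1m" + socket.gethostname() + "\033[0m")
print(band)

Remember to make the replacement executable (chmod +x) or run-parts will skip it.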
 

March 30, 2020

Cinnamon CPU temperature select

I use the Cinnamon desktop environment on my main console and a desktop applet called CPU Temperature Indicator to display my CPU temperature. A few days ago I became a little alarmed that my CPU temperature was being reported as 170°F/77°C. That’s way too hot for a normal CPU and I wondered if maybe my fans had stopped or something. After some investigation I discovered that the applet I was using to display CPU temperature was, in fact, displaying the GPU temperature. Not exactly sure why, but GPUs usually run hot, so 170°F isn’t that unreasonable. While this had been working, something must have changed to cause the applet to display a different sensor.

It doesn’t look like there is any way to select which sensor the applet displays, but while searching for a solution I ran across the directory where the applet is installed: ~/.local/share/cinnamon/applets/temperature@fevimu. There is no configuration file, but the source code for the applet is in applet.js. I opened this up and near the top was the line:

const cpuIdentifiers = ['Tctl', 'CPU Temperature' ]

I happen to know that my CPU temperature sensor is called “CPUTIN”, so I added that to the list:

const cpuIdentifiers = ['Tctl', 'CPU Temperature', 'CPUTIN' ]

I restarted Cinnamon (Ctrl-Alt-ESC) and that did the trick—the CPU temperature was being displayed.

March 29, 2020

Key Master

I mentioned in my encrypted ZFS test article that I want the ZFS key to come from a remote computer. When encrypting anything the question is: who are you trying to keep from viewing the data? In my case it is anyone who physically acquires the computer. If the computer is stolen, the data should be inaccessible. A simple passphrase would solve this, but there is the problem of automation. If the server reboots for some reason we don’t want it to wait around for someone to log in and remount the ZFS storage. Retrieving the key from a remote source is the alternative. This remote system is a key server, and for our setup it will take the name Key Master.

The requirements for the Key Master are very simple: serve keys only to authenticated clients. Since we are trying to prevent the keys hosted by the Key Master from being used outside the network, the Key Master should not talk to the Internet. The keys will also be stored in RAM so that if the Key Master is ever turned off/unplugged the keys are lost. We never want the plain text keys to end up written to disk, so we use a read-only file system, no swap space, and place the keys in a RAM disk.

For our setup the key is being hosted from a Raspberry Pi Zero W. The computer is set up headless, with wireless networking, SSH access, a read-only file system, and a RAM disk. This allows a remote computer to load a key while the key’s plain text is never stored anywhere but RAM. With a read-only file system and no swap we are guaranteed this.

Why is having the Zero W store the key secure? If a thief takes the Zero W, they are going to disconnect power. That will cause the key to be lost. In addition, the Zero W is very small and can easily be concealed, making it unlikely the thief would be able to find it. There are some other reasons I may elaborate on later, but for now we need to make this key server.

The setup is very simple. A base install of Raspbian Lite gets us started. Then follow directions to get a headless boot. Once the device was on the network I could assign it a static IP on our router, reboot, and SSH to the device. Then I did the basic housekeeping: change the default user name and password, apply updates, etc. I also disabled and uninstalled things I don’t need, following these directions for a fast boot.

To make the file system read-only I followed these directions. That worked fine and now my Pi Zero is able to run read-only. I can still remount the file system read/write to make changes. One item I wanted to add was SSH public-key-only logins with no passwords. That is as simple as editing /etc/ssh/sshd_config and changing PasswordAuthentication to no. Restart SSH and logins work only with an authorized public key.
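
For reference, the line in /etc/ssh/sshd_config and the restart (on Raspbian the SSH service is just called ssh):

PasswordAuthentication no

sudo systemctl restart ssh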

Each server that needs to fetch a key then has an account on the Key Master. I give each user a 1 MB RAM disk. While the keys are not that large, this allows the keys to be created and encrypted on the Key Master completely in RAM. The encrypted keys can be exported for archive with the plain text never leaving the safe system.
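
The per-user RAM disk is just a small tmpfs mount. A sketch of what the /etc/fstab entry might look like, using a hypothetical user named dataDragon (the mount point has to exist):

# Hypothetical 1 MB RAM disk for the user dataDragon.
tmpfs  /home/dataDragon/ramdisk  tmpfs  size=1M,uid=dataDragon,gid=dataDragon,mode=0700  0  0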

March 28, 2020

Data-Dragon Rebuild

Data-Dragon with new Motherboard

The new parts for the Data Dragon have arrived: an AMD Ryzen 5 1600 (6 cores/12 threads), a Gigabyte AX370-Gaming K5 motherboard with 8 SATA III ports, a 120 GB SBX Eco NVMe PCIe SSD, and 8 GB of Timetec PC4-19200 DDR4-2400 ECC RAM.

The computer isn’t complete. More RAM is on order as is an 8-port SATA card. For now, however, I can get the basics of the system setup.

After assembly, I installed the latest version of Proxmox, 6.1-7. This will be the hypervisor that runs the various virtual machines we will have. Initially the Data Dragon was just to be a data storage pool. However, with Zach’s departure we lose the house’s other server and need something to take over those virtual machines. The specifications for the Data Dragon were almost good enough; I just need more RAM.

March 27, 2020

Data-Dragon Drive Test Results

The test of all 12 hard drives in the Data Dragon has finished. The results: no failures. I’m actually confused by these results as I was sure at least one of the drives had issues. Right now the only drive I know has problems is a 4 TB drive from the original array that died some years ago and was replaced with a hot spare.

The test used the program badblocks. It writes and then verifies 4 patterns: 0x55, 0xAA, 0xFF, and 0x00. Since I had 12 drives to test, I used the script Bulk Harddrive Tester (BHT) to run all the tests in parallel. The test ran for 3 days, writing and reading back a total of 192 TB of data.
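
For a single drive, the destructive write test is along these lines (device name and block size here are just examples):

badblocks -wsv -b 4096 /dev/sdX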

I know some of the drives have issues. SMART reports a lot of errors found. These drives all have over 22,000 hours (2.5 years) of continuous runtime, and the report states most have a couple hundred to a few thousand reported errors.

So while the test has passed, I’m not sure I’m convinced. It looks like badblocks has the ability to write and verify a pseudo-random pattern. I might have to run one more test before I’m convinced the drives are functional.
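
That would be the -t random option, something like:

badblocks -wsv -t random /dev/sdX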

March 26, 2020

ZFS Encryption

With the ability to set up and write to an uncompressed ZFS file system, it is now time to do the same for an encrypted file system. First, I need to get rid of all the data I created yesterday. That requires destroying the ZFS pool and wiping the test drives.

zpool destroy tank

This will get rid of the pool. Scary how simple that line is, because now the storage is gone. So be careful using this as root because it won’t even ask; it will just do it. Copying zeros to the devices will erase all the old data.

dd if=/dev/zero of=/dev/sdb bs=1M
dd if=/dev/zero of=/dev/sdc bs=1M
dd if=/dev/zero of=/dev/sdd bs=1M
dd if=/dev/zero of=/dev/sde bs=1M
dd if=/dev/zero of=/dev/sdf bs=1M
dd if=/dev/zero of=/dev/sdg bs=1M
dd if=/dev/zero of=/dev/sdh bs=1M
dd if=/dev/zero of=/dev/sdi bs=1M

Viewing with a hex editor, it didn’t seem to overwrite the first bit of the drive, but everything else is empty. That’s good enough for the next test. We just don’t want any remnants of the plain text from the last test floating around.

Now to create the encrypted pool. I’ve read that the root pool shouldn’t be encrypted, so we’re going to do this in two parts: the first creates the pool, and the second the encrypted section of the pool.

zpool create -f -o ashift=12 -O mountpoint=none -O relatime=off -O compression=off tank /dev/sdb

We need a key, so we’ll make one up.

dd if=/dev/urandom of=zfsTestKey.bin bs=1024 count=1

For my setup I will not store the key on the VM, but that’s for later. For now we have a key file. Now use it to create an encrypted ZFS area.

cat zfsTestKey.bin | zfs create -o encryption=on -o keylocation=prompt -o keyformat=passphrase -o mountpoint=/dataDump tank/dataDump

Why am I piping in the key rather than using the file directly? Because later I will be piping in the file from a remote server.
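
The eventual idea is to fetch the key over SSH from the Key Master and pipe it straight in when loading the key. Roughly, with a placeholder host name and key path:

# Host name and key path are placeholders for the eventual Key Master setup.
ssh keyMaster cat /home/dataDragon/ramdisk/zfsTestKey.bin | zfs load-key tank/dataDump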

root@zfs-test:/mnt# df -h /dataDump/
Filesystem      Size  Used Avail Use% Mounted on
tank/dataDump   832M  256K  832M   1% /dataDump

800 copies of 1984 later, I have a mostly full drive.

root@zfs-test:/dataDump# df -h /dataDump/
Filesystem      Size  Used Avail Use% Mounted on
tank/dataDump   831M  809M   23M  98% /dataDump

The resulting data:

Most of the drive is like this. There are some blank spots and the header, but otherwise the drive is full of seemingly random data. No results when searching for “big brother”, and no plain text at all.

We have now created an encrypted ZFS file system, but how do we remount it?

cat zfsTestKey.bin | zfs load-key tank/dataDump
zfs mount tank/dataDump

root@zfs-test:~# df -h /dataDump/
Filesystem      Size  Used Avail Use% Mounted on
tank/dataDump   831M  809M   23M  98% /dataDump

So there we have it. Encryption seems to do what I think it should be doing. This is not a deep test of the encryption, but it does show that encryption is being applied.

March 25, 2020

ZFS Plain Text

I wrote about how I was considering using Proxmox with ZFS for the Data-Dragon. My first question about ZFS encryption was “can I see it work?” The reason I set up 8 fixed-size drives was so I had a raw binary representation of the drives with no magic caused by dynamic sizing.

First, I wanted to see that I could “see” plain text on an unencrypted ZFS pool. ZFS uses compression by default, so I’d have to disable that. My plan is then to make several copies of some text document. Then, with the VM shut down, I can use a hex editor to look at the raw data on the virtual drives. I should be able to find my plain text.

The hardware setup:

root@zfs-test:~# fdisk -l | grep "^Disk /dev"
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/sdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk /dev/sdc: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk /dev/sdd: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk /dev/sde: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk /dev/sdf: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk /dev/sdg: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk /dev/sdh: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk /dev/sdi: 1 GiB, 1073741824 bytes, 2097152 sectors

So I have 8 disks, sdb through sdi to use for the ZFS pool.

Create the uncompressed pool:

zpool create -f -o ashift=12 -O mountpoint=/dataDump -O relatime=off -O compression=off tank raidz /dev/sd[b-i]

The result:

root@zfs-test:~# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	tank        ONLINE       0     0     0
	  raidz1-0  ONLINE       0     0     0
	    sdb     ONLINE       0     0     0
	    sdc     ONLINE       0     0     0
	    sdd     ONLINE       0     0     0
	    sde     ONLINE       0     0     0
	    sdf     ONLINE       0     0     0
	    sdg     ONLINE       0     0     0
	    sdh     ONLINE       0     0     0
	    sdi     ONLINE       0     0     0
root@zfs-test:/# df -h /dataDump/
Filesystem      Size  Used Avail Use% Mounted on
tank            6.2G  256K  6.2G   1% /dataDump

So we have 6.2 GB of storage space for plain text. Now I needed some plain text. I decided a text version of George Orwell’s 1984 would work well. But a single copy might make it hard to find the text. So I threw together a little Python script to make a bunch of copies (a sketch of such a script follows the listing below). 2,000 copies later the drive space looks like this:

root@zfs-test:/dataDump# df -h /dataDump/
Filesystem      Size  Used Avail Use% Mounted on
tank            6.2G  2.0G  4.2G  33% /dataDump
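
The copy script isn’t anything special. A minimal sketch of the idea, with the source file name and copy count assumed:

#!/usr/bin/env python3
# Sketch: make many copies of a plain-text file so the text is easy to find
# on the raw devices.  The source file name and copy count are assumptions.
import shutil

SOURCE = "1984.txt"
COPIES = 2000

for index in range(COPIES):
    shutil.copyfile(SOURCE, "copy_{:04d}.txt".format(index))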

Then I shut down the ZFS test VM. When I look at the directory with the VM files I have the following:

que@snow-dragon:~/VirtualBox VMs/ZFS test$ ls -l *.vdi
-rw------- 1 que que 1075838976 Mar 27 17:18  zfs1.vdi
-rw------- 1 que que 1075838976 Mar 27 17:18  zfs2.vdi
-rw------- 1 que que 1075838976 Mar 27 17:18  zfs3.vdi
-rw------- 1 que que 1075838976 Mar 27 17:18  zfs4.vdi
-rw------- 1 que que 1075838976 Mar 27 17:18  zfs5.vdi
-rw------- 1 que que 1075838976 Mar 27 17:18  zfs6.vdi
-rw------- 1 que que 1075838976 Mar 27 17:18  zfs7.vdi
-rw------- 1 que que 1075838976 Mar 27 17:18  zfs8.vdi
-rw------- 1 que que 4831838208 Mar 27 17:18 'ZFS test.vdi'

The files zfs1.vdi through zfs8.vdi are all the virtual drives used by ZFS. I brought up a hex editor and had a look in zfs1.vdi.
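
A quicker check than scrolling through a hex editor, if you only want to know whether the text is present, is something like:

strings zfs1.vdi | grep -c -i "big brother"

A count of zero means no plain-text matches.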

Results

Success. Plain text was found. So test one verifies that without compression I am able to see plain text on an unencrypted ZFS drive.