
March 30, 2020

Cinnamon CPU temperature select

I use the Cinnamon desktop environment on my main console and a desktop applet called CPU Temperature Indicator to display my CPU temperature. A few days ago I became a little alarmed that my CPU temperature was being reported as 170°F/77°C. That’s way too hot for a normal CPU and I wondered if maybe my fans had stopped or something. After some investigation I discovered that the applet I was using to display CPU temperature was, in fact, displaying the GPU temperature. I’m not exactly sure why, but GPUs usually run hot, so 170°F isn’t that unreasonable. The applet had been working, so something must have changed to cause it to display a different sensor.

It doesn’t look like there is any way to select which sensor the applet displays, but while searching for a solution I ran across the directory where the applet is installed: ~/.local/share/cinnamon/applets/temperature@fevimu. There is no configuration file, but the source code for the applet is in applet.js. I opened this up and near the top was the line:

const cpuIdentifiers = ['Tctl', 'CPU Temperature' ]

I happen to know that my CPU temperature sensor is called “CPUTIN”, so I added that to the list:

const cpuIdentifiers = ['Tctl', 'CPU Temperature', 'CPUTIN' ]

I restarted Cinnamon (Ctrl-Alt-ESC) and that did the trick—the CPU temperature was being displayed.
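If you don’t already know your sensor’s label, the lm-sensors sensors command will list them, or you can read the hwmon files the applet itself consults. A minimal sketch of the hwmon approach — run here against a mock directory so it is self-contained; on a real system point HWMON at /sys/class/hwmon:

```shell
# Build a mock hwmon tree standing in for /sys/class/hwmon (chip name
# and values here are examples, not real readings).
HWMON="${HWMON:-/tmp/hwmon-demo}"
mkdir -p "$HWMON/hwmon0"
printf 'nct6775\n' > "$HWMON/hwmon0/name"
printf 'CPUTIN\n'  > "$HWMON/hwmon0/temp2_label"
printf '41000\n'   > "$HWMON/hwmon0/temp2_input"

# Walk each hwmon device and print chip, sensor label, and temperature.
for dev in "$HWMON"/hwmon*; do
  chip=$(cat "$dev/name")
  for label in "$dev"/temp*_label; do
    [ -e "$label" ] || continue
    input="${label%_label}_input"
    # hwmon exposes temperatures in millidegrees Celsius
    printf '%s: %s = %d°C\n' "$chip" "$(cat "$label")" "$(( $(cat "$input") / 1000 ))"
  done
done
```

Whatever label shows up next to the CPU reading is the string to add to cpuIdentifiers.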

March 29, 2020

Key Master

Key Master

I mentioned in my encrypted ZFS test article that I want the ZFS key to come from a remote computer. When encrypting anything the question is: who are you trying to keep from viewing the data? In my case it is anyone who physically acquires the computer. If the computer is stolen, the data should be inaccessible. A simple passphrase would solve this, but there is the problem of automation. If the server reboots for some reason we don’t want it to wait around for someone to login and remount the ZFS storage. Retrieving the key from a remote source is the alternative. This remote system is a key server, and for our setup it will take the name Key Master.

The requirements for the Key Master are very simple: serve keys only to authenticated clients. Since we are trying to prevent the keys hosted by the Key Master from being used outside the network, the Key Master should not talk to the Internet. The keys will also be stored in RAM so that if the Key Master is ever turned off/unplugged the keys are lost. We never want the plain text keys to end up written to disk, so we use a read-only file system, no swap space, and place the keys in a RAM disk.

For our setup the key is hosted from a Raspberry Pi Zero W. The computer is set up headless: wireless networking with SSH access, a read-only file system, and a RAM disk. This allows a remote computer to load a key whose plain text is never stored anywhere but RAM. With a read-only file system and no swap, we are guaranteed this.

Why is having the Zero W store the key secure? If a thief takes the Zero W, they are going to disconnect power, and that will cause the key to be lost. In addition, the Zero W is very small and can easily be concealed, making it unlikely the thief would be able to find it. There are some other reasons I may elaborate on later, but for now we need to make this key server.

The setup is very simple. A base install of Raspbian Lite gets us started. Then follow directions to get a headless boot. Once the device was on the network I could assign it a static IP on our router, reboot, and SSH to the device. Then I did the basic housekeeping: changing the default user name and password, updates, etc. I also disabled and uninstalled things I don’t need, following directions intended for a fast boot.

To make the file system read-only I followed these directions. That worked fine and now my Pi Zero is able to run read-only. I can still remount the file system read/write to make changes. One item I wanted to add was SSH public-key-only logins, no passwords. That is as simple as editing /etc/ssh/sshd_config, changing PasswordAuthentication to no, and restarting SSH; logins then require an authorized public key.
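The relevant fragment after the change (on Raspbian the file is /etc/ssh/sshd_config; disabling challenge-response as well closes the remaining password path — a sketch, not the whole file):

```
# /etc/ssh/sshd_config -- key-only logins
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
```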

Each server that needs to fetch a key then has an account on the Key Master. I give a 1 MB RAM disk for each user. While the keys are not that large, this allows the keys to be created and encrypted on the Key Master completely in RAM. The encrypted keys can be exported for archive with the plain text never leaving the safe system.
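The per-user RAM disk can be a tmpfs mount. A sketch of an /etc/fstab entry, assuming a hypothetical account named keyuser with uid/gid 1001 (the mode keeps the mount readable only by that user):

```
# 1 MB tmpfs RAM disk for hypothetical key-fetching account "keyuser" (uid/gid 1001)
tmpfs  /home/keyuser/ramdisk  tmpfs  size=1M,uid=1001,gid=1001,mode=0700  0  0
```

Because tmpfs lives entirely in RAM (and there is no swap on this system), anything written there vanishes on power loss, which is exactly the property we want for keys.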

1 comment has been made.

From Marc


November 07, 2020 at 10:24 AM

Good Afternoon,

I just stumbled into your blog and I am searching for the same setup for my xigmanas setup as I could not find any other solution for remote reboot etc.

Instead of a raspberry zero, I would like to use my usual raspberry.

I like the idea of using the ram for storing the key, but I struggle with the actual setup of the key and how to archive that.

So some more details guidance would be highly appreciated!


March 28, 2020

Data-Dragon Rebuild

Data-Dragon with new Motherboard

The new parts for the Data Dragon have arrived: an AMD Ryzen 5 1600 6-core/12-thread CPU, a Gigabyte AX370-Gaming K5 motherboard with 8 SATA III ports, a 120 GB SBX Eco NVMe PCIe SSD, and 8 GB of Timetec PC4-19200 DDR4-2400 ECC RAM.

The computer isn’t complete. More RAM is on order as is an 8-port SATA card. For now, however, I can get the basics of the system setup.

After assembly, I installed the latest version of Proxmox, 6.1-7. This will be the hypervisor that runs the various virtual machines we will have running. Initially the Data Dragon was just to be a data storage pool. However, with Zach’s departure went the house’s other server, and we need something to take over those virtual machines. The specifications for the Data Dragon were almost good enough; I just need more RAM.

March 27, 2020

Data-Dragon Drive Test Results

The test of all 12 hard drives in the Data Dragon has finished. The results: no failures. I’m actually confused by these results as I was sure at least one of the drives had issues. Right now the only drive I know has problems is a 4 TB drive from the original array that died some years ago and was replaced with a hot spare.

The test used the program badblocks, which writes and then verifies 4 patterns: 0x55, 0xAA, 0xFF, and 0x00. Since I had 12 drives to test, I used the script Bulk Harddrive Tester (BHT) to run all the tests in parallel. The tests ran for 3 days, writing/reading a total of 192 TB of data.

I know some of the drives have issues, as SMART reports a lot of errors found. These drives all have 22,000+ hours (2.5 years) of continuous runtime, and the report states most have a couple hundred to a few thousand reported errors.

So while the test has passed, I’m not sure I’m convinced. It looks like badblocks has the ability to write and verify a pseudo-random pattern, so I might have to run one more test before I’m convinced the drives are functional.
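The random-pattern pass would look something like this (a sketch; the device name is an example and -w is destructive, so it is shown commented out):

```
# Destructive write test using a pseudo-random pattern instead of the
# four fixed ones; -s shows progress, -v is verbose.
# badblocks -wsv -t random /dev/sdb
```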

March 26, 2020

ZFS Encryption

With the ability to setup and write an uncompressed ZFS file system, now it is time to do the same for an encrypted file system. First, I need to get rid of all the data I created yesterday. That requires destroying the ZFS pool and wiping the test drives.

zpool destroy tank

This gets rid of the pool. It’s scary how simple that line is, because now the storage is gone. Be careful using this as root because it won’t even ask; it will just do it. Copying zeros to the devices will erase all the old data.

dd if=/dev/zero of=/dev/sdb bs=1M
dd if=/dev/zero of=/dev/sdc bs=1M
dd if=/dev/zero of=/dev/sdd bs=1M
dd if=/dev/zero of=/dev/sde bs=1M
dd if=/dev/zero of=/dev/sdf bs=1M
dd if=/dev/zero of=/dev/sdg bs=1M
dd if=/dev/zero of=/dev/sdh bs=1M
dd if=/dev/zero of=/dev/sdi bs=1M
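Those eight commands collapse into a loop. A sketch, demonstrated here against scratch files so it is safe to run; substitute /dev/sd[b-i] on the test VM (destructive!):

```shell
# Wipe each target by streaming zeros over it. Scratch files stand in
# for the real /dev/sd[b-i] devices here.
mkdir -p /tmp/wipe-demo
for dev in /tmp/wipe-demo/sdb /tmp/wipe-demo/sdc /tmp/wipe-demo/sdd; do
  # count=1 keeps the demo small; omit it for a real device so dd
  # runs until the device is full
  dd if=/dev/zero of="$dev" bs=1M count=1 2>/dev/null
done
```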

Viewing the drives with a hex editor, it didn’t seem the very beginning of the drive was overwritten, but everything else is empty. That’s good enough for the next test. We just don’t want any remnants of the plain text from the last test floating around.

Now to create the encrypted pool. I’ve read that the root pool shouldn’t be encrypted, so we’re going to do this in two parts. The first creates the pool, and the second the encrypted section of the pool.

zpool create -f -o ashift=12 -O mountpoint=none -O relatime=off -O compression=off tank /dev/sdb

We need a key, so we’ll make one up.

dd if=/dev/urandom of=zfsTestKey.bin bs=1024 count=1

For my setup I will not store the key on the VM, but that’s for later. For now we have a key file. Now use it to create an encrypted ZFS area.

cat zfsTestKey.bin | zfs create -o encryption=on -o keylocation=prompt -o keyformat=passphrase -o mountpoint=/dataDump tank/dataDump

Why am I piping in the key rather than using the file directly? Because later I will be piping in the file from a remote server.

root@zfs-test:/mnt# df -h /dataDump/
Filesystem      Size  Used Avail Use% Mounted on
tank/dataDump   832M  256K  832M   1% /dataDump

800 copies of 1984 later, I have a mostly full drive.

root@zfs-test:/dataDump# df -h /dataDump/
Filesystem      Size  Used Avail Use% Mounted on
tank/dataDump   831M  809M   23M  98% /dataDump

The resulting data:

Most of the drive is like this. There are some blank spots and the header, but otherwise the drive is full of seemingly random data. No results for searching “big brother” and no plain text at all.

We have now created an encrypted ZFS file system, but how do we remount it?

cat zfsTestKey.bin | zfs load-key tank/dataDump
zfs mount tank/dataDump

root@zfs-test:~# df -h /dataDump/
Filesystem      Size  Used Avail Use% Mounted on
tank/dataDump   831M  809M   23M  98% /dataDump

So there we have it. Encryption seems to do what I think it should be doing. This isn’t a deep test of the encryption, but more a proof that encryption does seem to be applied.
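That remount is where the remote key will eventually come from: the key crosses the network over SSH straight into zfs, never existing as a local file. A sketch — the host name keymaster and the key path are assumptions, so the real commands are shown commented out:

```shell
# On the ZFS host, pull the key from the Key Master's RAM disk and feed
# it directly to zfs (commented out: requires the real hosts).
# ssh keymaster cat ramdisk/zfsTestKey.bin | zfs load-key tank/dataDump
# zfs mount tank/dataDump

# The same pipe shape with a local stand-in: the "key" exists only on a
# pipe, never as a file on the receiving side.
printf 'stand-in-key' | wc -c
```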

March 25, 2020

ZFS Plain Text

I wrote about how I was considering using Proxmox with ZFS for the Data-Dragon. My first question about ZFS encryption was “can I see it work?” The reason I set up 8 drives with a fixed size was so I had a raw binary representation of the drives with no magic caused by dynamic sizing.

First, I wanted to see that I could “see” plain text on an unencrypted ZFS pool. ZFS uses compression by default, so I’d have to disable that. My plan is then to make several copies of some text document. Then with the VM shut down, I can use a hex editor to look at the raw data on the virtual drives. I should be able to find my plain text.

The hardware setup:

root@zfs-test:~# fdisk -l | grep "^Disk /dev"
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/sdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk /dev/sdc: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk /dev/sdd: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk /dev/sde: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk /dev/sdf: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk /dev/sdg: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk /dev/sdh: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk /dev/sdi: 1 GiB, 1073741824 bytes, 2097152 sectors

So I have 8 disks, sdb through sdi to use for the ZFS pool.

Create the uncompressed pool:

zpool create -f -o ashift=12 -O mountpoint=/dataDump -O relatime=off -O compression=off tank raidz /dev/sd[b-i]

The result:

root@zfs-test:~# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	tank        ONLINE       0     0     0
	  raidz1-0  ONLINE       0     0     0
	    sdb     ONLINE       0     0     0
	    sdc     ONLINE       0     0     0
	    sdd     ONLINE       0     0     0
	    sde     ONLINE       0     0     0
	    sdf     ONLINE       0     0     0
	    sdg     ONLINE       0     0     0
	    sdh     ONLINE       0     0     0
	    sdi     ONLINE       0     0     0
root@zfs-test:/# df -h /dataDump/
Filesystem      Size  Used Avail Use% Mounted on
tank            6.2G  256K  6.2G   1% /dataDump

So we have 6.2 GB of storage space for plain text. Now I needed some plain text. I decided a text version of George Orwell’s 1984 would work well. But a single copy might make it hard to find the text, so I threw together a little Python script to make a bunch of copies. 2,000 copies later the drive space looks like this:

root@zfs-test:/dataDump# df -h /dataDump/
Filesystem      Size  Used Avail Use% Mounted on
tank            6.2G  2.0G  4.2G  33% /dataDump
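The copy step itself is trivial; the original used a small Python script, but a shell loop does the same job. A sketch against a scratch directory (on the VM the target was /dataDump and the count was 2,000):

```shell
# Make many copies of a seed text file so the plain text is easy to
# find in the raw device image. Paths and count are demo stand-ins.
mkdir -p /tmp/copies-demo
printf 'It was a bright cold day in April...\n' > /tmp/copies-demo/1984.txt
for i in $(seq 1 20); do
  cp /tmp/copies-demo/1984.txt "/tmp/copies-demo/1984-$i.txt"
done
ls /tmp/copies-demo | wc -l
```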

Then I shut down the ZFS test VM. When I look at the directory with the VM files, I have the following:

que@snow-dragon:~/VirtualBox VMs/ZFS test$ ls -l *.vdi
-rw------- 1 que que 1075838976 Mar 27 17:18  zfs1.vdi
-rw------- 1 que que 1075838976 Mar 27 17:18  zfs2.vdi
-rw------- 1 que que 1075838976 Mar 27 17:18  zfs3.vdi
-rw------- 1 que que 1075838976 Mar 27 17:18  zfs4.vdi
-rw------- 1 que que 1075838976 Mar 27 17:18  zfs5.vdi
-rw------- 1 que que 1075838976 Mar 27 17:18  zfs6.vdi
-rw------- 1 que que 1075838976 Mar 27 17:18  zfs7.vdi
-rw------- 1 que que 1075838976 Mar 27 17:18  zfs8.vdi
-rw------- 1 que que 4831838208 Mar 27 17:18 'ZFS test.vdi'

The files zfs1.vdi through zfs8.vdi are all the virtual drives used by ZFS. I brought up a hex editor and had a look in zfs1.vdi.


Success. Plain text was found. So test one verifies that without compression I am able to see plain text on an unencrypted ZFS drive.
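For a quicker check than a hex editor, grep -a (treat a binary file as text) can confirm the plain text is in the raw image. A sketch against a scratch file standing in for zfs1.vdi:

```shell
# A scratch "image" with text embedded among binary junk, standing in
# for the real zfs1.vdi.
printf 'junk\000junk BIG BROTHER IS WATCHING YOU junk\000junk' > /tmp/image-demo.bin
# -a treats the binary file as text; -o prints just the matched text
grep -a -o 'BIG BROTHER' /tmp/image-demo.bin
```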

   In order to test my old and new drives in the Data-Dragon, I needed to get an OS on the test computer.  I initially was going to go for the latest version of Ubuntu Server, but the installer was having problems.  So I backed up to the long-term stable version, which did fine.  With the computer running I started the drive testing using BHT.  This is expected to take a few days to complete.


   I was looking around for a better way to test my bunch of hard drives and came across someone who sold drives on eBay.  They did a burn-in test using a script called Bulk Harddrive Tester (BHT).  This uses a program called badblocks to do the test, which writes and then verifies 4 patterns: 0x55, 0xAA, 0xFF, and 0x00.  Badblocks only runs on a single hard drive, so the script allows it to run on multiple drives at once.
   The problem I had was that BHT is written for KornShell.  I've been using a Knoppix live disk for testing and it does not have KornShell installed.  I also don't have an Ethernet connection on this machine at the moment.  So I figured now was the time to get this system better set up.
   First, I installed 4 additional 4 TB hard drives I picked up from Pluvius.  These are to replace any failing drives my testing finds.  I figure if I'm going to test, let's test everything, old and new.  I also replaced all the old 100 mm fans in the system.  The case is from the Blue Dragon, commissioned back in 2008.  All of the 100 mm fans were weak, and one was completely dead.  So I replaced them all with fluid-bearing fans.  Hopefully I get 12+ years out of those.  After the rebuild, I moved the machine to a location where I could give it an Ethernet connection.

March 22, 2020

Data Dragon Drive Tests

I’ve been using a Knoppix disk to test the drives from the Data Dragon. The SMART results are not looking so good. The Data Dragon started with 8x 4 TB HGST MegaScale drives. I added a hot spare some time ago when there was a reported drive error. Knoppix has a SMART tool, which discovered the following:

[Table: drive serial numbers and the last error reported for each drive]
Clearly there are some drives that have errors. One of the drives, EU7H, won’t even show up. My next task was to run the SMART self-tests. I first ran the short test, followed by the long test, for each drive. All of them, except the unresponsive drive, reported success. I don’t trust these results, so more testing is required.
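If done from the command line, smartmontools can run the same self-tests the Knoppix GUI tool kicks off. A sketch (device name is an example; the long test should be started only after the short one completes):

```
# Start the short, then the long, self-test, and read back the results.
# smartctl -t short /dev/sdb
# smartctl -t long /dev/sdb
# smartctl -l selftest /dev/sdb
```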