
December 31, 2021

Good Bye 2021

Kim and the Blacklight Lion

   A long day of setup for the evening's gathering.  The living room needed to have the truss installed, and then the entire room was covered in black canvas.  I hung all five 4-foot double T12 fixtures for the maximum amount of blacklight.  In addition, I hung the 4 blank canvases.  The group worked for about 6 hours to get the house ready, and shortly after we finished, people started to arrive.
   Pictured is Kim in front of her neon lion painting.  She said the gessoed canvas worked much better than the one she painted for 2020 NYE.  The results were impressive.  Now if only I had picked up canvas without a seam running through it.
 
Gessoing Canvas

   I have four stretched canvases, roughly 5'x5' in size.  Kim noted that the last time she painted, the canvas was drinking in the paint and it was hard to get much buildup.  So this time I picked up gesso to prime the canvases.  It turns out a gallon only covered two and a half canvases, so I have two canvases with 2 coats, one with a single coat, and one with no coat.
   After the gesso was dry I hung the canvases and used my sprayer to coat them in black paint.  This is a blacklight paint party and we have neon paint, so a flat black is exactly what is needed for the background.  Nothing about the gessoing and painting process was difficult or took too long, but I spent most of the day on it as I needed to wait for the paint to dry.  By the end of the day I had 4 canvases ready for tomorrow's event.
 
Canvas Frame

   Building some canvas frames for the coming New Year's Eve blacklight paint gathering.  I did this for the New Year of 2020 and it went over really well.  For that event I made 3 canvases and reserved one for Kim of Witte Artistry.  Initially I thought we'd recycle the frames with fresh canvas for each painting party, but everyone liked Kim's work so much we ended up keeping it.  In addition we had a request for two reserved canvases.  So I picked up wood for two more canvases.  My hardware store had painter's canvas on sale.  Sadly, all sizes but the one I wanted were on sale.  So I bought a larger size and cut them down.  There is a seam in the canvas, but for how we are using the canvas I don't feel bad about that.

December 28, 2021

A Credit Story

Elmwood Park

When I moved into Elmwood Park in 2013, I had a credit score of 0 (or so I was told). Although I had bought a truck with a secured loan in early 2000, that loan had long since fallen off my credit record. There was simply nothing on my record because I paid cash for everything. Knowing I would one day want to buy a house, I went to my bank and talked to them about how to build my non-existent credit.

The credit-building process started with a secured credit card—one for which I had money locked in an account I couldn't touch in case I defaulted on payment. I used the credit card to buy gas and nothing else, and had it scheduled to automatically pay itself off each month. This established a payment history, and the next year I had the secured backing removed from the card. Then, at roughly 6-month increments, I added to my credit: I increased the ceiling on my credit card, opened an overdraft protection account, and picked up another credit card.

Around June of 2016 I went to the bank to see about getting a personal loan, just to have a loan record on my credit report. By this time my credit score was around 750, and the bank told me I would already get the best rates they had to offer. We approached the landlord to inquire about possibly buying Elmwood Park, and to our delight they said they had already talked about us being the last renters.

The next year was spent saving the rest of the house down payment. Then we had an appraisal done on Elmwood Park and used that number as the starting point for an offer to buy. The landlord made a counteroffer, which we found acceptable and took. It took months to get the land contract and loan paperwork in order, but the house finally closed on February 9 of 2018. By the time the loan rate was locked in, my credit score was around 780, and that got me the best 15-year rate at the time: 3.5%.

Interest rates are currently extremely low. Although my loan was less than 4 years old, I looked into refinancing. Refinancing has costs involved, and I calculated I would only save about $3,000 over the lifetime of the loan. That isn't a huge savings, but enough to consider. However, my required monthly payment would go from just under $1,500/month to just under $900/month. There is peace of mind in knowing I'm only responsible for $900/month should the economy shake things up.

When I bought Elmwood Park I had a large down payment (around 25%), and paying more than required means I had a lot of equity. Since the payments on all my credit items are automatic, my credit score has matured nicely. When I started the refinance process I had a credit score of 814. With that number, the person I worked with at the bank was happy to throw money at me. The time period since my initial loan was short enough that I didn't have to get a new appraisal, which saved on closing costs. The process took a little over a month and a half to complete, but I got locked into the lowest mortgage interest rate available: 2.125%, a full 1.375 points lower than my initial loan.

As part of the refinance process I also threw a bunch of money at the loan principal. My new goal is to have the house completely paid off by the start of 2025. I've kept a spreadsheet of my loan since the beginning, and it has always matched perfectly with the bank. So I've adjusted the numbers and shouldn't have a problem making this new 2025 goal. That's a house in 7 years on a 15-year loan.

December 27, 2021

NBD for Encrypted Backups

I would like to have off-site backups that are inaccessible if somehow the backup drive/machine is stolen or compromised. The easy answer is to simply encrypt the backup drive. That works just fine if a drive is physically stolen. However, if an attacker got onto the backup server while the server had the backup drive mounted, they would have access to the plaintext data.

I started looking into using a Network Block Device (NBD) to address this problem. NBD simply makes a remote block device, such as a disk, appear local. Since LUKS just needs a block device, we can apply the encryption to the network block device just as if the drive were local. The benefit here is that the remote machine never sees any plaintext—only encrypted data. Even if an attacker could get the entire setup, drive, computer, and all, even while running, they would have no advantage because the plaintext isn't handled by the backup system in any way.

So, how does this setup work? First, we need to install the NBD server on the backup machine.

apt install nbd-server

Once installed, we need to set up a block device to share. For that, we edit the file /etc/nbd-server/config.

[generic]
[exportName]
        exportname = /dev/sda

Everything else currently in the file can be removed. The default setup looks for additional configuration files in /etc/nbd-server/conf.d. Unless you plan on having a lot of shared network block devices, I wouldn't bother using that. Just remove everything and add our single share.

It is important, if using an actual block device (in our case, /dev/sda), to remove the alternate user and group settings. This is because block devices can only be accessed by root. Using the nbd user won't work, and when attempting to connect to the block device the following error will occur:

Negotiation: ..Error: Connection not allowed by server policy. Server said: Access denied by server configuration
Exiting.

With the block device set up, restart the service.

service nbd-server restart

Now, on the machine that will use the block device, install and activate nbd-client.

apt install nbd-client
modprobe nbd

The machine is now ready to connect to our Network Block Device.

nbd-client <backup machine> /dev/nbd0 -N exportName

Here, backup machine is either the backup machine's IP address or name. The name in the last parameter is the export name from our config file. The device created is /dev/nbd0, the first network block device. By default there are 16, nbd0 through nbd15, and any of them can be used.
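With /dev/nbd0 attached, the LUKS layer can be applied on the client exactly as with a local drive. A minimal sketch, assuming a fresh export; the mapping name backup_crypt and mount point /mnt/backup are example names, not anything from the server configuration:

```shell
# One time only: create a LUKS container on the network block device.
# WARNING: this destroys any existing data on the export.
cryptsetup luksFormat /dev/nbd0

# Open the container; the key and plaintext exist only on this client.
cryptsetup luksOpen /dev/nbd0 backup_crypt

# One time only: create a filesystem inside the container.
mkfs.ext4 /dev/mapper/backup_crypt

# Mount for use; the server only ever sees the ciphertext on /dev/nbd0.
mount /dev/mapper/backup_crypt /mnt/backup
```

From here any backup tool can write to /mnt/backup, and everything crossing the network or landing on the remote disk is encrypted.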

Now this works fine on a local network, but an off-site backup server probably isn’t on the local network. For this we can use an SSH tunnel. The new version of the protocol runs on port 10809. On the client computer we can establish a tunnel like this:

ssh -N -L 2000:127.0.0.1:10809 <server name> &

Here we create a tunnel on the client machine at port 2000 that tunnels to 10809 on the backup server. Now we can connect to the NBD like this:

nbd-client 127.0.0.1 2000 /dev/nbd0 -N exportName

In this version of the command line we specify the port number. This is left over from the old version of the protocol, but it is completely functional when using tunneling. With an SSH tunnel, our backup server can be anywhere on the Internet.

For a bit more security, we can edit the NBD server configuration file and only allow tunnel connections.

[generic]
        listenaddr = 127.0.0.1

[exportName]
        exportname = /dev/sda

This will only allow connections from localhost, which is where the SSH tunnel delivers its traffic.
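When a backup session finishes, everything should be torn down in reverse order. A sketch, assuming a LUKS mapping named backup_crypt mounted at /mnt/backup (example names):

```shell
# Flush and unmount the plaintext filesystem.
umount /mnt/backup

# Close the LUKS mapping so the key is dropped from client memory.
cryptsetup luksClose backup_crypt

# Detach the network block device.
nbd-client -d /dev/nbd0
```

After this, the backup server holds nothing but ciphertext, and the client holds no open connection or key material.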

December 26, 2021

GPG Key Generation Without e-mail Address

Generating a new GPG key is a pain. I won't use an e-mail address in my GPG key, and most interfaces won't let you generate a key without a valid e-mail address. However, if one uses batch mode, this can be accomplished.

First, create a file that has the key details.

que@Snow-Dragon:~$ nano gpgGeneration

Add the following to the file, tailored to your needs:

%echo Generating a OpenPGP key
Key-Type: RSA
Key-Length: 4096
Name-Real: John Smith
Name-Email: www.example.com
Expire-Date: 400
Passphrase: RATH7MjVcyScPDzp8gtAuH3whWe9rW2jQTUgay6wwwpjzhL8XAq6LyTj9jSDuL6H
%commit
%echo done

For the passphrase I used a randomly generated one that is then changed right away—the random one is just used to create the key. The name and e-mail address should be modified according to one’s needs. The trick here, and why we are using this method, is that the e-mail address can be a website (anything really) which is what I am after.

Now create the key:

que@Snow-Dragon:~$ gpg --batch --generate-key gpgGeneration
gpg: Generating a basic OpenPGP key
gpg: key B9FFB6D8F9073D1A marked as ultimately trusted
gpg: revocation certificate stored as ‘/home/que/.gnupg/openpgp-revocs.d/907F7F1375F784AEB8252A21B9FFB6D8F9073D1A.rev’
gpg: done

Now change the passphrase:

gpg --change-passphrase B9FFB6D8F9073D1A

Note that the parameter is the key ID reported in the output of the previous command.

This step is important! You do not want the plaintext of the passphrase for your secret key in a file. Even if you delete the file, the data isn’t gone. So use a temporary passphrase and change it right away.
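The whole procedure can be scripted so the throwaway passphrase is generated, used, and destroyed without ever being typed. A sketch, reusing the example name, e-mail, and file name from above; the final passphrase change is still interactive:

```shell
# Generate a throwaway passphrase used only during key creation.
TMP_PASS=$(head -c 48 /dev/urandom | base64 | tr -d '\n')

# Write the batch-mode parameter file.
cat > gpgGeneration <<EOF
Key-Type: RSA
Key-Length: 4096
Name-Real: John Smith
Name-Email: www.example.com
Expire-Date: 400
Passphrase: $TMP_PASS
%commit
EOF

# Create the key non-interactively.
gpg --batch --generate-key gpgGeneration

# The parameter file held the passphrase in plaintext; remove it thoroughly.
shred -u gpgGeneration

# Look up the new key's ID and change the passphrase right away (interactive).
KEY_ID=$(gpg --list-keys --with-colons "John Smith" | awk -F: '/^pub/ {print $5; exit}')
gpg --change-passphrase "$KEY_ID"
```

Note that shred is not a guarantee on journaling or copy-on-write filesystems, which is all the more reason the temporary passphrase must never be the real one.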

December 22, 2021

Reverse File Encryption for Backups

At work we have a client with an important requirement about the work we do: at the end of the project, all artifacts will be transferred to them and we will not retain a copy. This is important to them in order to protect their intellectual property. If we do not have a copy, we cannot leak their data. When the project concludes, we give them all their data and delete it.

While isolating their data isn't too difficult, how does one deal with backups? Running without backups isn't a good idea, so backups are needed, but there is the risk of those backups being compromised by an attacker. And the requirements state we must remove all IP from our systems at the conclusion of the project. Erasing backups goes against the purpose of making the backups. So what is the alternative?

The solution is EncFS running in reverse mode. This allows a directory to be mirrored in encrypted form. The backup service can then run on this ciphered mirror. When the project finishes, the key to the ciphered mirror is discarded. Thus, while the data exists in backups, it is moot as there is no way to access it.

For our setup, I generate a random encryption key and store it on a USB drive. Should backups be needed, this USB drive will allow the backups to be deciphered. Once the encrypted volume is mounted, the drive can be taken to a secure location until the project ends. After that, the drive can either be securely erased, destroyed or given to the client.

Here are the steps needed to setup such a system on a Debian server:

Install EncFS.

sudo apt install encfs

Create a random password. This should be stored on an external drive such as a USB disk. Note this file must be kept secure. The following will create a 512-bit random key, saved in base-64.

dd if=/dev/urandom bs=64 count=1 | base64 -w 0 > /mnt/usb/password.b64.txt

For this example we use two paths: /srv/project contains the data we need to have backed up. /srv/backup is the mount point where the encrypted data will appear and what the backup services uses.

Now to create an encrypted mount for this:

encfs --public --reverse --extpass="cat /mnt/usb/password.b64.txt | base64 -d" /srv/project/ /srv/backup/

Note that the public option allows other users to access the backup directory. This is useful as the backup service will likely run as a dedicated backup user who will need this access. By default this mount is read-only. Granting global read access won't be a problem as the encrypted data is safe by its nature. The extpass parameter tells encfs to read the password file and decode the base-64; the output from this command is used as the password.

After the command runs it will create a configuration file in the /srv/project directory. This configuration file is needed to decipher the backups and should be moved onto the USB drive.

mv /srv/project/.encfs6.xml /mnt/usb/encfs6.xml

Note that although the configuration file is absolutely required to decrypt the data, it does not contain the key. Thus it is safe to allow this file to be created locally and moved. The key, if truly desiring no traces, must remain on the USB drive and nowhere else.

Now everything in /srv/backup is encrypted and safe to allow a backup program to duplicate.
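Any ordinary file-level backup tool can be pointed at the ciphered mirror. As one hedged example using rsync (the destination user, host, and path here are hypothetical):

```shell
# Push the encrypted view off-site; only ciphertext ever leaves the machine.
# The trailing slash on the source copies the directory contents, and
# --delete keeps the mirror in sync with files removed from the project.
rsync -a --delete /srv/backup/ backup@offsite.example.com:/backups/project/
```

The backup tool needs no knowledge of EncFS at all; it just sees opaque files and directory names.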

To unmount:

sudo umount /srv/backup

To mount in the future, use:

sudo encfs -c /mnt/usb/encfs6.xml --public --reverse --extpass="cat /mnt/usb/password.b64.txt | base64 -d" /srv/project /srv/backup

The key file will not be needed again unless the ciphered directory is unmounted and must be remounted, so it should be moved to a secure location.

Should the backups be needed, they can be restored to some location like any backups. Then the encrypted directory needs to be mounted to see the plaintext.

Mount the encrypted backup to restore:

encfs -c /mnt/usb/encfs6.xml --extpass="cat /mnt/usb/password.b64.txt | base64 -d" /srv/restoredCipher /srv/restorePlaintext

This command assumes the backup has been restored to a directory called /srv/restoredCipher and there is an empty mount point called /srv/restorePlaintext.

The plaintext directory should have a perfect copy of the original data. To verify a copy correctly matches the original, run this command in both /srv/project and /srv/restorePlaintext:

find . -type f | sort | xargs sha256sum | sha256sum

Note: you must be in the directory for this to work or the path names will show up and the sums will not match.

The command will get a list of all files, sort the list, generate an SHA-256 sum for each file, and then sum the list of sums. It is effectively the SHA-256 sum of all files. If the two match, the two trees are the same.
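The comparison can also be wrapped in a small script. A sketch, demonstrated here against two throwaway directories so it is self-contained; in practice the two arguments would be /srv/project and /srv/restorePlaintext:

```shell
# Compute a single checksum over an entire directory tree.  The subshell
# cd keeps path names relative so two copies of the same tree match.
tree_sum() {
    ( cd "$1" && find . -type f | sort | xargs sha256sum | sha256sum | cut -d' ' -f1 )
}

# Demonstrate with two throwaway copies of the same tree.
mkdir -p /tmp/sumdemo/a /tmp/sumdemo/b
echo "hello" > /tmp/sumdemo/a/file.txt
cp /tmp/sumdemo/a/file.txt /tmp/sumdemo/b/file.txt

if [ "$(tree_sum /tmp/sumdemo/a)" = "$(tree_sum /tmp/sumdemo/b)" ]; then
    echo "Trees match"
else
    echo "MISMATCH"
fi
```

Any difference in file contents or names changes the final sum, so a single string comparison verifies the whole tree.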

How well this works remains to be seen. Although the project will be designed and built in a Linux environment, the backups will be done with a Windows computer. Thus the file names are limited to 255 characters and are not case-sensitive. This should be alright as long as none of the project files have long names.