
My update to Mint caused Network Block Devices to stop working. The solution was simple:

sudo modprobe nbd
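
To make the module load on every boot, the standard approach on systemd-based distributions is to list it in /etc/modules-load.d:

echo nbd | sudo tee /etc/modules-load.d/nbd.conf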

The question is, why did this break? I’ve never been a fan of updates because stuff like this happens all the time. This one also changed my theme and made the title bars on windows larger. I absolutely hate things like that. I don’t care what the UX team decided looks better—don’t change my theme—especially without even asking! LibreOffice is notorious for this. With every other version it seems I get a full new set of icons and have to guess which ones they made for chart and link.

January 12, 2022

Synchronizing Large Files by Hash Blocks

Successfully tested synchronization of a block device using block hash comparisons. As outlined in the first article, I wrote a program that outputs the hashes of blocks of a file. These hashes can be compared against hashes saved on disk to detect changes to that block of the file. For backups we assume both files start in identical form. An initial set of block hashes is calculated and saved. Then, in theory, changes to the original file can be detected by looking for blocks that have a different hash, and sending those blocks to the backup file.
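
The program itself isn’t listed here, but a minimal sketch of the idea in bash, assuming the xxhsum utility from the xxhash package is installed, would look something like this (illustrative only; a purpose-built program is much faster):

#!/bin/bash
# Print "<block number> <hash>" for each 1 MiB block of the file given as the
# first argument.  Assumes the xxhsum utility is installed.
file=$1
blockSize=$(( 1024 * 1024 ))
fileSize=$( stat -c %s "$file" )
totalBlocks=$(( ( fileSize + blockSize - 1 ) / blockSize ))

for (( block = 0; block < totalBlocks; block++ )); do
  # Read one block and hash it; the hash is the first field of xxhsum's output.
  hash=$( dd status=none if="$file" bs=1M skip=$block count=1 | xxhsum | awk '{ print $1 }' )
  echo "$block $hash"
done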

To test this, I first needed a block device. We can create a file for this.

dd if=/dev/zero of=source.img bs=1M count=1024

This will create a 1 GiB file of all zeros. Now we can create a file system on that file.

mkfs.ext4 -E root_owner=$(id -u):$(id -g) source.img

Note that I am using the option to set the root owner. This will allow the current user to read/write to the file system. Otherwise, root privileges are needed. We now have a blank formatted disk. Next, we’ll make our backup copy of this blank file system.

cp source.img backup.img

Then an initial hash of the image needs to be taken.

./blockHash backup.img > source.hash.txt

At this point we now have a backup in sync with the original, and a method to track changes. So we need to change the file system, and to do that we need to mount the file system. We’re using the command udisksctl.

udisksctl loop-setup -f source.img
udisksctl mount -b /dev/loop0

Note that the output of the first command will display the loop device, which is /dev/loop0 unless other loop devices are already in use. The second command mounts it and displays a mount point in /media based on the current user name and the UUID of the mounted disk image. Go to that directory and add some files.

dd if=/dev/urandom of=/media/user/00000000-0000-0000-0000-000000000000/file.bin bs=1M count=32

This will create a 32 MiB file with random data called file.bin. Now unmount the drive.

udisksctl unmount -b /dev/loop0
udisksctl loop-delete -b /dev/loop0

I created a new program that calculates the block hashes and compares them against the initial hashes from a file. It outputs the block numbers that have changed. In addition, it writes the new set of hashes to a specified file.

./blockDifference source.img source.hash.txt new.hash.txt
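
A rough sketch of what such a program might look like (again, not the actual program), assuming the hash files contain one "<block number> <hash>" line per block and that xxhsum is installed:

#!/bin/bash
# Compare current block hashes of an image against a saved hash file.
# Changed block numbers go to stdout; all new hashes go to the new hash file.
# Illustrative only; the per-block awk lookup makes it slow on large images.
image=$1
oldHashFile=$2
newHashFile=$3
blockSize=$(( 1024 * 1024 ))
fileSize=$( stat -c %s "$image" )
totalBlocks=$(( ( fileSize + blockSize - 1 ) / blockSize ))

: > "$newHashFile"
for (( block = 0; block < totalBlocks; block++ )); do
  hash=$( dd status=none if="$image" bs=1M skip=$block count=1 | xxhsum | awk '{ print $1 }' )
  echo "$block $hash" >> "$newHashFile"

  # Look up the stored hash for this block and report the block number if it changed.
  oldHash=$( awk -v b=$block '$1 == b { print $2 }' "$oldHashFile" )
  if [ "$hash" != "$oldHash" ]; then
    echo $block
  fi
done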

The purpose of this output is to drive the copy of the changed blocks. We can actually use dd to do this copy. What we need is a dd command in the following format:

dd conv=notrunc if=source.img of=backup.img seek=<block> skip=<block> bs=1M count=1 

There are a couple of important items to note on this line. The conv=notrunc means the block will be copied without truncating the output file. Otherwise, the first block copied would end the file, and that isn’t desired. The seek and skip parameters are the key to this setup: skip specifies the block offset in the input file, and seek the offset in the output file. These are given in block-size multiples, so we set the block size to 1 MiB using bs=1M. The count specifies we just want to copy a single block.

Knowing this is the desired command format, we need a way to translate each block number from the difference program into a dd command. For this we can simply use xargs.

xargs -n 1 -I{} dd status=none conv=notrunc if=source.img of=backup.img seek={} skip={} bs=1M count=1 

Putting this all together:

./blockDifference source.img source.hash.txt new.hash.txt \
  | xargs -n 1 -I{} \
    dd \
      status=none \
      conv=notrunc \
      if=source.img \
      of=backup.img \
      seek={} \
      skip={} \
      bs=1M \
      count=1 

Running the above command will compare our current disk image file against the previously stored hashes. When the hashes don’t match, the block number is sent to stdout. That is piped to a dd command that will copy this specific block from our current disk image into the same location on the backup image. At the same time, all the block hashes are being saved to a new file called new.hash.txt.

If we compare source.img and backup.img there should be no changes.

cmp source.img backup.img

No output means no changes.

What this test shows is there is a fairly easy way to synchronize two large files by using block hashes.

January 11, 2022

Hashing Large Files by Blocks

Did an experiment last night where I piped the content of a 3 TB hard drive into xxHash in 1 MiB increments. This test was mainly to measure the time this takes, which turned out to be about 335 minutes (5 hours, 35 minutes). Why? I’ve been considering a different type of backup script to mirror an entire block device. This would allow encrypted drives (such as LUKS) to be backed up without the need to mount the drive. Here’s how it would work.

First, two drives are either mirrored or the raw content of one drive is copied to a file on the other. This is the initial backup.

Now some kind of checksum is made on blocks of the disk. For the experiment I used 1 MiB blocks and chose the extremely fast xxHash function. The hash does not need to be cryptographically strong—it just needs to be large enough to have a very low collision probability. That is, we just need a hash that can show if the content of a block has changed. The hash of each block is stored in a file.

When it is time to do a backup, the hash of blocks is again calculated. Blocks whose hash is not the same can then be updated on the backup drive. When this synchronization is finished, the new hash file is saved.

The point of taking hashes of blocks is to minimize the amount of data needing to be transferred to the backup disk. A few new files on a couple terabyte drive, even if that drive is encrypted, will only change a limited amount of the overall data. The backup system is able to detect that change and disseminate those changes to the backup location.

In theory this system would work. What I wonder is whether it would be fast enough to be worthwhile. The backup process always involves reading the complete content of the drive to be backed up in order to compare hashes. For my setup, this took about 5 and a half hours. The second part is the transfer of this data to the backup machine. Here the block size is important because all changes happen only in block-size increments. That test still needs to be conducted. For now, I have the answer about the time needed to hash an encrypted drive.
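
A rough way to reproduce this kind of timing measurement, assuming the xxhsum utility from the xxhash package and with the drive path as a placeholder:

time sudo dd if=/dev/sdX bs=1M status=progress | xxhsum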

The Red-Dragon was still running Ubuntu 16.04 LTS. The computer runs for about 20 minutes a day to do backups. Guess I should run through the update process. My history with Ubuntu dates back to March of 2007 when I installed Ubuntu 6.06 LTS on the Black Dragon. These days I’ve been moving away from Ubuntu and more toward Debian.

I mainly did the update because the version of NBD on the Red-Dragon would not talk to the one on the Snow-Dragon. After updates this was fixed. The problem was, wake-on-LAN stopped working. Somewhere I had installed a script to set up wake-on-LAN, but it was removed or not called after the update process. I followed these directions and got it running again. I am far more comfortable with systemd than I used to be and have started using it over init.d.
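
For reference, a unit along these lines will enable wake-on-LAN at boot (this is a sketch rather than the exact script from those directions, and the interface name eth0 is an assumption):

sudo tee /etc/systemd/system/wol.service > /dev/null <<EOT
[Unit]
Description=Enable Wake-on-LAN

[Service]
Type=oneshot
ExecStart=/sbin/ethtool -s eth0 wol g

[Install]
WantedBy=multi-user.target
EOT
sudo systemctl daemon-reload
sudo systemctl enable --now wol.service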

January 09, 2022

Google Forcing Data Collection

Back in 2010 Google got flak for collecting Wi-Fi data without allowing people to opt out. Since then they have had a “high resolution” GPS mode which is enabled by default. Up until a month or two ago, Google Maps worked fine without high resolution mode enabled. Now it isn’t possible to enable navigation unless this mode is turned on.

I’ve watched how Google has tried to push this mode on people since the enable option was first introduced. At some point, every time you tried to get your location on Google Maps, a pop-up would tell you that high resolution mode wasn’t enabled and asked if you wanted to change it. Seems they were doing everything to push this mode onto users.

Now, there is no choice. You have to use this mode or not use maps. This seems to be Google’s business model when it comes to privacy. Get user blow-back? Apologize, make it seem like you’ve done something about it, and then slowly reintroduce your intrusive behavior while no one is paying attention. It’s not like there is a real alternative.

January 08, 2022

Remote Backups using NBD Scripts

Yesterday I wrote about setting up a remote encrypted network block device that can be used for backups. In this article we will follow up by using that setup to do automated backups.

For automation it is easiest just to set up a cron job on the client machine to run rsync at whatever interval you want backups. With the SSH tunnel open, the NBD mapped, and the LUKS drive mounted, you could just leave the setup in place and run rsync. However, that isn’t always the best option, as you would have to reestablish the link any time the computer resets or the SSH tunnel goes down for some reason.
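
A typical root crontab entry would look something like this, with the schedule and script name as placeholders for the backup script developed below:

0 3 * * * /usr/local/sbin/backup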

For doing this I have made two scripts. One will handle mounting and dismounting the backup server, and the other will use this script to do backups. Let’s start with the fairly lengthy mount script:

#!/bin/bash
#========================================================================================
# Uses: Control/monitor the mounting of remote encrypted network block device.
# Date: 2022-01-06
# Author: Andrew Que <https://www.drque.net>
#========================================================================================

# Drive server.
server=<backup server name or ip address>

# Username to use for tunnel.
user=backup

# Local port (can be any free port). 
# Using 11809 so as not to conflict with port 10809 on this machine.
tunnelPort=11809

# Tunnel control file.
tunnelControl=/var/tmp/backupServer.tunnel

# Name of NBD share.
nbdName=usb

# Network block device.
device=/dev/nbd0

# Partition on block device.
partition=/dev/nbd0

# LUKS name
luksName=backupServer

# Mount point
mountPoint=/mnt/backupServer

# Key file.
keyFile=<LUKS key file>

# Command to fetch key.
keyCommand="cat $keyFile"

#========================================================================================

#----------------------------------------------------------------------------------------
# Uses:
#   Setup a mount point to the backup machine.
# Output:
#   Returns 1 if there was an error, 0 if not.
#----------------------------------------------------------------------------------------
mountFunction() {
  # Flag set to 1 if an error is encountered.
  local isError=0

  if [ $isError -eq 0 ]; then
    # Setup a tunnel remote machine.
    echo -e "\tOpening tunnel..."
    ssh -M -S $tunnelControl \
      -o ExitOnForwardFailure=yes -f -N -L \
      $tunnelPort:127.0.0.1:10809 \
      $user@$server

    # Problems?
    if [ $? != 0 ]; then
      echo "Unable to make tunnel '$user@$server'." > /dev/stderr
      isError=1
    fi
  fi

  if [ $isError -eq 0 ]; then
    echo -e "\tConnecting to block device..."

    # Connect to network block device.
    sudo nbd-client 127.0.0.1 $tunnelPort $device -N $nbdName > /dev/null

    # Problems?
    if [ $? != 0 ]; then
      echo "Unable to connect block device '$device'." > /dev/stderr
      isError=1
    fi
  fi

  if [ $isError -eq 0 ]; then
    echo -e "\tUnlocking LUKS..."

    # Unlock LUKS device.
    local unlockCommand="sudo cryptsetup luksOpen --key-file=- $partition $luksName"
    eval "$keyCommand | $unlockCommand; "'PIPE=(${PIPESTATUS[@]})'

    # Problems?
    # Make sure to check both parts of the command.
    if [[ ${PIPE[0]} != 0 ]] || [[ ${PIPE[1]} != 0 ]]; then
      echo "Failed to unlock '$luksName'." > /dev/stderr
      isError=1
    fi

  fi

  if [ $isError -eq 0 ]; then
    echo -e "\tMounting drive..."

    # Mount unlocked LUKS device.
    sudo mount /dev/mapper/$luksName $mountPoint

    # Problems?
    if [ $? != 0 ]; then
      echo "Failed to mount '$mountPoint'." > /dev/stderr
      isError=1
    fi
  fi

  return $isError
}

#----------------------------------------------------------------------------------------
# Uses:
#   Unmount remote drive.
#----------------------------------------------------------------------------------------
unmountFunction() {
  getStatusFunction

  # Unmount
  if [ $isMounted == 1 ]; then
    echo -e "\tUnmounting..."
    sudo umount $mountPoint
  fi

  # Close LUKS if open.
  if [ $isLUKS == 1 ]; then
    echo -e "\tClosing LUKS..."
    sudo cryptsetup luksClose $luksName
  fi

  # Close network block device if open.
  if [ $isNBD == 1 ]; then
    echo -e "\tDisconnecting NBD..."
    sudo nbd-client -d $device
  fi

  # Close tunnel if open.
  if [ $isTunnel == 1 ]; then
    echo -e "\tClosing tunnel..."
    ssh -q -S $tunnelControl -O exit $server
  fi
}
#----------------------------------------------------------------------------------------
# Uses:
#   Get the status of each of the mount operations.
# Output:
#   $isMounted - 1 if mounted.
#   $isLUKS - 1 if LUKS is open.
#   $isNBD - 1 if network block device is attached.
#   $isTunnel - 1 if tunnel is open.
#----------------------------------------------------------------------------------------
getStatusFunction() {
  # Unmount
  mountpoint -q $mountPoint
  isMounted=$(( $? == 0 ))

  # Close LUKS if open.
  sudo dmsetup ls --target crypt | grep $luksName > /dev/null
  isLUKS=$(( $? == 0 ))

  # Close network block device if open.
  nbd-client -c $device > /dev/null
  isNBD=$(( $? == 0 ))

  # Close tunnel if open.
  sudo netstat -tunlp | grep $tunnelPort > /dev/null
  isTunnel=$(( $? == 0 ))

  isPartlyMounted=0
  if   [ $isMounted == 1 ] \
    || [ $isLUKS == 1 ] \
    || [ $isNBD == 1 ] \
    || [ $isTunnel == 1 ]; then
      isPartlyMounted=1
  fi

  isFullyMounted=0
  if   [ $isMounted == 1 ] \
    && [ $isLUKS == 1 ] \
    && [ $isNBD == 1 ] \
    && [ $isTunnel == 1 ]; then
      isFullyMounted=1
  fi
}

#----------------------------------------------------------------------------------------
# Uses:
#   Check to see if remote drive is mounted and display what connections are established.
# Output:
#   Returns 0 if everything is ready, 1 if there is a problem.
#----------------------------------------------------------------------------------------
statusFunction() {
  local isError=0

  echo "Checking status"
  getStatusFunction

  if [ $isMounted == 1 ]; then
    echo -e "\t[X] Mounted............: $mountPoint"
  else
    echo -e "\t[ ] Not mounted........: $mountPoint"
  fi

  if [ $isLUKS == 1 ]; then
    echo -e "\t[X] LUKS open..........: /dev/mapper/$luksName"
  else
    echo -e "\t[ ] LUKS not open......: /dev/mapper/$luksName"
  fi

  if [ $isNBD == 1 ]; then
    echo -e "\t[X] NBD connected......: $device"
  else
    echo -e "\t[ ] NBD not connected..: $device"
  fi

  if [ $isTunnel == 1 ]; then
    echo -e "\t[X] Tunnel open........: localhost:$tunnelPort"
  else
    echo -e "\t[ ] Tunnel not open....: localhost:$tunnelPort"
  fi

  if [ $isFullyMounted == 1 ]; then
    echo "Connect is ready"
  else
    echo "Connect is NOT ready"
  fi

  return $isError
}

#----------------------------------------------------------------------------------------
# Uses:
#   Prompt for sudo password so it is cached.
# Output:
#   Returns 1 if there was an error, 0 if not.
#----------------------------------------------------------------------------------------
unlockSudoFunction() {
  local isError=0

  # Prompt for sudo password so it is cached.
  sudo printf ""

  # Problems?
  if [ $? != 0 ]; then
    echo "Password failure." > /dev/stderr
    isError=1
  fi

  return $isError
}

#----------------------------------------------------------------------------------------
# Uses:
#   Print the status of the process.
# Input:
#   $1 - Status; 0 for success, 1 for failure.
#----------------------------------------------------------------------------------------
printStatus() {
  if [ $1 -eq 1 ]; then
    echo ""
    echo "¡¡¡ Errors encountered !!!"
    echo ""
  else
    echo ""
    echo "Complete."
    echo ""
  fi
}

#----------------------------------------------------------------------------------------
# Uses:
#   Control/monitor the mounting of remote encrypted network block device.
# Input:
#   Command - mount/unmount/status
# Output:
#   0 if there are no errors, 1 for errors or not mounted.
#----------------------------------------------------------------------------------------
isError=0

case $1 in
  mount)
    unlockSudoFunction
    isError=$?

    if [ $isError -eq 0 ]; then
      getStatusFunction
      if [ $isFullyMounted == 1 ]; then
        echo "Already mounted."
      else
        if [ $isPartlyMounted == 1 ]; then
          echo "Setup partly mounted.  Unmounting..."
          unmountFunction
        fi
        echo "Mounting"
        mountFunction
        isError=$?
      fi
    fi

    # If there is a failure to mount, unmount all work before failure.
    if [ $isError -eq 1 ]; then
      unmountFunction
    fi
    printStatus $isError
  ;;

  unmount)
    unlockSudoFunction
    isError=$?
    if [ $isError -eq 0 ]; then
      getStatusFunction
      if [ $isPartlyMounted == 1 ]; then
        echo "Unmounting"
        unmountFunction
      else
        echo "Not mounted."
      fi
    fi
    printStatus $isError
  ;;

  status)
    unlockSudoFunction
    isError=$?
    if [ $isError -eq 0 ]; then
      statusFunction
      isError=$?
    fi
  ;;

  *)
    echo "Syntax: $0 <action>"
    echo "  Actions:"
    echo "    mount   - Mount drive."
    echo "    unmount - Unmount drive."
    echo "    status  - Show status of mount."
    isError=1
  ;;

esac

# Return error code.
exit $isError

This script has three functions: mount the backup drive, dismount the drive, and show the mount status of the drive. At the top is the configuration, so if copied and filled in, this script should work for the setup outlined in this article set.

There are four steps to the mount/dismount process:

  • Establish a tunnel.
  • Connect the network block device.
  • Unlock the LUKS encrypted partition.
  • Mount the encrypted partition.

The important part of this script is handling a failure during the mount process. At the first failure, the mount process aborts and dismounts everything done up to that point.

With this script in place, automated backups are fairly simple. We will assume the mount script is in /usr/local/sbin and is called backupMount.

#!/bin/bash
# Mount the remote backup drive.
/usr/local/sbin/backupMount mount

# Did it mount?
if [ $? == 0 ]; then

  # Synchronize.
  rsync -a --delete <local path> /mnt/backupServer

  # Unmount backup drive.
  /usr/local/sbin/backupMount unmount
fi

The script mounts the backup drive, uses rsync to synchronize a directory needing to be backed up, and dismounts. The process does not continue if the mount fails. It can be improved by adding logging.

#! /bin/bash

#========================================================================================

# Timestamp format for log file name.
timestamp=`date +"%Y-%m-%d_%T.%3N"`

# Location to save log file.
logDirectory="/var/logs/backups"

# Log file name.
logFile="$logDirectory/$timestamp.txt"

# Error file (saved only if there is a problem).
errorLogFullName="$logDirectory/$timestamp.errors.txt"

# Error file for session.
errorLogFile="$logDirectory/errors.txt"

# File to create to signify process is running.
pidFile="/var/run/backups.pid"

# Script to mount/unmount backup drive.
mountScript=/usr/local/sbin/backupMount

#========================================================================================

# Note the start time (for time measurement).
start=`date +%s.%N`

# Create PID file (to signal script is running) and error file.
touch $pidFile $logFile

# Remove session error file and create new one.
unlink $errorLogFile
touch $errorLogFile

# Link the most recent log file to the current log.
ln -fs $logFile $logDirectory/backups.txt

# Empty trash before backup.
rm -Rf /d/.Trash-1000

# Begin report.
now=`date`
echo "Start: $now" | tee $logFile

# Mount the remote backup drive.
$mountScript mount 2>> $errorLogFile | tee -a $logFile

# Did it mount?
if [ ${PIPESTATUS[0]} == 0 ]; then

  # Synchronize.
  rsync -a --delete <local path> /mnt/backupServer \
    2>> $errorLogFile \
    | tee -a $logFile

  # Unmount backup drive.
  $mountScript unmount 2>> $errorLogFile | tee -a $logFile
fi

# Measure backup time and log it.
end=`date +%s.%N`
delta=`awk '{print $1-$2}' <<< "$end $start"`
now=`date`
echo "Finished: $now ($delta seconds)" | tee -a $logFile

# Make the log file read-only.
chmod 444 $logFile

# If there was an error, send an e-mail detailing what went wrong.
if [[ -s $errorLogFile ]] ; then
  read errorMessage < $errorLogFile

  # Create a link to this error file.
  mv $errorLogFile $errorLogFullName
  ln -s $errorLogFullName $errorLogFile

  # Make error file read-only.
  chmod 444 $errorLogFile

  # Send e-mail about error.
  errorMessage="Backup process failed with the following output:\n\n$errorMessage"
  echo -e "¡¡¡ ERROR !!! Backups failed: $errorMessage"
  echo -e "Subject: ERROR: Backups failed.\n\n$errorMessage" | msmtp <e-mail address>
else
  chmod 444 $errorLogFile
fi

# Script is done.  Remove PID.
rm $pidFile

This is a script similar to several I use for automated tasks. Each time the script runs it will create a log file and an error file. The log file has the timestamp in the file name and output from the synchronization is placed in the log file.

If an error occurs, an error file with the timestamp is also created, and the contents of the error file are e-mailed to the administrator. If no errors occur, no timestamped error file is created. All commands have stderr piped to the error log. Anything in the error log signifies an error, thus a size other than zero is an indication of errors.

Since I use a status monitor script to show the status of all periodic processes, a process identification (PID) file is also created at the script’s start and removed at the end. This allows the monitor to know when the process is running.

One item to note is that when piping to tee, the PIPESTATUS variable must be used to check the return status of the command. In a pipe, multiple commands are running (in this case, the mount script and tee) and the return code variable ($?) only returns the last command status (in this case, tee).
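
A quick way to see the difference:

false | tee /dev/null
echo $?                 # Prints 0, the exit status of tee.

false | tee /dev/null
echo ${PIPESTATUS[0]}   # Prints 1, the exit status of false, the first command in the pipe.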

January 07, 2022

Remote Encrypted Backups using Network Block Device

So a few days ago I wrote about using a Network Block Device (NBD) for doing completely secure off-site backups. This article will expand on that idea and detail setting up a complete system. Let’s begin with an overview of the setup.

Layout

The graphic above shows the setup that will be outlined in this article. A Raspberry Pi 4 connected to an external USB drive makes up the backup server, but the technique would work with any Internet-ready computer with a mass storage device.

The setup works by running a Linux-based computer with a NBD server sharing an attached hard drive over an SSH tunnel. This allows a client computer to open an SSH tunnel to the device and connect to the hard drive as a block device. Linux Unified Key Setup (LUKS) is used to encrypt the drive at the block device level.

The encryption/decryption of the data stored on the disk happens not on the remote backup system that has the disk connected, but on the client being backed up. At no time does the backup system have any sensitive information. This allows the backup system to safely operate in an insecure location. Should the backup system be stolen or hacked, the data is still secure. Even if an attacker controlled the backup system, they would gain no access to the plaintext data saved on the disk.

There are several steps in setting up such a backup system. Most of the steps will be outlined in this article. Items that will not be covered:

  • How the IP address of the remote machine is maintained. It is assumed that someone implementing this can handle getting the address to the remote machine.
  • Router setup to map the SSH port from the Internet to the backup system.

In addition, familiarity with setting up a Linux-based system is assumed, as well as basic knowledge of the Raspberry Pi.

Commands will need to run on both the backup system and the client to be backed up. These commands are distinguished by the two prompts:

Commands running on the backup system will start like this:

backup@backup-server:~ $ 

And commands on the client like this:

user@client:~ $ 

Step 1: Make an operating system SD card

For the Raspberry Pi we will use Raspberry Pi OS Lite. No monitor is used and thus no graphics are required for the backup server. We will skip installing the operating system but assume it installs with SSH enabled. See this article for how that is done. Nothing precludes using a Raspberry Pi with the full desktop installed; just make sure to install an SSH server.
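
One common way to do that is to create an empty file named ssh on the SD card’s boot partition, which Raspberry Pi OS checks for on first boot (the mount path below is just an example):

user@client:~ $ touch /media/user/boot/ssh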

Step 2: Repurpose backup user (optional)

Debian-based systems (including the Raspberry Pi) have a default user called backup. We can repurpose this user to actually perform backups. By default, the backup user cannot log in at all. We want to allow this user to be able to establish an SSH tunnel.

The backup user needs a home directory to hold SSH configuration. We could use the standard /home directory, but since this user can’t actually log in and only needs an SSH key, we can put the home directory elsewhere. The default home of backup is /var/backups. This directory contains some backup files, so it is probably best to use another path. The directory /var is typically for variable data, and this user’s home directory will never change. I will use /etc/backup, as /etc is for configuration files and our user data is more of a configuration. It doesn’t actually matter what path you choose.

backup@backup-server:~ $ sudo mkdir -p /etc/backup/.ssh
backup@backup-server:~ $ sudo chown -R backup:backup /etc/backup
backup@backup-server:~ $ sudo chmod 700 /etc/backup/.ssh

Modify the backup user to use this directory:

backup@backup-server:~ $ sudo usermod -d /etc/backup backup

In order for the backup user to create an SSH tunnel we need to give it an authorized SSH key. This allows the client computer to establish the tunnel as the backup user. The key should be for the user doing the backups on the client. If backups are a cron job, the root user can be used. The public key is stored in ~/.ssh/ and typically has the name id_*.pub, with the actual name depending on the key type. We’ll assume an ed25519 key type.

user@client:~ $  cat ~/.ssh/id_ed25519.pub
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBNh7YrOdzl30lFHDNFUkSt2+cICYmEp6eKsGR/rQ0w7 root@computer

Copy this key and add it to the authorized keys on the backup server.

backup@backup-server:~ $ echo \
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBNh7YrOdzl30lFHDNFUkSt2+cICYmEp6eKsGR/rQ0w7 root@computer" \
| sudo tee -a /etc/backup/.ssh/authorized_keys > /dev/null
backup@backup-server:~ $ sudo chmod 600 /etc/backup/.ssh/authorized_keys

We should now be able to connect to the backup server using the backup user, but not get a prompt.

user@client:~ $ ssh backup@backup-server

If successful you should get the banner followed by the text

This account is currently not available.
Connection to backup-server closed.

This is exactly what we want. The backup user will allow an SSH tunnel to be established, but have no other abilities.

For added security we could limit the client computer so it can only forward the NBD port by adding the prefix permitopen="127.0.0.1:10809" to the line in the authorized_keys file. The file would look something like this:

permitopen="127.0.0.1:10809" ssh-ed25519 AAAA… root@computer

Without this the network on the remote computer is exposed to the backup client computer via tunneling.

Step 3: Disable non-SSH logins (optional)

Since we will be using this device over an SSH tunnel there is no need to ever allow normal logins. If the backup user from step 2 was repurposed, the next line can be skipped. Otherwise we first have to get the client’s SSH key to the Pi.

user@client:~ $ ssh-copy-id <backup server>

This will make it so the client computer (i.e. the computer that runs the backups) is recognized by the Pi. Verify this works by connecting via SSH. If there is no prompt for a password, it worked.

Now delete the passwords for both root and the backup user.

backup@backup-server:~ $ sudo passwd -d root
backup@backup-server:~ $ sudo passwd -d <backup user>

This will make it so you can no longer login using a terminal—you must use SSH from the client computer.

Step 4: Setup the NBD server

Install the Network Block Device server:

backup@backup-server:~ $ sudo apt install nbd-server

Configure the server:

backup@backup-server:~ $ sudo tee -a /etc/nbd-server/config <<EOT
[generic]
        listenaddr = 127.0.0.1
[usb]
        exportname = /dev/sda
EOT
backup@backup-server:~ $ sudo service nbd-server restart

We assume the attached disk being used for backups is /dev/sda, which is a safe assumption for a single drive connected to a Raspberry Pi. If the backup server has more drives to share, they can be added to the /etc/nbd-server/config file. In addition, since NBD shares block devices, mdadm RAID arrays could also be shared.

Step 5: Connect to NBD

Install the Network Block Device client on the client computer.

user@client:~ $ sudo apt install nbd-client
user@client:~ $ sudo modprobe nbd

Start an SSH tunnel:

user@client:~ $ ssh -f -N -L 10000:127.0.0.1:10809 <backup server>

Attach to block device:

user@client:~ $ sudo nbd-client 127.0.0.1 10000 /dev/nbd0 -N usb

If this worked, the device /dev/nbd0 will now be ready for use. If it is already a formatted drive it can be mounted. For the next section we will assume the drive is empty and not yet set up. To have encrypted backups, LUKS needs to be set up on the disk. Otherwise, there isn’t much of a point to doing backups this way.
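
Before moving on, a quick sanity check is to ask the kernel for the size of /dev/nbd0; it should report the size of the remote drive in bytes:

user@client:~ $ sudo blockdev --getsize64 /dev/nbd0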

Step 6: Partition Disk

If the disk is already set up with LUKS, you can skip this step. Otherwise, we need to set up the disk. This article assumes we start from step 5, with the backup server tunneled to the client and the NBD connected. These steps would also work if the drive were connected directly to the client computer, albeit using different block device paths, which is useful for setup and initial synchronization.

First we need to create a key. You can use a password if you like, but a key file is better. Backups are typically automated and a key file on the client can be used to mount the remote drive as part of the backup script. We will assume the client computer is secure, but the backup server is not. Thus we just store the key in plain text on the client computer.

user@client:~ $ dd if=/dev/random bs=32 count=1 > backupKey.bin

This will generate a random key file called backupKey.bin. After creating this file, keep it safe. If you lose this file, there is absolutely no way of recovering your backup data. Place it on a USB disk and store it in a fireproof box or safe deposit box. You could encrypt it and place it on the backup computer, cloud storage, or even e-mail it to yourself (assuming you have web-based e-mail).
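
One way to make such an encrypted copy, for example, is symmetric GPG encryption, which protects the copy with a passphrase:

user@client:~ $ gpg --symmetric --cipher-algo AES256 backupKey.bin

This produces backupKey.bin.gpg, which is only as safe as the passphrase protecting it.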

In this instance, the key file is 32 bytes, or 256 bits. There is no reason it needs to be exactly this size; it could be larger. However, 256 bits is a typical block cipher key size and should be sufficient.

Now use this key to setup LUKS on the remote drive:

user@client:~ $ sudo cryptsetup -v luksFormat --key-file=backupKey.bin  /dev/nbd0

This will setup the encrypted disk. Now we need to open it and create a file system.

user@client:~ $ sudo cryptsetup open /dev/nbd0 backupDrive --key-file=backupKey.bin
user@client:~ $ sudo mkfs.ext4 /dev/mapper/backupDrive

In order to use this disk we need a mount point. We will make one in /mnt/backupDrive.

user@client:~ $ sudo mkdir /mnt/backupDrive
user@client:~ $ sudo mount /dev/mapper/backupDrive /mnt/backupDrive

The only thing that might need to be done is changing the owner of the backup drive. This depends on whether you are doing backups as root, yourself, or a backup user. In this case, the backup drive mount will be owned by the user backup.

user@client:~ $ sudo chown backup:backup /mnt/backupDrive

That’s it. The disk is now ready for backups.

Step 7: Initial Backup

For backups I typically use rsync. The first pass could take a while, but subsequent runs will only update files that have changed. The command looks like this:

user@client:~ $ rsync -av --delete /<source path>/ /mnt/backupDrive

Best to do this and leave it for a while, such as letting it run overnight.

At this point, backups to the backup system are possible. In the next article we will cover automating the system.

January 06, 2022


   The fully set-up dining room for New Year's Eve.  It was decided that the laser and the haze machine would be used in this room along with blue lighting.  The living room was set up with blacklight, so using blue in this room wouldn't take away from the blacklight glow effect in the next room.  Haze allows the lasers to be seen. 
   The setup for New Year's Eve is generally complex.  Here is the setup of the painting room in progress.  It starts by installing the ceiling truss system, followed by the wall canvas and then the ceiling canvas.  Once the canvases are hung, they are partly rolled up and clamped so the floor canvas can be put down.  After that all the blacklights are positioned and the stretched painting canvases hung.  This took an hour or two, but since we have done this setup several times we are pretty good at the process.