   Morning twilight at Vilas Park.  Looks like I will not be able to ride tomorrow or Wednesday, as rain is forecast.  With the cooler weather I've decided not to take the north side of the lakes on my way back home; the ride is shorter that way and doesn't take as long.  It seems that when the temperature is below 50°F, I slow down.

October 22, 2016

MySQL Snapshots

I have been using Back In Time to do snapshot-style backups of my website. This allows me (or will) to see my website at any point in time. One exception is the database: the dynamic content of several pages is driven by MySQL. While I can snapshot the entire database and archive that, most of the database doesn't change very often, so a complete copy each time is wasteful. I got to thinking: how can I make a snapshot that only changes when part of the database changes?

For typical backups, I use mysqldump to dump the entire database to a compressed file. This makes restoring the database quite easy. But mysqldump has additional options: you can dump just a single database, or a single table within a database. After a little searching around, I found scripts that dump each table into an individual file and even compress the data. This is almost everything I need.
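
For example, a single table can be dumped by naming the database and then the table on the mysqldump command line. A quick sketch, using placeholder names:

mysqldump -u backupUser -p exampleDatabase exampleTable > exampleTable.sql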

One requirement I have is to only update table files that have actually changed. This is because the snapshots use the modified date to see if a file has changed; it is no good having the contents be identical if the file is still marked as modified. For this, I need an intermediate step.

First, I dump the table data to a temporary file. I then decompress the old table to another temporary file. These files are compared. If they differ, the old compressed table file is removed and the new table data from the temporary file is compressed in its place. If they are the same, the data is left alone.


#!/bin/bash

# Script setup.  (Placeholder values; adjust for your system.)
MYSQL_USER="backup"
MYSQL_PASS="password"
BACKUP_DIR="/var/backup/mysql"

# Statistic counters.
count=0
changes=0

# For each database (excluding information_schema)...
for db in $(mysql -B -s -u $MYSQL_USER --password=$MYSQL_PASS -e 'show databases' | grep -v information_schema)
do
  # For each table in the database...
  for table in $(mysql -NBA -u $MYSQL_USER -p$MYSQL_PASS -D $db -e 'show tables')
  do
    echo -n "$db.$table..."

    # Count this table.
    count=$((count + 1))

    # Place the table contents into a temporary file.  (--dump-date=false
    # keeps the dump byte-identical when the table data hasn't changed.)
    tempFile=$(mktemp /tmp/table.XXXXXX)
    mysqldump -u $MYSQL_USER -p$MYSQL_PASS --dump-date=false $db $table > $tempFile 2>/dev/null

    # Extract the old table data into a temporary file.
    checkFile=$(mktemp /tmp/check.XXXXXX)
    7z x $BACKUP_DIR/$db.$table.7z $db.$table -so > $checkFile 2>/dev/null

    # Compare the new and old table.
    diff -q $tempFile $checkFile > /dev/null

    # Are the files different?
    if [ $? -ne 0 ]
    then
      echo "Archiving."
      changes=$((changes + 1))

      # Remove old table data.
      rm -f $BACKUP_DIR/$db.$table.7z

      # Compress new table data, storing it under the name the
      # extraction step above expects.
      cat $tempFile | 7z a -si$db.$table $BACKUP_DIR/$db.$table.7z > /dev/null
    else
      echo "Unchanged."
    fi

    # Remove temporary files.
    rm $tempFile
    rm $checkFile
  done
done

# Print statistics of update.
echo ""
echo "Archive complete.  $changes of $count tables updated."

This should work well for my little web server: tables don't change rapidly and the entire database is small. On larger systems this method may not be useful.

October 21, 2016

SSHFS with EncFS

The other day I wrote about how an encrypted home directory does not prevent the computer administrator (i.e. root) from having access to your files. If you are logged in, root can see what you see.

So is it possible to store files on a remote Linux machine such that only you have access to the data? The short answer is yes. Keep in mind that root is capable of everything. If you are trying to keep files from the administrator, you can’t ever expose the means of deciphering those files. That means you must do the decryption on your local computer rather than the remote computer. Luckily, this isn’t terribly difficult.

I did a test using sshfs and EncFS. SSH Filesystem (sshfs) is a filesystem designed to mount a directory from a remote computer over an SSH connection. EncFS is a filesystem that mounts one directory onto another, encrypting both the file contents and the file names. Both are FUSE (Filesystem in Userspace) filesystems that don't require root access on the local computer doing the mounting. This is perfect for what we're trying to do.

The goal is to use a remote, untrusted Linux system as a secure file repository. We first make a directory on the remote server to place our file system. Here we’ll make a directory called .private on the remote machine untrusted.

user@untrusted:~$ mkdir .private

We then mount that directory from the remote system on our local machine using sshfs. First create a mount point on the local machine (we'll call it .remoteEncrypted), and then do the sshfs mount.

user@localMachine:~$ mkdir ~/.remoteEncrypted
user@localMachine:~$ sshfs user@untrusted:~/.private ~/.remoteEncrypted

Then we mount an EncFS filesystem on top of the directory now accessible through our sshfs mount. We'll call the local mount point remoteData.

user@localMachine:~$ mkdir ~/remoteData
user@localMachine:~$ encfs ~/.remoteEncrypted ~/remoteData

Now the files in remoteData are stored in encrypted form on the remote machine untrusted. This is a two-stage process: files are first encrypted on our local machine with EncFS, then sent to the remote machine using sshfs, where they are saved. The remote machine never sees unencrypted data. Thus even root has no more advantage than any other attacker when trying to gain access to your data. They could erase or alter it, but not recover the original information. In addition, even the connection to the remote machine is encrypted, so a third-party eavesdropper sees nothing.
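
You can see exactly what the remote machine sees by listing the underlying sshfs mount. Both commands below list the same files; the second shows only the EncFS ciphertext names:

user@localMachine:~$ ls ~/remoteData
user@localMachine:~$ ls ~/.remoteEncrypted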

This setup is good for cloud-based storage and in particular off-site backups. There is no reason to trust a cloud computer, and since many of these providers make their money by data mining, you may as well assume your data isn't safe from inspection. By using an encrypted filesystem, about all they can see are file sizes and change dates (metadata) and nothing else.
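
When finished, the two mounts can be released in reverse order. Since both are FUSE filesystems, fusermount does the job without root access:

user@localMachine:~$ fusermount -u ~/remoteData
user@localMachine:~$ fusermount -u ~/.remoteEncrypted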

October 20, 2016

Encrypted home directories with SSH logins

Experimented with Linux encrypted home directories and SSH logins.  Most Linux users know that you can log into a Linux machine using SSH key authentication.  This allows one to generate a key pair (a public key and a private key) and use it to log in with SSH rather than a password.  By placing the public key into the ~/.ssh/authorized_keys file, one can log into the Linux computer using their private key.  This can be more secure than a password, and also faster.  I had a couple of questions I wanted to answer.  First, does encrypting the home directory prevent root from gaining access to user files?  Second, is it possible to have SSH key-authenticated logins with an encrypted home directory?
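
For anyone who hasn't set this up before, it is only a couple of commands. A minimal sketch, where remoteMachine is a placeholder host name:

user@localMachine:~$ ssh-keygen -t rsa
user@localMachine:~$ ssh-copy-id user@remoteMachine

ssh-keygen generates the key pair, and ssh-copy-id appends the public key to ~/.ssh/authorized_keys on the remote machine.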

The answer to the first question is: sort of, but not fully. When the user is not logged into the system, the home directory for that user is encrypted and unavailable to root. When the user is logged in, however, the home directory is mounted like an ordinary mount, and root is able to access it like any other mount. But root cannot gain access to the encrypted data while the user is not logged in, and has no better luck gaining access to it than any other attacker with access to the hardware.
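
This is easy to see on an eCryptfs-based setup, which is what Ubuntu uses for encrypted home directories. While the user is logged in, root can simply read the mounted home directory; once the user logs out, only the ciphertext store remains visible (the paths here are illustrative and vary by setup):

root@machine:~# ls /home/user
root@machine:~# ls /home/.ecryptfs/user/.Private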

For the second question, the answer is: yes. sshd is set up to look for the authorized_keys file, which is typically in /home/<user>/.ssh. But this location can be redirected, for example to the file /home/.ssh/<user>, which lives outside the encrypted home directory. The downside is that one must disable StrictModes, which would otherwise prevent the use of this file because of its permissions. Typically, a user's home directory is set up so that no one but the user can read/write it, and the .ssh sub-directory and authorized_keys file must likewise be read/write only for the user. Under the new scheme, the shared /home/.ssh directory would have to be writable by each user alone, which isn't possible when several users share it. Thus StrictModes must be disabled.
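
As a sketch, the relevant lines in /etc/ssh/sshd_config would look something like this (%u is a token sshd expands to the user name):

AuthorizedKeysFile /home/.ssh/%u
StrictModes no

After changing these, sshd needs to be reloaded for the settings to take effect.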

So this is something to keep in mind.  If you have an account on a remote Linux box and encrypt your home directory, it does not protect you from the administrator. 

   Another lovely day for riding.  In the morning I lost the strong drive I had yesterday, but had a very strong 16 MPH tailwind.  I sailed into work barely breaking a sweat and still made it in under an hour.  At the end of the day I took a leisurely stroll home, stopping regularly to capture pictures.  The color is pretty good, and today (unlike yesterday) the skies are mostly clear.  I am pretty happy with the results.
   The ride to work this morning was in shorts, as the temperature was 70°F (21°C).  The winds were mostly from the south and fairly strong, but for some reason I really had a lot of push.  In fact, I averaged 937 Calories/hour for a solid hour, which is nearly a personal record.  It is strange when this happens, because the last time it did the situation was similar: for no particular reason I felt like pushing hard, and did.  The return trip from work was warmer still at 81°F (27°C), and with the weather so nice I biked the north side of the lakes and into Waunakee before heading home.  I lost daylight around one third of the way into the ride.  The skies were fairly overcast, but I did get a couple of pictures when the sun was not hidden by clouds.
   Went up to Devil's Lake State Park this afternoon to do a little photography.  The color isn't yet prime, but is pretty good.  The sky was fully overcast, though, so I didn't get the vivid color I was hoping for.  There is still time left in the season.
Repair Complete

   Finished up the repair to the trim around the screen porch on the second floor.  We had to replace the screen, and there was a fair bit of rot, so I decided to replace some of the wood as well.  I think the repair worked out pretty well and looks a lot better than it did.