This article in the series on running PHP 5 and 7 in Apache covers getting a test environment set up to build PHP with the FastCGI Process Manager (FPM).

The first thing I wanted to do was set up a clean environment to do the build. Initially I was thinking I should get a virtual machine running. However, I’ve lately taken a liking to Linux containers (LXC). I set one up for a work project that needed a CentOS build environment. That build does better with a lot of processing power and memory, and a container is better than a virtual machine for this. So I followed a similar recipe to get my build environment set up for the PHP 7 and 5 test. Some people may choose to use Docker for container management, but LXC is so simple, and this doesn’t need to be deployed, so I don’t see a reason to add Docker.

I run Linux Mint on the Snow Dragon and LXC is already installed. A coworker runs a much leaner Linux OS and had to install it. Since I didn’t have to do this, I will assume LXC is installed. For the test environment I am just going to use the latest Debian. So the container creation command looks like this:

sudo lxc-create -t /usr/share/lxc/templates/lxc-download -n php7_5 -- -d debian -r bullseye -a amd64 

With the container created, I don’t need to do anything else to get it configured. Sometimes directories between the container and the host need to be shared. In that case, one could edit the configuration file for the container.

sudo nano /var/lib/lxc/php7_5/config

And in that file add the line:

lxc.mount.entry = <host_path> <client_path> none rw,bind,create=dir 0 0

Where host_path is the location to share on the host machine, and client_path is where it shows up in the running container. Note that one can change the rw part of the mount to ro to create a read-only location. This is nice if you want to access files but don’t want the container to be able to make changes.
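
For example, to share a hypothetical build directory from the host into the container read-only (the container path is given relative to the container’s root filesystem):

lxc.mount.entry = /home/user/php-build mnt/php-build none ro,bind,create=dir 0 0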

Now that the container is created, we need to start it:

sudo lxc-start php7_5

Once it is running, we can log in:

sudo lxc-attach php7_5

Logging out of the container is as simple as typing “exit”.
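
If you lose track of which containers exist and whether they are running, lxc-ls will show them:

sudo lxc-ls --fancy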

To shut down the container, use the command:

sudo lxc-stop php7_5

And when finished testing we can remove the container with everything in it:

sudo lxc-destroy php7_5

Just like a virtual machine, anything done in the container only affects the container. However, memory and CPU resources are shared with the host. This is perfect for build environments, as the container runs at native speed with full memory access, but without any risk of messing up the host machine’s build environment.

November 15, 2021

Running Apache with PHP 5 and 7

For some time I have wanted to have DrQue.net run both PHP 5 and PHP 7. All of the sites on DrQue.net were developed in PHP 4 or 5, and most will no longer work in PHP 7. However, PHP 5 is end-of-life, and for the maintained areas of the site I’d like to switch over to PHP 7. I had looked into running two versions of PHP before but never got it working. The examples that existed used repositories with various compiled versions of PHP. I would have been fine using that, but the repositories only had x86 builds, not ARM. So when I migrated to the Web-Pi I simply compiled PHP 5 from source and used that. Functional, but I have no way to port to PHP 7 because the server currently doesn’t run it. Time to change that.

I’ve read the way to do this is to use FastCGI. Then each virtual host can specify which PHP version it wishes to use. All I needed to do was figure out how this works and compile my own. In the articles of this series I will outline what I did for a test environment so that I could assemble the pieces to roll my own Apache 2 server running both PHP 5 and 7, built from source code.
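
To give an idea of where this is headed, here is a sketch of the kind of per-site configuration FastCGI makes possible, using Apache’s mod_proxy_fcgi. The host name, document root, and socket path are placeholders, not my actual setup:

<VirtualHost *:80>
    ServerName php7site.example.com
    DocumentRoot /var/www/php7site

    # Hand all PHP files to a PHP 7 FPM pool (socket path is hypothetical).
    <FilesMatch "\.php$">
        SetHandler "proxy:unix:/run/php/php7-fpm.sock|fcgi://localhost"
    </FilesMatch>
</VirtualHost>

A second virtual host could point at a PHP 5 FPM socket in the same way, which is the whole trick: the PHP version becomes a per-site choice.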

New Shear Pins

   David Blowie, our snow blower, got an oil change and was fired up for a few minutes in preparation for the coming winter.  Last year I managed to break every shear pin on the auger.  They were so stuck I had to take Mr. Blowie to the repair shop to have a professional remove them.  We hadn't run the snow blower since, and I had not replaced the shear pins.  I took care of that today.  Alright, Wisconsin weather, it is your turn.  Let's have some snow!

November 12, 2021

Linux Missing Disk Space due to Removed Open Files

Started getting warning e-mails from the Emerald Dragon about low disk space. It is set up to do this when disk usage goes above 75%. I did some basic cleanup and got the usage under 75% by removing old log files, cache, etc. However, the system has a 16 GB eMMC and I wasn’t seeing anything using this space.

I started by checking the free space using disk free (df -h).

root@EmeraldDragon:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            716M     0  716M   0% /dev
tmpfs           172M   23M  150M  13% /run
/dev/mmcblk0p2   15G   11G  4.0G  72% /
tmpfs           859M     0  859M   0% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           859M     0  859M   0% /sys/fs/cgroup
/dev/mmcblk0p1  128M   19M  109M  15% /media/boot
cgmfs           100K     0  100K   0% /run/cgmanager/fs
tmpfs           172M     0  172M   0% /run/user/0
tmpfs           172M     0  172M   0% /run/user/1000

Then I ran the disk usage command (du -h) on root:

root@EmeraldDragon:/# du -h /
...
2.7G	/

This tells me that all the files on the system consume only 2.7 GB of space, but the disk free command says that 11 GB are used. How can the sum of all the files on disk be less than the used space on disk?

In Linux, this can happen because when you delete a file that a running process has open, the space is not freed until the process closes the file or exits. Typically the discrepancy is fairly small. However, the Emerald Dragon has been running for over 1,570 days.
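
A quick way to demonstrate this, assuming a scratch directory on the disk in question (on systems where /tmp is a tmpfs, pick a directory on the real disk instead):

dd if=/dev/zero of=/tmp/big.img bs=1M count=512   # create a 512 MB file
tail -f /tmp/big.img > /dev/null &                # a process holds the file open
rm /tmp/big.img                                   # the directory entry is gone...
df -h /tmp                                        # ...but the space is still in use
kill %1                                           # ending the process frees the space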

Now the trick was to find the process holding open a large collection of deleted files. The list open files command (lsof) is helpful for this. Combined with grep, I could get a list of open but deleted files. In this list I found the following:

root@EmeraldDragon:/# lsof | grep deleted
… 
rsyslogd    675           syslog    6w      REG              179,2 1420744573       1656 /var/log/syslog (deleted)
rsyslogd    675           syslog    8w      REG              179,2 5424049544       1667 /var/log/auth.log (deleted)
rsyslogd    675           syslog    9w      REG              179,2 1033132864      11460 /var/log/mail.log (deleted)
in:imuxso   675   701     syslog    6w      REG              179,2 1420744573       1656 /var/log/syslog (deleted)
in:imuxso   675   701     syslog    8w      REG              179,2 5424049544       1667 /var/log/auth.log (deleted)
in:imuxso   675   701     syslog    9w      REG              179,2 1033132864      11460 /var/log/mail.log (deleted)
in:imklog   675   702     syslog    6w      REG              179,2 1420744573       1656 /var/log/syslog (deleted)
in:imklog   675   702     syslog    8w      REG              179,2 5424049544       1667 /var/log/auth.log (deleted)
in:imklog   675   702     syslog    9w      REG              179,2 1033132864      11460 /var/log/mail.log (deleted)
rs:main     675   703     syslog    6w      REG              179,2 1420744573       1656 /var/log/syslog (deleted)
rs:main     675   703     syslog    8w      REG              179,2 5424049544       1667 /var/log/auth.log (deleted)
rs:main     675   703     syslog    9w      REG              179,2 1033132864      11460 /var/log/mail.log (deleted)

It looks like the system log daemon has several removed files open. So I gave it a restart and checked the disk usage.

root@EmeraldDragon:/# service rsyslog restart
root@EmeraldDragon:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            716M     0  716M   0% /dev
tmpfs           172M   23M  150M  13% /run
/dev/mmcblk0p2   15G  2.8G   12G  20% /
tmpfs           859M     0  859M   0% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           859M     0  859M   0% /sys/fs/cgroup
/dev/mmcblk0p1  128M   19M  109M  15% /media/boot
cgmfs           100K     0  100K   0% /run/cgmanager/fs
tmpfs           172M     0  172M   0% /run/user/0
tmpfs           172M     0  172M   0% /run/user/1000

That was it.
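
As an aside, lsof can find such files directly: the +L1 option lists open files with a link count of less than one, which is exactly the deleted-but-still-open case:

lsof +L1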

Good lesson learned for a server that has run continuously for 4 ¼ years. Most people would never run into this problem because of both system restarts and large file systems. The Emerald Dragon is rather special in that regard.

November 09, 2021

Software Permissions

I have long wondered why programs I execute on my computer always run with the same privileges I have as a user. Twenty years ago people could generally trust software to run with user privileges. Ransomware wasn’t a thing, and the nature of computer viruses and malware was different. These days, does it make sense to grant all software you run the rights to do anything you can do?

As a Linux user, it is not too difficult for me to run a program with limited privileges. That is not the case when using my Windows-based work computer. On Android phones an application must be granted permission to do certain tasks, like access stored files or edit contacts. Why don’t we have this for installed programs on the PC? Consider if a program required permission to:

  • Open files.

  • Write a file.

  • Delete or change files.

  • Change configuration.

  • Connect to the Internet.

The last point, specifically allowing programs to connect to the Internet, could thwart a good deal of malware hidden in seemingly legitimate software, especially if you could limit the sites that software was allowed to connect to. A good deal of software these days likes to phone home if for no other reason than to allow the software company to keep analytics about usage. Even this benign tumor could be allowed if the software was banned from contacting any site other than the software manufacturer’s. Sleeper malware that piggybacks on legitimate software would have a harder time operating because it wouldn’t be allowed to connect to anything once it woke up.
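
Linux users can already approximate this today. As one illustration, assuming the firejail sandbox is installed (the program name is a placeholder):

firejail --net=none ./some-program   # run with no network access at all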

Ransomware would be largely neutered if programs had to have permission to change or delete files. A program might be able to open any file, but need confirmation any time it wants to write data. This might be a bit of an annoyance for an office program, where one saves a document at regular intervals, but it would sure make it hard for user-level ransomware to operate. Adding a shadow-copy sandbox area for programs to operate in would make this even more secure: when a program is closed, the changes could be reviewed by the user before being accepted. Or the program could be given write/modify/delete permissions only to a small file area, like a single directory. If good backup practices are used, this would limit what ransomware could do if it embedded itself into an otherwise legitimate program.
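
The single-directory idea can be sketched with the same sandbox tool, again with hypothetical paths:

# Make the home directory read-only except for one working directory.
firejail --read-only=${HOME} --read-write=${HOME}/Documents/work some-editor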

This doesn’t do much to stop programs that somehow manage to run with elevated privileges. One of the easiest ways to get these privileges is to ask the user installing the program. But why do most programs want or need root/administrator access to install? Local user software needs to be more of a thing. Secondly, if one maintains system-wide software installs, why does the installer need such a high privilege level? Installers should be able to get access to system install paths (like /usr/local or C:\Program Files) without being granted any other privileges.
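
For software built from source, per-user installs already work today; a typical autotools build can target the home directory instead of system paths:

./configure --prefix=$HOME/.local
make
make install    # no root required; everything lands under ~/.local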

That will take care of all but malware that exploits privilege escalation in some manner. In addition, it doesn’t stop people from granting potentially dangerous privileges to software they install. Still, this would be a good start. It would grant people like me a lot more control over the software we run, and the ability to control software is the first line of defense in mitigating malicious software.

November 08, 2021

Ineffective Security Measures

A couple of years ago, the IT group that handles the computers at work recommended we add a banner to e-mails that come from outside the company. It looks like this:

Some of my coworkers, including my boss, warned that this kind of warning might lead to alert fatigue, a phenomenon where warnings get ignored because they are overused. This message is attached to every e-mail that does not originate from a company e-mail address. It doesn’t matter if the message contains no links or attachments. Since most of us deal with client e-mails, we see this message on a large percentage of incoming e-mail. I have the feeling this message is ignored by just about everyone.

This is a good example of how not to convey security information. Luckily for our company, there are other security measures in place. Attachments have to be scanned before they are allowed to be opened, and links are all filtered through a link checker. While there are privacy issues (we are handing a third party a lot of metadata about our company), such measures are far more effective than the banner.

I haven’t received a virus/Trojan/malware attachment in decades, as most attackers know better than to try to attach such a program to an e-mail. Usually what I see are links. These are most often part of a phishing attack, since getting malware onto a computer these days is much more difficult than it used to be.

To me the obvious solution to preventing people from clicking links would simply be to remove them from e-mails. The majority of e-mails don’t need links anyway, and the slight inconvenience of not having a link would be far outweighed by avoiding the fallout of phishing attacks. For example, when I order something online I usually get a confirmation e-mail with a link to view the invoice. I could just as easily log into the site and navigate to my orders to get the invoice. The link is convenient but unnecessary. While this is an easy blanket solution, I have not seen it mentioned. A slightly less restrictive version of the same thing might be to allow links, but show a confirmation dialog, with details about the page about to be visited, before following the link.

So why hasn’t this been implemented? I have a feeling it is for the same reason major e-mail providers don’t remove spy pixels from e-mail: it affects marketing. Spy pixels are in the vast majority of e-mails and are used to collect analytics. Chances are, if you get an e-mail from any Internet business, it has a spy pixel so they know each time you look at that e-mail. Most e-mail programs allow external graphics by default, and programs like Microsoft Outlook don’t make it easy to turn off external images. There is a reason: that’s how they make their money. If everyone blocked spy pixels, they wouldn’t be able to collect marketing data. I have a feeling that if it were easy to block links, a correlation could be made showing fewer people visiting sites from marketing e-mails.

Thus, rather than solve a major problem with a simple solution, we have things like the banner above all my external e-mails. So security is a concern, but secondary if it gets in the way of marketing. I have a feeling this gem has prevented just as many attacks:

New Moon in the Early Evening Twilight

   Unseasonably warm temperatures today necessitated a bike ride.  With the end of daylight saving time the sun sets quite early, and despite departing around 2:00 pm I encountered sunset during my ride.  It will likely be among the last warm rides of 2021, although we may have another warm day this week.