For the last couple of years at work I've used an old Perl script I wrote back in 2003 to take hourly snapshots of my project directory. The script is quite simple: for each directory in a configured list, it collects the file names and their last modification dates and runs that data through a hash algorithm. If the resulting hash differs from the previous hash on record, the contents of the directory are copied into the snapshot archive. It's not efficient, but it's simple and makes it easy to go back in time if I do something stupid. The system has helped me many times over the years.
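For illustration, here is a minimal Python sketch of that approach (the original script is Perl, and the directory list, archive path, and state-file naming below are placeholders, not my actual setup):

```python
import hashlib
import shutil
import time
from pathlib import Path

WATCHED = [Path("~/projects/src").expanduser()]   # directories to snapshot (placeholder)
ARCHIVE = Path("~/snapshots").expanduser()        # snapshot archive (placeholder)

def listing_hash(root: Path) -> str:
    """Hash every file path together with its last modification time."""
    h = hashlib.sha256()
    for p in sorted(root.rglob("*")):
        if p.is_file():
            h.update(f"{p}\t{p.stat().st_mtime_ns}\n".encode())
    return h.hexdigest()

def snapshot(root: Path) -> None:
    ARCHIVE.mkdir(parents=True, exist_ok=True)
    state = ARCHIVE / f"{root.name}.last_hash"    # previous hash on record
    current = listing_hash(root)
    previous = state.read_text().strip() if state.exists() else ""
    if current != previous:
        # Something changed: copy the whole tree into a timestamped directory.
        dest = ARCHIVE / time.strftime("%Y-%m-%d_%H%M") / root.name
        shutil.copytree(root, dest)
        state.write_text(current)

if __name__ == "__main__":
    for directory in WATCHED:
        snapshot(directory)
```

The downside is obvious: any change at all, however small, means the whole directory gets copied again.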
When I started using this system my source code directory was 2.3 MB in size. The projects I work with today are much larger, so the snapshots have become cumbersome: hundreds of thousands of files and tens of thousands of directories, almost all of them duplicates of one another. So I've started looking into alternatives. I want to keep the snapshots, but save space. I found exactly what I was looking for in a program called
Back In Time. It creates snapshots periodically in named directories, but with one major advantage: if a file has not changed since the last snapshot, it is hard-linked to the older copy rather than copied again. So only actual changes are copied.
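Here is a rough Python sketch of that hard-link trick. This is not Back In Time's actual code; the change check (size plus modification time) and the function signature are assumptions made for the example:

```python
import os
import shutil
from pathlib import Path

def incremental_snapshot(source: Path, new_snap: Path, prev_snap: Path | None) -> None:
    """Copy changed files, hard-link unchanged ones to the previous snapshot."""
    new_snap.mkdir(parents=True, exist_ok=True)
    for src in source.rglob("*"):
        rel = src.relative_to(source)
        dst = new_snap / rel
        if src.is_dir():
            dst.mkdir(parents=True, exist_ok=True)
            continue
        dst.parent.mkdir(parents=True, exist_ok=True)
        old = prev_snap / rel if prev_snap is not None else None
        unchanged = (
            old is not None
            and old.is_file()
            and old.stat().st_size == src.stat().st_size
            and old.stat().st_mtime_ns == src.stat().st_mtime_ns
        )
        if unchanged:
            os.link(old, dst)        # new name, same data on disk: costs no extra space
        else:
            shutil.copy2(src, dst)   # changed or new file: copy the actual bytes
```

Since a hard link is just another name for the same data on disk, every unchanged file in a new snapshot costs essentially nothing; only the changed files take additional space.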
At work I have to run a non-Linux OS, but I always have a virtual instance of Linux running. So I set up a large space for backups and started this system running. I don't like the snapshot directory naming, so I use a simple script I wrote to build a friendlier directory structure that links to the snapshots. So far it has worked great. I'm going to run the two systems in parallel for a while.
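My actual script is trivial, but it does roughly something like this (the snapshot naming scheme and paths below are assumptions for the sake of the example, not Back In Time's documented layout):

```python
from pathlib import Path

SNAPSHOTS = Path("/backups/backintime")    # where the snapshots live (placeholder)
BY_DATE = Path("/backups/by-date")         # friendlier year/month/day tree (placeholder)

for snap in sorted(SNAPSHOTS.iterdir()):
    name = snap.name                       # assumed to look like "20240315-100001-123"
    if not (snap.is_dir() and len(name) >= 13 and name[:8].isdigit()):
        continue
    year, month, day, hhmm = name[:4], name[4:6], name[6:8], name[9:13]
    link = BY_DATE / year / month / day / hhmm
    link.parent.mkdir(parents=True, exist_ok=True)
    if not link.is_symlink():              # skip links made on a previous run
        link.symlink_to(snap)              # readable name pointing at the real snapshot
```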