One of the things it shares with Slash is that the "site" is the database. AKA everything that makes a site a site lives in the database. This means a backup of the site can be tossed into a container that has everything else it needs and be up and running in seconds (I use this with virtual machines so that I can move stuff around on a whim).
I hate backups. I don't want to take snapshots with LVM, since I know I just need the content, and with plain dumps I typically end up with a huge pile of dump files just sitting around.
Then it dawned on me at dinner last night: what if I just used Mercurial?
So I added a crontab entry like so:
0 3 * * * mysqldump -f --single-transaction -T /var/backup/sitename sitename; (cd /var/backup/sitename; hg commit -m "auto"; hg push)
I push the backups to a centralized server.
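For that crontab to work, /var/backup/sitename already has to be a Mercurial repository with a default push target. A rough one-time setup, assuming a repository already exists on the backup host (backuphost and the repository path below are just placeholders), looks something like this:

cd /var/backup/sitename
hg init
# the MySQL server writes the per-table .txt data files, so it needs write access here
mysqldump -f --single-transaction -T /var/backup/sitename sitename
hg add
hg commit -m "initial import"
cat >> .hg/hgrc <<EOF
[paths]
default = ssh://backuphost//var/hg/sitename
EOF
hg push

If the set of tables changes later, sticking an hg addremove in front of the commit in the crontab will pick up new tables and drop the files for deleted ones.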
Instant daily backups, and the tab-separated format means I can do partial restores. I can even pull diffs out of the repository and apply the changed rows straight back into the database.
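For example, restoring a single table is just a matter of recreating its structure from the .sql file and reloading the tab-separated data file; something like this should do it (comments is a hypothetical table name):

# recreate the table (the .sql file holds the CREATE TABLE statement)
mysql sitename < /var/backup/sitename/comments.sql
# reload the rows from the tab-separated data file
mysqlimport --local sitename /var/backup/sitename/comments.txt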
Perfect? Nope.
Mercurial stores deltas, so the space overhead does not seem to be too bad (and I am only shipping what changed over the wire). This will not work for my Archive tables, though; they are just too damn big (4 billion rows and growing). Those tables are just logs, and I have a different long-term plan for them.
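If you want to keep tables like that out of the dump entirely, mysqldump can be told to skip them; something along these lines, with archive_log standing in for whatever your big log tables are actually called:

mysqldump -f --single-transaction --ignore-table=sitename.archive_log -T /var/backup/sitename sitename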
But the data that makes up the site? Works well.
Since I use InnoDB for my sites, I can use the --single-transaction trick to take an online backup. If you are using a non-transactional storage engine like MyISAM, the crontab above will lock your tables, and effectively your site, while the dump is being made.
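If you are not sure which engines your tables are using, a quick query against information_schema will tell you (sitename again being the example database name):

mysql -e "SELECT table_name, engine FROM information_schema.tables WHERE table_schema = 'sitename'"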
This would probably work with most revision control systems (if you pick one of the nifty ones that can prune history, you could even automate tossing old deltas after a week or so). Mercurial has the strong benefit of being HTTP-centric and is incredibly easy to set up as a server (and it has never eaten my data the way some other open source systems have).
I welcome feedback :)