After having no real backups of my private infrastructure for quite some time, a colleague recommended BackupPC. It may look like it comes straight from the 90s, but it is so straightforward in what it does that it single-handedly smashed all my half-assed, not-fully-thought-through bash scripts to pieces.
Basically, it does exactly what my half-assed, not-fully-thought-through bash scripts tried to do as well: rsync directories from several machines to a central place, where they are stored incrementally. Only that this one actually works.
Install on Raspbian
Unfortunately, the install routine is completely messed up, and after going through it a couple of times, I've decided to document it here so that I know where to find it next time.
By the way:
The install routine on Ubuntu is just as messed up, so everything I write here is also valid for Ubuntu 19.10.
The Problem
The Debian package for backuppc seems to be missing some important dependencies. I don't know if or how this works for anybody who tests it, but I wasn't able to install backuppc from the package repos on a fresh Ubuntu or a fresh Raspbian.
But fear not, help is near!
The Solution
Figuring this out is a pretty tedious process that involves a lot of
$ sudo apt remove backuppc
and
$ sudo rm -rf /var/lib/backuppc/localhost
Once you know it, it's completely trivial: Samba (especially the Samba client) and the Perl(!) rsync bindings are missing, and installing them alongside backuppc simply does the job.
So here we go:
$ sudo apt install samba-common samba smbclient libfile-rsyncp-perl backuppc
Afterwards, still nothing works unless you edit backuppc's Apache conf:
$ sudo nano /etc/apache2/conf-enabled/backuppc.conf
and comment out the Require local
directive (which of course is meant as a security measure to persuade us to properly secure the web interface!):
<RequireAll>
# Comment out this line once you have setup HTTPS and uncommented SSLRequireSSL
#Require local
# This line ensures that only authenticated users may access your backups
Require valid-user
</RequireAll>
Only thing left is restarting everything, and then you're good to go:
$ sudo service backuppc restart
$ sudo service apache2 restart
Further Reading
While doing all this, I came across a couple of links that might be helpful for anybody trying to install the software:
This post describes how to set up a static website on wasabi.com. It is basically a paraphrased version of this support document, BUT with a couple of updates and some clues that took me some time to figure out.
Wasabi is a cloud object storage service that aims to be 100% compatible with Amazon's S3 - while being significantly cheaper. And it is! I'm running my Seafile server with a Wasabi backend, and not only does it work seamlessly, it also costs only a fraction of what I paid at AWS.
In an effort to consolidate services a bit more, I've started looking into ways to use Wasabi for hosting my static websites as well. It's not that these are incredibly expensive on AWS (that would require them to at least have some sort of traffic), but I like not having things spread across too many services, especially if something works this well for me.
4 Steps to your Static Website
I'm using Wasabi as the storage and Cloudflare as my DNS provider. With this setup it doesn't really matter where the domain is registered, so be aware that I simply assume your domain is already connected to Cloudflare.
1. Create a Bucket on Wasabi
Start by creating a new bucket in Wasabi. Make sure that it has exactly the same name as your domain!
In our case the bucket will be called mywebsite.tld.
You also have to select a region. This is important, as the official support document (as well as pretty much every tutorial I found) assumes everybody simply uses us-east-1. As a citizen of the EU, I much prefer hosting my files within the EU, because that makes dealing with GDPR topics a lot easier. So whatever you choose, remember it, because you will need this information in a second.
2. Add a Policy
Next you need to create a bucket policy that makes every file in your bucket automatically world-readable.
Don't follow the official tutorial here! The approach there would either require you to manually adjust permissions on every file, or it would make your bucket not only world-readable but also world-writable (and we wouldn't want that, would we?).
The policy looks exactly like the one AWS is using:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::mywebsite.tld/*"
      ]
    }
  ]
}
The important bit is adjusting the ARN "arn:aws:s3:::mywebsite.tld/*"
to your bucket name.
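If you'd rather not edit the JSON by hand, a hypothetical little helper like this generates the policy for your bucket name, so the ARN can never go out of sync (only the BUCKET variable is yours to change; the aws command at the end is optional and assumes you have the AWS CLI configured with your Wasabi keys):

```shell
#!/bin/sh
# Generate the public-read bucket policy for a given bucket name.
BUCKET="mywebsite.tld"   # replace with your bucket / domain name

cat > policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::${BUCKET}/*"]
    }
  ]
}
EOF

# sanity check before pasting it into the Wasabi console
python3 -m json.tool policy.json > /dev/null

# Since Wasabi speaks the S3 API, the AWS CLI can apply it directly
# against Wasabi's endpoint instead of using the web console:
# aws s3api put-bucket-policy --bucket "$BUCKET" --policy file://policy.json \
#     --endpoint-url https://s3.eu-central-1.wasabisys.com
```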
Apart from uploading the files you want to serve, that's all there is to do on Wasabi.
3. Add a CNAME record in Cloudflare
The next thing you have to do is create a CNAME record in the DNS section of your Cloudflare account. Cloudflare provides a super special feature they call CNAME Flattening. It's a non-standard functionality that allows you to set a CNAME on the root of your domain and therefore point mywebsite.tld to Wasabi!
Now you need to remember the region of your bucket and look up the corresponding service URL for Wasabi's different regions in this document. In my case it is s3.eu-central-1.wasabisys.com.
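Conceptually, the flattened record behaves like a zone-file entry along these lines (illustrative only, with my eu-central-1 endpoint assumed; Cloudflare manages the real record for you and actually publishes resolved A records, which is what makes a "CNAME" on the zone apex legal):

```
mywebsite.tld.   300   IN   CNAME   s3.eu-central-1.wasabisys.com.
```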
4. Fix the Index File Problem
According to the docs, that should be it. And indeed, it is now possible to request any file in your bucket at https://mywebsite.tld/[PATH-TO-FILE].
Unfortunately, something crucial is still missing. Unlike Amazon, Wasabi does not allow you to specify a default index file or a default error file. I asked them in a support request whether there is a trick I'm missing, and they made it 100% clear that they have no intention of solving this; they much rather consider their service a way to serve specific files, not full websites!
That's a bit of a shame, I think. But fortunately we can work around this limitation with a Cloudflare Page Rule that creates an HTTP 301 redirect, forwarding requests with no path to an index file.
In my case that would be:
https://mywebsite.tld/ --> https://mywebsite.tld/index.html
And this is how it looks on Cloudflare: