For the past two years I've been experimenting with different ways to "decrease my digital footprint" by self-hosting as much as I can and keeping control of my data. Beyond the privacy implications, I'm also interested in the technical side: honestly, I didn't know much about how the web worked, and this has been a good learning experience.
So I'm writing this blog post as a way to document what I did for myself and for others who might be interested as well.
At first I started with a humble Raspberry Pi 2, but it's clearly not powerful enough for some tasks, like streaming video to my TV. Its Ethernet is pretty slow (though it looks like there's a way to get gigabit speeds, I haven't tried), it doesn't have much RAM, and I had a lot of trouble getting Docker to work on it (which, as you'll see, is pretty important in my setup). In the end I bought an x86 Mini-ITX desktop and it's been pretty good.
These are the services I'm currently self-hosting:
- The blog you're reading right now: Ghost
- Blog commenting system: Isso
- E-mail server: Postfix + Dovecot
- Calendar: Radicale
- RSS reader: Selfoss
- Media Center: Emby
- Federated Chat and Voice: Matrix.org
- IP Camera: MotionEye
- Software Project Management: Kanboard
- Git repository: Gogs
- Bookmarks: Wallabag
- Folder syncing: Syncthing
I had a lot of trouble getting these services to work. Each of them has its own configuration, required packages (sometimes even conflicting package versions!), and so on. Upgrading the system also ran the risk of breaking something, even with security updates; it was a very fragile setup. What finally made it all manageable was using Docker containers.
With Docker, each app runs in its own container that is upgradeable and easy to back up. Docker images are configured by the developers, who know best how to set up their apps.
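The upgrade story is what sold me: since the data lives in a host volume, the container itself is disposable. A sketch of the cycle, using the Ghost container from this post as the example (nothing here is destructive to the data volume; the block only runs the docker commands if docker is actually installed):

```shell
# Illustrative upgrade of the Ghost container; ~/ghost holds the content
# and survives the container being thrown away.
if command -v docker >/dev/null 2>&1; then
  docker pull ghost:latest               # fetch the newer image
  docker stop ghost && docker rm ghost   # remove the old container only
  docker run --name ghost \
    -e NODE_ENV="production" \
    -e url="https://blog.catboli.com" \
    -v ~/ghost:/var/lib/ghost/content \
    -p 20000:2368 \
    -d ghost:latest                      # recreate it from the new image
fi
UPGRADE_SKETCH_DONE=yes
```

The same three steps (pull, remove, re-run with the same volumes) work for every container below.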
Next I'll present my personal selection of Docker images. Keep in mind that these instructions will probably be out of date soon, so I'm including the respective Docker Hub links for more information.
My Docker Selection
Ghost

```
docker run --name ghost \
  -e NODE_ENV="production" \
  -e url="https://blog.catboli.com" \
  -v ~/ghost:/var/lib/ghost/content \
  -p 20000:2368 \
  -d ghost:latest
```
Isso

```
docker run -it --name isso \
  -p 30000:8080 \
  -e USERID=1000 -e GROUPID=1000 \
  -v ~/data/isso/config:/config \
  -v ~/data/isso/db:/db \
  -d wonderfall/isso
```
I followed this post for instructions on how to set up Isso and integrate it with Ghost.
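On the Ghost side, the integration boils down to dropping Isso's embed script into the theme's post template. Roughly (the URL is hypothetical; point it at wherever your Isso instance is reachable):

```html
<!-- in the Ghost theme's post template, after the article body -->
<section id="isso-thread"></section>
<script data-isso="https://comments.example.com/"
        src="https://comments.example.com/js/embed.min.js"></script>
```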
Selfoss

```
docker run -d --name selfoss \
  -p 20002:8080 \
  -v ~/data/selfoss:/selfoss/data \
  arckosfr/selfoss
```
Emby

```
docker run -it --name emby \
  -e TZ=EST5EDT \
  -p 20003:8096 \
  -e USERID=1000 -e GROUPID=1000 \
  -v ~/data/emby/config:/config \
  -v ~/data/my_movies:/movies \
  -v ~/data/my_music:/music \
  -d dperson/emby
```
Kanboard

```
docker run -d --name kanboard \
  -p 20004:80 \
  -v ~/data/kanboard/data:/var/www/app/data \
  -v ~/data/kanboard/plugins:/var/www/app/plugins \
  kanboard/kanboard
```
Gogs

```
docker run -d --name=gogs \
  -p 20005:22 \
  -p 20006:3000 \
  -v ~/data/gogs:/data \
  gogs/gogs
```
Wallabag

```
docker run -d --name="wallabag" \
  -v ~/data/wallabag:/var/www/wallabag/data \
  -p 20007:80 \
  wallabag/wallabag
```
MotionEye

This assumes you have a USB camera plugged in at /dev/video0:
```
docker run -d --name=motioneye \
  --device=/dev/video0 \
  -p 20008:8765 \
  -e TIMEZONE="America/Los_Angeles" \
  -e PUID="2018" \
  -e PGID="44" \
  -v ~/data/motioneye/media:/home/nobody/media \
  -v ~/data/motioneye/config:/config \
  -v /etc/localtime:/etc/localtime:ro \
  jshridha/motioneye
```
Matrix

This one took some trouble to set up. I'm using PostgreSQL because I didn't have much luck with SQLite: I had many performance problems (especially after joining some of the main rooms like #matrix:matrix.org that have a lot of traffic). After switching to PostgreSQL those problems went away.
First, I created a PostgreSQL instance:
```
docker run -d --name matrix-pgsql \
  -e POSTGRES_PASSWORD=hardtoguesspasswd \
  -v ~/data/matrix/db:/var/lib/postgresql/data \
  postgres
```
Then I generated the encryption keys:
```
docker run --rm --name="matrix" \
  -v ~/data/matrix/synapse:/data \
  -e SERVER_NAME=yourservername.com \
  -e REPORT_STATS=no \
  silviof/docker-matrix generate
```
You now have to edit your homeserver.yaml file according to your needs; the file is well documented.
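For the record, the database section of homeserver.yaml ends up looking roughly like this. The host name matches the `--link` alias used when starting the Synapse container; the database name and pool sizes here are illustrative:

```yaml
# homeserver.yaml (excerpt) -- values are illustrative
database:
  name: psycopg2        # PostgreSQL driver instead of the default sqlite3
  args:
    user: postgres
    password: hardtoguesspasswd
    database: postgres
    host: postgres      # the --link alias of the matrix-pgsql container
    cp_min: 5
    cp_max: 10
```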
Finally, start it:
```
docker run -d --name="matrix" \
  --link matrix-pgsql:postgres \
  -p 8448:8448 \
  -v ~/data/matrix/synapse:/data \
  silviof/docker-matrix start
```
You also have to set up the appropriate SRV record for your domain:
```
_matrix._tcp SRV 1h 10 0 8448 yourservername.com.
```
Synapse uses self-signed certificates by default, and it seems fine to use those directly instead of getting a CA-signed certificate (though you can still do that). The only benefit I can think of would be being able to put it behind a reverse proxy.
TODO: find out how to enable VoIP for video calls.
Radicale

```
docker run -d --name radicale \
  -p 25232:5232 \
  -v ~/data/radicale:/radicale/data \
  --read-only \
  tomsquest/docker-radicale:latest
```
For authentication you'll need to check out their Git repo and change the config file. I ended up using IMAP auth because I already have an e-mail server set up.
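My IMAP setup is specific to the auth plugin I used, but for reference, the simplest built-in method from Radicale's documentation is htpasswd, which looks like this (the filename is an example; adapt the path to wherever you mount the config into the container):

```ini
# Radicale config (excerpt)
[auth]
type = htpasswd
htpasswd_filename = /config/users
htpasswd_encryption = bcrypt
```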
E-mail server

I didn't use Docker for this, though I've heard it can be an option. I decided to put an old Raspberry Pi I had to good use and let it host my e-mail server, following this great tutorial by Sam Hobbs:
If you follow it, I recommend you don't skip any of the testing Sam does along the way; don't assume things are working, because setting up an e-mail server is a complicated task full of gotchas. Also, do configure SPF and DKIM, otherwise Gmail will treat your e-mails as spam. I've had good luck and my server has been running for almost a year without major issues.
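Both SPF and DKIM are just DNS TXT records. Schematically (the domain and the "mail" selector are examples, and the DKIM public key comes from your OpenDKIM setup):

```
example.com.                  IN TXT "v=spf1 mx -all"
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<your public key>"
```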
It seems there are easier ways of configuring an e-mail server than this, by simply running Docker containers with Postfix and Dovecot. I haven't tried them, but they might be a better option.
Syncthing and backups
I'm using Syncthing to back up my personal data to a laptop. I know Syncthing is not a backup tool, but I'm syncing to a Btrfs volume and I have a systemd service that creates a snapshot once a week using `btrfs subvolume snapshot`.
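The snapshot units are tiny. Something along these lines, with a weekly timer driving a oneshot service (paths and unit names are examples):

```ini
# /etc/systemd/system/btrfs-snapshot.service (illustrative)
[Unit]
Description=Snapshot the Syncthing Btrfs volume

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'btrfs subvolume snapshot -r /mnt/sync /mnt/snapshots/sync-$(date +%%F)'

# /etc/systemd/system/btrfs-snapshot.timer (illustrative)
[Unit]
Description=Weekly Btrfs snapshot

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
```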
EDIT: To back up my server data I have a daily systemd service that uses rsync to copy all the data to a different machine. One important thing: you need to stop all your Docker containers before copying their volume data anywhere. I had some data corruption because my backup script was reading data on the host while the containers were still running. My solution is a bit clunky because I have some downtime while rsyncing the files. A better option could be to use Btrfs on my server too, create subvolume snapshots once a day, and use `btrfs send` instead of rsync.
Reverse proxy and HTTPS

I have a subdomain for each of the services I want to expose to the internet, and I use an Nginx reverse proxy to map each subdomain to the port exposed by the corresponding Docker container.
To get free SSL certificates for my HTTPS websites I'm using Let's Encrypt, following this tutorial.
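As an illustration, the server block for the blog looks roughly like this; the certificate paths assume Let's Encrypt's default layout, and the upstream port is the one Ghost was published on above:

```nginx
# illustrative Nginx server block; adapt names, ports and paths
server {
    listen 443 ssl;
    server_name blog.catboli.com;

    ssl_certificate     /etc/letsencrypt/live/blog.catboli.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blog.catboli.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:20000;        # Ghost's published port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Each service gets a near-identical block, differing only in the server_name and the proxy_pass port.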
It's incredible that we might be reaching a point where having control of your data with 100% open-source software is no longer out of reach.
If you use any of these projects, please donate if you can to keep them going.
That's it, folks! If you got this far, congratulations, and I wish you the best in your self-hosting adventures! If you have any questions, feel free to ask in the comments section.
Discussion on reddit: https://www.reddit.com/r/selfhosted/comments/79oras/my_selfhosted_setup/