I’ve always been hesitant to run my own database instance, because I was afraid it might break or I’d have to deal with complex upgrade procedures. So I’ve stored as much data as possible in plain files or tried to use SQLite. While SQLite is quite popular in the dev community, it never really worked for me, perhaps because I often create data with scripts on my local PC but also want to use that data in applications on a server.

Recently, I read a discussion about PostgreSQL version upgrade difficulties. One complaint - whether justified or not - was that sometimes you have to dump your entire database and re-import it into the new version. In the discussion this was framed as a negative, but for my private purposes it’s exactly what I want: a plaintext backup of my data that I know I can always import into another database instance if something crashes.

So here’s a short collection of the scripts I use to back up my database. I’m using a systemd timer to trigger the backup once per day; Persistent=true ensures that the backup is also executed after a reboot in case the server was down during a backup window.

[Unit]
Description=PostgreSQL Backup Timer

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

A timer needs a corresponding service file which it can start. The unit runs as the user postgres, because that user can access the PostgreSQL server without a password (but only from the host PostgreSQL is running on). Since postgres has no home directory, I simply store the executable in my own user’s home directory. As I administer the server manually (automation was just too much overhead for a hobby box), I don’t like to put executables into global paths where I might forget about them.

[Unit]
Description=PostgreSQL Backup
Requires=postgresql.service

[Service]
User=postgres
ExecStart=/home/skoch/bin/postgresql-backup.sh
Type=oneshot
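
With both unit files in place, the timer has to be enabled and started once. A sketch, assuming the files are saved as postgresql-backup.timer and postgresql-backup.service under /etc/systemd/system/ (the names are illustrative):

```shell
# Reload unit files, then enable the timer (not the service) so it
# survives reboots; --now also starts it immediately.
sudo systemctl daemon-reload
sudo systemctl enable --now postgresql-backup.timer

# Show when the timer last fired and when it will fire next
systemctl list-timers postgresql-backup.timer
```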

In the shell script postgresql-backup.sh I run pg_dumpall to dump all databases into a gzipped file, and delete backups older than 30 days.

#!/usr/bin/env bash
set -euo pipefail  # abort if pg_dumpall or gzip fails

BACKUP_DIR="/var/backups/postgresql"

# Dump all databases (including roles) into one gzipped SQL file
pg_dumpall | gzip > "$BACKUP_DIR/$(hostname)_$(date +%Y%m%d).sql.gz"
# Remove backups older than 30 days
find "$BACKUP_DIR" -type f -mtime +30 -exec rm {} \;
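
The payoff of a plaintext dump is the simple restore path. A sketch of replaying a dump into a fresh instance, run as the postgres user (the file name is illustrative):

```shell
# pg_dumpall emits plain SQL, including CREATE DATABASE and role
# statements, so psql can replay it while connected to any database.
gunzip -c /var/backups/postgresql/myhost_20240101.sql.gz | psql -d postgres
```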

The PostgreSQL backup runs on a server. From a computer in my local network I periodically download the backups (again with a shell script and a systemd timer). Once the backups are in my local network, they get backed up by my local file system backup procedures. The systemd timer looks just the same as on the server; the service file of course has a few adjustments:

[Unit]
Description=PostgreSQL Backup Download

[Service]
User=skoch
ExecStart=/home/skoch/bin/postgresql-backup-download.sh
Type=oneshot

And the script is just a simple rsync call. At the moment I do not care about deletions on my local computer; it should have enough storage for a while, and I don’t expect my private database to become really big.

#!/usr/bin/env bash

rsync -a kafka:/var/backups/postgresql/ /data/it/postgresql/

I do not maintain a comments section. If you have any questions or comments regarding my posts, please do not hesitate to send me an e-mail to blog@stefan-koch.name.
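
Since the archives are plain gzip files, a cheap sanity check after downloading is gzip -t, which verifies an archive without extracting it. A sketch with a throwaway file; in practice I’d point it at the files under /data/it/postgresql/:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Create a throwaway gzipped "dump" just to demonstrate the check
tmp=$(mktemp -d)
echo "SELECT 1;" | gzip > "$tmp/demo.sql.gz"

# gzip -t exits non-zero if the archive is corrupt
if gzip -t "$tmp/demo.sql.gz"; then
    echo "archive OK"
fi
```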