Backups You Can Actually Restore
The backup nobody tests is the backup that doesn't exist.
The three questions
Before you write a single line of backup code, answer three questions:

- How much data can I lose? That is your RPO, the recovery point objective.
- How long can I be down? That is your RTO, the recovery time objective.
- Where will backups live if the whole server is gone? Off-site, not on the same box.

If your answer to the third question is "on the same box", you do not have backups; you have a backup of your backup.
pg_dump on a nightly cron
The simplest Postgres backup is a nightly pg_dump piped to gzip and uploaded to S3 (or Backblaze B2, about a quarter of the price, with the same API). A short shell script in cron does the whole job:
```shell
#!/bin/bash
# pipefail matters: without it a failing pg_dump is masked by gzip succeeding
set -euo pipefail
cd /srv/app   # wherever your docker-compose.yml lives; cron won't start there
DATE=$(date +%Y%m%d-%H%M%S)
docker compose exec -T db pg_dump -U app app | gzip > /tmp/backup-$DATE.sql.gz
aws s3 cp /tmp/backup-$DATE.sql.gz s3://my-backups/postgres/
rm /tmp/backup-$DATE.sql.gz
```
Schedule it with `0 3 * * * /usr/local/bin/backup.sh` in root's crontab. Rotate old backups with an S3 lifecycle rule; note that lifecycle rules can only expire objects by age, so a tiered scheme (keep 7 daily, 4 weekly, 12 monthly) means uploading to separate daily/, weekly/, and monthly/ prefixes, each with its own expiry, or running a small prune script.
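If a flat retention window is enough, the lifecycle rule is short. A sketch (the bucket and `postgres/` prefix match the script above; 30 days is an arbitrary choice):

```json
{
  "Rules": [
    {
      "ID": "expire-old-postgres-dumps",
      "Filter": { "Prefix": "postgres/" },
      "Status": "Enabled",
      "Expiration": { "Days": 30 }
    }
  ]
}
```

Apply it with `aws s3api put-bucket-lifecycle-configuration --bucket my-backups --lifecycle-configuration file://lifecycle.json`.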
Backblaze B2 is underrated
Backblaze B2 is S3-compatible, costs $6/TB/month (vs AWS S3 at $23/TB/month), and has free egress up to 3× your stored volume each month. For a self-hoster, it is the obvious default. Point your aws CLI at `--endpoint-url https://s3.us-west-002.backblazeb2.com` (the region part matches your bucket's endpoint, shown in the B2 console) and the same script works.
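To avoid typing credentials and endpoints every time, give B2 its own profile (a sketch; the key values are placeholders from your B2 account page):

```ini
# ~/.aws/credentials
[b2]
aws_access_key_id     = <your-keyID>
aws_secret_access_key = <your-applicationKey>

# ~/.aws/config
[profile b2]
region = us-west-002
```

Then call `aws --profile b2 s3 cp ... --endpoint-url https://s3.us-west-002.backblazeb2.com`. Recent AWS CLI releases (v2.13+) also accept an `endpoint_url = ...` line under the profile, which lets you drop the flag entirely.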
The monthly restore drill
Write a second script that downloads the latest backup into a fresh throwaway Postgres container and runs `\dt` and `SELECT count(*) FROM users;`. Schedule it for the first of every month via cron. If the drill fails, you find out before you actually need the backup. This step is the entire difference between a backup strategy and a backup fantasy.
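A drill script might look like the sketch below. It is an outline, not gospel: the bucket layout, database name, and `users` table come from the examples above, and the ownership warnings psql emits (the throwaway container has no `app` role) are harmless for a drill.

```shell
#!/bin/bash
set -euo pipefail

# Fetch the newest dump (assumes the bucket layout from the backup script)
LATEST=$(aws s3 ls s3://my-backups/postgres/ | sort | tail -n 1 | awk '{print $4}')
aws s3 cp "s3://my-backups/postgres/$LATEST" /tmp/drill.sql.gz

# Throwaway Postgres; nothing here survives the drill
docker run -d --name restore-drill -e POSTGRES_PASSWORD=drill postgres:16
sleep 10   # crude; a pg_isready loop is more robust

# Recreate the database and load the dump
docker exec restore-drill createdb -U postgres app
gunzip -c /tmp/drill.sql.gz | docker exec -i restore-drill psql -U postgres -d app

# Sanity checks: tables exist, users table is non-empty
docker exec restore-drill psql -U postgres -d app -c '\dt'
ROWS=$(docker exec restore-drill psql -U postgres -d app -tAc 'SELECT count(*) FROM users;')
echo "users: $ROWS rows"
[ "$ROWS" -gt 0 ]

# Clean up
docker rm -f restore-drill
rm /tmp/drill.sql.gz
```

Have cron mail you the output (or push it to whatever alerting you already have) so a failed drill is loud, not silent.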
Volume backups for the rest
For Docker named volumes (uploads, user files, anything not in the database), spin up a tiny alpine container that bind-mounts the volume and tars it straight to S3:
```shell
docker run --rm -v uploads:/data -v /tmp:/backup alpine \
  tar czf /backup/uploads-$(date +%F).tar.gz -C /data .
```
Upload the tarball the same way as the database dump. One cron, one bucket, one peace of mind.
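Restoring is the same container in reverse: `docker run --rm -v uploads:/data -v /tmp:/backup alpine tar xzf /backup/uploads-<date>.tar.gz -C /data`. The `-C` flag is what keeps this painless: archiving with `tar czf backup.tar.gz -C /data .` stores relative paths, so the archive unpacks cleanly into any target directory. A quick local round trip (throwaway paths, no Docker needed) shows the pattern:

```shell
# Archive one directory with relative paths, restore into another,
# and confirm the file came back intact.
mkdir -p /tmp/tar-demo/data /tmp/tar-demo/restore
echo "hello" > /tmp/tar-demo/data/file.txt
tar czf /tmp/tar-demo/uploads.tar.gz -C /tmp/tar-demo/data .
tar xzf /tmp/tar-demo/uploads.tar.gz -C /tmp/tar-demo/restore
cat /tmp/tar-demo/restore/file.txt   # prints "hello"
```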
Key takeaways
- Backups that live on the same box are not backups
- `pg_dump` piped to gzip piped to S3 is the whole recipe
- Backblaze B2 is S3-compatible and ~4× cheaper than AWS
- Run a restore drill monthly — untested backups fail when you need them