May 2, 2026
The Preflight Script: 60 Seconds That Save a Two-Hour Outage
Most proxy and SSL outages are silent regressions, not new failures. A 60-second preflight script that captures current state — then compares after — catches them before users do.

You change one block in nginx.conf, reload, and everything still loads. Two days later someone reports a route returning a 502 from a service nobody uses very often. You roll back, can't reproduce locally, and lose the afternoon to git bisect on a config file.
This is the single most common shape of self-hosted outage. The change worked for the route you tested. It silently broke a different route nobody had open at the time.
The fix isn't more careful editing. It's a 60-second preflight script that captures the current state of every route, then a postflight script that compares.
What "current state" means here
Before any nginx / Caddy / Traefik change, capture three things for every host you serve:
- HTTP status for the canonical path of each route (usually /)
- Cert validity — fingerprint and expiry date
- Response body hash for static endpoints that should be byte-stable
A minimal preflight in shell:
#!/usr/bin/env bash
# preflight.sh — capture current proxy/SSL state
set -e
hosts=(api.example.com app.example.com docs.example.com)
out=/tmp/preflight-$(date +%s)
mkdir -p "$out"
for h in "${hosts[@]}"; do
  # status + headers; drop the Date header so reruns don't diff on it
  curl -sI -m 5 "https://$h/" 2>&1 | grep -vi '^date:' > "$out/$h.head" || true
  # cert fingerprint and validity window
  echo | openssl s_client -servername "$h" -connect "$h:443" 2>/dev/null \
    | openssl x509 -noout -fingerprint -dates > "$out/$h.cert" 2>&1 || true
  # body hash for an endpoint that should be byte-stable
  curl -s -m 5 "https://$h/health" | sha256sum > "$out/$h.health" 2>&1 || true
done
echo "preflight saved to $out"
echo "preflight saved to $out"
That's it. Run before the change. The captured snapshot lives in /tmp until you compare against it.
The postflight check
Same script, different output dir, then diff the two:
./preflight.sh # before change
# ... reload nginx, etc. ...
./preflight.sh # after change
diff -r /tmp/preflight-1700000000 /tmp/preflight-1700000300
Anything that diff prints is a regression candidate. Status codes that flipped, certs that changed unexpectedly, health endpoints that returned different bodies. Most of the output should be empty.
The diff is what catches the silent breakage. Without it, you find out from a user three days later.
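Since a deploy script needs to branch on that result, the diff can be wrapped in a small helper that fails loudly. A sketch; `compare_preflight` is a hypothetical name, not part of the script above:

```shell
# compare_preflight BEFORE_DIR AFTER_DIR
# Diffs two preflight snapshot directories and returns nonzero if anything
# changed, so CI or a deploy script can treat any drift as a failure.
compare_preflight() {
  local before="$1" after="$2"
  if diff -r "$before" "$after"; then
    echo "no regression candidates"
  else
    echo "regression candidates above: investigate before calling the change done"
    return 1
  fi
}
```

Anything the inner diff prints reaches the terminal unchanged; the helper only adds an exit code on top.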
What "silent breakage" actually looks like
A few patterns this catches that visual inspection misses:
- Certificate served by the wrong virtual host. You added a new server block above an existing one; the new one's cert now serves for both domains because of default_server ordering. SSL handshake completes for both, but the cert mismatch causes browser warnings — not visible in curl without -v.
- Upstream pool change drops a backend. You added a new upstream entry but the syntax silently dropped one of the existing ones. Health endpoint still works because round-robin sometimes hits the surviving backends. Will fail eventually.
- HSTS preload accidentally removed. Removing one add_header line that you thought was a duplicate. Browsers cached the old policy so existing users don't notice. New users get a downgrade-attack-vulnerable connection.
- Rate limit zone forgotten. Refactored a limit_req_zone definition. New definition has the same name but a different rate. Production traffic now gets throttled at 1/10th the previous rate.
Each of these has bitten production setups. Each shows up as a visible diff in the preflight output.
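The dropped-backend case can also be checked statically, before any traffic is involved. A sketch of a hypothetical `count_upstream_servers` helper, assuming the common one-directive-per-line nginx config style:

```shell
# count_upstream_servers CONF_FILE POOL_NAME
# Prints the number of `server` entries inside the named upstream block,
# so a deploy script can assert the pool didn't silently shrink.
# Assumes one directive per line, as in most hand-written nginx configs.
count_upstream_servers() {
  awk -v pool="$2" '
    $1 == "upstream" && $2 == pool { inblock = 1; next }  # enter the block
    inblock && /}/ { inblock = 0 }                        # leave it at the brace
    inblock && $1 == "server" { n++ }                     # count backends
    END { print n + 0 }
  ' "$1"
}
```

A deploy script can compare the count before and after an edit, the same way the preflight compares HTTP state.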
When the preflight should run automatically
Three triggers worth wiring up:
- Pre-commit on the proxy config repo. Refuses to commit if preflight wasn't captured.
- Before any nginx -s reload in your deploy script. Capture state, reload, compare. Auto-rollback on diff.
- Daily as a cron. Compare against yesterday's snapshot. Catches drift you didn't intend.
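The second trigger, capture then reload then compare, can be sketched as a wrapper. `safe_change` is a hypothetical name, and it assumes the capture command prints just the snapshot directory path (the preflight script above prints a full sentence, so it would need a one-line tweak):

```shell
# safe_change CAPTURE_CMD CHANGE_CMD
# CAPTURE_CMD must print a snapshot directory path; CHANGE_CMD applies the
# change, e.g. "nginx -s reload". Returns nonzero if the change failed or
# the before/after snapshots differ, so the caller knows to roll back.
safe_change() {
  local capture="$1" change="$2"
  local before after
  before=$($capture)                      # snapshot before the change
  $change || return 1                     # the change itself failed: stop here
  after=$($capture)                       # snapshot after the change
  if ! diff -r "$before" "$after" > /dev/null 2>&1; then
    echo "postflight diff detected: treat the change as failed and roll back"
    return 1
  fi
  echo "postflight clean"
}
```

In a deploy script this would be `safe_change ./capture.sh "nginx -s reload" || rollback`, with `rollback` being whatever restores your previous config.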
The third one is underrated. Servers accumulate state that no one remembers changing — auto-renewed certs, OS package updates that ship new nginx defaults, TLS library updates that change cipher availability. A daily diff lets you notice within 24 hours instead of when something breaks.
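The daily comparison can be sketched as a helper that diffs the two most recent dated snapshot directories. `drift_check` and the dated-directory layout are assumptions for illustration, not part of the article's script:

```shell
# drift_check SNAP_ROOT
# Assumes each preflight run saves into a dated subdirectory of SNAP_ROOT
# (e.g. SNAP_ROOT/2026-05-02). Diffs the two most recent snapshots; any
# output from the diff is drift nobody intended, so a cron job can mail it.
drift_check() {
  local root="$1" dirs prev latest
  dirs=$(ls -1d "$root"/*/ 2>/dev/null | sort | tail -n 2)
  prev=$(echo "$dirs" | head -n 1)
  latest=$(echo "$dirs" | tail -n 1)
  if [ -z "$prev" ] || [ "$prev" = "$latest" ]; then
    echo "need at least two snapshots under $root"
    return 0
  fi
  diff -r "$prev" "$latest" || echo "drift between $prev and $latest"
}
```

Cron would run the capture into a dated directory first, then call `drift_check` on the root and alert on any non-empty output.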
Where Server Compass fits
Server Compass already runs structured deploys — adding a preflight hook before each proxy-related deploy is the next obvious step. Until that ships as a built-in, the script above runs anywhere you have bash, curl, and openssl. Stick it in your deploy directory and run it before any nginx change.
The pattern is older than this article: run a snapshot, change the thing, compare. The unfamiliar part is doing it for every change, not just risky-looking ones. The risky-looking ones are the ones you're already careful with. The boring-looking ones are where outages hide.