May 3, 2026
Encrypted Backup Files Without Leaving Plaintext Behind: Atomic Temp-File Hygiene
You decrypt a backup, work on it, re-encrypt, delete the temp file. Three weeks later you find the unencrypted version still sitting in /tmp because the script crashed on step 2. Here's the cleanup-guaranteed pattern.

You have an encrypted backup. You need to extract one file, edit it, re-encrypt, push back. The naive script:
gpg -d backup.tar.gz.gpg > /tmp/backup.tar.gz
tar -xzf /tmp/backup.tar.gz -C /tmp/extract
# ... edit a file ...
tar -czf /tmp/backup.tar.gz -C /tmp/extract .
gpg -e -r yourkey /tmp/backup.tar.gz
mv /tmp/backup.tar.gz.gpg backup.tar.gz.gpg
rm -rf /tmp/extract /tmp/backup.tar.gz
This works. Until step 2 fails (disk full, corrupt archive, signal). Then /tmp/backup.tar.gz and /tmp/extract/ sit on disk indefinitely with your unencrypted data. You discover them three weeks later when running du.
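If you suspect this kind of residue already exists, a sweep along these lines can surface it (the name patterns and age threshold here are examples, not a standard; match them to your script's naming):

```shell
# Look for likely plaintext leftovers in /tmp older than a day
# (patterns are examples; adjust to whatever your scripts write)
find /tmp -maxdepth 2 \( -name '*.tar.gz' -o -name 'extract*' \) \
    -mtime +1 2>/dev/null
```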
The fix is a cleanup-guaranteed pattern. It costs a handful of extra lines and removes the entire class of "plaintext residue from a crashed script" failures.
The pattern: trap + atomic temp dir
#!/usr/bin/env bash
set -euo pipefail
# Unique temp dir; mktemp -d creates it atomically with mode 0700,
# the chmod just makes that explicit
TMPDIR=$(mktemp -d)
chmod 700 "$TMPDIR"
# Cleanup runs on EXIT, INT, and TERM. EXIT also fires after a set -e
# abort, so this covers script success, failure, Ctrl-C, and SIGTERM
# from systemd
cleanup() {
    if [ -n "${TMPDIR:-}" ] && [ -d "$TMPDIR" ]; then
        # shred sensitive files before unlinking (single overwrite is enough on SSD)
        find "$TMPDIR" -type f -exec shred -u -n 1 {} + 2>/dev/null || true
        rm -rf "$TMPDIR"
    fi
}
trap cleanup EXIT INT TERM
# Now do the work — every failure path triggers cleanup
gpg -d backup.tar.gz.gpg > "$TMPDIR/backup.tar.gz"
mkdir -p "$TMPDIR/extract"
tar -xzf "$TMPDIR/backup.tar.gz" -C "$TMPDIR/extract"
# ... edit "$TMPDIR/extract/somefile" ...
tar -czf "$TMPDIR/repacked.tar.gz" -C "$TMPDIR/extract" .
gpg -e -r yourkey -o backup.tar.gz.gpg.new "$TMPDIR/repacked.tar.gz"
mv backup.tar.gz.gpg.new backup.tar.gz.gpg
# trap fires on exit and shreds + removes $TMPDIR
Six load-bearing changes from the naive script:
- set -euo pipefail — exit immediately on any error, don't continue with stale data
- mktemp -d — gets a guaranteed-unique temp dir (avoids races with other processes)
- chmod 700 — only your user can read it (defense if /tmp is shared)
- trap cleanup EXIT INT TERM — cleanup runs no matter how the script exits
- shred -u -n 1 before rm — one overwrite pass; on SSDs that's all that matters anyway, but it makes intent explicit
- Re-encrypt to a .new filename then mv — atomic replacement; no window where the on-disk archive is missing
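The last change is worth seeing in isolation. A minimal sketch of the rename trick (file names are placeholders):

```shell
# Write the new version under a temporary name, then rename over the old
# one. rename(2) is atomic on the same filesystem: readers see either the
# old file or the new file, never a missing or half-written one.
cd "$(mktemp -d)"
printf 'old archive' > backup.gpg
printf 'new archive' > backup.gpg.new
mv backup.gpg.new backup.gpg
cat backup.gpg   # → new archive
```

This is also why the script encrypts to backup.tar.gz.gpg.new first: if gpg dies halfway, the original archive is untouched.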
Together: even if you Ctrl-C in the middle of the re-encrypt, cleanup fires. Even if the script gets SIGKILL'd by the OOM killer (SIGKILL can't be trapped, so cleanup won't run), the temp dir at least has restrictive perms (0700) so other users can't read it.
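You can watch the trap fire on a failure with a toy script, where false stands in for any step that dies:

```shell
# The inner bash aborts at `false` because of set -e, but its EXIT trap
# still runs and prints "cleaned". The || true keeps this demo's own
# exit status at zero.
bash -c '
    set -euo pipefail
    d=$(mktemp -d)
    trap "rm -rf \"$d\"; echo cleaned" EXIT
    false                 # simulated failure
    echo "never reached"
' || true   # prints: cleaned
```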
What set -euo pipefail actually buys you
Most "safe" bash scripts don't set this. The defaults are unsafe in three ways:
- Without -e, a failed command continues to the next line. So if gpg -d fails, the script merrily proceeds to tar -xzf on a missing file, then re-encrypts whatever's in /tmp/extract (possibly nothing, possibly stale).
- Without -u, an unset variable expands to empty. So rm -rf "$TMPDIR/" becomes rm -rf "/" if $TMPDIR is unset. That's the famous SteamOS bug.
- Without pipefail, cmd1 | cmd2 returns cmd2's exit code. So gpg -d backup.gpg | tar -x succeeds even if gpg failed, because tar handled an empty stream and returned 0.
The three flags together (-e, -u, and -o pipefail) capture all of these. There's almost no downside in scripts you control.
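The pipefail difference in particular is easy to demo at a shell prompt:

```shell
# Without pipefail the pipeline reports the last command's status;
# with it, any failing stage makes the whole pipeline fail.
set +o pipefail
false | true
echo "without pipefail: $?"   # → without pipefail: 0
set -o pipefail
false | true
echo "with pipefail: $?"      # → with pipefail: 1
```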
Where Mac and Linux diverge
The above works identically on macOS and Linux except shred — macOS doesn't include it. The macOS equivalents:
# macOS doesn't have shred; BSD rm -P overwrites before deleting
# (same SSD caveat as shred). Note: BSD rm has no --help flag, so
# detect the platform rather than grepping usage output.
# In your cleanup() function:
if command -v shred >/dev/null 2>&1; then
    find "$TMPDIR" -type f -exec shred -u -n 1 {} + 2>/dev/null || true
elif [ "$(uname)" = "Darwin" ]; then
    find "$TMPDIR" -type f -exec rm -P {} + 2>/dev/null || true
fi
rm -rf "$TMPDIR"
On modern SSDs, the overwrite is mostly theater — the underlying flash controller will reuse the cells whenever it pleases, and you can't observe what was where. The point of including shred/rm -P is to make the cleanup intent visible in code review, not to make recovery cryptographically impossible.
Why this pattern matters for backup workflows specifically
Backup files contain everything sensitive your system has. Database dumps, env files, encryption keys, SSH credentials, customer data. A backup file briefly decrypted in /tmp is the worst kind of data exposure: high-value, low-visibility (people don't audit /tmp), persistent (until reboot or manual cleanup).
The cleanup-guaranteed pattern doesn't make this risk go away (the data IS in /tmp during the script's runtime), but it bounds the window to "while the script runs" rather than "indefinitely."
If you can avoid decrypting to disk entirely, that's better — pipe directly:
# Better: never write plaintext to disk if you don't have to
gpg -d backup.tar.gz.gpg | tar -xzf - -O somefile.txt | grep ...
The -O flag makes tar write to stdout; the result never touches disk. Use this when you're just reading from the encrypted archive. The decrypt-to-disk pattern is only needed when you'll modify and re-encrypt.
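A gpg-free sandbox of the same streaming flags (all paths are scratch files the demo creates itself), showing that -O sends a member to stdout:

```shell
# Build a tiny archive, then read one member through a pipe with -O;
# the content goes stdout-to-stdout and is never extracted to disk.
d=$(mktemp -d)
printf 'secret\n' > "$d/somefile.txt"
tar -czf "$d/a.tar.gz" -C "$d" somefile.txt
cat "$d/a.tar.gz" | tar -xzf - -O somefile.txt   # → secret
rm -rf "$d"
```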
Where Server Compass fits
Server Compass's File Manager and SSH terminal already give you a way to extract specific files from a backup without writing the full decrypted archive to disk — the file gets streamed from the remote source through the SSH tunnel directly into the editor. No /tmp exposure window at all.
For workflows where you genuinely need a full local decrypt-and-reencrypt cycle (e.g., editing many files, repacking), the mktemp -d + trap cleanup pattern above is the right tool. Adopt it everywhere a script handles encrypted data.
The defining property: it should be impossible to write the script in a way that leaves plaintext behind, even if you Ctrl-C, even if the disk fills, even if you forget. The trap handler is what makes "impossible" actually true.