Published: 2026-03-29

Complete guide to FreeBSD backup strategies. Covers ZFS snapshots, BorgBackup, Restic, cloud backup to Backblaze B2, 3-2-1 rule, automated scripts, disaster recovery, and testing restores.

# FreeBSD Backup Strategy: Complete Guide

Backups are the one thing that separates a recoverable system from a catastrophe. Every FreeBSD administrator will eventually face hardware failure, accidental deletion, ransomware, or a bad upgrade. The question is not whether it will happen, but whether you will be ready.

This guide covers every layer of a production backup strategy on FreeBSD -- from ZFS snapshots that recover a deleted file in seconds, to encrypted off-site backups that survive a building fire. Every command runs on FreeBSD 14.x. Every script is cron-ready. No theory without practice.

## Table of Contents

1. [Backup Fundamentals](#backup-fundamentals)

2. [ZFS Snapshots](#zfs-snapshots)

3. [ZFS Send/Receive for Replication](#zfs-sendreceive-for-replication)

4. [Sanoid and Syncoid for Automated ZFS Replication](#sanoid-and-syncoid-for-automated-zfs-replication)

5. [BorgBackup on FreeBSD](#borgbackup-on-freebsd)

6. [Restic on FreeBSD](#restic-on-freebsd)

7. [Cloud Backup to Backblaze B2](#cloud-backup-to-backblaze-b2)

8. [Database Backups](#database-backups)

9. [Automating Backups](#automating-backups)

10. [Testing Restores](#testing-restores)

11. [Disaster Recovery Planning](#disaster-recovery-planning)

12. [Comparison Table](#comparison-table)

13. [FAQ](#faq)

---

## Backup Fundamentals

Before choosing tools, you need a framework for thinking about backups. Three concepts matter above all else.

### The 3-2-1 Rule

The 3-2-1 rule is the minimum viable backup strategy:

- **3 copies** of your data (the original plus two backups).

- **2 different storage media** (e.g., local disk and cloud, or disk and tape).

- **1 copy off-site** (survives fire, flood, theft, or ransomware that spreads across your LAN).

Some organizations extend this to 3-2-1-1-0: one copy offline (air-gapped), and zero unverified backups. The core idea remains the same -- redundancy across failure domains.

### RPO and RTO

**Recovery Point Objective (RPO)** is how much data you can afford to lose. If your RPO is one hour, you need backups at least every hour. If your RPO is zero, you need synchronous replication.

**Recovery Time Objective (RTO)** is how fast you need to be back online. ZFS snapshots give you sub-second RTO for file recovery. Restoring from a cloud backup of a 500 GB dataset might take hours.

Define RPO and RTO for each service before choosing tools. A PostgreSQL database serving production traffic has different requirements than a static website.
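The RPO comparison a monitoring script performs reduces to epoch arithmetic. A minimal sketch -- the function name and timestamps are illustrative, not from any particular tool:

```sh
# Given the epoch time of the newest backup and the current time,
# compute the backup age in whole hours and compare it to the RPO.
rpo_ok() {
  # $1 = newest backup epoch, $2 = current epoch, $3 = RPO in hours
  age_hours=$(( ($2 - $1) / 3600 ))
  [ "$age_hours" -le "$3" ]
}

# Hypothetical timestamps: backup taken two hours ago, RPO of one hour.
if rpo_ok 1774000000 1774007200 1; then
  echo "within RPO"
else
  echo "RPO violated"
fi
```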

### Full, Incremental, and Differential

- **Full backup:** Copies everything. Simple. Slow. Expensive in storage.

- **Incremental backup:** Copies only what changed since the last backup (full or incremental). Fast. Space-efficient. Requires the full chain to restore.

- **Differential backup:** Copies everything that changed since the last full backup. A middle ground -- larger than incremental, but restores need only the last full plus the latest differential.

ZFS snapshots are inherently incremental at the block level. BorgBackup and Restic use content-defined chunking with deduplication, which is effectively incremental with the restoration simplicity of a full backup. Modern tools have made the old full/incremental/differential taxonomy less relevant, but the concepts still matter when planning storage capacity.
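To see why the taxonomy still matters for capacity planning, here is a back-of-envelope sketch in shell arithmetic; the dataset size and change rate are illustrative numbers, not measurements:

```sh
# Rough capacity planning: 30 days of retention for a 1000 GB dataset
# with a 5% daily change rate (illustrative numbers).
SIZE_GB=1000
CHANGE_PCT=5
DAYS=30

# Daily fulls: every backup stores the whole dataset.
full_total=$(( SIZE_GB * DAYS ))

# One full plus daily incrementals: each increment stores only the delta.
incr_total=$(( SIZE_GB + SIZE_GB * CHANGE_PCT / 100 * (DAYS - 1) ))

echo "daily fulls:         ${full_total} GB"
echo "full + incrementals: ${incr_total} GB"
```

With these numbers, 30 daily fulls need 30000 GB while one full plus 29 incrementals need 2450 GB -- the gap that deduplicating tools close automatically.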

---

## ZFS Snapshots

If you are running FreeBSD with ZFS -- and you should be (see the [ZFS guide](/blog/zfs-freebsd-guide/)) -- snapshots are your first line of defense. They are instant, space-efficient, and require zero additional software.

### Creating Snapshots

A snapshot captures the exact state of a dataset at a point in time:

```sh
zfs snapshot zroot/data@2026-03-29_14-00
```

Snapshots are read-only. They consume no space at creation time. As blocks change in the active dataset, the snapshot retains the old blocks, so space usage grows over time proportional to your change rate.

List all snapshots:

```sh
zfs list -t snapshot
```

Check the space consumed by each snapshot of a dataset:

```sh
zfs list -t snapshot -o name,used,refer zroot/data
```

### Recursive Snapshots

To snapshot a dataset and all its children:

```sh
zfs snapshot -r zroot@daily-2026-03-29
```

This is essential for consistent backups of dataset hierarchies. A [FreeBSD NAS](/blog/freebsd-nas-build/) with datasets for photos, documents, and media can be snapshotted atomically.

### Accessing Snapshot Data

Every dataset exposes a hidden `.zfs/snapshot` directory:

```sh
ls /data/.zfs/snapshot/2026-03-29_14-00/
```

Users can recover their own files by copying from this hidden directory. No administrator intervention needed. This alone justifies running ZFS.
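Recovering a file is just a copy out of that hidden tree. A small helper sketch -- the function and the file path are hypothetical, shown only to make the path construction concrete:

```sh
# Build the hidden snapshot path for a file inside a dataset's mountpoint.
snap_path() {
  # $1 = dataset mountpoint, $2 = snapshot name, $3 = file path relative to $1
  printf '%s/.zfs/snapshot/%s/%s\n' "$1" "$2" "$3"
}

# Hypothetical recovery: copy a file back from yesterday's snapshot.
src=$(snap_path /data 2026-03-29_14-00 reports/q1.txt)
# cp "$src" /data/reports/q1.txt
echo "$src"
```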

### Rolling Back

To roll a dataset back to a snapshot, discarding all changes made after it:

```sh
zfs rollback zroot/data@2026-03-29_14-00
```

If intermediate snapshots exist, you must either destroy them first or pass `-r` to destroy them as part of the rollback:

```sh
zfs rollback -r zroot/data@2026-03-29_14-00
```

Rollback is destructive to anything written after the snapshot. Use it deliberately.

### Automating Snapshot Rotation

A cron-ready script to create daily snapshots and remove those older than 30 days:

```sh
#!/bin/sh
# /usr/local/bin/zfs-snapshot-rotate.sh
# Creates a daily recursive snapshot and prunes snapshots older than 30 days.

POOL="zroot"
DATE=$(date +%Y-%m-%d_%H-%M)
RETENTION_DAYS=30

# Create snapshot
if ! zfs snapshot -r "${POOL}@auto-${DATE}"; then
    echo "ERROR: Failed to create snapshot ${POOL}@auto-${DATE}" | \
        mail -s "ZFS Snapshot FAILED on $(hostname)" admin@example.com
    exit 1
fi

# Prune old snapshots
CUTOFF=$(date -v-${RETENTION_DAYS}d +%Y-%m-%d)
zfs list -H -t snapshot -o name -r "${POOL}" | grep '@auto-' | while read -r snap; do
    SNAP_DATE=$(echo "$snap" | sed 's/.*@auto-//' | cut -d_ -f1)
    if [ "$SNAP_DATE" '<' "$CUTOFF" ]; then
        zfs destroy "$snap"
    fi
done
```

Add to root's crontab:

```
0 2 * * * /usr/local/bin/zfs-snapshot-rotate.sh >> /var/log/zfs-snapshots.log 2>&1
```
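The pruning loop above compares dates with the shell's string comparison operator. That is safe because zero-padded ISO-8601 dates sort chronologically when sorted lexically -- every field is fixed-width and ordered most-significant first:

```sh
# ISO-8601 dates (YYYY-MM-DD) compare correctly as plain strings.
older() {
  # Succeeds if date $1 is strictly before date $2.
  [ "$1" \< "$2" ]
}

older 2026-02-28 2026-03-01 && echo "2026-02-28 is older"
older 2026-03-29 2026-03-01 || echo "2026-03-29 is not older"
```

This is also why the snapshot names embed zero-padded dates; a format like `2026-3-9` would break the comparison.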

### Space Management

Snapshots hold references to old blocks. If you delete a large file but snapshots still reference it, the space is not freed. Monitor snapshot space:

```sh
zfs list -t snapshot -o name,used,refer -s used | tail -20
```

If a snapshot consumes excessive space, destroy it:

```sh
zfs destroy zroot/data@old-snapshot
```

---

## ZFS Send/Receive for Replication

Snapshots protect against accidental deletion and corruption. They do not protect against disk failure if the pool itself is lost. ZFS send/receive copies snapshot data -- including incremental deltas -- to another pool, another machine, or a file.

### Local Replication

Send a snapshot to a second pool on the same machine:

```sh
zfs send zroot/data@2026-03-29 | zfs receive backup/data
```

For incremental sends (far faster after the initial full send):

```sh
zfs send -i zroot/data@2026-03-28 zroot/data@2026-03-29 | zfs receive backup/data
```

The `-i` flag sends only the delta between the two snapshots.

### Remote Replication via SSH

Send to a remote FreeBSD machine:

```sh
zfs send -i zroot/data@2026-03-28 zroot/data@2026-03-29 | \
    ssh backup-server zfs receive tank/backups/data
```

For compressed transfer over slow links:

```sh
zfs send -i zroot/data@2026-03-28 zroot/data@2026-03-29 | \
    zstd | ssh backup-server "zstd -d | zfs receive tank/backups/data"
```

### Resumable Sends

If a transfer is interrupted, ZFS can resume it -- provided the receiving side used `-s`, which preserves the partially received state:

```sh
zfs receive -s tank/backups/data
```

After an interruption, read the resume token on the receiving side:

```sh
zfs get receive_resume_token tank/backups/data
```

Resume from the sending side by passing that token to `zfs send -t`:

```sh
zfs send -t <resume_token> | ssh backup-server zfs receive -s tank/backups/data
```

### A Complete Replication Script

```sh
#!/bin/sh
# /usr/local/bin/zfs-replicate.sh
# Incremental ZFS replication to a remote host.

SRC_DATASET="zroot/data"
DST_HOST="backup-server"
DST_DATASET="tank/backups/data"
SNAP_PREFIX="replicate"
DATE=$(date +%Y-%m-%d_%H-%M)

# Create new snapshot
NEW_SNAP="${SRC_DATASET}@${SNAP_PREFIX}-${DATE}"
zfs snapshot -r "$NEW_SNAP"

# Find the previous replication snapshot (second-newest by name)
PREV_SNAP=$(zfs list -H -t snapshot -o name -r "$SRC_DATASET" | \
    grep "@${SNAP_PREFIX}-" | sort | tail -2 | head -1)

if [ "$PREV_SNAP" = "$NEW_SNAP" ]; then
    # First replication -- full send
    zfs send -R "$NEW_SNAP" | ssh "$DST_HOST" zfs receive -F "$DST_DATASET"
else
    # Incremental send
    zfs send -R -i "$PREV_SNAP" "$NEW_SNAP" | \
        ssh "$DST_HOST" zfs receive -F "$DST_DATASET"
fi

STATUS=$?
if [ $STATUS -ne 0 ]; then
    echo "ZFS replication FAILED: ${SRC_DATASET} -> ${DST_HOST}:${DST_DATASET}" | \
        mail -s "ZFS Replication FAILED on $(hostname)" admin@example.com
    exit 1
fi

# Clean up old snapshots, keeping the newest 7.
# (head -n -7 is a GNU extension; FreeBSD's head does not support it.)
zfs list -H -t snapshot -o name -r "$SRC_DATASET" | \
    grep "@${SNAP_PREFIX}-" | sort -r | tail -n +8 | xargs -I{} zfs destroy {}
```
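The cleanup step expresses "all but the newest N". Since `head -n -7` is a GNU extension that FreeBSD's head(1) rejects, the portable idiom is to sort newest-first and skip the first N lines with `tail -n +K` (POSIX). A sketch with made-up snapshot names:

```sh
# "All but the newest N" without GNU head: sort newest-first, then
# skip the first N lines. tail -n +K prints from line K onward (POSIX).
keep_last() {
  # $1 = N; reads names on stdin, prints the ones eligible for deletion.
  sort -r | tail -n "+$(( $1 + 1 ))"
}

printf '%s\n' snap-01 snap-02 snap-03 snap-04 snap-05 | keep_last 3
# prints snap-02 and snap-01 (the two oldest)
```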

---

## Sanoid and Syncoid for Automated ZFS Replication

Writing your own snapshot and replication scripts works, but Sanoid and Syncoid handle the edge cases you have not thought of yet -- interrupted transfers, bookmark-based incremental tracking, and configurable retention policies.

### Installation

```sh
pkg install sanoid
```

This installs both `sanoid` (snapshot management) and `syncoid` (replication).

### Sanoid Configuration

Create `/usr/local/etc/sanoid/sanoid.conf`:

```ini
[zroot/data]
use_template = production
recursive = yes

[zroot/postgres]
use_template = database

[template_production]
hourly = 24
daily = 30
monthly = 6
yearly = 1
autosnap = yes
autoprune = yes

[template_database]
hourly = 48
daily = 60
monthly = 12
yearly = 2
autosnap = yes
autoprune = yes
```

Run sanoid to take and prune snapshots:

```sh
sanoid --cron
```

Add to crontab to run every 15 minutes:

```
*/15 * * * * /usr/local/bin/sanoid --cron
```

### Syncoid for Replication

Syncoid handles incremental sends, creates snapshots automatically, and resumes interrupted transfers:

```sh
syncoid zroot/data backup-server:tank/backups/data
```

For recursive replication of an entire hierarchy:

```sh
syncoid -r zroot/data backup-server:tank/backups/data
```

Syncoid determines the latest common snapshot between source and destination and sends only the delta. Add it to cron:

```
0 * * * * /usr/local/bin/syncoid -r zroot/data backup-server:tank/backups/data >> /var/log/syncoid.log 2>&1
```

Sanoid and Syncoid together give you automated, policy-driven ZFS snapshot management and replication with minimal configuration. For any serious [FreeBSD NAS](/blog/freebsd-nas-build/), they are the standard answer.
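Conceptually, the incremental base Syncoid picks is the newest snapshot name present on both sides. A sketch of that selection in plain shell -- an illustration only, not Syncoid's actual implementation:

```sh
# Newest snapshot name present in both lists = the incremental base.
latest_common() {
  # $1 = newline-separated source snapshot names, $2 = destination names
  a=$(mktemp) b=$(mktemp)
  printf '%s\n' "$1" | sort > "$a"
  printf '%s\n' "$2" | sort > "$b"
  comm -12 "$a" "$b" | tail -1
  rm -f "$a" "$b"
}

SRC="replicate-2026-03-27
replicate-2026-03-28
replicate-2026-03-29"
DST="replicate-2026-03-27
replicate-2026-03-28"

latest_common "$SRC" "$DST"   # -> replicate-2026-03-28
```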

---

## BorgBackup on FreeBSD

ZFS snapshots and replication handle block-level backup within the ZFS ecosystem. BorgBackup adds something different: content-aware deduplication, encryption, and compression that works at the file level. It is ideal for backing up to non-ZFS targets, cloud storage, or untrusted remote servers.

### Installation

```sh
pkg install py311-borgbackup
```

### Initializing a Repository

Create an encrypted local repository:

```sh
borg init --encryption=repokey /backup/borg-repo
```

For a remote repository over SSH:

```sh
borg init --encryption=repokey ssh://backup-user@backup-server/tank/borg-repo
```

Save the repository key and passphrase securely -- `borg key export` writes a copy of the key you can store off-site. Without them, your backups are unrecoverable.

### Creating Backups

```sh
export BORG_PASSPHRASE='your-secure-passphrase'

borg create --stats --progress --compression zstd,6 \
    /backup/borg-repo::'{hostname}-{now:%Y-%m-%d_%H-%M}' \
    /etc \
    /usr/local/etc \
    /home \
    /var/db/postgres \
    --exclude '*.tmp' \
    --exclude '/home/*/.cache'
```

BorgBackup deduplicates across all archives in the repository. If two machines back up the same file, it is stored once. Compression reduces size further.

### Listing and Extracting

List archives:

```sh
borg list /backup/borg-repo
```

List files in a specific archive:

```sh
borg list /backup/borg-repo::myhost-2026-03-29_14-00
```

Extract a single file:

```sh
borg extract /backup/borg-repo::myhost-2026-03-29_14-00 home/user/important-document.txt
```

Extract everything (borg extracts into the current directory):

```sh
cd /tmp/restore && borg extract /backup/borg-repo::myhost-2026-03-29_14-00
```

### Pruning Old Archives

BorgBackup's prune command applies retention policies:

```sh
borg prune --stats --keep-hourly=24 --keep-daily=30 \
    --keep-weekly=12 --keep-monthly=12 --keep-yearly=3 \
    /backup/borg-repo
```

Always run `borg compact` after pruning to reclaim disk space:

```sh
borg compact /backup/borg-repo
```

### Complete BorgBackup Script

```sh
#!/bin/sh
# /usr/local/bin/borg-backup.sh
# Complete BorgBackup script with rotation and monitoring.

export BORG_PASSPHRASE='your-secure-passphrase'

REPO="ssh://backup-user@backup-server/tank/borg-repo"
ARCHIVE="${REPO}::$(hostname)-$(date +%Y-%m-%d_%H-%M)"
LOG="/var/log/borg-backup.log"

echo "=== BorgBackup started: $(date) ===" >> "$LOG"

# Create backup
borg create --stats --compression zstd,6 \
    "$ARCHIVE" \
    /etc \
    /usr/local/etc \
    /home \
    /var/db/postgres \
    /usr/local/www \
    --exclude '*.tmp' \
    --exclude '*/cache/*' \
    --exclude '*/tmp/*' \
    >> "$LOG" 2>&1
BACKUP_STATUS=$?

# Prune
borg prune --stats \
    --keep-hourly=24 --keep-daily=30 \
    --keep-weekly=12 --keep-monthly=12 --keep-yearly=3 \
    "$REPO" >> "$LOG" 2>&1

# Compact
borg compact "$REPO" >> "$LOG" 2>&1

# Verify
borg check --last 3 "$REPO" >> "$LOG" 2>&1
CHECK_STATUS=$?

echo "=== BorgBackup finished: $(date) ===" >> "$LOG"

if [ $BACKUP_STATUS -ne 0 ] || [ $CHECK_STATUS -ne 0 ]; then
    tail -50 "$LOG" | mail -s "BorgBackup FAILED on $(hostname)" admin@example.com
    exit 1
fi
```

---

## Restic on FreeBSD

Restic is an alternative to BorgBackup with a different design philosophy. It supports multiple storage backends natively -- local disk, SFTP, S3, Backblaze B2, Azure, GCS -- without requiring rclone as an intermediary. It is written in Go, ships as a single binary, and handles encryption by default.

### Installation

```sh
pkg install restic
```

### Initializing a Repository

Local:

```sh
restic init --repo /backup/restic-repo
```

SFTP:

```sh
restic init --repo sftp:backup-user@backup-server:/tank/restic-repo
```

Backblaze B2:

```sh
export B2_ACCOUNT_ID="your-account-id"
export B2_ACCOUNT_KEY="your-account-key"

restic init --repo b2:your-bucket-name:/restic-repo
```

### Creating Backups

```sh
export RESTIC_PASSWORD='your-secure-password'
export RESTIC_REPOSITORY='/backup/restic-repo'

restic backup /etc /usr/local/etc /home /var/db/postgres \
    --exclude='*.tmp' \
    --exclude-caches \
    --verbose
```

### Listing Snapshots

```sh
restic snapshots
```

Output shows snapshot ID, timestamp, host, and paths:

```
ID        Time                 Host      Paths
--------  -------------------  --------  -----
a1b2c3d4  2026-03-29 14:00:00  fbsd-srv  /etc, /home, ...
e5f6a7b8  2026-03-28 14:00:00  fbsd-srv  /etc, /home, ...
```

### Restoring

Restore a full snapshot:

```sh
restic restore a1b2c3d4 --target /tmp/restore
```

Restore specific files:

```sh
restic restore a1b2c3d4 --target /tmp/restore --include /etc/pf.conf
```

### Forget and Prune

Apply a retention policy:

```sh
restic forget --keep-hourly 24 --keep-daily 30 \
    --keep-weekly 12 --keep-monthly 12 --keep-yearly 3 \
    --prune
```

The `--prune` flag removes unreferenced data immediately. Without it, `forget` only removes the snapshot records; the underlying data stays in the repository until the next `restic prune`.
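The keep-daily logic amounts to "newest snapshot per calendar day, for the newest N days". A rough sketch in shell -- restic's real policy engine handles weekly/monthly buckets and edge cases this deliberately ignores:

```sh
# Rough sketch of keep-daily: newest snapshot per day, newest N days.
keep_daily() {
  # $1 = N; reads "YYYY-MM-DD_HH-MM" timestamps on stdin.
  sort -r | awk -F_ '!seen[$1]++' | head -n "$1"
}

printf '%s\n' \
  2026-03-29_22-00 2026-03-29_10-00 \
  2026-03-28_22-00 2026-03-27_22-00 | keep_daily 2
# -> 2026-03-29_22-00 and 2026-03-28_22-00
```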

### Checking Repository Integrity

```sh
restic check
```

For a full data integrity check (reads all data blobs):

```sh
restic check --read-data
```

---

## Cloud Backup to Backblaze B2

Backblaze B2 is the most cost-effective S3-compatible cloud storage at $0.006/GB/month. Combined with rclone, it provides an off-site backup target that satisfies the "1 off-site" requirement of the 3-2-1 rule.

### Installing rclone

```sh
pkg install rclone
```

### Configuring Backblaze B2

Run the interactive configuration:

```sh
rclone config
```

Steps:

1. Choose `n` for a new remote.
2. Name it `b2`.
3. Select Backblaze B2 as the storage type.
4. Enter your Application Key ID and Application Key.
5. Leave the other options at their defaults.

Verify the configuration:

```sh
rclone lsd b2:
```

### Syncing Backups to B2

Sync a local backup directory to B2:

```sh
rclone sync /backup/borg-repo b2:my-backup-bucket/borg-repo \
    --transfers 8 \
    --b2-hard-delete \
    --verbose
```

To sync only specific directories:

```sh
rclone sync /backup/exports b2:my-backup-bucket/exports \
    --transfers 8 \
    --bwlimit 50M \
    --verbose
```

The `--bwlimit` flag prevents saturating your upstream bandwidth.

### Encrypted Sync

If your backup tool does not handle encryption itself (Borg and Restic encrypt by default), use rclone's crypt backend:

```sh
rclone config
# Create a new remote of type "crypt" wrapping the b2 remote:
#   Name: b2-encrypted
#   Remote: b2:my-backup-bucket/encrypted
#   Filename encryption: standard
#   Directory name encryption: true
#   Password: (enter a strong password)
```

Then sync to the encrypted remote:

```sh
rclone sync /data/exports b2-encrypted:exports --transfers 8
```

### Automated Cloud Sync Script

```sh
#!/bin/sh
# /usr/local/bin/cloud-sync.sh
# Syncs local backups to Backblaze B2.

LOG="/var/log/cloud-sync.log"
REMOTE="b2:my-backup-bucket"
STATUS=0

echo "=== Cloud sync started: $(date) ===" >> "$LOG"

# Sync BorgBackup repository
rclone sync /backup/borg-repo "${REMOTE}/borg-repo" \
    --transfers 8 \
    --bwlimit 50M \
    --log-file "$LOG" --log-level INFO || STATUS=1

# Sync database dumps
rclone sync /backup/db-dumps "${REMOTE}/db-dumps" \
    --transfers 4 \
    --log-file "$LOG" --log-level INFO || STATUS=1

echo "=== Cloud sync finished: $(date), exit code: ${STATUS} ===" >> "$LOG"

if [ $STATUS -ne 0 ]; then
    tail -30 "$LOG" | mail -s "Cloud Sync FAILED on $(hostname)" admin@example.com
fi
```

Note that the script records a failure if either sync fails, not just the last one.

---

## Database Backups

File-level backups of a running database can produce corrupt dumps. Databases need their own backup procedures that guarantee consistency.

### PostgreSQL with pg_dump

For a complete setup guide, see [PostgreSQL on FreeBSD](/blog/postgresql-freebsd-setup/).

Dump a single database:

```sh
pg_dump -U postgres -Fc mydb > /backup/db-dumps/mydb-$(date +%Y-%m-%d).dump
```

The `-Fc` flag produces a custom-format compressed dump that supports parallel restore.

Dump all databases:

```sh
pg_dumpall -U postgres > /backup/db-dumps/all-databases-$(date +%Y-%m-%d).sql
```

Restore from a custom-format dump (with `-C`, pg_restore connects to the `postgres` database and recreates `mydb` itself):

```sh
pg_restore -U postgres -C -d postgres /backup/db-dumps/mydb-2026-03-29.dump
```

### MySQL with mysqldump

```sh
mysqldump -u root -p --all-databases --single-transaction \
    --routines --triggers > /backup/db-dumps/mysql-all-$(date +%Y-%m-%d).sql
```

The `--single-transaction` flag ensures consistency for InnoDB tables without locking.

Restore:

```sh
mysql -u root -p < /backup/db-dumps/mysql-all-2026-03-29.sql
```

### Combining with ZFS Snapshots

For databases stored on ZFS datasets, trigger a checkpoint and then snapshot the dataset:

```sh
# PostgreSQL -- trigger a checkpoint before the snapshot
psql -U postgres -c "CHECKPOINT;"
zfs snapshot zroot/postgres@pre-backup-$(date +%Y-%m-%d_%H-%M)

# Also run pg_dump for a portable, logically consistent dump
pg_dump -U postgres -Fc mydb > /backup/db-dumps/mydb-$(date +%Y-%m-%d).dump
```

### Database Backup Script

```sh
#!/bin/sh
# /usr/local/bin/db-backup.sh
# Backs up PostgreSQL and MySQL databases.

DUMP_DIR="/backup/db-dumps"
DATE=$(date +%Y-%m-%d_%H-%M)
LOG="/var/log/db-backup.log"
RETENTION_DAYS=30

mkdir -p "$DUMP_DIR"
echo "=== Database backup started: $(date) ===" >> "$LOG"

# PostgreSQL
for db in $(psql -U postgres -Atc "SELECT datname FROM pg_database WHERE datistemplate = false;"); do
    if pg_dump -U postgres -Fc "$db" > "${DUMP_DIR}/pg-${db}-${DATE}.dump" 2>> "$LOG"; then
        echo "OK: PostgreSQL ${db}" >> "$LOG"
    else
        echo "FAIL: PostgreSQL ${db}" >> "$LOG"
        echo "PostgreSQL dump of ${db} FAILED" | \
            mail -s "DB Backup FAILED on $(hostname)" admin@example.com
    fi
done

# MySQL (if installed)
if command -v mysqldump > /dev/null 2>&1; then
    if mysqldump -u root --all-databases --single-transaction \
        --routines --triggers > "${DUMP_DIR}/mysql-all-${DATE}.sql" 2>> "$LOG"; then
        echo "OK: MySQL all databases" >> "$LOG"
    else
        echo "FAIL: MySQL all databases" >> "$LOG"
        echo "MySQL dump FAILED" | \
            mail -s "DB Backup FAILED on $(hostname)" admin@example.com
    fi
fi

# Prune old dumps
find "$DUMP_DIR" -type f -mtime +${RETENTION_DAYS} -delete

echo "=== Database backup finished: $(date) ===" >> "$LOG"
```

---

## Automating Backups

Individual backup scripts mean nothing if they do not run reliably. Automation requires cron scheduling, success/failure monitoring, and alerting.

### Master Crontab

A complete backup crontab for root:

```
# ZFS snapshots -- every 15 minutes via Sanoid
*/15 * * * * /usr/local/bin/sanoid --cron >> /var/log/sanoid.log 2>&1

# ZFS replication -- hourly
0 * * * * /usr/local/bin/syncoid -r zroot/data backup-server:tank/backups/data >> /var/log/syncoid.log 2>&1

# Database dumps -- daily at 01:00
0 1 * * * /usr/local/bin/db-backup.sh

# BorgBackup -- daily at 02:00
0 2 * * * /usr/local/bin/borg-backup.sh

# Cloud sync -- daily at 04:00 (after local backups finish)
0 4 * * * /usr/local/bin/cloud-sync.sh

# Restore test -- weekly on Sunday at 06:00
0 6 * * 0 /usr/local/bin/test-restore.sh
```

### Monitoring Script

A daily health check that verifies all backup components:

```sh
#!/bin/sh
# /usr/local/bin/backup-monitor.sh
# Checks that all backup components are current and healthy.

FAILURES=""

# Check that the newest ZFS snapshot is recent (within the last 2 hours)
LATEST_SNAP=$(zfs list -H -t snapshot -o creation -r zroot -S creation | head -1)
SNAP_EPOCH=$(date -j -f "%a %b %d %H:%M %Y" "$LATEST_SNAP" +%s 2>/dev/null)
NOW_EPOCH=$(date +%s)

if [ -n "$SNAP_EPOCH" ]; then
    AGE=$(( (NOW_EPOCH - SNAP_EPOCH) / 3600 ))
    if [ "$AGE" -gt 2 ]; then
        FAILURES="${FAILURES}ZFS snapshots are ${AGE} hours old. "
    fi
fi

# Check the BorgBackup log for today's completion
if ! grep -q "$(date +%Y-%m-%d).*finished" /var/log/borg-backup.log 2>/dev/null; then
    FAILURES="${FAILURES}BorgBackup did not complete today. "
fi

# Check that database dumps exist for today
TODAY=$(date +%Y-%m-%d)
if ! ls /backup/db-dumps/pg-*-${TODAY}* > /dev/null 2>&1; then
    FAILURES="${FAILURES}No PostgreSQL dumps found for today. "
fi

# Check the cloud sync log
if ! grep -q "$(date +%Y-%m-%d).*finished" /var/log/cloud-sync.log 2>/dev/null; then
    FAILURES="${FAILURES}Cloud sync did not complete today. "
fi

# Report
if [ -n "$FAILURES" ]; then
    echo "BACKUP FAILURES on $(hostname): ${FAILURES}" | \
        mail -s "BACKUP ALERT: $(hostname)" admin@example.com
    exit 1
else
    echo "$(date): All backup checks passed" >> /var/log/backup-monitor.log
fi
```

Add to crontab:

```
0 8 * * * /usr/local/bin/backup-monitor.sh
```

### Email Alerts with msmtp

FreeBSD's base `mail` command works if you have a local MTA. For relay through an external SMTP server:

```sh
pkg install msmtp
```

Configure `/usr/local/etc/msmtp.conf`:

```
defaults
auth on
tls on
tls_trust_file /etc/ssl/cert.pem

account default
host smtp.example.com
port 587
from backups@example.com
user backups@example.com
password your-smtp-password
```

Set msmtp as the system mailer in `/etc/mail.rc`:

```
set sendmail=/usr/local/bin/msmtp
```

---

## Testing Restores

A backup that has never been tested is not a backup. It is a hope. Testing restores is the single most important step in any backup strategy, and it is the step most administrators skip.

### What to Test

- **ZFS snapshot restore:** Roll back a test dataset to a snapshot. Verify file contents.

- **ZFS send/receive restore:** Receive a snapshot stream onto a test dataset. Verify data integrity.

- **BorgBackup restore:** Extract the latest archive to a temporary directory. Verify file counts, sizes, and content checksums.

- **Restic restore:** Restore the latest snapshot to a temporary directory. Same verification.

- **Database restore:** Restore a dump into a test database instance. Run application-specific queries to verify data integrity.

- **Cloud restore:** Download from B2 and verify against local copies.

### Automated Restore Test Script

```sh
#!/bin/sh
# /usr/local/bin/test-restore.sh
# Weekly automated restore testing.

LOG="/var/log/restore-test.log"
FAILURES=""

echo "=== Restore test started: $(date) ===" >> "$LOG"

# Test 1: ZFS snapshot browsing
LATEST_SNAP=$(zfs list -H -t snapshot -o name -r zroot/data | tail -1 | cut -d@ -f2)
SNAP_DIR="/data/.zfs/snapshot/${LATEST_SNAP}"
if [ -n "$LATEST_SNAP" ] && [ -d "$SNAP_DIR" ]; then
    echo "OK: ZFS snapshot directory accessible" >> "$LOG"
else
    FAILURES="${FAILURES}ZFS snapshot directory not accessible. "
fi

# Test 2: BorgBackup restore (dry run)
export BORG_PASSPHRASE='your-secure-passphrase'
BORG_REPO="/backup/borg-repo"
LATEST_ARCHIVE=$(borg list --last 1 --short "$BORG_REPO" 2>/dev/null)
if [ -n "$LATEST_ARCHIVE" ]; then
    if borg extract --dry-run "${BORG_REPO}::${LATEST_ARCHIVE}" >> "$LOG" 2>&1; then
        echo "OK: BorgBackup dry-run restore succeeded" >> "$LOG"
    else
        FAILURES="${FAILURES}BorgBackup restore test failed. "
    fi
fi

# Test 3: PostgreSQL restore into a scratch database
LATEST_DUMP=$(ls -t /backup/db-dumps/pg-*-*.dump 2>/dev/null | head -1)
if [ -n "$LATEST_DUMP" ]; then
    dropdb -U postgres --if-exists restore_test 2>/dev/null
    createdb -U postgres restore_test 2>> "$LOG"
    if pg_restore -U postgres -d restore_test "$LATEST_DUMP" >> "$LOG" 2>&1; then
        # Basic integrity check: count the restored tables
        TABLE_COUNT=$(psql -U postgres -Atc "SELECT count(*) FROM information_schema.tables WHERE table_schema = 'public';" restore_test)
        echo "OK: PostgreSQL restore succeeded, ${TABLE_COUNT} tables" >> "$LOG"
    else
        FAILURES="${FAILURES}PostgreSQL restore test failed. "
    fi
    dropdb -U postgres --if-exists restore_test 2>/dev/null
fi

echo "=== Restore test finished: $(date) ===" >> "$LOG"

if [ -n "$FAILURES" ]; then
    echo "RESTORE TEST FAILURES on $(hostname): ${FAILURES}" | \
        mail -s "RESTORE TEST FAILED: $(hostname)" admin@example.com
    exit 1
else
    echo "All restore tests passed" >> "$LOG"
fi
```

### How Often to Test

- **Weekly:** Automated dry-run restores and database restore tests.

- **Monthly:** Full restore of a random dataset to a temporary location. Verify file-by-file integrity.

- **Quarterly:** Full disaster recovery drill. Pretend the server is gone. Rebuild from backups alone. Time it. Document every step that was harder than expected.

The quarterly drill is where you discover that your passphrase is stored on the machine you are trying to recover, or that your rclone config was never backed up, or that your restore takes 14 hours instead of the 2 you promised in your RTO.

---

## Disaster Recovery Planning

Backups are one component of disaster recovery. A complete plan addresses the full recovery lifecycle.

### Document Everything

Create a disaster recovery runbook that covers:

1. **Contact list.** Who to notify, in what order.

2. **Asset inventory.** Every server, its role, its datasets, its backup locations.

3. **Recovery priority.** Which services come back first.

4. **Step-by-step procedures.** For each service, the exact commands to restore from each backup tier (local ZFS, remote ZFS, BorgBackup, cloud).

5. **Credentials.** Encryption passphrases, SSH keys, API keys. Stored securely off-site (password manager, printed in a safe, or split across multiple trusted parties).

6. **Network configuration.** IP addresses, DNS records, firewall rules. These are easy to forget and painful to reconstruct.

### Recovery Order

A typical recovery sequence:

1. Install FreeBSD on replacement hardware.

2. Configure ZFS pool.

3. Restore from the most recent ZFS replication (fastest).

4. If ZFS replication is unavailable, restore from BorgBackup.

5. If BorgBackup is unavailable, download from Backblaze B2 and restore.

6. Restore database from dump files.

7. Verify application functionality.

8. Update DNS if IP addresses changed.

9. Monitor for 24 hours before declaring recovery complete.

### Bare-Metal Recovery with ZFS

If you replicated your root pool (including boot environments), recovery can be as fast as:

```sh
# On the new machine, partition the disks (BIOS boot layout shown)
gpart create -s gpt da0
gpart add -t freebsd-boot -s 512k da0
gpart add -t freebsd-zfs da0
gpart create -s gpt da1
gpart add -t freebsd-boot -s 512k da1
gpart add -t freebsd-zfs da1

# Create the pool on the ZFS partitions
zpool create -o ashift=12 zroot mirror da0p2 da1p2

# Receive the replicated data from the backup server
ssh old-backup-server zfs send -R tank/backups/zroot@latest | zfs receive -F zroot

# Install bootcode on both disks
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da1
```

This restores the entire OS, configuration, and data in one operation. It is the fastest possible recovery path and the strongest argument for ZFS replication as your primary backup method.

---

## Comparison Table

| Feature | ZFS Snapshots | ZFS Send/Receive | BorgBackup | Restic | rclone to B2 |
|---|---|---|---|---|---|
| **Backup type** | Block-level | Block-level | File-level, deduplicated | File-level, deduplicated | File-level sync |
| **Encryption** | No (at-rest only with GELI) | No (SSH for transit) | Yes (repokey or keyfile) | Yes (always on) | Yes (crypt backend) |
| **Deduplication** | ZFS dedup (not recommended) | No | Yes (content-defined chunks) | Yes (content-defined chunks) | No |
| **Compression** | zstd, lz4, gzip (ZFS native) | Inherits from source | zstd, lz4, zlib | zstd (repository format v2) | None (relies on backend) |
| **Remote backup** | No (local only) | Yes (SSH pipe) | Yes (SSH) | Yes (SSH, S3, B2, etc.) | Yes (40+ backends) |
| **Incremental** | Inherently | Yes (`send -i`) | Yes (automatic) | Yes (automatic) | Sync-based (no versioning) |
| **Restore speed** | Instant (rollback) | Fast (receive) | Moderate | Moderate | Depends on bandwidth |
| **Storage overhead** | Low (COW blocks) | 1x per destination | Low (dedup + compression) | Low (dedup + compression) | 1x (full copy) |
| **Best for** | Local quick recovery | Site-to-site replication | Encrypted off-site backup | Multi-cloud backup | Simple cloud sync |
| **FreeBSD support** | Native | Native | pkg install | pkg install | pkg install |

### Which to Use

Use all of them in layers:

- **ZFS snapshots** for instant local recovery (RPO: minutes, RTO: seconds).

- **ZFS send/receive** (via Syncoid) for site-to-site replication (RPO: hourly, RTO: minutes to hours).

- **BorgBackup or Restic** for encrypted, deduplicated off-site backup (RPO: daily, RTO: hours).

- **rclone to B2** for a final off-site copy or for syncing Borg/Restic repos to the cloud.

This gives you local snapshots, local replication, encrypted off-site backup, and cloud storage -- four layers that satisfy and exceed the 3-2-1 rule.

---

## FAQ

### How much storage do ZFS snapshots use?

Snapshots consume space proportional to the rate of change in the dataset. A dataset with 1 TB of data and a 5% daily change rate will see each daily snapshot grow to roughly 50 GB over 24 hours. Monitor with `zfs list -t snapshot -o name,used`. If a snapshot holds too much space, you can destroy it to free those blocks.
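The estimate is straight multiplication; in shell arithmetic (numbers illustrative):

```sh
SIZE_GB=1000    # 1 TB dataset
CHANGE_PCT=5    # daily change rate
echo "$(( SIZE_GB * CHANGE_PCT / 100 )) GB pinned per daily snapshot"
```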

### Should I use BorgBackup or Restic?

Both are excellent. BorgBackup has better compression options and is more mature on FreeBSD. Restic has native support for more storage backends (S3, B2, Azure, GCS) without needing rclone. If you primarily back up to a remote server over SSH, BorgBackup is the stronger choice. If you want to back up directly to cloud storage, Restic is simpler. For maximum flexibility, you can run both -- they do not conflict.

### How do I back up a FreeBSD jail?

If your jails run on ZFS datasets (as they should), snapshot and replicate the jail dataset. For file-level backup, include the jail's filesystem path in your BorgBackup or Restic configuration. For iocage jails, `iocage snapshot` and `iocage export` provide jail-specific backup commands, but they are wrappers around ZFS snapshots.

### Is Backblaze B2 reliable enough for critical backups?

Backblaze publishes annual drive failure statistics and maintains 11 nines (99.999999999%) of durability. Data is stored across multiple drives and multiple vaults. For a backup target (not primary storage), B2 is more than adequate. The risk is not B2 losing your data -- it is you losing your encryption keys or rclone configuration.

### How do I verify that my backups are not corrupted?

BorgBackup: run `borg check --verify-data` periodically. Restic: run `restic check --read-data`. ZFS: `zpool scrub` verifies on-disk integrity. For database dumps, the only real verification is a test restore -- run the dump into a test database and query it. Checksums verify file integrity; test restores verify logical integrity. Do both.

### What about tape backup on FreeBSD?

FreeBSD supports tape devices through the `sa` (SCSI sequential access) driver. Use `tar` or `dump` to write to `/dev/sa0`. Tape satisfies the air-gap requirement for high-security environments and remains the cheapest option for long-term archival. However, tape hardware is expensive to acquire, and restore times are slow. For most FreeBSD administrators, encrypted cloud backup is the practical alternative to tape.

---

## Conclusion

A production FreeBSD backup strategy is not a single tool -- it is layers of protection, each covering the gaps left by the others. ZFS snapshots handle the "I just deleted a file" scenario. ZFS replication handles the "the boot drive died" scenario. BorgBackup or Restic handle the "the building flooded" scenario. Cloud sync handles the "everything local is gone" scenario.

Build these layers incrementally. Start with ZFS snapshots today -- they cost nothing and save files within seconds. Add Sanoid/Syncoid for replication this week. Set up BorgBackup or Restic for encrypted off-site backup this month. Add cloud sync to Backblaze B2 when you are ready.

Then test your restores. Schedule it. Automate it. The backup you never tested is the backup that will fail when you need it most.

For the foundation of all of this, start with the [ZFS guide](/blog/zfs-freebsd-guide/). For hardware planning, see the [FreeBSD NAS build guide](/blog/freebsd-nas-build/). For database-specific setup, see the [PostgreSQL on FreeBSD guide](/blog/postgresql-freebsd-setup/).