How to Manage ZFS Snapshots and Replication on FreeBSD
ZFS snapshots are one of the most powerful features available on FreeBSD. A snapshot is a read-only, point-in-time copy of a dataset that costs almost nothing to create. Combined with ZFS send/receive, snapshots enable efficient incremental backups and replication to local or remote systems. This guide covers manual snapshot management, automated snapshots with sanoid, replication with syncoid, pruning policies, and remote backup strategies.
Prerequisites
- FreeBSD 14.0 or later with ZFS root or ZFS data pools
- Root access
- For remote replication: SSH access to the remote system with ZFS
Verify ZFS is Active
```sh
zpool list
zfs list
```
You should see your pool(s) and datasets. If ZFS is not set up, it is typically configured during FreeBSD installation or via zpool create.
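If you need a separate data pool for backups, a minimal sketch (the pool name `tank` and device `/dev/ada1` are examples; adjust for your hardware):

```sh
# Create a simple single-disk pool named "tank"
zpool create tank /dev/ada1

# Create a dataset on it to receive replicated data
zfs create tank/backup

# Verify the pool is healthy
zpool status tank
```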
Understanding ZFS Snapshots
A ZFS snapshot captures the state of a dataset at a specific moment. Key characteristics:
- Instantaneous: Snapshots are created in constant time, regardless of dataset size
- Space-efficient: They consume only the space needed for changed blocks (copy-on-write)
- Read-only: Snapshots cannot be modified
- Hierarchical: Snapshots can be taken on any dataset, and recursive snapshots cover all child datasets
- Naming: Snapshots use the format dataset@snapname
Snapshots are not backups by themselves. They protect against accidental deletion and provide rollback points, but they reside on the same pool. If the pool is lost, the snapshots are lost too. Replication to another pool or remote system is required for true backup.
Manual Snapshot Management
Create a Snapshot
```sh
# Snapshot a single dataset
zfs snapshot zroot/usr/home@manual-2026-04-09

# Snapshot with a descriptive name
zfs snapshot zroot/var/db/mysql@before-upgrade

# Recursive snapshot (dataset and all children)
zfs snapshot -r zroot/usr/home@daily-2026-04-09
```
List Snapshots
```sh
# List all snapshots
zfs list -t snapshot

# List snapshots for a specific dataset
zfs list -t snapshot -r zroot/usr/home

# Show space used by snapshots
zfs list -t snapshot -o name,used,refer,creation -r zroot/usr/home

# Sort by creation date
zfs list -t snapshot -o name,creation -s creation -r zroot
```
Access Snapshot Contents
Every ZFS dataset has a hidden .zfs/snapshot directory:
```sh
# Browse snapshot contents
ls /usr/home/.zfs/snapshot/
ls /usr/home/.zfs/snapshot/manual-2026-04-09/

# Recover a specific file from a snapshot
cp /usr/home/.zfs/snapshot/manual-2026-04-09/user/important-file.txt \
   /usr/home/user/important-file.txt
```
The .zfs directory is invisible to normal ls output but accessible by direct path.
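If you would rather have the snapshot directory appear in normal directory listings, ZFS controls this with the snapdir property:

```sh
# Make .zfs visible in ls output (the default is "hidden")
zfs set snapdir=visible zroot/usr/home
ls /usr/home/.zfs/snapshot/

# Revert to the default
zfs set snapdir=hidden zroot/usr/home
```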
Rollback to a Snapshot
Rolling back reverts the dataset to the snapshot state. All changes after the snapshot are lost.
```sh
# Rollback to the most recent snapshot
zfs rollback zroot/usr/home@manual-2026-04-09

# Rollback discarding intermediate snapshots (if any exist after the target)
zfs rollback -r zroot/usr/home@manual-2026-04-09
```
Rollback is a destructive operation. Data written after the snapshot is permanently deleted.
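Before committing to a rollback, you can preview exactly what would be lost with zfs diff, which lists files that changed since the snapshot:

```sh
# Show files created (+), modified (M), removed (-), or renamed (R)
# since the snapshot was taken
zfs diff zroot/usr/home@manual-2026-04-09
```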
Destroy Snapshots
```sh
# Destroy a single snapshot
zfs destroy zroot/usr/home@manual-2026-04-09

# Destroy a range of snapshots, from snap1 through snap5 (inclusive)
zfs destroy zroot/usr/home@snap1%snap5

# Destroy from a named snapshot through the newest (use with caution)
zfs destroy zroot/usr/home@daily-2026-04-01%
```
Check Snapshot Space Usage
```sh
# Space used by all snapshots on a dataset
zfs list -o name,used,usedbysnapshots zroot/usr/home

# Detailed per-snapshot space usage
zfs list -t snapshot -o name,used,refer -r zroot/usr/home
```
The used column for a snapshot shows how much space would be freed by destroying it. This is the space occupied by blocks unique to that snapshot.
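To see how much space a destroy would actually reclaim without destroying anything, zfs destroy supports a dry-run mode:

```sh
# -n: dry run, -v: verbose; prints what would be destroyed and
# an estimate of the space that would be reclaimed
zfs destroy -nv zroot/usr/home@manual-2026-04-09

# Dry-run a range destroy before committing to it
zfs destroy -nv zroot/usr/home@snap1%snap5
```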
Automated Snapshots with Sanoid
Manual snapshots are useful for one-off situations. For automated snapshot management with retention policies, use sanoid.
Install Sanoid
```sh
pkg install sanoid
```
Sanoid includes two tools:
- sanoid: Creates and prunes snapshots on a schedule
- syncoid: Replicates datasets between systems using ZFS send/receive
Configure Sanoid
Create /usr/local/etc/sanoid/sanoid.conf:
```ini
[zroot/usr/home]
        use_template = production
        recursive = yes

[zroot/var/db/mysql]
        use_template = production

[zroot/var/log]
        use_template = minimal

[template_production]
        hourly = 24
        daily = 30
        monthly = 12
        yearly = 2
        autosnap = yes
        autoprune = yes

[template_minimal]
        hourly = 12
        daily = 7
        monthly = 3
        yearly = 0
        autosnap = yes
        autoprune = yes
```
This configuration:
- Takes hourly, daily, monthly, and yearly snapshots of production datasets
- Keeps 24 hourly, 30 daily, 12 monthly, and 2 yearly snapshots
- Automatically prunes old snapshots beyond retention limits
- Applies a lighter policy to log datasets
Run Sanoid
Test the configuration:
```sh
sanoid --configdir=/usr/local/etc/sanoid --take-snapshots --verbose
```
Verify snapshots were created:
```sh
zfs list -t snapshot -r zroot/usr/home | head -20
```
Sanoid creates snapshots with names like zroot/usr/home@autosnap_2026-04-09_12:00:00_hourly.
Set Up Cron for Sanoid
Add to /etc/crontab:
```sh
# Run sanoid every 15 minutes for snapshot management
*/15 * * * * root /usr/local/sbin/sanoid --configdir=/usr/local/etc/sanoid --take-snapshots --prune-snapshots --quiet
```
Or use a dedicated crontab:
```sh
crontab -e
```
```sh
*/15 * * * * /usr/local/sbin/sanoid --configdir=/usr/local/etc/sanoid --take-snapshots --prune-snapshots --quiet 2>&1 | logger -t sanoid
```
Pruning Policies
Sanoid prunes automatically based on the retention values in the template. When a snapshot ages out of its retention window, it is destroyed.
The pruning logic:
- Each snapshot type (hourly, daily, monthly, yearly) is created and pruned independently; sanoid does not promote snapshots from one type to another
- When the number of snapshots of a type exceeds its retention value (for example, hourly = 24), the oldest snapshots of that type are destroyed on the next prune run
To manually prune:
```sh
sanoid --configdir=/usr/local/etc/sanoid --prune-snapshots --verbose
```
ZFS Send/Receive
ZFS send/receive is the mechanism for replicating datasets. zfs send creates a stream of data from a snapshot, and zfs receive applies that stream to create or update a dataset on the target.
Local Replication
Replicate a dataset to another pool on the same system:
```sh
# Initial full send
zfs send zroot/usr/home@daily-2026-04-09 | zfs receive tank/backup/home

# Incremental send (only changes between two snapshots)
zfs send -i zroot/usr/home@daily-2026-04-08 zroot/usr/home@daily-2026-04-09 | \
    zfs receive tank/backup/home
```
Compressed Send
```sh
# Send with compressed stream (reduces I/O for local replication)
zfs send --compressed zroot/usr/home@daily-2026-04-09 | zfs receive tank/backup/home
```
Remote Replication via SSH
Send snapshots to a remote FreeBSD system:
```sh
# Full initial replication
zfs send zroot/usr/home@daily-2026-04-09 | \
    ssh backup-server zfs receive tank/backup/home

# Incremental replication
zfs send -i zroot/usr/home@daily-2026-04-08 zroot/usr/home@daily-2026-04-09 | \
    ssh backup-server zfs receive tank/backup/home
```
Recursive Replication
Replicate a dataset and all its children:
```sh
zfs send -R zroot/usr/home@daily-2026-04-09 | \
    ssh backup-server zfs receive -F tank/backup/home
```
The -R flag sends a replication stream (includes all child datasets, snapshots, and properties). The -F flag on receive forces a rollback on the receiving side if needed.
Resume Interrupted Transfers
ZFS supports resumable send/receive. If a transfer that was started with zfs receive -s is interrupted:
```sh
# On the receiving side, get the resume token
zfs get -H receive_resume_token tank/backup/home

# On the sending side, resume the transfer
zfs send -t <resume_token> | ssh backup-server zfs receive -s tank/backup/home
```
Replication with Syncoid
Syncoid (part of the sanoid package) automates ZFS replication. It handles initial full sends and subsequent incremental sends automatically.
Basic Syncoid Usage
```sh
# Replicate to a local pool
syncoid zroot/usr/home tank/backup/home

# Replicate to a remote system
syncoid zroot/usr/home backup-server:tank/backup/home

# Recursive replication
syncoid -r zroot/usr/home backup-server:tank/backup/home
```
Syncoid automatically:
- Creates a temporary snapshot on the source
- Determines if a full or incremental send is needed
- Transfers only the changed blocks
- Verifies the transfer completed
- Cleans up temporary snapshots
SSH Configuration for Syncoid
For automated replication, set up SSH key-based authentication:
```sh
# Generate an SSH key for the root user (if not already done)
ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519 -N ""

# Copy the public key to the backup server
ssh-copy-id -i /root/.ssh/id_ed25519.pub root@backup-server
```
Test the connection:
```sh
ssh backup-server "zpool list"
```
Automate Syncoid with Cron
```sh
# Add to /etc/crontab
# Replicate every hour
0 * * * * root /usr/local/sbin/syncoid zroot/usr/home backup-server:tank/backup/home --quiet 2>&1 | logger -t syncoid
0 * * * * root /usr/local/sbin/syncoid zroot/var/db/mysql backup-server:tank/backup/mysql --quiet 2>&1 | logger -t syncoid
```
Syncoid Options
```sh
# Use compression during transfer
syncoid --compress=lz4 zroot/usr/home backup-server:tank/backup/home

# Exclude specific child datasets (the argument is a regular expression)
syncoid -r --exclude='zroot/usr/home/tmp' zroot/usr/home backup-server:tank/backup/home

# Preserve dataset properties
syncoid --sendoptions="p" zroot/usr/home backup-server:tank/backup/home

# Limit bandwidth on the receiving side
syncoid --target-bwlimit=50m zroot/usr/home backup-server:tank/backup/home
```
Remote Backup Strategies
Strategy 1: Local Snapshots + Remote Replication
The most common setup. Sanoid manages local snapshots, syncoid replicates to a remote server.
```sh
# On the primary server (crontab)
*/15 * * * * /usr/local/sbin/sanoid --configdir=/usr/local/etc/sanoid --take-snapshots --prune-snapshots --quiet
0 * * * * /usr/local/sbin/syncoid -r zroot/usr/home backup-server:tank/backup/home --quiet
```
Strategy 2: Pull-Based Replication
Run syncoid on the backup server to pull from the primary. This is more secure because the backup server initiates connections, and the primary does not need SSH access to the backup.
On the backup server:
```sh
# Pull from primary
0 * * * * /usr/local/sbin/syncoid primary-server:zroot/usr/home tank/backup/home --quiet
```
The backup server needs SSH key access to the primary, but the primary does not need access to the backup.
Strategy 3: Multi-Tier Replication
Replicate to a local secondary pool and a remote server:
```sh
# Local replication (fast, for quick recovery)
0 * * * * /usr/local/sbin/syncoid zroot/usr/home tank/local-backup/home --quiet

# Remote replication (for disaster recovery)
30 * * * * /usr/local/sbin/syncoid zroot/usr/home remote-dc:tank/dr-backup/home --quiet
```
Delegated Permissions for Non-Root Replication
For security, create a dedicated user for replication instead of using root:
On the receiving server:
```sh
# Create a replication user
pw useradd zfsbackup -m -s /bin/sh

# Delegate ZFS permissions on the receiving dataset
zfs allow -u zfsbackup create,mount,receive,destroy,rollback,hold,release tank/backup
```
On the sending server:
```sh
# Delegate ZFS permissions on the sending dataset
zfs allow -u zfsbackup send,snapshot,hold,release zroot/usr/home
```
Use the dedicated user in syncoid:
```sh
syncoid zroot/usr/home zfsbackup@backup-server:tank/backup/home
```
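To confirm which permissions have actually been delegated on a dataset, run zfs allow with no permission arguments:

```sh
# List delegated permissions on the receiving dataset
zfs allow tank/backup

# And on the sending dataset
zfs allow zroot/usr/home
```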
Monitoring Snapshots
Check Snapshot Count and Space
```sh
# Total snapshot count per dataset
zfs list -t snapshot -o name -r zroot | awk -F@ '{print $1}' | sort | uniq -c | sort -rn

# Total space used by snapshots per dataset
zfs list -o name,usedbysnapshots -r zroot | sort -k2 -h
```
Alert on Failed Replication
Create a monitoring script at /usr/local/sbin/check-replication.sh:
```sh
#!/bin/sh
DATASET="tank/backup/home"
MAX_AGE_HOURS=2

# -H skips the header line so tail -1 never matches it
LATEST=$(zfs list -H -t snapshot -o creation -s creation -r "${DATASET}" | tail -1)
LATEST_EPOCH=$(date -j -f "%a %b %d %H:%M %Y" "${LATEST}" +%s 2>/dev/null)
NOW_EPOCH=$(date +%s)
AGE_HOURS=$(( (NOW_EPOCH - LATEST_EPOCH) / 3600 ))

if [ "${AGE_HOURS}" -gt "${MAX_AGE_HOURS}" ]; then
    echo "ALERT: Latest snapshot on ${DATASET} is ${AGE_HOURS} hours old"
    logger -t zfs-monitor "Replication may be stale: ${DATASET} last snapshot ${AGE_HOURS}h ago"
fi
```
```sh
chmod +x /usr/local/sbin/check-replication.sh
```
Add to crontab:
```sh
0 * * * * /usr/local/sbin/check-replication.sh
```
Snapshot Best Practices
- Name snapshots consistently: Use a naming convention like autosnap_YYYY-MM-DD_HH:MM:SS_type (sanoid does this automatically).
- Do not rely on snapshots alone: Snapshots on the same pool are not backups. Always replicate to a separate pool or remote system.
- Prune aggressively: Thousands of snapshots degrade ZFS performance (especially zfs list and destroy operations). Keep retention tight.
- Snapshot before destructive operations: Before upgrades, schema changes, or bulk deletes, take a manual snapshot.
```sh
zfs snapshot zroot/var/db/mysql@before-schema-change
```
- Monitor snapshot space: Snapshots can consume significant space if the underlying data changes frequently. Monitor usedbysnapshots.
- Test restores: Periodically verify that you can restore from replicated snapshots. A backup you have not tested is not a backup.
```sh
# Test restore to a temporary dataset
zfs send tank/backup/home@latest | zfs receive tank/test-restore/home
ls /tank/test-restore/home/
zfs destroy -r tank/test-restore/home
```
Frequently Asked Questions
How much space does a snapshot use?
Initially, zero. As blocks are modified or deleted in the active dataset, the snapshot retains references to the original blocks. Space usage depends on the rate of change. A snapshot of a static dataset costs nothing. A snapshot of a database dataset with heavy writes will grow as the original blocks are overwritten.
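To see how much new data has accumulated since a particular snapshot (and therefore roughly how fast that snapshot's space cost will grow), you can query the written properties:

```sh
# Bytes written to the dataset since this specific snapshot was taken
zfs get written@manual-2026-04-09 zroot/usr/home

# Bytes written since the most recent snapshot
zfs get written zroot/usr/home
```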
Can I mount a snapshot read-write?
Not directly. Snapshots are read-only. To get a writable copy, create a clone:
```sh
zfs clone zroot/usr/home@manual-2026-04-09 zroot/usr/home-clone
```
Clones are writable datasets that share blocks with the snapshot until modified.
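If a clone should become a permanent, independent replacement for the original, zfs promote reverses the parent/child dependency so the original dataset can eventually be destroyed:

```sh
# Make the clone the parent; the origin dataset now depends on it
zfs promote zroot/usr/home-clone

# The original dataset can now be destroyed if no longer needed
# (destructive; verify the clone first)
# zfs destroy zroot/usr/home
```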
How many snapshots can I have?
ZFS supports millions of snapshots per pool, but performance degrades beyond several thousand per dataset. Keep snapshot counts reasonable through pruning.
Does snapshotting affect performance?
Creating and destroying snapshots has minimal performance impact. However, having many snapshots increases metadata overhead, particularly for operations like zfs destroy and zfs list. The copy-on-write overhead from snapshots is negligible during normal operation.
Can I replicate to a pool with different hardware?
Yes. ZFS send/receive is hardware-independent. You can replicate from a mirror pool to a raidz2 pool, from SSD to HDD, or between systems with different architectures.
How do I handle replication when the source dataset has been modified outside of syncoid?
If someone manually destroys a snapshot that syncoid uses as its replication bookmark, the next incremental send will fail. Syncoid will detect this and fall back to a full send if configured with --force-delete.
What happens if the remote system runs out of space during replication?
The zfs receive fails, and the partial dataset is left in an incomplete state. You can resume the transfer after freeing space (if the receive used -s), or abandon the partial state with zfs receive -A and start fresh.
Can I use ZFS replication for database backup?
Yes. ZFS snapshots provide consistent-at-the-filesystem-level backups. For databases, take a snapshot while the database is in a consistent state (or use InnoDB's crash recovery capability). For maximum consistency, use the database's native backup tool in combination with ZFS snapshots.
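As one illustrative sketch (assuming the mysql client is installed and the data lives on zroot/var/db/mysql; adjust names to your setup), the tables can be locked for just the instant the constant-time snapshot takes:

```sh
# Hold a global read lock only for the duration of the snapshot.
# The lock is released by UNLOCK TABLES before the session closes.
mysql <<'EOF'
FLUSH TABLES WITH READ LOCK;
system zfs snapshot zroot/var/db/mysql@consistent-backup
UNLOCK TABLES;
EOF
```

Because the lock and the snapshot happen inside a single client session, the snapshot captures a quiesced on-disk state.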
How do I verify a replicated snapshot is intact?
Compare checksums:
```sh
# On source
zfs send zroot/usr/home@snap1 | sha256

# On destination
zfs send tank/backup/home@snap1 | sha256
```
If the SHA-256 hashes match, the two streams are byte-identical. Note that send streams for the same snapshot can legitimately differ between pools with different feature or compression settings, so a mismatch is not proof of corruption; for a definitive check, compare checksums of files inside the mounted snapshots.
Can I replicate encrypted ZFS datasets?
Yes. ZFS send with raw mode (-w flag) preserves encryption. The receiving side does not need the encryption key to store the data, only to mount and read it.
```sh
zfs send -w zroot/encrypted@snap1 | ssh backup-server zfs receive tank/backup/encrypted
```