How to Back Up FreeBSD Servers to Backblaze B2
Server backups that live on the same machine -- or even the same network -- provide only a false sense of security. A disk failure, ransomware attack, or datacenter incident takes out your data and your backups simultaneously. Off-site cloud storage solves this. Backblaze B2 is one of the cheapest S3-compatible storage services available: $6 per TB per month for storage, free ingress, and $0.01 per 10,000 API calls.
This guide covers a complete backup pipeline from FreeBSD to Backblaze B2: installing rclone, configuring B2 bucket access, creating ZFS snapshot-based backup scripts, encrypting backups client-side, scheduling automated backups with cron, monitoring for failures, and restoring from backups when disaster strikes.
For a broader backup strategy, see our FreeBSD backup guide. For ZFS fundamentals, start with our ZFS on FreeBSD guide.
Why Backblaze B2
Backblaze B2 costs a fraction of AWS S3 or Google Cloud Storage for bulk backup storage:
- $6/TB/month for storage (vs. $23/TB on S3 Standard)
- Free uploads (no ingress charges)
- $0.01/GB egress (first 1 GB/day free)
- S3-compatible API -- works with any S3 tool, including rclone
- Lifecycle rules for automatic deletion of old backups
- No minimum file size or storage duration penalties
For backup workloads where you write frequently but rarely read, B2 is the best value.
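The price gap is easy to quantify. A quick sketch using the per-TB rates listed above (the 500 GB figure is an illustrative example, not from this guide):

```sh
#!/bin/sh
# Compare monthly storage cost at B2's $6/TB vs. S3 Standard's $23/TB
SIZE_GB=500                            # hypothetical server data size
B2_CENTS=$((SIZE_GB * 600 / 1000))     # $6/TB  = 600 cents per 1000 GB
S3_CENTS=$((SIZE_GB * 2300 / 1000))    # $23/TB = 2300 cents per 1000 GB
printf 'B2: $%d.%02d/month  S3: $%d.%02d/month\n' \
    $((B2_CENTS / 100)) $((B2_CENTS % 100)) \
    $((S3_CENTS / 100)) $((S3_CENTS % 100))
# B2: $3.00/month  S3: $11.50/month
```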
Backblaze Account and Bucket Setup
- Create a Backblaze account at https://www.backblaze.com/b2/cloud-storage.html
- In the B2 dashboard, create a new bucket. Name the bucket something descriptive:
  - Bucket Name: myserver-backups (must be globally unique)
  - Files in Bucket are: Private
  - Default Encryption: Enable (server-side encryption)
  - Object Lock: Enable if you want immutable backups (ransomware protection)
- Create an Application Key for rclone:
  - Go to "App Keys" in the B2 dashboard
  - Click "Add a New Application Key"
  - Name: rclone-freebsd
  - Allow access to: select your specific bucket
  - Type of access: Read and Write
Save the keyID and applicationKey. The applicationKey is shown only once.
Installing rclone
rclone is the Swiss Army knife of cloud storage tools. It supports Backblaze B2 natively and handles encryption, sync, and bandwidth limiting.
```sh
pkg install rclone
```
Verify the installation:
```sh
rclone version
```
Configuring rclone for Backblaze B2
Configure rclone with your B2 credentials:
```sh
rclone config
```
Follow the interactive prompts:
```shell
n) New remote
name> b2
Storage> b2
account> YOUR_KEY_ID
key> YOUR_APPLICATION_KEY
hard_delete> true
Edit advanced config? n
```
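After the wizard completes, the B2 section of ~/.config/rclone/rclone.conf should look roughly like the fragment below (the account and key values are placeholders for your credentials):

```ini
[b2]
type = b2
account = YOUR_KEY_ID
key = YOUR_APPLICATION_KEY
hard_delete = true
```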
Test the connection:
```sh
rclone lsd b2:
```
This should list your B2 buckets. Test writing a file:
```sh
echo "test" > /tmp/rclone-test.txt
rclone copy /tmp/rclone-test.txt b2:myserver-backups/test/
rclone ls b2:myserver-backups/test/
rclone delete b2:myserver-backups/test/rclone-test.txt
```
Adding Client-Side Encryption
Server-side encryption protects data at rest on Backblaze's disks, but Backblaze can still read it. For true privacy, add client-side encryption so data is encrypted before leaving your server.
Configure an encrypted rclone remote that wraps the B2 remote:
```sh
rclone config
```
```shell
n) New remote
name> b2-encrypted
Storage> crypt
remote> b2:myserver-backups
filename_encryption> standard
directory_name_encryption> true
Password> (enter a strong password or let rclone generate one)
Password2> (enter a second password/salt)
```
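The crypt remote is stored as its own section in rclone.conf, wrapping the underlying B2 remote. The passwords are saved in rclone's obscured form (not plaintext, but reversible, which is why the config file itself must be protected); the section should look roughly like:

```ini
[b2-encrypted]
type = crypt
remote = b2:myserver-backups
filename_encryption = standard
directory_name_encryption = true
password = <obscured>
password2 = <obscured>
```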
rclone stores the encrypted configuration in ~/.config/rclone/rclone.conf. Back up this file separately -- without it, you cannot decrypt your backups.
Secure the config file:
```sh
chmod 600 ~/.config/rclone/rclone.conf
```
Test the encrypted remote:
```sh
echo "encrypted test" > /tmp/crypt-test.txt
rclone copy /tmp/crypt-test.txt b2-encrypted:test/
rclone ls b2-encrypted:test/
```
The file appears with its plaintext name and contents through b2-encrypted:, while listing through b2: shows only encrypted file and directory names.
ZFS Snapshot-Based Backups
ZFS snapshots provide consistent, point-in-time copies of your data. The backup strategy is:
- Take a ZFS snapshot
- Send the snapshot as a stream to a local staging file
- Upload the stream to B2 with rclone
- Clean up old snapshots and staging files
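The steps above produce files with a predictable naming scheme, which is what makes restores straightforward later. A minimal sketch of how the backup script (below) derives its staging file names, using example values for hostname, dataset, and timestamp:

```sh
#!/bin/sh
# Sketch of the backup file naming scheme (example values, not live data)
HOSTNAME="myserver"
DS="zroot/usr/home"
DATE="20260401-020000"

# Dataset names contain '/', which is flattened to '_' for the file name
DSNAME=$(echo "$DS" | tr '/' '_')
OUTFILE="${HOSTNAME}_${DSNAME}_full_${DATE}.zfs.gz"
echo "$OUTFILE"   # myserver_zroot_usr_home_full_20260401-020000.zfs.gz
```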
Initial Full Backup
Create the backup script:
```sh
vi /usr/local/bin/backup-to-b2.sh
```
```sh
#!/bin/sh
set -e

# Configuration
POOL="zroot"
DATASETS="zroot/ROOT/default zroot/usr/home zroot/var/db"
REMOTE="b2-encrypted"
BUCKET="backups"
STAGING="/var/backups/staging"
RETENTION_DAYS=30
DATE=$(date +%Y%m%d-%H%M%S)
HOSTNAME=$(hostname -s)
LOG="/var/log/backup-b2.log"

log() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') $1" >> "$LOG"
}

mkdir -p "$STAGING"
log "=== Backup started ==="

for DS in $DATASETS; do
    DSNAME=$(echo "$DS" | tr '/' '_')
    SNAP="${DS}@backup-${DATE}"
    # Most recent backup snapshot of this dataset (anchored so child datasets don't match)
    PREV_SNAP=$(zfs list -t snapshot -o name -s creation -r "$DS" | grep "^${DS}@backup-" | tail -1)

    # Create snapshot
    zfs snapshot "$SNAP"
    log "Created snapshot: $SNAP"

    if [ -z "$PREV_SNAP" ]; then
        # Full backup (no previous snapshot)
        OUTFILE="${STAGING}/${HOSTNAME}_${DSNAME}_full_${DATE}.zfs.gz"
        zfs send "$SNAP" | gzip -1 > "$OUTFILE"
        log "Full send: $SNAP -> $OUTFILE"
    else
        # Incremental backup
        OUTFILE="${STAGING}/${HOSTNAME}_${DSNAME}_incr_${DATE}.zfs.gz"
        zfs send -i "$PREV_SNAP" "$SNAP" | gzip -1 > "$OUTFILE"
        log "Incremental send: $PREV_SNAP -> $SNAP -> $OUTFILE"
    fi

    # Upload to B2
    rclone copy "$OUTFILE" "${REMOTE}:${BUCKET}/${HOSTNAME}/${DSNAME}/" \
        --transfers 4 \
        --b2-chunk-size 96M \
        --log-file "$LOG" \
        --log-level INFO
    log "Uploaded: $OUTFILE"

    # Remove staging file
    rm -f "$OUTFILE"
done

# Clean up old snapshots (keep last RETENTION_DAYS days)
CUTOFF=$(date -v-${RETENTION_DAYS}d +%Y%m%d)
for DS in $DATASETS; do
    zfs list -t snapshot -o name -s creation -r "$DS" | grep "@backup-" | while read SNAP; do
        SNAP_DATE=$(echo "$SNAP" | sed 's/.*@backup-//' | cut -d- -f1)
        if [ "$SNAP_DATE" -lt "$CUTOFF" ] 2>/dev/null; then
            zfs destroy "$SNAP"
            log "Destroyed old snapshot: $SNAP"
        fi
    done
done

# Clean up old backups on B2 (optional, lifecycle rules are better)
# rclone delete "${REMOTE}:${BUCKET}/${HOSTNAME}/" --min-age "${RETENTION_DAYS}d"

log "=== Backup completed ==="
```
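The retention cleanup hinges on parsing the date back out of the snapshot name, so that logic is worth sanity-checking in isolation. The snapshot name below is a synthetic example:

```sh
#!/bin/sh
# Extract the YYYYMMDD portion from a backup snapshot name (example value)
SNAP="zroot/usr/home@backup-20260401-020000"
SNAP_DATE=$(echo "$SNAP" | sed 's/.*@backup-//' | cut -d- -f1)
echo "$SNAP_DATE"   # 20260401
```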
```sh
chmod 700 /usr/local/bin/backup-to-b2.sh
```
Running the First Backup
Run the initial backup manually to verify everything works:
```sh
/usr/local/bin/backup-to-b2.sh
```
Monitor progress:
```sh
tail -f /var/log/backup-b2.log
```
Verify files on B2:
```sh
rclone ls b2-encrypted:backups/
```
Backing Up Non-ZFS Systems
If your server does not use ZFS, use tar for file-level backups:
```sh
vi /usr/local/bin/backup-files-to-b2.sh
```
```sh
#!/bin/sh
set -e

REMOTE="b2-encrypted"
BUCKET="backups"
HOSTNAME=$(hostname -s)
DATE=$(date +%Y%m%d-%H%M%S)
STAGING="/var/backups/staging"
LOG="/var/log/backup-b2.log"

mkdir -p "$STAGING"

# Backup /etc
tar czf "${STAGING}/${HOSTNAME}_etc_${DATE}.tar.gz" /etc/
rclone copy "${STAGING}/${HOSTNAME}_etc_${DATE}.tar.gz" "${REMOTE}:${BUCKET}/${HOSTNAME}/etc/"
rm -f "${STAGING}/${HOSTNAME}_etc_${DATE}.tar.gz"

# Backup /usr/local/etc
tar czf "${STAGING}/${HOSTNAME}_local-etc_${DATE}.tar.gz" /usr/local/etc/
rclone copy "${STAGING}/${HOSTNAME}_local-etc_${DATE}.tar.gz" "${REMOTE}:${BUCKET}/${HOSTNAME}/local-etc/"
rm -f "${STAGING}/${HOSTNAME}_local-etc_${DATE}.tar.gz"

# Backup home directories
tar czf "${STAGING}/${HOSTNAME}_home_${DATE}.tar.gz" /home/ /root/
rclone copy "${STAGING}/${HOSTNAME}_home_${DATE}.tar.gz" "${REMOTE}:${BUCKET}/${HOSTNAME}/home/"
rm -f "${STAGING}/${HOSTNAME}_home_${DATE}.tar.gz"

# Backup databases
# PostgreSQL
su -m postgres -c "pg_dumpall" | gzip > "${STAGING}/${HOSTNAME}_pgdump_${DATE}.sql.gz"
rclone copy "${STAGING}/${HOSTNAME}_pgdump_${DATE}.sql.gz" "${REMOTE}:${BUCKET}/${HOSTNAME}/postgresql/"
rm -f "${STAGING}/${HOSTNAME}_pgdump_${DATE}.sql.gz"

echo "$(date): Backup completed" >> "$LOG"
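One refinement worth considering for the home-directory archive is excluding cache directories that bloat the backup without adding value. A self-contained sketch of tar's --exclude behavior on a throwaway directory tree (the paths are made up for the demo):

```sh
#!/bin/sh
set -e
# Demonstrate tar --exclude on a throwaway directory tree
DEMO=$(mktemp -d)
mkdir -p "$DEMO/home/alice/.cache" "$DEMO/home/alice/docs"
echo "junk" > "$DEMO/home/alice/.cache/blob"
echo "keep" > "$DEMO/home/alice/docs/notes.txt"

# Archive everything except .cache directories, then list the contents
tar czf "$DEMO/home.tar.gz" --exclude='*/.cache' -C "$DEMO" home/
LISTING=$(tar tzf "$DEMO/home.tar.gz")
echo "$LISTING"
rm -rf "$DEMO"
```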
```sh
chmod 700 /usr/local/bin/backup-files-to-b2.sh
```
Scheduling Automated Backups
Schedule daily backups with cron:
```sh
crontab -e
```
```shell
# Daily backup at 2 AM
0 2 * * * /usr/local/bin/backup-to-b2.sh >> /var/log/backup-b2.log 2>&1

# Weekly full verification (download and verify a random backup)
0 6 * * 0 /usr/local/bin/verify-backup.sh >> /var/log/backup-b2.log 2>&1
```
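If a backup run ever takes longer than a day, cron will start a second copy on top of the first, and the two will race on snapshots and staging files. FreeBSD's lockf(1) can serialize runs; a hedged variation of the daily crontab entry (the lock file path is an arbitrary choice):

```shell
# Skip the run entirely (-t 0) if a previous backup still holds the lock
0 2 * * * /usr/bin/lockf -t 0 /var/run/backup-b2.lock /usr/local/bin/backup-to-b2.sh >> /var/log/backup-b2.log 2>&1
```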
Or use FreeBSD's periodic system:
```sh
vi /usr/local/etc/periodic/daily/500.backup-b2
```
```sh
#!/bin/sh
# periodic(8) runs daily scripts in lexical order; no rc.d keywords are needed
/usr/local/bin/backup-to-b2.sh
```
```sh
chmod 755 /usr/local/etc/periodic/daily/500.backup-b2
```
Monitoring and Alerting
Create a verification script that checks backups are current:
```sh
vi /usr/local/bin/verify-backup.sh
```
```sh
#!/bin/sh

REMOTE="b2-encrypted"
BUCKET="backups"
HOSTNAME=$(hostname -s)
ADMIN_EMAIL="admin@example.com"
MAX_AGE_HOURS=26

# Check if the most recent backup is less than MAX_AGE_HOURS old
LATEST=$(rclone lsl "${REMOTE}:${BUCKET}/${HOSTNAME}/" --max-depth 2 | sort -k2,3 | tail -1)

if [ -z "$LATEST" ]; then
    echo "CRITICAL: No backups found for ${HOSTNAME}" | mail -s "Backup FAILED: ${HOSTNAME}" "$ADMIN_EMAIL"
    exit 1
fi

# rclone lsl prints fractional seconds; strip them so date -j -f can parse
LATEST_DATE=$(echo "$LATEST" | awk '{print $2, $3}' | cut -d. -f1)
LATEST_EPOCH=$(date -j -f "%Y-%m-%d %H:%M:%S" "$LATEST_DATE" +%s 2>/dev/null || echo 0)
NOW_EPOCH=$(date +%s)
AGE_HOURS=$(( (NOW_EPOCH - LATEST_EPOCH) / 3600 ))

if [ "$AGE_HOURS" -gt "$MAX_AGE_HOURS" ]; then
    echo "WARNING: Latest backup for ${HOSTNAME} is ${AGE_HOURS} hours old" | \
        mail -s "Backup WARNING: ${HOSTNAME}" "$ADMIN_EMAIL"
    exit 1
fi

echo "$(date): Backup verification passed. Latest backup is ${AGE_HOURS} hours old." >> /var/log/backup-b2.log
```
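The epoch arithmetic at the heart of the check is easy to sanity-test on its own. The timestamps below are synthetic, simulating a backup taken two hours ago:

```sh
#!/bin/sh
# Simulate a backup taken 2 hours ago and compute its age in whole hours
NOW_EPOCH=$(date +%s)
LATEST_EPOCH=$((NOW_EPOCH - 7200))
AGE_HOURS=$(( (NOW_EPOCH - LATEST_EPOCH) / 3600 ))
echo "$AGE_HOURS"   # 2
```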
```sh
chmod 755 /usr/local/bin/verify-backup.sh
```
Bandwidth Management
Limit rclone's bandwidth to avoid saturating your network during business hours:
```sh
# Limit to 50 MB/s during peak hours, unlimited at night
# (rclone --bwlimit values are bytes per second, not bits)
rclone copy /path/to/data b2-encrypted:backups/ \
    --bwlimit "08:00,50M 00:00,off"
```
Add the bandwidth limit to your backup script by modifying the rclone copy command:
```sh
rclone copy "$OUTFILE" "${REMOTE}:${BUCKET}/${HOSTNAME}/${DSNAME}/" \
    --transfers 4 \
    --b2-chunk-size 96M \
    --bwlimit "08:00,50M 00:00,off" \
    --log-file "$LOG" \
    --log-level INFO
```
B2 Lifecycle Rules
Configure Backblaze B2 to automatically delete old backups. In the B2 dashboard:
- Go to your bucket settings
- Click "Lifecycle Settings"
- Set "Keep only the last version of the file" with a retention period
Or manage via the B2 CLI:
```sh
pkg install py311-b2-sdk
b2 authorize-account YOUR_KEY_ID YOUR_APP_KEY
b2 update-bucket --lifecycleRules '[{"daysFromHidingToDeleting": 30, "fileNamePrefix": ""}]' myserver-backups allPrivate
```
Restoring from Backups
List Available Backups
```sh
rclone ls b2-encrypted:backups/$(hostname -s)/
```
Restore a ZFS Dataset
Download the backup files:
```sh
# Fetch the full backup plus all incrementals for the dataset
mkdir -p /var/backups/restore
rclone copy "b2-encrypted:backups/myserver/zroot_usr_home/" /var/backups/restore/
```
For a full restore, first receive the full backup, then apply incrementals in order:
```sh
# Restore full backup
gunzip -c /var/backups/restore/myserver_zroot_usr_home_full_20260401-020000.zfs.gz | \
    zfs receive -F zroot/usr/home

# Apply incrementals in chronological order
for f in $(ls /var/backups/restore/myserver_zroot_usr_home_incr_*.zfs.gz | sort); do
    gunzip -c "$f" | zfs receive zroot/usr/home
done
```
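Before trusting a restored stream, it helps to record a checksum at backup time and compare it after download. A self-contained sketch using openssl; a local copy stands in for the actual upload and download, and the file contents are made up:

```sh
#!/bin/sh
set -e
# Record a SHA-256 at backup time, verify it after the round trip
SRC=$(mktemp)
echo "backup payload" > "$SRC"
SUM_BEFORE=$(openssl dgst -sha256 -r "$SRC" | cut -d' ' -f1)

cp "$SRC" "$SRC.restored"   # stands in for: rclone copy up, then rclone copy down
SUM_AFTER=$(openssl dgst -sha256 -r "$SRC.restored" | cut -d' ' -f1)

[ "$SUM_BEFORE" = "$SUM_AFTER" ] && echo "checksum OK"
rm -f "$SRC" "$SRC.restored"
```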
Restore Individual Files
If you only need specific files from a tar backup:
```sh
rclone copy "b2-encrypted:backups/myserver/etc/myserver_etc_20260401-020000.tar.gz" /tmp/
cd /
tar xzf /tmp/myserver_etc_20260401-020000.tar.gz etc/ssh/sshd_config
```
Restore a PostgreSQL Database
```sh
rclone copy "b2-encrypted:backups/myserver/postgresql/" /tmp/ --include "*20260401*"
gunzip -c /tmp/myserver_pgdump_20260401-020000.sql.gz | su -m postgres -c "psql"
```
Object Lock for Ransomware Protection
Backblaze B2 supports Object Lock, which makes backups immutable for a specified period. Even if an attacker gains access to your B2 credentials, they cannot delete or modify locked files.
Enable Object Lock when creating the bucket (cannot be added later):
- Create a new bucket with Object Lock enabled in the B2 dashboard
- Set a default retention period (e.g., 30 days)
Configure rclone to work with Object Lock:
shrclone config
Update your B2 remote and set:
```shell
hard_delete = false
```
Files uploaded to a locked bucket cannot be deleted until the retention period expires.
Troubleshooting
rclone upload fails with "too many requests":
Reduce concurrent transfers:
```sh
rclone copy /path b2:bucket/ --transfers 2 --b2-chunk-size 96M
```
B2 allows 100 concurrent requests per bucket. If you are running backups from multiple servers to the same bucket, reduce transfers per server.
Upload speed is slow:
Increase chunk size for large files:
```sh
rclone copy /path b2:bucket/ --b2-chunk-size 256M --transfers 8
```
Large chunk sizes reduce API call overhead. Ensure your staging directory has enough space for the larger chunks.
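The relationship between chunk size and API-call count is simple arithmetic, since each uploaded chunk is one API call. A sketch for a hypothetical 10 GB backup stream:

```sh
#!/bin/sh
# Each uploaded chunk is one B2 API call; bigger chunks mean fewer calls
FILE_MB=10240    # hypothetical 10 GB backup stream
for CHUNK_MB in 96 256; do
    PARTS=$(( (FILE_MB + CHUNK_MB - 1) / CHUNK_MB ))    # ceiling division
    echo "${CHUNK_MB}M chunks: ${PARTS} upload calls"
done
```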
"Permission denied" on rclone config:
Ensure the rclone config file is readable by the user running the backup:
```sh
ls -la ~/.config/rclone/rclone.conf
```
For root-executed cron jobs, the config should be at /root/.config/rclone/rclone.conf.
FAQ
How much does it cost to back up a typical server?
A server with 100 GB of data costs about $0.60/month for storage on B2. Uploads are free. Incremental ZFS backups reduce daily upload volume to only the changed data, which can be a few hundred MB to a few GB for most servers.
Should I use rclone sync or rclone copy?
Use rclone copy for backups. rclone sync deletes files on the remote that do not exist locally, which means a ransomware attack that deletes local files would also delete your backups on the next sync. Always use copy and manage retention separately with lifecycle rules.
Can I restore to a different server?
Yes. Install rclone on the new server, copy the rclone config file (with encryption passwords), and run the restore commands. For ZFS restores, the target server needs a ZFS pool with enough space.
Is client-side encryption necessary if B2 has server-side encryption?
Server-side encryption protects data at rest on Backblaze's storage. However, Backblaze holds the key and can technically decrypt the data. Client-side encryption means only you hold the key. For sensitive data (databases, customer information, credentials), client-side encryption is strongly recommended.
How do I rotate the rclone encryption password?
You cannot change the password for already-encrypted files. Create a new encrypted remote with a new password, re-upload your data through it, and delete the old encrypted files. This is a full re-upload, so plan accordingly.
Can I use restic or borg instead of rclone?
Yes. Restic has native B2 support and handles deduplication and encryption. Borg can work with B2 via rclone mount. However, rclone with ZFS snapshots is the simplest approach on FreeBSD because ZFS already handles deduplication and consistent snapshots. rclone adds encryption and cloud upload.