FreeBSD Database Server: PostgreSQL Production Guide
PostgreSQL on FreeBSD is a proven combination. FreeBSD's ZFS integration, network stack, and stability make it an excellent foundation for production database servers. This guide covers everything from initial setup through high availability, with real configuration values rather than placeholder defaults.
Installation
Install PostgreSQL 16 (the current stable release):
```sh
pkg install postgresql16-server postgresql16-client postgresql16-contrib
```
Initialize the database cluster:
```sh
sysrc postgresql_enable="YES"
service postgresql initdb
```
This creates the data directory at /var/db/postgres/data16.
ZFS Dataset Layout
The most important decision: put PostgreSQL data on a dedicated ZFS dataset with tuned properties.
Create Datasets
```sh
zfs create -o mountpoint=/var/db/postgres zroot/postgres
zfs create zroot/postgres/data
zfs create zroot/postgres/wal
zfs create zroot/postgres/backup
```
Separate datasets for data and WAL allow independent tuning and snapshot policies. Note that initdb has already populated /var/db/postgres, and ZFS refuses to mount a dataset over a non-empty directory: if the first `zfs create` complains, temporarily move data16 out of the way, mount the datasets, then move it back before continuing.
ZFS Properties for PostgreSQL
```sh
# Main data: recordsize matches PostgreSQL's 8KB page size
zfs set recordsize=8K zroot/postgres/data
zfs set compression=lz4 zroot/postgres/data
zfs set atime=off zroot/postgres/data
zfs set primarycache=metadata zroot/postgres/data
zfs set logbias=throughput zroot/postgres/data

# WAL: sequential writes, larger records
zfs set recordsize=128K zroot/postgres/wal
zfs set compression=lz4 zroot/postgres/wal
zfs set atime=off zroot/postgres/wal
zfs set logbias=latency zroot/postgres/wal

# Backups: optimize for space
zfs set compression=zstd zroot/postgres/backup
zfs set atime=off zroot/postgres/backup
```
Why recordsize=8K for data? PostgreSQL reads and writes in 8KB pages. Matching ZFS's recordsize eliminates read amplification and write amplification.
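The mismatch cost is easy to quantify: a sub-record write to a larger record forces ZFS to read-modify-write the whole record. A pure-arithmetic shell sketch (nothing here touches ZFS):

```sh
#!/bin/sh
# Records moved per 8 KB page access: the factor is recordsize / 8K,
# since ZFS must handle the full record even for a partial update.
amp_factor() {
    echo $(( $1 / 8 ))   # $1 = recordsize in KB
}
for rs in 8 16 128; do
    echo "recordsize=${rs}K -> $(amp_factor "$rs")x I/O per 8K page"
done
# amp_factor 8 is 1; amp_factor 128 is 16
```

At the default recordsize of 128K, every 8 KB page rewrite moves 16x the data; at recordsize=8K the factor is 1.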
Why primarycache=metadata for data? PostgreSQL has its own buffer cache (shared_buffers). Double-caching wastes ARC memory. Let PostgreSQL manage data caching and ZFS cache metadata only.
Move PostgreSQL Data
```sh
service postgresql stop
mv /var/db/postgres/data16 /var/db/postgres/data/data16
ln -s /var/db/postgres/data/data16 /var/db/postgres/data16
chown -R postgres:postgres /var/db/postgres/data/data16
```
Move WAL to the dedicated dataset:
```sh
mkdir /var/db/postgres/wal/pg_wal
mv /var/db/postgres/data/data16/pg_wal/* /var/db/postgres/wal/pg_wal/
rmdir /var/db/postgres/data/data16/pg_wal
ln -s /var/db/postgres/wal/pg_wal /var/db/postgres/data/data16/pg_wal
chown -R postgres:postgres /var/db/postgres/wal/pg_wal
service postgresql start
```
PostgreSQL Configuration Tuning
Edit /var/db/postgres/data16/postgresql.conf. The following values assume a dedicated server with 32 GB RAM and SSDs.
Memory Settings
```sh
cat >> /var/db/postgres/data16/postgresql.conf << 'EOF'
# Memory
shared_buffers = 8GB
effective_cache_size = 24GB
work_mem = 64MB
maintenance_work_mem = 2GB
huge_pages = try

# WAL
wal_buffers = 64MB
wal_level = replica
max_wal_size = 4GB
min_wal_size = 1GB
checkpoint_completion_target = 0.9
checkpoint_timeout = 15min

# Query Planner
random_page_cost = 1.1
effective_io_concurrency = 200
default_statistics_target = 200

# Parallel Queries
max_worker_processes = 8
max_parallel_workers_per_gather = 4
max_parallel_workers = 8
max_parallel_maintenance_workers = 4

# Connections
max_connections = 200
listen_addresses = '*'

# Logging
log_destination = 'csvlog'
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d.log'
log_min_duration_statement = 500
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0
log_autovacuum_min_duration = 0
EOF
```
Scaling Rules
For different RAM sizes:
| RAM | shared_buffers | effective_cache_size | work_mem |
|-----|---------------|---------------------|----------|
| 8 GB | 2 GB | 6 GB | 16 MB |
| 16 GB | 4 GB | 12 GB | 32 MB |
| 32 GB | 8 GB | 24 GB | 64 MB |
| 64 GB | 16 GB | 48 GB | 128 MB |
Formula: shared_buffers = 25% of RAM. effective_cache_size = 75% of RAM. work_mem = (RAM - shared_buffers) / (max_connections * 2).
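Those rules can be sketched as a small POSIX shell calculator. Integer megabyte arithmetic gives 61 MB of work_mem for the 32 GB row, which the table rounds up to the more conventional 64 MB:

```sh
#!/bin/sh
# Sketch of the scaling rules above, in integer megabytes.
# Usage: pg_mem_suggest <total RAM in MB> <max_connections>
pg_mem_suggest() {
    ram_mb=$1; conns=$2
    sb=$(( ram_mb / 4 ))                     # shared_buffers: 25% of RAM
    ecs=$(( ram_mb * 3 / 4 ))                # effective_cache_size: 75% of RAM
    wm=$(( (ram_mb - sb) / (conns * 2) ))    # work_mem: (RAM - shared_buffers) / (conns * 2)
    echo "shared_buffers=${sb}MB effective_cache_size=${ecs}MB work_mem=${wm}MB"
}
pg_mem_suggest 32768 200   # the 32 GB / 200-connection case from this guide
```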
FreeBSD Kernel Tuning for PostgreSQL
PostgreSQL allocates most of its shared memory with mmap, but it still needs System V semaphores and a small System V shared memory segment. Raise the kernel limits:
```sh
cat >> /etc/sysctl.conf << 'EOF'
kern.ipc.shmmax=17179869184
kern.ipc.shmall=4194304
kern.ipc.semmap=256
kern.ipc.semmni=512
kern.ipc.semmns=1024
kern.ipc.semmnu=256
EOF
sysctl -f /etc/sysctl.conf
```
Transparent superpages improve TLB performance. They are enabled by default on amd64, but you can make the setting explicit in loader.conf (loader tunables take effect at the next boot):

```sh
echo 'vm.pmap.pg_ps_enabled=1' >> /boot/loader.conf
```
Restart PostgreSQL:

```sh
service postgresql restart
```
Authentication Configuration
Edit /var/db/postgres/data16/pg_hba.conf:
```sh
cat > /var/db/postgres/data16/pg_hba.conf << 'EOF'
# TYPE  DATABASE     USER        ADDRESS        METHOD
local   all          postgres                   peer
local   all          all                        peer
host    all          all         127.0.0.1/32   scram-sha-256
host    all          all         ::1/128        scram-sha-256
host    all          all         10.0.0.0/24    scram-sha-256
host    replication  replicator  10.0.0.0/24    scram-sha-256
EOF
```
Create application users:
```sh
su - postgres -c "psql -c \"CREATE ROLE appuser WITH LOGIN PASSWORD 'strong_password_here';\""
su - postgres -c "psql -c \"CREATE DATABASE appdb OWNER appuser;\""
```
Create a replication user:
```sh
su - postgres -c "psql -c \"CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'repl_password_here';\""
```
Reload configuration:
```sh
service postgresql reload
```
Connection Pooling with PgBouncer
PostgreSQL creates a process per connection. At 200+ connections, this wastes memory. PgBouncer sits in front and multiplexes connections.
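To see why that matters, a back-of-envelope calculation (the ~10 MB per idle backend is an assumed round number, not a measurement; real usage depends on work_mem consumption and catalog caches):

```sh
#!/bin/sh
# Ballpark backend memory cost under process-per-connection,
# assuming ~10 MB per idle backend (an illustrative round figure).
backend_overhead_mb() {
    echo $(( $1 * 10 ))   # $1 = connection count
}
for conns in 100 200 1000; do
    echo "${conns} idle connections -> roughly $(backend_overhead_mb "$conns") MB"
done
```

At 1,000 direct connections that is on the order of 10 GB of overhead, which is the gap PgBouncer's multiplexing closes.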
```sh
pkg install pgbouncer
```
Configure /usr/local/etc/pgbouncer.ini:
```sh
cat > /usr/local/etc/pgbouncer.ini << 'EOF'
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /usr/local/etc/pgbouncer.userlist
pool_mode = transaction
default_pool_size = 25
max_client_conn = 1000
max_db_connections = 100
log_connections = 1
log_disconnections = 1
log_pooler_errors = 1
stats_period = 60
admin_users = postgres
EOF
```
Create the user list:
```sh
su - postgres -c "psql -t -A -c \"SELECT '\\\"' || rolname || '\\\" \\\"' || rolpassword || '\\\"' FROM pg_authid WHERE rolcanlogin;\"" > /usr/local/etc/pgbouncer.userlist
chown pgbouncer:pgbouncer /usr/local/etc/pgbouncer.userlist
chmod 600 /usr/local/etc/pgbouncer.userlist
```
Start PgBouncer:
```sh
sysrc pgbouncer_enable="YES"
service pgbouncer start
```
Applications connect to port 6432 instead of 5432. PgBouncer handles up to 1,000 client connections using only 100 actual PostgreSQL connections.
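In practice only the port in the application's connection string changes; the host name below is a placeholder, and the credentials are this guide's example values:

```conf
# Before: direct to PostgreSQL
postgresql://appuser:strong_password_here@db-host:5432/appdb

# After: through PgBouncer
postgresql://appuser:strong_password_here@db-host:6432/appdb
```

One caveat of pool_mode = transaction: session state that outlives a transaction (session-level SET, advisory locks, LISTEN/NOTIFY) is unsafe, because consecutive transactions may be served by different server connections.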
Monitoring
pg_stat_statements
pg_stat_statements must be preloaded at server start, so configure it first:

```sh
echo "shared_preload_libraries = 'pg_stat_statements'" >> /var/db/postgres/data16/postgresql.conf
echo "pg_stat_statements.track = all" >> /var/db/postgres/data16/postgresql.conf
```

Restart PostgreSQL, then create the extension:

```sh
su - postgres -c "psql -c \"CREATE EXTENSION pg_stat_statements;\""
```

Query the top statements by mean execution time:

```sh
su - postgres -c "psql -c \"SELECT query, calls, mean_exec_time, total_exec_time FROM pg_stat_statements ORDER BY mean_exec_time DESC LIMIT 10;\""
```
Prometheus Monitoring
```sh
pkg install postgres_exporter
```
Configure in /usr/local/etc/postgres_exporter.env:
```sh
echo 'DATA_SOURCE_NAME="user=postgres host=/tmp sslmode=disable"' > /usr/local/etc/postgres_exporter.env
```
Start:
```sh
sysrc postgres_exporter_enable="YES"
service postgres_exporter start
```
Prometheus scrapes metrics from http://server:9187/metrics.
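On the Prometheus side, a scrape job for the exporter might look like the following (the job name and target host are illustrative):

```conf
# prometheus.yml (fragment)
scrape_configs:
  - job_name: 'postgres'
    static_configs:
      - targets: ['server:9187']
```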
Essential Health Checks
Create a monitoring script at /usr/local/bin/pg_health.sh:
```sh
cat > /usr/local/bin/pg_health.sh << 'HEOF'
#!/bin/sh
echo "=== Connection Count ==="
su - postgres -c "psql -c \"SELECT count(*) AS connections, state FROM pg_stat_activity GROUP BY state;\""
echo "=== Database Sizes ==="
su - postgres -c "psql -c \"SELECT datname, pg_size_pretty(pg_database_size(datname)) FROM pg_database ORDER BY pg_database_size(datname) DESC;\""
echo "=== Replication Status ==="
su - postgres -c "psql -c \"SELECT client_addr, state, sent_lsn, write_lsn, flush_lsn, replay_lsn FROM pg_stat_replication;\""
echo "=== Long Running Queries ==="
su - postgres -c "psql -c \"SELECT pid, now() - pg_stat_activity.query_start AS duration, query FROM pg_stat_activity WHERE state = 'active' AND now() - pg_stat_activity.query_start > interval '5 minutes';\""
echo "=== Table Bloat ==="
su - postgres -c "psql -d appdb -c \"SELECT schemaname, relname, n_dead_tup, n_live_tup, round(n_dead_tup::numeric/greatest(n_live_tup,1)*100, 2) AS dead_pct FROM pg_stat_user_tables WHERE n_dead_tup > 1000 ORDER BY n_dead_tup DESC LIMIT 10;\""
HEOF
chmod +x /usr/local/bin/pg_health.sh
```
Backup and Restore
pg_dump for Logical Backups
For individual databases:
```sh
su - postgres -c "pg_dump -Fc appdb > /var/db/postgres/backup/appdb-$(date +%Y%m%d).dump"
```
Restore:
```sh
su - postgres -c "pg_restore -d appdb /var/db/postgres/backup/appdb-20260409.dump"
```
Continuous Archiving (WAL Archiving)
For point-in-time recovery, archive WAL segments:
```sh
mkdir -p /var/db/postgres/backup/wal_archive
chown postgres:postgres /var/db/postgres/backup/wal_archive
```
Add to postgresql.conf (enabling archive_mode requires a restart, not just a reload):

```sh
cat >> /var/db/postgres/data16/postgresql.conf << 'EOF'
archive_mode = on
archive_command = 'cp %p /var/db/postgres/backup/wal_archive/%f'
EOF
```
pg_basebackup for Physical Backups
Full binary backup including WAL:
```sh
su - postgres -c "pg_basebackup -D /var/db/postgres/backup/base-$(date +%Y%m%d) -Ft -z -Xs -P"
```
Automate nightly backups:
```sh
cat > /usr/local/bin/pg_backup.sh << 'BEOF'
#!/bin/sh
BACKUP_DIR=/var/db/postgres/backup
DATE=$(date +%Y%m%d)

# Physical backup
su - postgres -c "pg_basebackup -D ${BACKUP_DIR}/base-${DATE} -Ft -z -Xs -P"

# Clean old backups (keep 7 days)
find ${BACKUP_DIR}/base-* -maxdepth 0 -mtime +7 -exec rm -rf {} +

# ZFS snapshot
zfs snapshot zroot/postgres/data@backup-${DATE}
zfs snapshot zroot/postgres/wal@backup-${DATE}

# Clean old snapshots, keeping the newest 14 per dataset. FreeBSD's
# head(1) has no negative -n form, so use sort -r | tail instead.
for ds in zroot/postgres/data zroot/postgres/wal; do
    zfs list -t snapshot -o name -H | grep "^${ds}@backup-" | sort -r |
        tail -n +15 | xargs -n1 zfs destroy 2>/dev/null
done
BEOF
chmod +x /usr/local/bin/pg_backup.sh
echo '0 2 * * * root /usr/local/bin/pg_backup.sh' >> /etc/crontab
```
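Snapshot retention logic is easy to get wrong: FreeBSD's head(1) does not accept negative line counts, so "keep the newest N" is better expressed as `sort -r | tail -n +(N+1)`. A standalone sketch that can be sanity-checked with fake snapshot names:

```sh
#!/bin/sh
# Print snapshots older than the newest $1 (candidates for zfs destroy).
# Date-stamped names sort correctly because YYYYMMDD is lexicographic.
prune_candidates() {
    sort -r | tail -n +"$(( $1 + 1 ))"
}

printf '%s\n' \
    zroot/postgres/data@backup-20260101 \
    zroot/postgres/data@backup-20260102 \
    zroot/postgres/data@backup-20260103 \
    zroot/postgres/data@backup-20260104 \
    zroot/postgres/data@backup-20260105 | prune_candidates 3
# -> prints the 20260102 and 20260101 snapshots
```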
Streaming Replication
Primary Server Configuration
On the primary, ensure these are set in postgresql.conf:
```conf
wal_level = replica
max_wal_senders = 5
wal_keep_size = 1GB
hot_standby = on
```
Replica Setup
On the replica server, install PostgreSQL and take a base backup from the primary:
```sh
service postgresql stop
rm -rf /var/db/postgres/data16/*
su - postgres -c "pg_basebackup -h primary-ip -U replicator -D /var/db/postgres/data16 -Fp -Xs -P -R"
```
The -R flag creates standby.signal and adds connection info to postgresql.auto.conf.
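For reference, the replica then has an empty standby.signal file in the data directory and a primary_conninfo line in postgresql.auto.conf along these lines (illustrative; -R records the options actually passed on the command line):

```conf
# postgresql.auto.conf, written by pg_basebackup -R
primary_conninfo = 'user=replicator host=primary-ip port=5432'
```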
Start the replica:
```sh
service postgresql start
```
Verify replication on the primary:
```sh
su - postgres -c "psql -c \"SELECT client_addr, state, sent_lsn, replay_lsn, replay_lag FROM pg_stat_replication;\""
```
Promoting a Replica
If the primary fails, promote the replica:
```sh
su - postgres -c "pg_ctl promote -D /var/db/postgres/data16"
```
The replica becomes a standalone primary. Update application connection strings to point to the new primary.
High Availability with Patroni
For automatic failover, use Patroni with etcd:
```sh
pkg install py311-patroni etcd
```
Patroni manages the PostgreSQL cluster, monitors health, and performs automatic failover. Configuration is beyond the scope of this single-server guide, but the FreeBSD packages include working service scripts.
A basic Patroni setup requires:
- Three nodes (2 PostgreSQL + 1 witness, or 3 PostgreSQL)
- etcd running on each node for distributed consensus
- Patroni configuration on each PostgreSQL node
- A virtual IP or load balancer for client connections
Performance Verification
After tuning, verify performance:
pgbench ships with the PostgreSQL packages installed at the start of this guide; there is no separate pg_bench package.

```sh
su - postgres -c "pgbench -i -s 100 appdb"
su - postgres -c "pgbench -c 50 -j 4 -T 60 appdb"
```
Expected results on a modern SSD system with 32 GB RAM:
| Metric | Before Tuning | After Tuning |
|--------|--------------|--------------|
| TPS (read-write) | 1,200 | 4,800 |
| TPS (read-only) | 15,000 | 48,000 |
| Avg latency | 42 ms | 10 ms |
FAQ
Q: Why FreeBSD for PostgreSQL instead of Linux?
A: ZFS integration is the primary reason. FreeBSD's ZFS support is mature and first-class. The combination of ZFS snapshots for backups, compression for storage efficiency, and checksumming for data integrity makes it ideal for databases.
Q: How much RAM should I give shared_buffers?
A: 25% of total RAM is the standard recommendation. Going above 25% rarely helps because the OS filesystem cache (or ZFS ARC) handles the rest. On ZFS with primarycache=metadata, shared_buffers can be larger (up to 40%).
Q: Should I use ZFS snapshots or pg_basebackup for backups?
A: Both. ZFS snapshots give you instant point-in-time copies for quick recovery. pg_basebackup with WAL archiving gives you portable backups that can be restored on any server, including point-in-time recovery.
Q: How do I tune for an OLAP (analytics) workload?
A: Increase work_mem (256 MB-1 GB), maintenance_work_mem (4 GB), and max_parallel_workers_per_gather (8-16). Set random_page_cost = 1.0 on SSDs. Use SET work_mem per-session for large analytical queries rather than globally.
Q: Is PgBouncer necessary?
A: For applications with fewer than 100 concurrent connections, no. For microservices, connection-heavy applications, or serverless backends that open many short-lived connections, PgBouncer is essential.
Q: How do I handle PostgreSQL major version upgrades?
A: Use pg_upgrade for in-place upgrades or pg_dump/pg_restore for logical migration. Test the upgrade on a replica first. Note that FreeBSD's postgresql server packages for different major versions conflict with one another, so pg_upgrade requires having both versions' binaries available at once (for example, stage the old binaries aside before installing postgresql17-server); when that is impractical, fall back to dump and restore.
Q: What about connection encryption?
A: Enable SSL in postgresql.conf: ssl = on, ssl_cert_file, ssl_key_file. Use Let's Encrypt certificates or self-signed certificates for internal networks. Enforce SSL in pg_hba.conf with hostssl instead of host.
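A minimal sketch of those settings; server.crt and server.key are PostgreSQL's default file names, resolved relative to the data directory, and the 10.0.0.0/24 network matches the pg_hba.conf earlier in this guide:

```conf
# postgresql.conf
ssl = on
ssl_cert_file = 'server.crt'
ssl_key_file = 'server.key'

# pg_hba.conf: hostssl matches only TLS connections
hostssl all all 10.0.0.0/24 scram-sha-256
```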