# FreeBSD Server Monitoring Tools: Complete Guide (2026)
FreeBSD ships with monitoring tools that most sysadmins never learn exist. Before reaching for Prometheus or Zabbix, you should know what the base system already gives you -- and exactly when it stops being enough.
This guide covers the entire monitoring stack on FreeBSD: built-in command-line tools for live diagnostics, full-featured monitoring platforms for fleet management, log monitoring with syslog and newsyslog, network analysis, performance tuning indicators, and a step-by-step Prometheus + Grafana setup. Every command is FreeBSD-specific. No systemd. No apt-get. Real pkg install commands, real /usr/local/etc/ config paths, real rc.conf entries.
Whether you run one FreeBSD server or a hundred, this guide gives you the tools and the knowledge to monitor all of it.
---
## Built-in FreeBSD Monitoring Tools
Every FreeBSD system includes powerful monitoring utilities out of the box. No packages required. These are your first line of defense when something goes wrong -- and the tools you should master before installing anything else.
### top -- Process and CPU Overview

```sh
top -SHP
```

The -S flag includes system processes, -H shows threads, and -P displays per-CPU usage. A typical view on a production web server:
```
last pid: 48372;  load averages: 2.15, 1.87, 1.62  up 124+03:12:47  14:23:01
187 processes: 3 running, 184 sleeping
CPU 0: 12.5% user,  0.0% nice,  3.1% system,  0.4% interrupt, 84.0% idle
CPU 1: 18.2% user,  0.0% nice,  4.7% system,  0.8% interrupt, 76.3% idle
Mem: 2104M Active, 5765M Inact, 1203M Wired, 412M Buf, 7248M Free
ARC: 3812M Total, 1509M MFU, 1847M MRU, 12M Anon, 189M Header, 255M Other
Swap: 4096M Total, 0K Used, 4096M Free
```
Key things to watch: the ARC line shows ZFS adaptive replacement cache usage. If Free memory is low but Inact is high, that is normal -- FreeBSD aggressively caches to inactive memory, and it is reclaimable on demand.
### vmstat -- Virtual Memory and System Activity

```sh
vmstat -w 2
```

Prints stats every 2 seconds with wide output:

```
procs    memory       page                       disks      faults        cpu
r b w    avm    fre   flt  re  pi  po  fr  sr  da0 nda0   in   sy   cs  us sy id
1 0 0   3.2G   7.1G   215   0   0   0   0   0    3    1  412 2847 1923   8  2 90
2 0 0   3.2G   7.1G   187   0   0   0   0   0   12    4  538 3102 2104  12  3 85
```
What matters:
- **r** -- runnable processes. Consistently higher than your CPU count means CPU-bound.
- **pi/po** -- page-in/page-out. Non-zero po means you are swapping. Investigate immediately.
- **flt** -- total page faults, most of which are harmless minor faults. Worry when high flt coincides with pi/po activity -- that combination indicates memory pressure.
- **cs** -- context switches. Spikes correlate with high concurrency or excessive fork activity.
### systat -- Curses-based System Monitor

```sh
systat -vmstat 2
```

systat is an underrated tool that combines CPU, memory, disk, and network stats in a single terminal view. Other useful modes:

```sh
systat -ifstat 2    # network interface throughput
systat -iostat 2    # disk I/O
systat -tcp 2       # TCP connection stats
systat -netstat 2   # active network connections
```
The -ifstat mode is particularly valuable for spotting network throughput anomalies across all interfaces simultaneously.
### iostat -- Disk I/O Performance

```sh
iostat -dxz 2
```

```
                        extended device statistics
device     r/s    w/s     kr/s     kw/s  ms/r  ms/w  ms/o  ms/t  qlen  %b
ada0      45.2   12.8   1024.0    256.0   0.4   1.2   0.0   0.6     1   3
nvd0     312.4  187.6   8192.0   4096.0   0.1   0.1   0.0   0.1     2   8
```
- **ms/r** and **ms/w** -- latency per operation. On NVMe, anything above 2ms is suspicious. On spinning disks, 10ms+ is expected under load.
- **%b** -- percent busy. Above 80% means the disk is a bottleneck.
- **qlen** -- queue length. Sustained values above the device queue depth indicate saturation.
For a real-time curses view of all GEOM providers, use gstat (add -a to show only providers that are at least 0.1% busy).
### netstat -- Network Connections and Statistics

```sh
netstat -an -p tcp
```

To get a summary of connection states on a web server:

```sh
netstat -an -p tcp | awk '/^tcp/ {print $6}' | sort | uniq -c | sort -rn
```

```
 847 ESTABLISHED
 312 TIME_WAIT
  23 CLOSE_WAIT
  12 LISTEN
   4 SYN_RCVD
```
High TIME_WAIT is usually normal. High CLOSE_WAIT means your application is not closing connections properly. A sudden spike in SYN_RCVD suggests a SYN flood attack.
Use sockstat -4l to map listening sockets to owning processes -- the FreeBSD equivalent of ss -tlnp on Linux.
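When CLOSE_WAIT climbs, the next question is which process owns those sockets. Here is a quick tally by command name -- a sketch that assumes sockstat's default column layout, with the command name in column 2:

```sh
# count_by_command: tally connected TCP sockets per command name.
# Feed it sockstat output: sockstat -4c -P tcp | count_by_command
count_by_command() {
    awk 'NR > 1 { n[$2]++ } END { for (c in n) print n[c], c }' | sort -rn
}
```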
---
## Package Monitoring Solutions
Built-in tools handle live diagnostics. For historical metrics, dashboards, alerting, and fleet management, you need dedicated monitoring software.
### Prometheus + node_exporter
The industry standard for metrics collection. Prometheus scrapes targets at defined intervals and stores time-series data. On FreeBSD, the node_exporter package includes FreeBSD-specific collectors for ZFS, devstat, and more.
```sh
pkg install prometheus node_exporter
```
Enable and configure:
```sh
sysrc node_exporter_enable="YES"
sysrc node_exporter_args="--collector.zfs --collector.cpu --collector.loadavg --collector.filesystem --web.listen-address=127.0.0.1:9100"
sysrc prometheus_enable="YES"
service node_exporter start
service prometheus start
```
Prometheus configuration lives at /usr/local/etc/prometheus.yml. See the full setup walkthrough in the "Setting Up a Complete Stack" section below.
**Best for**: teams that want flexible querying (PromQL), long-term metrics storage, and a huge ecosystem of exporters.
### Zabbix
Zabbix is a full-featured enterprise monitoring platform with auto-discovery, SNMP support, and built-in notification methods. FreeBSD has native packages for both server and agent.
```sh
pkg install zabbix7-server zabbix7-agent
```
Enable services:
```sh
sysrc zabbix_server_enable="YES"
sysrc zabbix_agentd_enable="YES"
service zabbix_server start
service zabbix_agentd start
```
The server configuration lives at /usr/local/etc/zabbix7/zabbix_server.conf. The agent config is at /usr/local/etc/zabbix7/zabbix_agentd.conf. Zabbix requires a database backend -- PostgreSQL or MySQL.
**Best for**: organizations already invested in Zabbix, environments with heavy SNMP monitoring (network switches, routers), and teams that want a single pane of glass with built-in alerting.
### Nagios
Nagios is the veteran of open-source monitoring. It runs well on FreeBSD and has a massive library of check plugins.
```sh
pkg install nagios4 nagios-plugins
```
```sh
sysrc nagios_enable="YES"
service nagios start
```
Configuration lives in /usr/local/etc/nagios/. Nagios uses a check-based model: it runs a plugin, the plugin returns OK/WARNING/CRITICAL/UNKNOWN, and Nagios acts on the result.
**Best for**: environments that need simple up/down service checks, legacy infrastructure, and teams comfortable with Nagios's configuration syntax.
### Munin
Munin takes the opposite approach to Nagios -- it focuses on graphing metrics over time with minimal configuration. A Munin master collects data from Munin nodes and generates HTML graphs.
```sh
pkg install munin-master munin-node
```
```sh
sysrc munin_node_enable="YES"
service munin-node start
```
The node config is at /usr/local/etc/munin/munin-node.conf. Plugins live in /usr/local/share/munin/plugins/ and are activated by symlinking into /usr/local/etc/munin/plugins/.
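Activating a plugin is just a symlink. A sketch (the df plugin ships with munin-node; the helper function name is ours):

```sh
# enable_plugin: link a plugin from the library dir into the active dir
# $1: plugin name, $2: plugin library dir, $3: active-plugins dir
enable_plugin() {
    ln -s "$2/$1" "$3/$1"
}

# On a live system:
# enable_plugin df /usr/local/share/munin/plugins /usr/local/etc/munin/plugins
# service munin-node restart
```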
**Best for**: small environments where you want quick historical graphs without the complexity of Prometheus or Zabbix.
### Quick Comparison: FreeBSD Monitoring Tools
| Tool | Type | Setup Effort | Scalability | Alerting | Dashboards | FreeBSD Support |
|------|------|-------------|-------------|----------|------------|-----------------|
| Prometheus + Grafana | Pull-based metrics | Medium | Excellent | Via Alertmanager | Excellent (Grafana) | Native pkg, ZFS collectors |
| Zabbix | Agent-based, SNMP | High | Excellent | Built-in | Built-in | Native pkg, agent + server |
| Nagios | Check-based | Medium | Good | Built-in | Basic (addons needed) | Native pkg |
| Munin | Agent-based graphing | Low | Limited | Plugin-based | Built-in (static HTML) | Native pkg |
| Netdata | Real-time metrics | Very Low | Single server | Built-in | Built-in (web UI) | Native pkg |
| Built-in tools | CLI | None | N/A (local only) | None | None | Part of base system |
---
## Log Monitoring
Metrics tell you what is happening. Logs tell you why. FreeBSD has solid log management built into the base system.
### syslog -- Centralized System Logging
FreeBSD uses syslogd(8) by default. Configuration lives at /etc/syslog.conf. The default setup logs to /var/log/ with separate files for auth, mail, cron, and general messages.
To configure syslog as a centralized log receiver from other FreeBSD servers:
```sh
# /etc/syslog.conf -- add at the top
+*
*.*                                     /var/log/remote/all.log
```
Enable network listening in /etc/rc.conf:
```sh
sysrc syslogd_flags="-a 10.0.0.0/24:* -v -v"
service syslogd restart
```
The -a flag restricts which networks can send logs. For production, always restrict to your management network.
To send logs from a client to the central server, add to /etc/syslog.conf on the client:
```
*.*                                     @10.0.0.1
```
### newsyslog -- Automated Log Rotation
FreeBSD uses newsyslog(8) instead of logrotate. Configuration is in /etc/newsyslog.conf. Each line defines rotation rules for a log file:
```
# logfilename        [owner:group]  mode  count  size   when  flags [/pid_file] [sig_num]
/var/log/messages                   644   7      1000   *     JC
/var/log/auth.log                   600   7      500    *     JC
/var/log/all.log                    600   7      *      @T00  JC
```
Key fields:
- **count** -- number of rotated files to keep.
- **size** -- rotate when file reaches this size in KB. * means ignore size.
- **when** -- time-based rotation. @T00 means midnight daily. $W0 means weekly on Sunday.
- **flags** -- J for bzip2 compression, C for create the file if missing.
To add custom log rotation for an application:
```
/var/log/myapp.log   myapp:myapp    640   30     10000  *     JC    /var/run/myapp.pid  30
```
This rotates when the file hits 10 MB, keeps 30 compressed archives, and sends signal 30 (USR1) to the application PID so it reopens the log file.
### Monitoring Log Content
For real-time log watching:
```sh
tail -F /var/log/messages
```
For pattern-based alerting from logs, a simple cron approach:
```sh
#!/bin/sh
# /usr/local/bin/log_alert.sh -- mail when new error lines appear
now=$(grep -cE 'error|panic|fatal' /var/log/messages)
last=$(cat /tmp/log_errors_last 2>/dev/null || echo 0)
if [ "$now" -gt "$last" ]; then
    echo "$((now - last)) new error(s) in /var/log/messages" |
        mail -s "Log Alert on $(hostname)" ops@example.com
fi
echo "$now" > /tmp/log_errors_last
```
For production environments, consider deploying Promtail + Loki (part of the Grafana stack) or forwarding logs to a centralized ELK stack.
---
## Network Monitoring
FreeBSD's networking stack is one of its strengths. The monitoring tools match.
### tcpdump -- Packet Capture and Analysis
tcpdump is part of the FreeBSD base system. No installation needed.
Capture HTTP traffic on a specific interface:
```sh
tcpdump -i ix0 -n 'tcp port 80 or tcp port 443' -c 100
```
Capture and save to a file for later analysis in Wireshark:
```sh
tcpdump -i ix0 -w /tmp/capture.pcap -s 0 'host 10.0.0.5'
```
Monitor DNS queries:
```sh
tcpdump -i ix0 -n 'udp port 53' -l
```
The -n flag skips DNS resolution of captured addresses (faster and avoids recursive lookups). Use -l for line-buffered output when piping.
### NetFlow with softflowd
NetFlow provides aggregated traffic flow data -- which hosts are talking, on which ports, and how much data they exchange. This is invaluable for capacity planning and security analysis.
```sh
pkg install softflowd
```
Configure softflowd to monitor an interface and export flows:
```sh
sysrc softflowd_enable="YES"
sysrc softflowd_interface="ix0"
sysrc softflowd_dest="127.0.0.1:9995"
service softflowd start
```
Pair with nfsen or ntopng to visualize flow data.
### ntopng -- Web-based Network Traffic Analysis
ntopng provides a real-time web dashboard for network traffic analysis, including protocol breakdown, top talkers, and flow analysis.
```sh
pkg install ntopng
```
```sh
sysrc ntopng_enable="YES"
service ntopng start
```
Access the web interface at http://your-server:3000. ntopng can ingest NetFlow/sFlow data from softflowd or directly capture traffic from an interface.
**For hardened servers**, bind ntopng to localhost and access via SSH tunnel:
```sh
ssh -L 3000:127.0.0.1:3000 user@your-server
```
See our [FreeBSD server hardening guide](/blog/hardening-freebsd-server/) for more on restricting service access.
---
## Performance Tuning Indicators
Monitoring is not just about knowing when things break. The following indicators tell you when your FreeBSD server is approaching its limits -- before users notice.
### CPU Saturation
```sh
sysctl kern.cp_time
vmstat -w 2
```
Watch the r (runnable) column in vmstat. If r consistently exceeds your CPU core count, you are CPU-saturated. Also check the sy (system) percentage in top -- if system time exceeds 20% of total CPU, you may have kernel contention (often from excessive context switching or lock contention).
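kern.cp_time is a monotonically increasing tick counter (user, nice, system, interrupt, idle), so a percentage needs two samples. A sketch (the function name is ours):

```sh
# idle_pct: idle percentage between two kern.cp_time samples
# $1, $2: two samples, each "user nice sys intr idle"
idle_pct() {
    echo "$1 $2" | awk '{
        d_total = ($6 + $7 + $8 + $9 + $10) - ($1 + $2 + $3 + $4 + $5)
        d_idle  = $10 - $5
        printf "%.1f\n", d_idle * 100 / d_total
    }'
}

# On a live FreeBSD system:
# s1=$(sysctl -n kern.cp_time); sleep 2; s2=$(sysctl -n kern.cp_time)
# idle_pct "$s1" "$s2"
idle_pct "100 0 50 10 840" "200 0 100 20 1680"   # prints 84.0
```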
### Memory Pressure
```sh
vmstat -w 2                        # watch pi/po columns
sysctl vm.stats.vm.v_swappgsin
sysctl vm.stats.vm.v_swappgsout
```
Any sustained page-out (po > 0) activity is a red flag. Also monitor ZFS ARC size -- if ARC is being squeezed by application memory demand, both application performance and disk I/O will suffer.
```sh
sysctl vfs.zfs.arc_max
sysctl kstat.zfs.misc.arcstats.size
```
If arcstats.size is well below arc_max, ARC has been pressured and has shrunk. Consider reducing vfs.zfs.arc_max to leave more memory for applications, or add RAM.
### Disk I/O Saturation
```sh
iostat -dxz 2
gstat
```
Key thresholds:
- NVMe: ms/r or ms/w above 5ms under load is degraded.
- SATA SSD: above 10ms is degraded.
- Spinning disk: above 20ms is normal-to-degraded under load.
- %b above 80% on any device means it is the bottleneck.
### Network Throughput and Errors
```sh
netstat -ibdh
systat -ifstat 1
```
Check for non-zero values in Ierrs, Idrop, and Oerrs columns. These indicate driver bugs, hardware faults, or ring buffer overflows. If you see drops on a high-throughput interface, increase ring buffer sizes:
```sh
# Set in /boot/loader.conf (not runtime-changeable):
# dev.ix.0.num_rx_desc=4096
# dev.ix.0.num_tx_desc=4096
```
### ZFS ARC Hit Ratio
```sh
hits=$(sysctl -n kstat.zfs.misc.arcstats.hits)
misses=$(sysctl -n kstat.zfs.misc.arcstats.misses)
echo "scale=2; $hits * 100 / ($hits + $misses)" | bc
```
A healthy ARC hit ratio is above 90%. Below 85%, investigate whether ARC is being squeezed by other memory consumers. See the [ZFS on FreeBSD guide](/blog/zfs-freebsd-guide/) for tuning recommendations.
---
## Setting Up a Complete Stack: Prometheus + Grafana
This section walks through a production-ready monitoring stack from scratch. By the end, you will have Prometheus collecting metrics, Grafana displaying dashboards, and Alertmanager sending notifications.
### Step 1: Install Everything

```sh
pkg install prometheus node_exporter grafana alertmanager
```
### Step 2: Configure node_exporter

```sh
sysrc node_exporter_enable="YES"
sysrc node_exporter_args="--collector.zfs --collector.cpu --collector.loadavg --collector.filesystem --web.listen-address=127.0.0.1:9100"
service node_exporter start
```
Verify:
```sh
fetch -qo - http://127.0.0.1:9100/metrics | head -20
```
### Step 3: Configure Prometheus
Edit /usr/local/etc/prometheus.yml:
```yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  - "/usr/local/etc/prometheus/rules/*.yml"

alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - '127.0.0.1:9093'

scrape_configs:
  - job_name: 'freebsd-node'
    static_configs:
      - targets:
          - '127.0.0.1:9100'
        labels:
          instance: 'web01.example.com'
```
```sh
sysrc prometheus_enable="YES"
sysrc prometheus_args="--storage.tsdb.retention.time=90d --web.listen-address=127.0.0.1:9090"
service prometheus start
```
### Step 4: Configure Alert Rules
Create /usr/local/etc/prometheus/rules/freebsd-alerts.yml:
```yaml
groups:
  - name: freebsd-host
    rules:
      - alert: HighCpuUsage
        expr: 100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 85
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "High CPU on {{ $labels.instance }}"
          description: "CPU at {{ $value | printf \"%.1f\" }}% for 10+ minutes."

      - alert: DiskAlmostFull
        expr: (node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}) * 100 < 10
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Disk almost full on {{ $labels.instance }}"

      - alert: SwapInUse
        expr: node_memory_swap_used_bytes > 0
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Swap in use on {{ $labels.instance }}"

      - alert: ZfsPoolDegraded
        expr: node_zfs_zpool_state{state!="online"} > 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "ZFS pool {{ $labels.zpool }} degraded on {{ $labels.instance }}"
```
### Step 5: Configure Alertmanager
Edit /usr/local/etc/alertmanager/alertmanager.yml:
```yaml
global:
  smtp_smarthost: 'smtp.example.com:587'
  smtp_from: 'alerts@example.com'
  smtp_auth_username: 'alerts@example.com'
  smtp_auth_password: 'your-smtp-password'
  smtp_require_tls: true

route:
  group_by: ['alertname', 'instance']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
  receiver: 'ops-team'

receivers:
  - name: 'ops-team'
    email_configs:
      - to: 'ops@example.com'
```
```sh
sysrc alertmanager_enable="YES"
sysrc alertmanager_args="--web.listen-address=127.0.0.1:9093"
service alertmanager start
```
### Step 6: Configure Grafana
```sh
sysrc grafana_enable="YES"
service grafana start
```
Grafana listens on port 3000 (default login: admin/admin). Add Prometheus as a data source:
1. Navigate to Connections > Data Sources > Add data source.
2. Select Prometheus.
3. Set URL to http://127.0.0.1:9090.
4. Click Save & Test.
Import the **Node Exporter Full** dashboard (ID: 1860) for an instant overview. Build custom panels with PromQL queries for FreeBSD-specific metrics like ZFS ARC hit ratio:
```promql
node_zfs_arc_hits / (node_zfs_arc_hits + node_zfs_arc_misses) * 100
```
### Step 7: Secure the Stack
All monitoring services should be bound to localhost or a private network. If you use PF:
```
# /etc/pf.conf
monitoring_ports = "{ 9090, 9093, 9100, 3000 }"
pass in on $int_if proto tcp from 10.0.0.0/24 to any port $monitoring_ports
block in on $ext_if proto tcp to any port $monitoring_ports
```
```sh
pfctl -f /etc/pf.conf
```
See the [FreeBSD server hardening guide](/blog/hardening-freebsd-server/) for a complete PF ruleset.
---
## Best Practices
**Start with the base system.** Master top, vmstat, iostat, and netstat before installing anything. You can diagnose 80% of problems with tools that are already there.
**Bind monitoring services to localhost.** Prometheus, Grafana, and Alertmanager should never be exposed to the public internet. Use SSH tunnels or a reverse proxy with authentication for remote access. See our [nginx on FreeBSD guide](/blog/nginx-freebsd-production-setup/) for reverse proxy configuration.
**Set meaningful alert thresholds.** An alert that fires constantly gets ignored. Start conservative (CPU > 85% for 10 minutes, disk < 10% free for 5 minutes), then tighten based on your baseline.
**Monitor ZFS separately.** ZFS pool degradation is a critical event that should page someone immediately. ARC performance should be tracked but is usually informational, not alertable.
**Keep retention reasonable.** Prometheus with 90-day retention at 15-second scrape intervals uses roughly 1-2 GB per month per server. For longer retention, consider Thanos or VictoriaMetrics as a long-term storage backend.
**Separate your monitoring server.** Running Prometheus and Grafana on the same host they monitor creates a blind spot -- if the host goes down, your monitoring goes with it. For production, run the monitoring stack on a dedicated server or VM.
**Automate everything.** Use sysrc for rc.conf entries (never hand-edit), keep your Prometheus rules in version control, and provision Grafana data sources via YAML files in /usr/local/etc/grafana/provisioning/.
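A minimal provisioned data source looks like this (the file name is arbitrary; Grafana loads every .yml in the provisioning/datasources directory):

```yaml
# /usr/local/etc/grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://127.0.0.1:9090
    isDefault: true
```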
**Test your alerts.** An untested alert is the same as no alert. Temporarily lower thresholds to trigger an alert, verify it routes correctly through Alertmanager, and confirm the notification arrives.
---
## Conclusion
FreeBSD gives you more monitoring capability out of the box than most people realize. The base system tools -- top, vmstat, iostat, systat, netstat, sockstat, gstat -- handle live diagnostics without installing a single package.
For anything beyond SSH-and-look debugging, the recommended production stack is:
```sh
pkg install prometheus node_exporter grafana alertmanager
```
That gives you metrics collection, long-term storage, dashboards, and alerting. Add Zabbix if you need SNMP monitoring for network hardware. Add Netdata if you want instant dashboards on a single server with zero configuration. Add ntopng if network traffic analysis is a priority.
The complete setup takes under 30 minutes. The base system tools are already waiting for you.
---
## FAQ
### What is the best FreeBSD monitoring tool for a single server?
For a single server, Netdata (pkg install netdata) gives you the fastest time-to-dashboard -- real-time metrics, ZFS monitoring, and hundreds of auto-detected charts with zero configuration. If you need alerting and historical data, Prometheus + Grafana is still the better long-term choice even for one server, because it scales without rearchitecting when you add more.
### How do I check if my FreeBSD server is running out of memory?
Run top and look at the memory line. On FreeBSD, Inact (inactive) memory is reclaimable cache -- it is not "used" in the Linux sense. Worry when Free + Inact together are very low, or when Swap Used is non-zero. The command vmstat -w 2 shows pi and po columns -- any sustained page-out activity means real memory pressure. See the Performance Tuning Indicators section above for the full checklist.
### Does Prometheus node_exporter work the same on FreeBSD as on Linux?
Mostly, with important differences. Some collectors are Linux-only (like systemd and cgroups). The FreeBSD port of node_exporter includes FreeBSD-specific collectors for ZFS (--collector.zfs), devstat, and others. The ZFS collector exposes ARC statistics, pool health, and dataset metrics that do not exist on Linux. Memory metrics also differ -- FreeBSD reports Active, Inactive, Wired, and Free, not the Linux categories.
### Should I use Zabbix or Prometheus on FreeBSD?
It depends on your environment. Choose **Prometheus + Grafana** for greenfield deployments, PromQL flexibility, Kubernetes integration, and the largest community ecosystem. Choose **Zabbix** if you already use it, need agentless SNMP monitoring for network switches and routers, or want built-in notification methods without a separate Alertmanager. Both have native FreeBSD packages and run well on the platform. See the Quick Comparison table above for a detailed breakdown.
### How do I monitor FreeBSD jails from the host system?
Use rctl(8) for per-jail resource accounting. Enable RACCT in the bootloader with sysrc -f /boot/loader.conf kern.racct.enable=1 and reboot. Then run rctl -u jail:jailname to see CPU, memory, process count, and I/O metrics for any jail. To feed this into Prometheus, write a cron script that dumps jail metrics to .prom files in a textfile collector directory, then configure node_exporter with --collector.textfile --collector.textfile.directory=/var/tmp/node_exporter.
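A minimal textfile-collector script might look like this sketch. It assumes rctl -u prints one resource=value pair per line and that /var/tmp/node_exporter is your textfile directory; the function and metric names are ours:

```sh
#!/bin/sh
# rctl_to_prom: turn resource=value lines into Prometheus samples
# $1: jail name (becomes the "jail" label)
rctl_to_prom() {
    awk -F= -v jail="$1" '
        NF == 2 { printf "jail_%s{jail=\"%s\"} %s\n", $1, jail, $2 }'
}

# On a live system (run from cron):
# jail=www1
# rctl -u "jail:${jail}" | rctl_to_prom "$jail" \
#     > "/var/tmp/node_exporter/jail_${jail}.prom.tmp" \
#   && mv "/var/tmp/node_exporter/jail_${jail}.prom.tmp" \
#         "/var/tmp/node_exporter/jail_${jail}.prom"
```

Writing to a .tmp file and renaming keeps node_exporter from reading a half-written metrics file.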