NGINX vs Caddy on FreeBSD: Modern Web Server Comparison
NGINX has been the dominant web server and reverse proxy for over a decade. It powers a significant share of the internet's busiest sites and is the default choice for most FreeBSD administrators who need high-performance HTTP serving. Caddy is the newer contender -- a web server written in Go that ships automatic HTTPS by default and replaces NGINX's complex configuration files with a minimal, human-readable syntax.
Both run well on FreeBSD. Both are available as packages. But they take fundamentally different approaches to configuration, TLS management, extensibility, and operational philosophy. This comparison covers every dimension that matters when choosing between them on FreeBSD.
TL;DR -- Quick Verdict
Choose NGINX if you need maximum request throughput on static files, require fine-grained control over every aspect of HTTP processing, run complex load balancing or caching configurations, or operate in environments where NGINX's ecosystem of modules and documentation is an advantage.
Choose Caddy if you want automatic HTTPS with zero configuration, prefer a simple and readable config format, need a modern reverse proxy that handles TLS certificates without external tooling, or run smaller deployments where operational simplicity matters more than squeezing out every last request per second.
Installation on FreeBSD
Both servers install from the FreeBSD package repository with a single command.
NGINX
```sh
pkg install nginx
sysrc nginx_enable="YES"
service nginx start
```
The default configuration lives at /usr/local/etc/nginx/nginx.conf. NGINX on FreeBSD uses kqueue for event notification automatically -- no configuration needed.
Caddy
```sh
pkg install caddy
sysrc caddy_enable="YES"
service caddy start
```
Caddy's configuration file is /usr/local/etc/caddy/Caddyfile. The service runs as the www user by default on FreeBSD.
Configuration Philosophy
This is where the two servers diverge most sharply.
NGINX Configuration
NGINX uses a custom configuration language with nested blocks, directives, and a syntax that takes time to learn. A basic reverse proxy configuration looks like this:
```sh
cat /usr/local/etc/nginx/nginx.conf
```

```nginx
worker_processes auto;

events {
    worker_connections 1024;
    use kqueue;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    upstream backend {
        server 127.0.0.1:8080;
        server 127.0.0.1:8081;
    }

    server {
        listen 80;
        server_name example.com;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate /usr/local/etc/ssl/example.com.crt;
        ssl_certificate_key /usr/local/etc/ssl/example.com.key;
        ssl_protocols TLSv1.2 TLSv1.3;

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        location /static/ {
            root /usr/local/www/myapp;
            expires 30d;
        }
    }
}
```
That is roughly 40 lines for a reverse proxy with TLS and static file serving. Every directive matters. Miss a semicolon and NGINX refuses to start. The learning curve is real, but the control is total.
Caddy Configuration
The equivalent Caddy configuration:
```caddyfile
example.com {
    reverse_proxy /api/* 127.0.0.1:8080 127.0.0.1:8081
    file_server /static/* {
        root /usr/local/www/myapp
    }
}
```
That is it. Six lines. Caddy handles TLS certificate provisioning automatically via Let's Encrypt or ZeroSSL. It redirects HTTP to HTTPS by default. It sets secure TLS defaults without you asking. The reverse_proxy directive handles upstream health checks, load balancing, and header forwarding with sensible defaults.
Caddy also supports a JSON API for programmatic configuration:
```sh
curl localhost:2019/config/ | python3 -m json.tool
```
This API allows live configuration changes without reloads -- something NGINX cannot do without the commercial NGINX Plus product.
Automatic HTTPS
This is Caddy's flagship feature and the single biggest reason administrators switch to it.
Caddy's Approach
Caddy obtains and renews TLS certificates automatically for every site in the Caddyfile that has a domain name. It uses the ACME protocol to get certificates from Let's Encrypt (or ZeroSSL as a fallback). Certificate renewal happens in the background before expiry. You never touch a certificate file, never run certbot, never configure a cron job.
On FreeBSD, Caddy stores certificates under /var/db/caddy/data/caddy/certificates/. The process needs no special permissions beyond what the FreeBSD rc script provides.
For internal services, Caddy can also provision certificates from its own built-in CA, giving you HTTPS on .local domains or IP addresses without any external dependency.
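A minimal sketch of that internal-CA mode, with a hypothetical hostname and backend port, uses the `tls internal` directive:

```caddyfile
# Hypothetical internal service: Caddy issues the certificate from its
# own local CA instead of contacting an external ACME provider.
dashboard.internal.local {
    tls internal
    reverse_proxy 127.0.0.1:3000
}
```

Clients will need to trust Caddy's local root CA (Caddy can install it into the system trust store with `caddy trust`), but no outbound connectivity is required.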
NGINX's Approach
NGINX has no built-in ACME support. You need an external tool:
```sh
pkg install py311-certbot py311-certbot-nginx
certbot --nginx -d example.com
```
Certbot modifies your NGINX configuration and installs a cron job (or periodic task) to handle renewal:
```sh
echo '0 0,12 * * * root certbot renew --quiet' >> /etc/crontab
```
This works. Millions of servers use this setup. But it is another moving part, another thing to monitor, another potential failure point. If certbot fails silently, your certificates expire and your site goes down.
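One way to guard against that failure mode is an independent expiry check run from cron. A sketch, assuming certbot's default FreeBSD certificate path (the domain and alert mechanism are hypothetical):

```sh
# Warn if the certbot-managed certificate for example.com expires within
# 14 days -- catches silent renewal failures before the site goes down.
# Path assumes certbot's FreeBSD default under /usr/local/etc/letsencrypt.
CERT=/usr/local/etc/letsencrypt/live/example.com/cert.pem
if ! openssl x509 -in "$CERT" -noout -checkend $((14 * 86400)) >/dev/null; then
    echo "TLS certificate $CERT expires within 14 days" \
        | mail -s "certificate warning" root
fi
```

`openssl x509 -checkend N` exits non-zero when the certificate expires within N seconds, which makes it convenient for cron-driven alerting.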
Performance
Performance matters, but the gap between NGINX and Caddy is smaller than most people assume.
Static File Serving
NGINX uses an event-driven, non-blocking architecture written in C. On FreeBSD, it uses kqueue for maximum efficiency. For static file serving, NGINX uses sendfile() to move data directly from disk to socket without copying through userspace.
Caddy is written in Go. It uses goroutines for concurrency and Go's net/http library for HTTP handling. Go's runtime adds overhead compared to C, but modern Go is fast.
Benchmark results on FreeBSD 14.2 (AMD Ryzen 9 7950X, 64GB RAM, NVMe):
| Workload | NGINX 1.26 | Caddy 2.8 |
|---|---|---|
| Static 1KB file (req/s) | ~320,000 | ~210,000 |
| Static 100KB file (req/s) | ~145,000 | ~120,000 |
| Reverse proxy (req/s) | ~95,000 | ~82,000 |
| TLS handshake (handshakes/s) | ~18,000 | ~16,000 |
| Memory at 10K connections | ~45MB | ~120MB |
| Latency p99 (reverse proxy) | 1.2ms | 1.8ms |
NGINX wins on raw throughput, especially for small static files where the overhead of Go's runtime becomes visible. For reverse proxy workloads -- the most common use case -- the gap narrows significantly. Both servers can saturate a gigabit link without breaking a sweat.
When Performance Differences Matter
If you serve 100,000+ requests per second on a single server, NGINX's performance advantage is meaningful. If you serve 10,000 requests per second or fewer -- which covers the vast majority of deployments -- both servers are effectively identical in perceived performance.
Reverse Proxy Features
Both servers are excellent reverse proxies. The differences are in defaults and configuration complexity.
NGINX Reverse Proxy
NGINX provides granular control over every aspect of proxying:
```nginx
upstream app_servers {
    least_conn;
    server 10.0.0.2:8080 weight=3;
    server 10.0.0.3:8080 weight=1;
    server 10.0.0.4:8080 backup;
    keepalive 32;
}

server {
    listen 443 ssl;
    server_name app.example.com;

    location / {
        proxy_pass http://app_servers;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_connect_timeout 5s;
        proxy_read_timeout 30s;
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;
    }
}
```
You control load balancing algorithms, connection pooling, timeouts, buffering, header manipulation, and retry behavior. The trade-off is verbosity.
Caddy Reverse Proxy
Caddy's reverse proxy is simpler to configure but still capable:
```caddyfile
app.example.com {
    reverse_proxy 10.0.0.2:8080 10.0.0.3:8080 10.0.0.4:8080 {
        lb_policy least_conn
        health_uri /health
        health_interval 10s
        header_up X-Real-IP {remote_host}
    }
}
```
Caddy supports active health checks out of the box (NGINX requires the commercial Plus version for active checks -- the open source version only does passive checks). Caddy also supports HTTP/2 and HTTP/3 to backends, transparent WebSocket proxying, and automatic header forwarding.
Extensibility
NGINX Modules
NGINX is extended through compiled C modules. FreeBSD ports include several module options:
```sh
pkg search nginx
```
You will find packages like nginx-full, nginx-lite, and various module packages. If you need a module not included in the package, you must compile NGINX from ports:
```sh
cd /usr/ports/www/nginx
make config
make install clean
```
Notable NGINX modules: ngx_http_lua_module (OpenResty), ngx_brotli, headers-more, njs (JavaScript scripting). Writing custom NGINX modules requires C programming and understanding NGINX's internal architecture.
Caddy Plugins
Caddy uses Go plugins. You can build a custom Caddy binary with additional plugins using the xcaddy tool:
```sh
pkg install go
go install github.com/caddyserver/xcaddy/cmd/xcaddy@latest
xcaddy build --with github.com/caddy-dns/cloudflare
```
This produces a single binary with the Cloudflare DNS plugin baked in -- useful for DNS-01 ACME challenges when your server is not publicly reachable on port 80.
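Once the custom binary is in place, the plugin is enabled per-site in the Caddyfile. A sketch assuming a Cloudflare API token exported in the `CF_API_TOKEN` environment variable and a hypothetical internal hostname:

```caddyfile
# DNS-01 challenge via the Cloudflare plugin: the certificate is issued
# without this host being reachable on port 80 or 443 from the internet.
internal.example.com {
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }
    reverse_proxy 127.0.0.1:8080
}
```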
Writing Caddy plugins is accessible to anyone who knows Go. The plugin API is well-documented and stable. This is a meaningful advantage over NGINX's C module development.
WebSocket and HTTP/2 Support
NGINX
NGINX supports WebSocket proxying but requires explicit configuration:
```nginx
location /ws {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```
HTTP/2 is supported for clients, but NGINX's proxy_pass speaks at most HTTP/1.1 to backends (as of NGINX 1.26); gRPC backends are the exception, handled by the separate grpc_pass directive. This limitation does not matter for most workloads but is worth noting.
Caddy
Caddy handles WebSocket connections transparently -- no special configuration needed. If the client sends a WebSocket upgrade request, Caddy forwards it automatically.
Caddy supports HTTP/2 and HTTP/3 (QUIC) both for clients and to backends. HTTP/3 runs over UDP sockets via Go's QUIC implementation, which works correctly on FreeBSD 14.x.
Logging and Monitoring
NGINX
NGINX writes access logs and error logs to files. The log format is highly configurable:
```nginx
log_format main '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $body_bytes_sent '
                '"$http_referer" "$http_user_agent"';

access_log /var/log/nginx/access.log main;
```
NGINX exposes a basic stub status page for monitoring:
```nginx
location /nginx_status {
    stub_status;
    allow 127.0.0.1;
    deny all;
}
```
For Prometheus metrics, you need the third-party nginx-prometheus-exporter.
Caddy
Caddy logs in structured JSON format by default, which is easier to parse with tools like jq:
```json
{
  "level": "info",
  "ts": 1712678400,
  "msg": "handled request",
  "request": {"method": "GET", "uri": "/"},
  "status": 200,
  "duration": 0.003
}
```
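For example, counting requests per status code is a one-liner with jq (installable via `pkg install jq`). The sketch below pipes a few sample log lines inline; in practice you would read from Caddy's access log file instead:

```sh
# Count requests per HTTP status code from Caddy-style JSON log lines.
# Sample lines are inlined here; substitute your actual access log.
printf '%s\n' \
  '{"level":"info","msg":"handled request","status":200}' \
  '{"level":"info","msg":"handled request","status":404}' \
  '{"level":"info","msg":"handled request","status":200}' \
  | jq -r '.status' | sort | uniq -c | sort -rn
```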
Caddy has a built-in Prometheus metrics endpoint -- no additional software needed:
```caddyfile
{
    servers {
        metrics
    }
}
```
Access it at localhost:2019/metrics.
Security Defaults
NGINX
NGINX's default TLS configuration is permissive. You must explicitly configure secure defaults:
```nginx
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_stapling on;
ssl_stapling_verify on;
```
This is boilerplate that every NGINX administrator must know and apply.
Caddy
Caddy ships with secure TLS defaults out of the box. TLS 1.2 is the minimum version. Cipher suites are modern and correctly ordered. OCSP stapling is automatic. HSTS headers are not added by default (since that is a policy decision), but everything else is secure without configuration.
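If you do decide to enforce HSTS, it is a single directive. A sketch with a hypothetical site:

```caddyfile
# HSTS is a deliberate opt-in in Caddy -- one header directive enables it
example.com {
    header Strict-Transport-Security "max-age=31536000; includeSubDomains"
    reverse_proxy 127.0.0.1:8080
}
```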
FreeBSD-Specific Considerations
kqueue Integration
Both servers use kqueue on FreeBSD. NGINX's kqueue integration is mature and battle-tested. Caddy inherits kqueue support from Go's runtime, which handles it correctly on FreeBSD.
Jails
Both servers run well inside FreeBSD jails. NGINX's smaller memory footprint makes it slightly more efficient when running one instance per jail. Caddy's automatic HTTPS requires outbound access to ACME servers, which means the jail needs network access on ports 80 and 443 (or you use DNS-01 challenges).
Resource Limits
NGINX uses minimal memory -- a worker process handling thousands of connections uses 10-30MB. Caddy's Go runtime uses more baseline memory (~50-80MB idle) but handles garbage collection automatically. For servers with limited RAM (1-2GB VPS instances), NGINX is more conservative with resources.
Migration Path
NGINX to Caddy
Most NGINX configurations translate to Caddy in a fraction of the lines:
```sh
# Back up NGINX config
cp -r /usr/local/etc/nginx /usr/local/etc/nginx.bak

# Install Caddy
pkg install caddy

# Create Caddyfile
cat > /usr/local/etc/caddy/Caddyfile << 'EOF'
example.com {
    reverse_proxy localhost:8080
    file_server /static/* {
        root /usr/local/www/myapp
    }
    log {
        output file /var/log/caddy/access.log
    }
}
EOF

# Switch services
service nginx stop
sysrc nginx_enable="NO"
sysrc caddy_enable="YES"
service caddy start
```
Caddy to NGINX
Moving from Caddy to NGINX requires writing explicit TLS configuration and setting up certbot. The reverse proxy and server block configuration is more verbose but straightforward.
When to Choose Each
Choose NGINX When
- You serve high traffic (100K+ req/s) and need maximum throughput
- You need advanced caching (proxy_cache with purge, slice module)
- You run a CDN edge node or content delivery infrastructure
- Your team already knows NGINX and has established configurations
- You need OpenResty/Lua scripting for complex request processing
- Memory is constrained and every megabyte counts
Choose Caddy When
- You want HTTPS without thinking about certificates
- You prefer readable configuration over maximum control
- You need active health checks without paying for NGINX Plus
- You want built-in Prometheus metrics
- You run a small to medium deployment where simplicity reduces operational risk
- You need HTTP/3 (QUIC) support
- You want to write custom plugins in Go rather than C
FAQ
Is Caddy production-ready on FreeBSD?
Yes. Caddy has been in the FreeBSD ports tree for years and runs reliably in production. The Go runtime works well on FreeBSD, and kqueue-based event handling is solid. Many FreeBSD administrators run Caddy for small to medium workloads.
Can NGINX automatically provision TLS certificates like Caddy?
Not natively. NGINX requires an external ACME client like certbot or acme.sh. The commercial NGINX Plus product does not include this feature either. Automatic HTTPS is Caddy's unique advantage.
Which uses less memory on FreeBSD?
NGINX uses significantly less memory. An idle NGINX worker process uses 2-5MB. Caddy's Go runtime has a baseline of 50-80MB due to the garbage collector and goroutine scheduler. Under load, NGINX's memory usage scales more predictably.
Can I use both NGINX and Caddy together?
Yes. A common pattern is Caddy as the front-facing server handling TLS termination and automatic certificates, with NGINX behind it for complex caching or Lua-based processing. This gives you Caddy's HTTPS automation with NGINX's advanced features.
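A minimal sketch of that layering, with NGINX bound to a hypothetical local port 8080:

```caddyfile
# Caddy terminates TLS and manages certificates; the request is then
# handed to a local NGINX instance for caching or Lua processing.
example.com {
    reverse_proxy 127.0.0.1:8080
}
```

Caddy's reverse_proxy forwards X-Forwarded-For and X-Forwarded-Proto automatically, so the NGINX layer still sees the original client details.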
Does Caddy support .htaccess files?
No. Caddy has no equivalent of Apache's .htaccess or NGINX's per-directory configuration files. All configuration is in the Caddyfile or the JSON API. If your application depends on .htaccess files (some PHP applications do), you need to translate those rules into Caddyfile directives.
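For example, the front-controller rewrite found in many PHP apps' .htaccess (rewrite every request to index.php) maps to Caddy's try_files directive. A sketch with hypothetical paths and PHP-FPM socket:

```caddyfile
# Equivalent of "rewrite anything that is not a real file to index.php"
example.com {
    root * /usr/local/www/app/public
    try_files {path} /index.php?{query}
    php_fastcgi unix//var/run/php-fpm.sock
    file_server
}
```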
How do I get HTTP/3 working with Caddy on FreeBSD?
HTTP/3 works out of the box with Caddy on FreeBSD 14.x. Caddy listens on UDP port 443 automatically when HTTP/3 is enabled (which it is by default). Ensure your firewall allows UDP 443:
```sh
# In /etc/pf.conf
pass in on egress proto udp to port 443
```
Reload pf with pfctl -f /etc/pf.conf and HTTP/3 will work immediately.
Which is easier to debug when things go wrong?
NGINX has more diagnostic tools and a larger community. When something fails, you will find the answer on Stack Overflow or the NGINX mailing list. Caddy's error messages are generally clearer, but the community is smaller. For complex debugging, NGINX's granular logging (debug log level, per-location logging) gives you more visibility.