FreeBSD Virtual Machine Hosting with Bhyve Guide
Bhyve is FreeBSD's native hypervisor. It runs virtual machines with near-bare-metal performance, integrates with ZFS for storage, uses FreeBSD's networking stack for VM connectivity, and consumes minimal resources on the host. If you are building a virtualization platform on FreeBSD -- whether a home lab with a few VMs or a hosting environment with dozens -- bhyve is the right tool.
This guide covers bhyve from initial setup through production deployment: UEFI boot configuration, networking with bridges and VNET, VM management with vm-bhyve, ZFS storage backends, running Windows and Linux guests, and operational best practices.
Prerequisites
Bhyve requires hardware virtualization support. Verify your CPU supports it:
```sh
# Intel VT-x
grep -o 'VMX' /var/run/dmesg.boot

# AMD-V
grep -o 'SVM' /var/run/dmesg.boot
```
If either returns a match, your hardware supports bhyve. Load the kernel module:
```sh
kldload vmm
echo 'vmm_load="YES"' >> /boot/loader.conf
```
For UEFI boot support (required for Windows guests and recommended for Linux):
```sh
pkg install bhyve-firmware
```
This installs the UEFI firmware at /usr/local/share/uefi-firmware/BHYVE_UEFI.fd.
Raw Bhyve: Understanding the Basics
Before using a management tool, understand what bhyve does at the command level.
Creating a VM Manually
```sh
# Create a ZFS volume for the VM disk
zfs create -V 20G zroot/vm/testvm

# Create a tap interface for networking
ifconfig tap0 create
ifconfig bridge0 create
ifconfig bridge0 addm em0 addm tap0 up

# Boot the VM
bhyve -c 2 -m 4G \
  -s 0,hostbridge \
  -s 3,ahci-hd,/dev/zvol/zroot/vm/testvm \
  -s 4,virtio-net,tap0 \
  -s 5,fbuf,tcp=0.0.0.0:5900,w=1024,h=768 \
  -s 6,xhci,tablet \
  -s 31,lpc \
  -l com1,stdio \
  -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
  testvm
```
That is a lot of flags. Each -s defines a PCI slot with a virtual device:
| Slot | Device | Purpose |
|---|---|---|
| 0 | hostbridge | PCI host bridge (required) |
| 3 | ahci-hd | Disk controller with ZFS zvol |
| 4 | virtio-net | High-performance network adapter |
| 5 | fbuf | Framebuffer (VNC console) |
| 6 | xhci,tablet | USB tablet (fixes VNC mouse alignment) |
| 31 | lpc | ISA bus controller (for com1 and bootrom) |
Cleaning Up
When a VM shuts down, destroy it before restarting:
```sh
bhyvectl --destroy --vm=testvm
```
This is a bhyve quirk -- the VM context must be destroyed between runs. Management tools handle this automatically.
vm-bhyve: The Practical Approach
vm-bhyve wraps bhyve with a sane management interface. It handles ZFS, networking, console access, and VM templates.
Installation and Setup
```sh
pkg install vm-bhyve grub2-bhyve bhyve-firmware
```
Initialize vm-bhyve with a ZFS dataset:
```sh
sysrc vm_enable="YES"
sysrc vm_dir="zfs:zroot/vm"
zfs create zroot/vm
vm init
```
Network Configuration
Create virtual switches:
```sh
# Public switch -- bridged to physical interface
vm switch create public
vm switch add public em0

# Private switch -- isolated network for inter-VM communication
vm switch create private
vm switch address private 10.10.0.1/24
```
List switches:
```sh
vm switch list
```
VM Templates
vm-bhyve uses templates to define default VM settings. Templates live in $vm_dir/.templates/:
```sh
ls /zroot/vm/.templates/
```
Default templates are included for common operating systems. Create a custom template:
```sh
cat > /zroot/vm/.templates/linux.conf << 'EOF'
loader="uefi"
cpu=2
memory=4G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="yes"
graphics_port="5900"
graphics_res="1280x720"
xhci_mouse="yes"
EOF
```
```sh
cat > /zroot/vm/.templates/freebsd.conf << 'EOF'
loader="uefi"
cpu=2
memory=2G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
graphics="no"
EOF
```
```sh
cat > /zroot/vm/.templates/windows.conf << 'EOF'
loader="uefi"
cpu=4
memory=8G
network0_type="e1000"
network0_switch="public"
disk0_type="ahci-hd"
disk0_name="disk0.img"
graphics="yes"
graphics_port="5900"
graphics_res="1920x1080"
graphics_wait="yes"
xhci_mouse="yes"
EOF
```
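When you create a VM, vm-bhyve copies the chosen template into the VM's own directory, so per-VM tweaks belong in that file rather than in the template. A sketch of what an edited per-VM config might look like (the VM name myvm, the second disk, and the extra bhyve flag are illustrative additions; available options depend on your vm-bhyve version):

```sh
# /zroot/vm/myvm/myvm.conf -- copied from the template at creation time
loader="uefi"
cpu=2
memory=4G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"

# Added by hand: a second data disk
disk1_type="virtio-blk"
disk1_name="disk1.img"

# Added by hand: extra flags passed straight to bhyve
bhyve_options="-w"
```

Changes take effect the next time the VM starts.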
Creating and Managing VMs
```sh
# Fetch an ISO
vm iso https://download.freebsd.org/releases/amd64/amd64/ISO-IMAGES/14.2/FreeBSD-14.2-RELEASE-amd64-disc1.iso

# Create a VM
vm create -t freebsd -s 20G myfreebsd

# Install from ISO
vm install myfreebsd FreeBSD-14.2-RELEASE-amd64-disc1.iso

# Connect to console (detach from the cu session with ~. at the start of a line)
vm console myfreebsd

# Start a VM
vm start myfreebsd

# Stop a VM (graceful ACPI shutdown)
vm stop myfreebsd

# Force stop
vm poweroff myfreebsd

# List all VMs
vm list

# VM info
vm info myfreebsd

# Destroy a VM (deletes everything)
vm destroy myfreebsd
```
VNC Console Access
For graphical VMs (Windows, Linux desktop), connect via VNC:
```sh
vm list   # note the VNC port (e.g., 5900)

# From your workstation
vncviewer freebsd-host:5900
```
For remote access, tunnel VNC through SSH:
```sh
ssh -L 5900:localhost:5900 freebsd-host
vncviewer localhost:5900
```
Running Linux Guests
Linux guests run very well on bhyve with virtio drivers, which are included in all modern Linux kernels.
Ubuntu/Debian
```sh
vm iso https://releases.ubuntu.com/24.04/ubuntu-24.04-live-server-amd64.iso
vm create -t linux -s 30G ubuntu-server
vm install ubuntu-server ubuntu-24.04-live-server-amd64.iso
```
Connect via VNC for the graphical installer, or use the serial console:
```sh
vm console ubuntu-server
```
After installation, enable the serial console in the guest for headless management:
```sh
# Inside the Ubuntu guest
sudo systemctl enable serial-getty@ttyS0.service
sudo systemctl start serial-getty@ttyS0.service
```
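The getty only covers the login prompt; for boot messages and the GRUB menu to appear on the serial console too, the guest's kernel command line needs a console= entry. A sketch for a Debian/Ubuntu guest (the baud rate and unit are the common defaults, adjust to taste):

```sh
# Inside the guest: /etc/default/grub
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0"
# ...then apply the change with: sudo update-grub
```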
CentOS/Rocky/Alma
```sh
vm create -t linux -s 30G rocky-server
vm install rocky-server Rocky-9.3-x86_64-minimal.iso
```
Alpine Linux
```sh
vm create -t linux -s 5G alpine-vm
vm install alpine-vm alpine-standard-3.20.0-x86_64.iso
```
Alpine is ideal for lightweight VMs -- minimal resource usage and fast boot.
Running Windows Guests
Windows requires UEFI boot and specific device emulation.
Windows 11
```sh
# Create VM with Windows template
vm create -t windows -s 60G win11

# Download virtio driver ISO for Windows
vm iso https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso

# Add the virtio ISO as a second CD drive:
# edit /zroot/vm/win11/win11.conf and add:
#   disk1_type="ahci-cd"
#   disk1_name="/zroot/vm/.iso/virtio-win.iso"

vm install win11 Win11_English_x64.iso
```
During Windows installation:
- Connect via VNC (vncviewer freebsd-host:5900)
- When selecting a disk, click "Load driver"
- Browse to the virtio CD and select vioscsi\w11\amd64
- Load the storage driver, then proceed with installation
- After installation, install the remaining virtio drivers from the CD (network, balloon, etc.)
Windows Server
```sh
vm create -t windows -s 80G winserver
vm install winserver WindowsServer2022.iso
```
The same virtio driver process applies. Windows Server runs well on bhyve for workloads that require Windows (Active Directory, SQL Server, .NET applications).
Windows Performance Tips
```sh
# Use virtio-blk for disk (faster than ahci-hd).
# After installing the virtio storage driver in the guest, change:
disk0_type="virtio-blk"

# Allocate sufficient CPU and RAM
cpu=4
memory=8G

# In the guest, disable unnecessary Windows features:
# turn off Superfetch, Windows Search, Defender (if behind a firewall)
```
Networking Deep Dive
Bridge Networking
Bridge networking puts VMs on the same network as the host:
```sh
# Manual bridge setup
ifconfig bridge0 create
ifconfig bridge0 addm em0 up
ifconfig tap0 create
ifconfig bridge0 addm tap0

# With vm-bhyve (automatic)
vm switch create public
vm switch add public em0
```
VMs get IPs from your network's DHCP server or use static IPs in the same subnet as the host.
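If you skip DHCP, give the guest a static address in the host's subnet. A sketch for a FreeBSD guest using the virtio NIC (the 192.168.1.0/24 addresses are invented examples):

```sh
# Inside the FreeBSD guest: /etc/rc.conf
# vtnet0 is the virtio-net interface name in FreeBSD guests
ifconfig_vtnet0="inet 192.168.1.50 netmask 255.255.255.0"
defaultrouter="192.168.1.1"
```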
NAT Networking
For isolated VMs that access the internet through the host:
```sh
# Create a private switch
vm switch create private
vm switch address private 10.10.0.1/24

# Enable IP forwarding
sysrc gateway_enable="YES"
sysctl net.inet.ip.forwarding=1

# NAT with PF
cat >> /etc/pf.conf << 'EOF'
nat on em0 from 10.10.0.0/24 to any -> (em0)
pass from 10.10.0.0/24 to any
EOF
pfctl -f /etc/pf.conf
```
VMs on the private switch use 10.10.0.1 as their gateway. Run a DHCP server on the host for automatic IP assignment:
```sh
pkg install isc-dhcp44-server
cat > /usr/local/etc/dhcpd.conf << 'EOF'
subnet 10.10.0.0 netmask 255.255.255.0 {
  range 10.10.0.100 10.10.0.200;
  option routers 10.10.0.1;
  option domain-name-servers 10.10.0.1;
}
EOF
sysrc dhcpd_enable="YES"
sysrc dhcpd_ifaces="vm-private"
service isc-dhcpd start
```
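NATed VMs are unreachable from outside the host by default. To expose a service, add a PF redirect rule on the host; a sketch assuming the 10.10.0.0/24 private switch above, with an invented web server VM at 10.10.0.100:

```sh
# In /etc/pf.conf -- forward host port 8080 to the VM's port 80
rdr on em0 proto tcp from any to (em0) port 8080 -> 10.10.0.100 port 80
pass in on em0 proto tcp to 10.10.0.100 port 80
# ...then reload with: pfctl -f /etc/pf.conf
```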
VLAN Networking
Assign VMs to different VLANs:
```sh
# Create VLAN interfaces on the host
sysrc vlans_em0="100 200"
sysrc ifconfig_em0_100="up"
sysrc ifconfig_em0_200="up"
service netif restart

# Create vm-bhyve switches per VLAN
vm switch create vlan100
vm switch add vlan100 em0.100
vm switch create vlan200
vm switch add vlan200 em0.200
```
Assign VMs to VLANs via their switch:
```sh
# In VM config
network0_switch="vlan100"
```
Multiple Network Interfaces
VMs can have multiple NICs:
```sh
# In VM config file
network0_type="virtio-net"
network0_switch="public"
network1_type="virtio-net"
network1_switch="private"
```
ZFS Storage Backend
ZFS is bhyve's ideal storage backend. With a ZFS-backed vm_dir, each VM gets its own dataset; disks can be sparse image files (the vm-bhyve default) or zvols (disk0_dev="zvol" or "sparse-zvol"), and get all of ZFS's features either way.
Thin Provisioning
vm-bhyve creates sparse disk images by default -- a 100GB VM disk only consumes space as data is written:
```sh
vm create -t linux -s 100G myvm
zfs list -o name,used,refer,avail zroot/vm/myvm
# USED will be far less than 100G
```
Snapshots
```sh
# Snapshot a VM
vm snapshot myvm

# Or directly with ZFS for more control (snapshot the VM's dataset)
zfs snapshot zroot/vm/myvm@before-upgrade

# Rollback
zfs rollback zroot/vm/myvm@before-upgrade
```
Cloning VMs
Clone a VM from a snapshot for rapid deployment:
```sh
# Snapshot the template VM's dataset
zfs snapshot zroot/vm/template-linux@base

# Clone to new VMs from that snapshot
vm clone template-linux@base newvm1
vm clone template-linux@base newvm2
vm clone template-linux@base newvm3
```
Clones are instant and share disk blocks with the source until modified (copy-on-write). This is the fastest way to deploy multiple VMs from a base image.
Compression
Enable compression on the VM dataset to save disk space:
```sh
zfs set compression=lz4 zroot/vm
```
LZ4 compression has negligible CPU overhead and typically saves 30-50% disk space for Linux and FreeBSD guests.
Dedicated Storage Pools
For production, separate VM storage from the OS pool:
```sh
# Create a dedicated VM pool on fast storage
zpool create vmpool mirror nvd0 nvd1
zfs set compression=lz4 vmpool

# Point vm-bhyve at the new pool
sysrc vm_dir="zfs:vmpool/vm"
vm init
```
Resource Management
CPU Pinning
Pin VM vCPUs to specific host CPUs for consistent performance:
```sh
# In VM config -- pin vCPU n to a host CPU with bhyve's -p vcpu:hostcpu flag
cpu=4
bhyve_options="-p 0:4 -p 1:5 -p 2:6 -p 3:7"
```
This pins the VM's 4 vCPUs to host CPUs 4-7, avoiding contention with the host or other VMs.
Memory Limits
```sh
# In VM config
memory=4G
```
Bhyve does not overcommit memory. Each VM's memory is reserved. Plan your host RAM accordingly:
```
Host RAM needed = sum of all VM memory + host OS (~2GB) + ZFS ARC (~50% of remaining)
```
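The rule of thumb above can be turned into a quick sanity-check script; the VM sizes and host RAM below are invented example figures:

```shell
#!/bin/sh
# Back-of-envelope RAM budget for a hypothetical host (all figures in GB)
host_ram=64                    # physical RAM in this example
host_os=2                      # reserve ~2GB for the host OS

# Sum the memory= settings of five example VMs
vm_total=0
for size in 4 4 2 8 8; do
    vm_total=$((vm_total + size))
done

# ~50% of what remains after VMs and the host OS goes to the ZFS ARC
arc=$(( (host_ram - vm_total - host_os) / 2 ))
echo "VM reservation: ${vm_total}G"
echo "Suggested ARC cap: ${arc}G"
```

If the VM reservation plus host OS exceeds physical RAM, bhyve will refuse to start VMs rather than overcommit.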
Disk I/O Priority
Use ZFS properties to prioritize I/O:
```sh
# High-priority VM (database): favor low latency
zfs set logbias=latency vmpool/vm/database

# Low-priority VM (backup): favor throughput
zfs set logbias=throughput vmpool/vm/backup
```
Production Checklist
Host Configuration
```sh
# Persistent kernel module
echo 'vmm_load="YES"' >> /boot/loader.conf

# Enable IP forwarding (for NAT)
sysrc gateway_enable="YES"

# Increase file descriptor limits
sysctl kern.maxfiles=65536
echo 'kern.maxfiles=65536' >> /etc/sysctl.conf

# Enable vm-bhyve
sysrc vm_enable="YES"
sysrc vm_dir="zfs:vmpool/vm"

# Configure ZFS
zfs set compression=lz4 vmpool/vm
zfs set atime=off vmpool/vm
```
Backup Strategy
```sh
# Snapshot all VMs daily
for vm in $(vm list | tail -n +2 | awk '{print $1}'); do
    zfs snapshot -r vmpool/vm/${vm}@daily-$(date +%Y%m%d)
done

# Replicate to a backup host (full send; add -i <previous-snapshot> for incrementals)
for vm in $(vm list | tail -n +2 | awk '{print $1}'); do
    zfs send -R vmpool/vm/${vm}@daily-$(date +%Y%m%d) | \
        ssh backup-host zfs recv -F backuppool/vm/${vm}
done

# Clean up old snapshots (keep the newest 7 per VM)
for vm in $(vm list | tail -n +2 | awk '{print $1}'); do
    zfs list -H -t snapshot -o name -s creation vmpool/vm/${vm} | \
        grep '@daily-' | tail -r | tail -n +8 | xargs -I{} zfs destroy {}
done
```
Monitoring
```sh
# VM status
vm list

# Per-VM resource usage (each VM runs as a bhyve process)
top

# ZFS storage
zpool iostat vmpool 5
zfs list -o name,used,avail,compressratio vmpool/vm

# Network traffic per tap interface
netstat -I tap0 -b
```
FAQ
How many VMs can I run on one FreeBSD host?
It depends on RAM. Bhyve does not overcommit memory, so you need enough physical RAM for all VMs plus the host. A 128GB server can comfortably run 20-30 VMs with 2-4GB each, with enough left for ZFS ARC. CPU is rarely the bottleneck -- modern CPUs handle dozens of VMs efficiently.
Can bhyve run macOS guests?
No. Bhyve does not emulate Apple hardware and macOS requires specific hardware identification. This is a licensing restriction as much as a technical one. Use VirtualBox (unofficial support) if you need macOS in a VM.
Is bhyve slower than KVM?
Bhyve and KVM are comparable in performance for most workloads. Both use hardware virtualization (VT-x/AMD-V) and virtio for I/O. KVM has a slight edge in some benchmarks due to Linux's more aggressive scheduler optimizations for VM workloads, but the difference is typically under 5%.
How do I access a VM's console remotely?
For graphical VMs, use VNC (tunnel through SSH for security). For text-mode VMs, use vm console vmname, which connects to the VM's serial port via cu. You can also attach the serial port to a null-modem device by adding -l com1,/dev/nmdm0A to the bhyve command, then connect from the host with cu -l /dev/nmdm0B.
Can I live migrate VMs between bhyve hosts?
No. Bhyve does not support live migration. To move a VM, stop it, send its ZFS dataset to the new host, and start it there. This typically takes minutes depending on dataset size. For zero-downtime migration, use application-level failover (DNS, load balancers) rather than hypervisor-level migration.
Do I need to install virtio drivers in Windows guests?
Yes. Without virtio drivers, Windows uses emulated AHCI storage and e1000 networking, which are slower. Download the virtio-win ISO from Fedora's repository, attach it to the VM, and install all drivers (storage, network, balloon, serial) from Device Manager after Windows installation.
How do I increase a VM's disk size?
```sh
# Stop the VM
vm stop myvm

# Resize the backing store: for a zvol-backed disk
zfs set volsize=50G vmpool/vm/myvm/disk0
# ...or for the default file-backed image
truncate -s 50G /vmpool/vm/myvm/disk0.img

# Start the VM
vm start myvm

# Inside the guest, extend the partition and filesystem
# (Linux: growpart + resize2fs; Windows: Disk Management; FreeBSD: gpart + growfs)
```