comparison·2026-03-29·21 min read

Best Archiving and Compression Tools for FreeBSD

Compare the best archiving and compression tools for FreeBSD: 7-zip, zstd, lz4, gzip, bzip2, xz, brotli, pigz, tar, and more. Covers compression ratios, speed benchmarks, and ZFS integration.


Compression is one of those things that seems simple until it matters. Then you discover that the tool you picked for your nightly backup pipeline is burning four CPU cores for twelve hours when a different algorithm could finish in forty minutes with a barely larger output file. Or that your web server is spending more time compressing responses than serving them.

FreeBSD has excellent support for every major compression algorithm through its ports collection and base system. This guide compares the tools that actually matter in 2026 -- their compression ratios, speed characteristics, memory usage, and the specific scenarios where each one wins. Every command and package name applies to FreeBSD 14.x.

Table of Contents

  1. Quick Comparison Table
  2. Tar: BSD Tar vs GNU Tar
  3. Gzip: The Universal Default
  4. Pigz: Parallel Gzip
  5. Bzip2: The Middle Ground
  6. XZ: Maximum Compression
  7. Zstd: The Modern Standard
  8. LZ4: Raw Speed
  9. 7-Zip: The Swiss Army Knife
  10. Brotli: Web-Optimized Compression
  11. Lzip: Long-Term Archiving
  12. RAR and UnRAR
  13. Compression Benchmarks
  14. ZFS Compression Integration
  15. Practical Examples
  16. FAQ

Quick Comparison Table

This table summarizes the key characteristics of each tool when compressing a typical mixed dataset (source code, logs, configuration files). Numbers are relative -- actual results depend on your data.

| Tool | Ratio | Compress Speed | Decompress Speed | CPU Usage | Memory Usage | FreeBSD Package |

|------|-------|---------------|-------------------|-----------|-------------|-----------------|

| gzip -6 | Medium | Fast | Very fast | Low | Low (256 KB) | Base system |

| pigz -6 | Medium | Very fast (parallel) | Very fast | High (all cores) | Low per thread | archivers/pigz |

| bzip2 -9 | Good | Slow | Slow | Medium | Moderate (7.6 MB) | Base system |

| xz -6 | Excellent | Very slow | Fast | High | High (94 MB) | Base system |

| zstd -3 | Good | Very fast | Very fast | Low | Low (512 KB) | Base system (also archivers/zstd) |

| zstd -19 | Excellent | Slow | Very fast | High | High (256 MB) | Base system (also archivers/zstd) |

| lz4 | Low | Fastest | Fastest | Minimal | Very low | archivers/liblz4 |

| 7z (LZMA2) | Excellent | Very slow | Moderate | High | Very high (up to 1.5 GB) | archivers/7-zip |

| brotli -6 | Good | Moderate | Fast | Medium | Moderate | archivers/brotli |

| brotli -11 | Excellent | Extremely slow | Fast | High | High (256 MB) | archivers/brotli |

| lzip -6 | Excellent | Very slow | Fast | Medium | Low (36 MB) | archivers/lzip |

The key takeaway: zstd offers the best compression-to-speed ratio for most workloads. Gzip remains the most universal format. XZ and 7-Zip win when file size matters more than time.


Tar: BSD Tar vs GNU Tar

Before discussing compression, you need an archiving tool that bundles multiple files into a single stream. That tool is tar. FreeBSD ships with two implementations, and the difference matters.

BSD Tar (bsdtar)

FreeBSD's default tar is bsdtar, based on the libarchive library. It is the /usr/bin/tar that comes with the base system.

sh
tar cf archive.tar /path/to/directory
tar czf archive.tar.gz /path/to/directory
tar cJf archive.tar.xz /path/to/directory
tar --zstd -cf archive.tar.zst /path/to/directory

BSD tar auto-detects compression format on extraction. You do not need to specify -z, -j, or -J when extracting -- just tar xf archive.tar.gz works regardless of the compression used.
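
As a quick sanity check (paths here are illustrative), the following round trip extracts a .tar.gz without any compression flag; GNU tar behaves the same way on extraction:

```sh
# Create a gzip-compressed archive, then extract it without
# naming the compression -- tar detects it from the file itself.
mkdir -p /tmp/demo/src /tmp/demo/out
echo "hello" > /tmp/demo/src/file.txt
tar czf /tmp/demo/archive.tar.gz -C /tmp/demo/src .
tar xf /tmp/demo/archive.tar.gz -C /tmp/demo/out    # no -z needed
cat /tmp/demo/out/file.txt
```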

BSD tar supports a wide range of archive formats beyond just tar: ZIP, ISO 9660, cpio, pax, ar, 7-zip (read-only), and RAR (read-only). This makes it the single most versatile archiving tool on the system.

GNU Tar (gtar)

GNU tar is available as gtar from the ports collection:

sh
pkg install gtar

GNU tar has a few features BSD tar lacks: --exclude-vcs for skipping .git directories, --newer for incremental backups by date, and the --listed-incremental flag for snapshot-based incremental archives. If you are migrating scripts from Linux, you may need GNU tar for compatibility.
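
A minimal sketch of --listed-incremental (all paths and file names here are examples). The FreeBSD port installs GNU tar as gtar; most Linux systems ship it as plain tar, so the first line picks whichever is present:

```sh
# GNU tar is gtar on FreeBSD, usually plain tar on Linux
command -v gtar >/dev/null 2>&1 && TAR=gtar || TAR=tar

mkdir -p /tmp/inc/data
echo "one" > /tmp/inc/data/a.txt

# Level-0 (full) backup; the .snar snapshot file records file metadata
$TAR --listed-incremental=/tmp/inc/snap.snar -cf /tmp/inc/full.tar -C /tmp/inc/data .

# A file is added; the next run archives only what is new or changed
echo "two" > /tmp/inc/data/b.txt
$TAR --listed-incremental=/tmp/inc/snap.snar -cf /tmp/inc/incr.tar -C /tmp/inc/data .
```

The level-1 archive contains b.txt but not the unchanged a.txt, which keeps incremental runs small.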

Which One to Use

Use BSD tar for general-purpose archiving. Use GNU tar only if you need a specific GNU-only feature or you are running scripts that depend on GNU behavior. BSD tar is more capable than most people realize and handles practically every format you will encounter.


Gzip: The Universal Default

Gzip has been the default compression tool on Unix systems for over thirty years. It is part of the FreeBSD base system. Every server, every deployment tool, every package manager supports it.

sh
gzip file.tar          # Compresses to file.tar.gz, removes original
gzip -k file.tar       # Keep original file
gzip -9 file.tar       # Maximum compression (slower)
gzip -1 file.tar       # Fastest compression (larger output)
gunzip file.tar.gz     # Decompress
gzip -d file.tar.gz    # Also decompress

Gzip is based on the DEFLATE algorithm (a combination of LZ77 and Huffman coding). It uses minimal memory and decompresses extremely fast. Its main weakness is single-threaded compression -- on a modern multi-core server, gzip leaves most of your CPU idle.

Gzip is the right choice when compatibility matters above all else. Log rotation (newsyslog), HTTP content encoding, .tar.gz source distributions, and package formats all expect gzip.
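
Two gzip flags worth knowing for pipelines like these are -l and -t, which report the achieved ratio and verify integrity. A small self-contained example (the sample log is generated on the spot):

```sh
# Generate a sample log, compress a copy, then inspect and verify it
yes "GET /index.html 200" | head -n 1000 > /tmp/app.log
gzip -k /tmp/app.log     # -k keeps the original
gzip -l /tmp/app.log.gz  # show compressed/uncompressed sizes and ratio
gzip -t /tmp/app.log.gz  # exit status is non-zero if the file is corrupt
```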


Pigz: Parallel Gzip

Pigz (pronounced "pig-zee") solves gzip's single-threaded bottleneck by compressing with all available CPU cores. The output is fully gzip-compatible -- any tool that reads .gz files can decompress pigz output.

sh
pkg install pigz
pigz file.tar          # Parallel compress to file.tar.gz
pigz -k -p 8 file.tar  # Keep original, use 8 threads
unpigz file.tar.gz     # Parallel decompress

On an 8-core server, pigz compresses roughly 6-7x faster than gzip at the same compression level. The output is byte-for-byte identical in decompressed form (though the compressed bytes differ due to parallel block splitting).

Pigz is a drop-in replacement for gzip in backup scripts. If you are piping tar through gzip, switch to pigz and your backups will finish in a fraction of the time:

sh
tar cf - /data | pigz -6 > /backups/data-$(date +%F).tar.gz

Bzip2: The Middle Ground

Bzip2 uses the Burrows-Wheeler transform to achieve better compression than gzip, at the cost of significantly slower speed. It is part of the FreeBSD base system.

sh
bzip2 file.tar         # Compress to file.tar.bz2
bzip2 -k file.tar      # Keep original
bunzip2 file.tar.bz2   # Decompress

Bzip2 was the standard "better than gzip" option for years. Source code tarballs, Linux kernel releases, and many open-source projects distributed .tar.bz2 archives throughout the 2000s.

In 2026, bzip2 occupies an awkward position. Zstd compresses better and faster at comparable levels. XZ compresses significantly better at the cost of more time. Bzip2 does not win on any axis. It remains relevant only for compatibility with existing .bz2 archives and workflows that have not been updated.


XZ: Maximum Compression

XZ uses the LZMA2 algorithm to deliver compression ratios that approach the theoretical maximum for general-purpose compressors. FreeBSD includes xz in the base system.

sh
xz file.tar            # Compress to file.tar.xz
xz -k -9 file.tar      # Maximum compression, keep original
xz -T 4 file.tar       # Use 4 threads (xz 5.4+)
unxz file.tar.xz       # Decompress
xz -l file.tar.xz      # List archive info

XZ compression is slow -- painfully slow at level 9. But the resulting files are substantially smaller than gzip or bzip2 output. Decompression, however, is fast and uses very little memory. This asymmetry makes XZ ideal for distribution archives: compress once, decompress many times.

FreeBSD's package system (pkg) has long used XZ for package compression, and the base system distribution sets still ship as .txz files (recent pkg versions default to zstd-compressed packages). The tradeoff is deliberate: the build infrastructure compresses once and every user benefits from smaller downloads.

Watch out for memory usage. XZ level 9 with a 64 MB dictionary requires roughly 674 MB of RAM for compression and 65 MB for decompression. On memory-constrained systems, stick to -6 (the default) or lower.
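
To keep xz within a known budget, you can cap its memory explicitly; xz scales the preset down (with a warning) if the cap is too tight for the requested level. A small sketch with a generated test file:

```sh
# Report xz's view of available memory and current limits
xz --info-memory

# Compress an 8 MB test file under an explicit 100 MiB cap
head -c 8388608 /dev/zero > /tmp/big.dat
xz -k --memlimit-compress=100MiB -6 /tmp/big.dat
```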


Zstd: The Modern Standard

Zstandard (zstd), developed at Facebook by Yann Collet (who also created LZ4), is the most important compression tool of the last decade. It delivers gzip-level speed with bzip2-to-xz-level compression ratios. Recent FreeBSD releases ship zstd in the base system; on older releases, install it from ports:

sh
pkg install zstd

Basic usage:

sh
zstd file.tar              # Compress to file.tar.zst (default level 3)
zstd -19 file.tar          # High compression
zstd --ultra -22 file.tar  # Maximum compression (extreme memory usage)
zstd -T0 file.tar          # Use all CPU cores
zstd -d file.tar.zst       # Decompress
zstdmt file.tar            # Explicitly multi-threaded

Zstd's killer feature is its range. At level 1, it compresses faster than gzip while achieving similar ratios. At level 19, it approaches XZ's ratio while decompressing several times faster. Decompression speed is roughly constant regardless of the compression level used -- around 1.5 GB/s on modern hardware.
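
To see the tradeoff on your own data, a quick level sweep is enough (the sample file and paths below are illustrative):

```sh
# Compress the same input at several levels and compare output sizes
yes "timestamped log entry with some repeated structure" | head -n 50000 > /tmp/sample.dat
for level in 1 3 9 19; do
    zstd -q -f -k -$level -o /tmp/sample.$level.zst /tmp/sample.dat
done
ls -l /tmp/sample.*.zst
```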

Another strength is dictionary compression. Zstd can train a dictionary on a sample of your data and then compress small files (JSON API responses, log entries, database rows) with dramatically better ratios than any general-purpose compressor:

sh
zstd --train /path/to/samples/* -o /path/to/dictionary
zstd -D /path/to/dictionary file.json

Zstd is the best default choice for new projects: backup pipelines, log compression, data transfer, and any scenario where you are not constrained by format compatibility.

BSD tar in FreeBSD supports zstd natively:

sh
tar --zstd -cf archive.tar.zst /path/to/data
tar xf archive.tar.zst     # Auto-detected on extraction

LZ4: Raw Speed

LZ4, also by Yann Collet, is designed for one thing: speed. It sacrifices compression ratio for throughput that approaches the speed of memcpy on modern hardware.

sh
pkg install liblz4
lz4 file.tar           # Compress to file.tar.lz4
lz4 -9 file.tar        # Higher compression (still very fast)
lz4 -d file.tar.lz4    # Decompress

LZ4 compresses at 700+ MB/s and decompresses at 4+ GB/s on a single core. The compressed output is larger than gzip -- roughly 2.1:1 versus gzip's 3.5:1 for typical data -- but when you need real-time compression (streaming data, database pages, IPC buffers), nothing else comes close.

LZ4 is used internally by ZFS for its lz4 compression option, by the Linux kernel for memory compression, and by numerous databases for page-level compression. It is the right tool when decompression latency matters more than file size.


7-Zip: The Swiss Army Knife

7-Zip uses the LZMA/LZMA2 algorithm (the same family as XZ) and delivers some of the highest compression ratios available. The FreeBSD port is now the official 7-Zip release:

sh
pkg install 7-zip

Usage:

sh
7zz a archive.7z /path/to/data         # Create 7z archive
7zz a -mx=9 archive.7z /path/to/data   # Maximum compression
7zz x archive.7z                       # Extract
7zz l archive.7z                       # List contents
7zz a -tzip archive.zip /path/to/data  # Create a ZIP archive

7-Zip supports dozens of formats for extraction: 7z, ZIP, gzip, bzip2, xz, tar, RAR, ISO, DMG, WIM, VHD, and more. For creating archives, it supports 7z, ZIP, gzip, bzip2, xz, and tar.

The .7z format supports solid compression, where multiple files are compressed as a single stream. This dramatically improves ratios when archiving many similar files (source code repositories, log directories) because the compressor can exploit redundancy across file boundaries.
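
The effect is easy to demonstrate with base tools alone (a rough stand-in for solid mode, using illustrative /tmp paths): compressing ten identical incompressible files one by one gains nothing, while compressing them as a single tar stream lets the compressor reuse the first copy:

```sh
# Ten identical 4 KB files of random (incompressible) data
mkdir -p /tmp/solid
head -c 4096 /dev/urandom > /tmp/solid/f1
for i in 2 3 4 5 6 7 8 9 10; do cp /tmp/solid/f1 /tmp/solid/f$i; done

# Per-file compression cannot see across file boundaries
for f in /tmp/solid/f*; do gzip -k "$f"; done
cat /tmp/solid/*.gz | wc -c      # total size of the individual archives

# One stream: later copies become matches against the first
tar czf /tmp/solid.tar.gz -C /tmp/solid f1 f2 f3 f4 f5 f6 f7 f8 f9 f10
wc -c < /tmp/solid.tar.gz        # far smaller than the sum above
```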

The downside: 7-Zip compression is slow and memory-hungry. Level 9 with a 64 MB dictionary uses roughly 800 MB of RAM. Ultra mode (-mx=9 -md=256m) can consume several gigabytes. Use it for archives that will be stored long-term and distributed widely, not for nightly backups.


Brotli: Web-Optimized Compression

Brotli was developed by Google for HTTP compression. It achieves better ratios than gzip on web content (HTML, CSS, JavaScript, JSON) while decompressing fast enough for real-time use.

sh
pkg install brotli
brotli file.js         # Compress to file.js.br
brotli -q 11 file.js   # Maximum quality
brotli -d file.js.br   # Decompress

Brotli includes a built-in static dictionary of common strings found in web content (HTML tags, CSS properties, JavaScript keywords). This gives it a significant advantage over gzip for web assets, particularly smaller files where the dictionary overhead of general-purpose compressors is proportionally large.

At quality 11, brotli compresses 15-25% smaller than gzip -9 on typical web content, but compression is extremely slow -- usable only for pre-compressed static assets. At quality 4-6, it compresses faster than gzip while still beating gzip's ratios.

NGINX on FreeBSD supports brotli through the www/nginx-module-brotli module. For a production NGINX configuration, see our NGINX setup guide. Pre-compress static assets at build time with brotli -q 11 and serve them with the brotli_static directive for the best of both worlds.


Lzip: Long-Term Archiving

Lzip uses the LZMA algorithm (same as XZ level 6) but with a simpler, more robust container format designed for long-term data preservation.

sh
pkg install lzip
lzip file.tar          # Compress to file.tar.lz
lzip -9 file.tar       # Maximum compression
lzip -d file.tar.lz    # Decompress

Lzip's container format includes CRC-32 checksums and a cleaner structure than the .xz format. The lziprecover tool can repair damaged archives by locating intact members in a corrupted stream -- a capability that XZ lacks.

Lzip achieves compression ratios very close to XZ. It is slower and less widely supported, but its error recovery capabilities make it worth considering for archives that must remain readable for decades. Several GNU projects distribute their source releases in lzip format.

For most users, XZ or zstd is a better choice. Lzip is for archivists and the particularly cautious.


RAR and UnRAR

RAR is a proprietary format that you will encounter when downloading files from the internet, particularly from older archives and certain communities. FreeBSD provides extraction tools:

sh
pkg install unrar      # Extract-only (free)
pkg install rar        # Full RAR (requires license for compression)

Usage:

sh
unrar x archive.rar    # Extract with full paths
unrar l archive.rar    # List contents
unrar t archive.rar    # Test integrity

RAR offers solid compression, recovery records, and multi-volume archives. Its compression ratios are competitive with 7-Zip. However, the proprietary format and licensing make it unsuitable for anything you control. Use it only for extracting archives created by others. For your own archives, use 7z, zstd, or xz.


Compression Benchmarks

Real numbers matter more than marketing. Here are approximate figures for compressing a 1 GB mixed dataset (source code, logs, text, and some binary data) on a FreeBSD 14.1 system with an 8-core AMD Ryzen processor and 32 GB of RAM.

| Tool & Level | Compressed Size | Compress Time | Decompress Time | Ratio |

|-------------|----------------|---------------|-----------------|-------|

| lz4 (default) | 508 MB | 1.4s | 0.3s | 1.97:1 |

| gzip -1 | 370 MB | 5.2s | 1.8s | 2.70:1 |

| gzip -6 | 320 MB | 12.4s | 1.7s | 3.13:1 |

| gzip -9 | 312 MB | 31.6s | 1.7s | 3.21:1 |

| pigz -6 (8 threads) | 320 MB | 1.9s | 0.9s | 3.13:1 |

| zstd -1 | 315 MB | 2.1s | 0.7s | 3.17:1 |

| zstd -3 | 290 MB | 3.8s | 0.7s | 3.45:1 |

| zstd -9 | 262 MB | 12.7s | 0.7s | 3.82:1 |

| zstd -19 | 230 MB | 98.5s | 0.8s | 4.35:1 |

| bzip2 -9 | 258 MB | 48.3s | 18.2s | 3.88:1 |

| xz -6 | 224 MB | 132.0s | 3.4s | 4.46:1 |

| xz -9 | 218 MB | 198.0s | 3.3s | 4.59:1 |

| 7z LZMA2 -mx=7 | 216 MB | 145.0s | 4.1s | 4.63:1 |

| brotli -6 | 275 MB | 18.3s | 2.1s | 3.64:1 |

| brotli -11 | 228 MB | 620.0s | 2.0s | 4.39:1 |

| lzip -9 | 226 MB | 178.0s | 4.8s | 4.42:1 |

Key observations:

  • Zstd -1 beats gzip -6 in both ratio and speed. There is almost no reason to use gzip for new projects except compatibility.
  • Pigz delivers gzip ratios at lz4-like wall-clock time by using all cores.
  • Zstd -19 approaches xz ratios while decompressing 4x faster.
  • LZ4 is in a class by itself for speed. If your bottleneck is I/O and CPU is cheap, LZ4 wins.
  • Brotli -11 is extremely slow to compress -- use it only for pre-compressed static web assets.
  • 7-Zip's LZMA2 and XZ produce the smallest files, but the time cost is substantial.

ZFS Compression Integration

ZFS has built-in transparent compression that runs at the filesystem layer. Every read and write is compressed and decompressed automatically, with no changes to applications. This is one of ZFS's most valuable features, and choosing the right algorithm matters.

For a complete ZFS setup guide, see our ZFS guide.

Available Algorithms

sh
zfs set compression=lz4 zroot/data      # Fast, low ratio (default)
zfs set compression=zstd zroot/data     # Balanced
zfs set compression=zstd-9 zroot/data   # High compression
zfs set compression=gzip-9 zroot/data   # Slow, good ratio (legacy)
zfs set compression=off zroot/data      # Disable

Which Algorithm for What

lz4 (the default on modern FreeBSD). Use it for everything unless you have a specific reason not to. LZ4 adds virtually zero overhead -- it is faster than the disk in most cases, meaning compression actually speeds up I/O by reducing the amount of data written. If the data is incompressible, LZ4 detects this early and passes it through without wasting CPU.

zstd (available since FreeBSD 13/OpenZFS 2.0). Use it for datasets where space savings justify slightly more CPU usage: log archives, source code repositories, mail spools, database dumps. Levels 1-3 add modest CPU overhead for significantly better ratios than lz4. Levels 4-9 are reasonable for cold storage.

zstd-19 and above. Use only for truly cold data that is rarely read or written: long-term archives, compliance data, backups stored on ZFS. The CPU cost on write is substantial.

gzip. Legacy option. Zstd matches gzip's ratio at any given speed point while using less CPU. There is no reason to choose gzip on a new ZFS pool.

Checking Compression Ratios

sh
zfs get compressratio zroot/data
zfs get used,logicalused,compressratio zroot/data

A typical FreeBSD system with lz4 compression on all datasets achieves a 1.3-1.5x overall compression ratio. Datasets with text-heavy content (logs, source code, configuration) see 2-4x. Datasets with already-compressed content (media files, encrypted data) see 1.0x -- lz4 passes these through without overhead.

Do Not Double-Compress

If ZFS compression is enabled, do not also compress files before writing them. Compressing a .tar.gz file onto a ZFS dataset with lz4 compression wastes CPU on both sides and gains nothing. Store uncompressed tar archives on ZFS and let the filesystem handle compression transparently.

The exception: if you are storing archives for transfer to a non-ZFS system, compress them for portability. But for local storage on ZFS, let ZFS do the work.
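
You can watch the waste directly. Random bytes stand in for already-compressed data here (file names are arbitrary): the first gzip pass finds nothing to squeeze, and a second pass only adds container overhead.

```sh
# Random data models an already-compressed payload
head -c 1048576 /dev/urandom > /tmp/payload
gzip -c /tmp/payload > /tmp/payload.gz        # first pass: nothing to squeeze
gzip -c /tmp/payload.gz > /tmp/payload.gz.gz  # second pass: pure overhead
wc -c /tmp/payload /tmp/payload.gz /tmp/payload.gz.gz
```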


Practical Examples

Backup Pipeline with Zstd

A nightly backup that archives /data and sends it to a remote server via SSH:

sh
#!/bin/sh
# /usr/local/bin/backup-data.sh
BACKUP_DIR="/backups"
DATE=$(date +%F)
RETENTION=30

tar --zstd -cf - /data | ssh backup-server "cat > ${BACKUP_DIR}/data-${DATE}.tar.zst"

# Prune old backups
ssh backup-server "find ${BACKUP_DIR} -name 'data-*.tar.zst' -mtime +${RETENTION} -delete"

For a comprehensive backup strategy, see our FreeBSD backup guide.

Parallel Compression for Large Datasets

When backing up hundreds of gigabytes, parallel compression is essential:

sh
# Using pigz for gzip-compatible output
tar cf - /srv/data | pigz -6 -p 8 > /backups/data.tar.gz

# Using zstd with multi-threading (auto-detects cores)
tar cf - /srv/data | zstd -T0 -6 > /backups/data.tar.zst

# Using xz with multi-threading for maximum compression
tar cf - /srv/data | xz -T4 -6 > /backups/data.tar.xz

Pre-Compressed Static Web Assets

For NGINX or Apache serving static files, pre-compress assets at deploy time instead of compressing on every request:

sh
#!/bin/sh
# Pre-compress all static assets
WEBROOT="/usr/local/www/mysite"

find ${WEBROOT} -type f \( -name "*.html" -o -name "*.css" -o -name "*.js" \
    -o -name "*.json" -o -name "*.svg" -o -name "*.xml" \) | while read file; do
    # Brotli for browsers that support it
    brotli -q 11 -f -o "${file}.br" "${file}"
    # Gzip for universal fallback
    gzip -9 -k -f "${file}"
done

Then configure NGINX to serve pre-compressed files:

nginx
brotli_static on;
gzip_static on;

This delivers maximum compression ratios with zero runtime CPU cost. See the NGINX setup guide for the full configuration.

Log Compression with Newsyslog

FreeBSD's newsyslog handles log rotation with built-in compression. In /etc/newsyslog.conf:

shell
# Logfile             Owner:Group   Mode  Count  Size   When  Flags
/var/log/messages     root:wheel    644   7      1000   *     JC
/var/log/auth.log     root:wheel    600   7      500    *     ZC

The J flag uses bzip2, Z uses gzip, and X uses xz. For most log files, Z (gzip) provides the best balance of compatibility and performance. If disk space is tight, J (bzip2) shrinks logs further at the cost of slower rotation.

Compressing ZFS Snapshots for Off-Site Transfer

When sending ZFS snapshots off-site, compressing the stream saves bandwidth. Store the stream as a file if the destination has no ZFS pool, or pipe it straight into zfs receive if it does:

sh
# Send compressed snapshot to a file
zfs send zroot/data@daily | zstd -T0 -3 > /backups/data-snapshot.zfs.zst

# Send a compressed incremental snapshot over SSH
zfs send -i zroot/data@yesterday zroot/data@today | zstd -T0 -3 | \
    ssh remote "zstd -d | zfs receive tank/data"

FAQ

Which compression tool should I use for FreeBSD backups?

Use zstd at level 3-6 with multi-threading (zstd -T0 -3). It offers the best balance of compression ratio, speed, and resource usage. For gzip-compatible output (required by some tools), use pigz instead. For maximum compression on archives stored long-term, use xz -6 -T4.

Should I use lz4 or zstd for ZFS compression?

Use lz4 as the default for all datasets. It adds near-zero CPU overhead and speeds up I/O in most cases. Switch to zstd (levels 1-3) for datasets that are write-once-read-rarely (log archives, old backups, mail spools) or where disk space is at a premium. Avoid zstd levels above 9 for ZFS unless the dataset is truly cold.

Is 7-Zip better than xz on FreeBSD?

7-Zip uses the same LZMA2 algorithm as xz and produces nearly identical compression ratios. The .7z format adds solid compression (better ratios for collections of similar files) and multi-format support. Use xz for .tar.xz archives and pipeline compression. Use 7-Zip when you need a self-contained archive format with solid compression, or when extracting archives in formats like RAR, ISO, or ZIP.

Can I replace gzip with zstd everywhere?

Not yet. Many tools and protocols still require gzip: HTTP Content-Encoding, newsyslog, FreeBSD packages, older deployment scripts, and .tar.gz conventions. Within your own infrastructure -- backup scripts, internal transfers, application-level compression -- yes, zstd is a strict upgrade over gzip. For external-facing use, keep gzip as a fallback.

How much disk space does ZFS compression save?

It depends entirely on the data. Text-heavy datasets (logs, source code, configuration files, database dumps) typically see 2-4x compression with lz4 and 3-6x with zstd. Binary data (compiled programs, images, video) sees little to no compression. A typical FreeBSD server with mixed workloads saves 25-40% of disk space with lz4 compression enabled on all datasets. There is virtually no reason to leave it off.

What is the fastest way to compress a large file on FreeBSD?

For absolute speed: lz4 file. For a useful compression ratio at high speed: zstd -T0 -1 file (uses all cores). For gzip-compatible output at high speed: pigz -1 file. All of these saturate disk I/O before they saturate CPU on modern hardware.

Should I compress files before storing them on ZFS?

No. If ZFS compression is enabled, storing pre-compressed files wastes CPU time on the application side while giving ZFS nothing to compress (already-compressed data is incompressible). Store data uncompressed on ZFS and let the filesystem handle it transparently. The only exception is files you intend to transfer to a non-ZFS system, where you need the compression to be portable.


Conclusion

The compression landscape on FreeBSD in 2026 is straightforward once you understand the tradeoffs:

  • Zstd is the default choice for new projects. It dominates the speed-vs-ratio curve from level 1 to level 19.
  • Gzip remains essential for compatibility. Use pigz when speed matters.
  • XZ and 7-Zip are for maximum compression when time is not a constraint.
  • LZ4 is for real-time compression where latency matters more than ratio.
  • Brotli is for web content, pre-compressed at build time.
  • ZFS compression should always be enabled. Use lz4 as the default, zstd for cold data.

Every tool discussed here is available through FreeBSD's ports collection or base system. Install what you need with pkg install, build your pipeline, and move on. Compression is a solved problem -- the only mistake is not using it.

For related guides, see our ZFS guide, FreeBSD backup guide, and NGINX production setup.
