
When a new version of OpenZFS is released, many administrators wonder whether it's worth updating right away or waiting for the dust to settle. With OpenZFS 2.4 the question is even more interesting, because it comes with profound changes in performance, new management tools, and some community debate about using release candidates in production systems.
General features of OpenZFS 2.4
OpenZFS 2.4 is presented as a stable and quite ambitious release, designed for both Linux and FreeBSD environments. At the time of its final tagging, the project already emphasized that the goal was to continue promoting the maturity of the file system and volume manager while maintaining compatibility with recent kernels and ensuring data safety.
This version consolidates many features that had been in development since the 2.3 branch and its intermediate revisions: performance improvements in the encryption layer, new management tools such as zfs rewrite, more flexible quota capabilities, and internal changes designed to reduce fragmentation, optimize deduplication, and refine complex aspects such as gang block management or behavior with problematic disks.
The community has also paid special attention to integration with modern kernels. On Linux, support is declared from 4.18 up to recent branches (including kernel 6.18 at the time of the stable 2.4 release), while on FreeBSD, versions from 13.3 onwards are covered, including 14.0 and newer branches in preparation such as 15.0.
Platform support and kernel compatibility with OpenZFS 2.4
One of the pillars of OpenZFS 2.4 is its broad platform compatibility. For many administrators this is key, because it allows them to upgrade operating system versions without losing the expected ZFS features.
On the Linux side, OpenZFS 2.4 indicates compatibility with kernels ranging from version 4.18 to the stable 6.18 series. This covers everything from conservative enterprise distributions to highly up-to-date environments that stay current with the latest kernel. In between lies the entire spectrum of common releases: LTS versions used on servers, custom kernels, and versions adopted by projects like CentOS Stream or similar.
On FreeBSD, the new version supports 13.3 onwards, including 14.0 and later versions already on the horizon, such as the upcoming 15.0. This wide range ensures that both systems already in production and next-generation deployments can continue to use OpenZFS without the need for odd patches or custom solutions.
Behind this compatibility lies a continuous effort that was already evident in the OpenZFS 2.3.x series. Previous updates, such as 2.3.4, extended kernel support up to 6.16 and consolidated patches that had begun appearing in earlier RCs. OpenZFS 2.4 picks up where that series left off and goes a step further, aligning with recent kernels and improving the experience for those who update their base stack relatively frequently.
Quotas and new space management capabilities
Among the most practical new features for the administrator are the improvements to the default quota system. OpenZFS 2.4 introduces the ability to define default quotas for users, groups, and projects, so that space consumption can be controlled more uniformly without having to configure each case manually.
This function allows, for example, setting a base quota for all users created in a specific dataset, or setting project limits that are automatically applied when new resources are allocated. It is a very useful tool in multi-user environments, hosting, laboratories, and any scenario where you want to prevent an oversight from filling the entire pool.
Support for default quotas does not replace existing specific quotas, but rather complements them. The administrator can define a global policy and then refine it with exceptions for specific users or groups who need more (or less) space. All of this is managed with the standard ZFS tools, maintaining the same property model that is already familiar.
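As a rough sketch, this is how the combination of defaults and per-user exceptions could look. The property names defaultuserquota, defaultgroupquota, and defaultprojectquota follow the naming pattern of the existing userquota@/groupquota@ properties; the pool and dataset names are hypothetical, and the exact syntax should be confirmed in zfsprops(7) on a 2.4 system.

```shell
# Every user on tank/home gets 20 GiB unless overridden
zfs set defaultuserquota=20G tank/home

# Groups default to 100 GiB
zfs set defaultgroupquota=100G tank/home

# A specific user can still receive an explicit, larger quota
zfs set userquota@alice=50G tank/home

# Inspect effective per-user usage and quotas
zfs userspace tank/home
```

The explicit userquota@alice entry takes precedence over the default, matching the "global policy plus exceptions" model described above.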
Direct I/O, cacheless I/O, and misaligned write behavior
In terms of performance, OpenZFS 2.4 brings a very interesting change in the management of direct I/O. Until now, using direct I/O in some situations could conflict with write alignment and result in suboptimal code paths. The new version introduces a mechanism so that, when direct I/O cannot be performed ideally, a lightweight cacheless I/O mode specifically designed for this type of scenario is used instead.
What does this mean in practice? Writes that don't fit the expected alignments cease to be a pathological case and are instead handled through an optimized path within ZFS. Overhead is reduced, some bottlenecks are avoided, and behavior becomes more predictable, especially in environments where applications that use direct I/O coexist with others that do not.
This change is especially useful in demanding workloads where the goal is to squeeze the most performance out of storage without sacrificing the integrity guarantees offered by ZFS. With a purpose-built fallback, OpenZFS adapts better to the realities of many applications that don't always adhere to the ideal alignment of operations.
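The cacheless fallback itself is internal and automatic; what the administrator controls is how O_DIRECT requests are treated, via the direct dataset property introduced in the 2.3 series. A sketch, assuming a hypothetical dataset tank/db (see zfsprops(7) for the exact semantics of each value):

```shell
# Check the current policy (standard by default)
zfs get direct tank/db

# Honor O_DIRECT requests where alignment allows it
zfs set direct=standard tank/db

# Treat all I/O as direct where possible
zfs set direct=always tank/db

# Ignore O_DIRECT and always use the ARC-buffered path
zfs set direct=disabled tank/db
```

With 2.4, misaligned writes under standard or always no longer fall back to a costly path, but to the lightweight cacheless mode described above.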
Unified allocation throttling and fragmentation reduction in OpenZFS 2.4
Another major change in OpenZFS 2.4 is the introduction of a new algorithm for unified allocation throttling. Behind this name is a mechanism aimed at reducing the fragmentation of virtual devices (vdevs) and improving how writes are distributed when the system is under pressure.
Until now, block allocation in high-load situations could end up generating distribution patterns that, over time, favored vdev fragmentation. The unified algorithm aims to harmonize the allocation rate, so that the pool maintains a more orderly structure and performance penalties are reduced when space starts to run low or when the mix of block sizes is very varied.
These types of changes are less noticeable than a new command, but they are very valuable in long-term deployments, where a pool grows, is rebalanced, new vdevs are added, and maintenance operations are performed over years. By improving allocation control, OpenZFS 2.4 helps maintain more stable behavior over time, even when the system is used intensively.
Encryption improvements with AVX2 and AES-GCM
In terms of security and performance, OpenZFS 2.4 includes a series of optimizations in the use of AVX2 for AES-GCM. In simpler terms: the encryption implementation has been refined to better take advantage of modern processors that have these advanced vector instructions.
The result is faster encryption without sacrificing cryptographic guarantees, which is especially noticeable in systems handling large volumes of encrypted data or in environments where many simultaneous operations are performed on protected datasets. By reducing the CPU overhead associated with encryption, more requests can be handled or more resources can be dedicated to other system tasks.
In practice, administrators can continue to rely on ZFS's native encryption features to protect sensitive data without the significant performance impact of previous generations. Encryption doesn't become "free," but it does become more manageable under workloads where it was previously a clear bottleneck.
ZIL in special vdevs and improvements in special_small_blocks
OpenZFS 2.4 also brings new features regarding the special vdevs, those devices designed to store certain types of data (such as metadata, small blocks or deduplication tables) on faster media, usually SSD or NVMe.
On the one hand, it is now possible to allow the ZIL (ZFS Intent Log) to reside on special vdevs when available. This makes it easier to concentrate synchronous writes on low-latency devices, improving the response time of applications that rely on sync-intensive operations, such as databases or messaging systems with strong persistence guarantees.
On the other hand, the behavior of the special_small_blocks property is expanded so that zvol writes can also land in special vdevs, not just certain regular file blocks. Furthermore, the restriction that the value must be a power of two is relaxed, so the administrator can choose finer sizes tailored to the actual workload instead of being limited to rigid options.
Combined, these improvements allow for the design of storage architectures where the most critical data (metadata, small blocks, the ZIL, deduplication tables, etc.) is stored on faster media, while the bulk of the data remains on less expensive disks. All of this comes with much greater flexibility in defining what counts as "small" and what does not.
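A minimal sketch of such a layout, assuming hypothetical device names (sda…sdd for capacity disks, nvme0n1/nvme1n1 for the fast mirror); whether a given non-power-of-two value is accepted should be verified on an actual 2.4 system:

```shell
# Capacity on RAIDZ2, metadata and small blocks on a mirrored NVMe special vdev
zpool create tank raidz2 sda sdb sdc sdd \
    special mirror nvme0n1 nvme1n1

# Send blocks up to 48K to the special vdev; with 2.4 the value
# no longer has to be a power of two
zfs set special_small_blocks=48K tank/data

# Zvols can now benefit from special_small_blocks as well,
# e.g. a VM disk with small volblocksize
zfs create -V 100G -o volblocksize=16K tank/vm-disk
```

The special vdev should be redundant (here a mirror), since losing it means losing the pool's metadata.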
zfs rewrite and zfs rewrite -P: efficiently relocate data
The 2.3 series already brought one of the most striking features of recent times: the zfs rewrite subcommand. OpenZFS 2.4 takes this tool a step further with the zfs rewrite -P variant, which adds new possibilities when relocating data within a pool.
The zfs rewrite command allows the content of a file or dataset to be "rewritten" without changing its logical meaning: the data is physically relocated to other areas with different internal properties. This makes it possible to apply changes such as the compression algorithm, checksum type, whether deduplication is applied, the number of copies, or even the preferred device, without needing to copy the data to user space and write it back manually.
This has several clear advantages: it reduces I/O traffic compared to the classic "copy and rename" method, minimizes the impact on the cache, and avoids long periods during which data is being moved by external tools. Furthermore, since there is no logical change to the content, neither the mtime nor other properties visible from the user's point of view are altered, which means many applications are not even aware of the operation.
The zfs rewrite -P option adds the possibility of preserving the logical birth time of blocks whenever possible, which helps minimize the size of incremental replication streams. By keeping this information stable, subsequent send/receive operations can better identify what has actually changed and what hasn't, reducing the amount of data that needs to be moved between systems.
Another important advantage is that the rewrite process is protected with normal range locks, so it can run in parallel with real workloads without unduly blocking the system. In datasets with sync=always the benefit is even greater, because since there is no logical modification of the data, no additional writes are forced into the ZIL, avoiding extra cost in synchronous operations.
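A typical use case sketched below is recompressing existing data in place, assuming a dataset mounted at /tank/data. The -r and -v flags follow zfs-rewrite(8) from the 2.3 series; -P is the 2.4 addition, and exact behavior should be checked against the man page on your system.

```shell
# Change the compression property; this affects new writes only...
zfs set compression=zstd tank/data

# ...then physically rewrite the existing files so they pick it up.
# -r recurses into directories, -v reports progress.
zfs rewrite -r -v /tank/data

# Same operation, but preserving the logical birth time of blocks
# where possible, keeping incremental send streams small (new in 2.4)
zfs rewrite -r -P /tank/data
```

Unlike a copy-and-rename loop, this never moves the data through user space and leaves mtimes untouched.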
New management options in OpenZFS 2.4: -a|--all, range scrub, and BRT prefetch
OpenZFS 2.4 also refines and expands the arsenal of management tools with several very useful options for day-to-day use. One of these is the addition of the -a|--all option in commands that perform maintenance tasks on pools, such as scrub, trim, or initialization.
This option makes it possible to launch an operation that affects all imported pools at once, instead of having to iterate through each one manually. This greatly simplifies things on servers that manage multiple pools, reducing human error and making automation easier.
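The difference is easy to see in a sketch; the pre-2.4 loop is the usual workaround, while the -a forms are assumed from the release notes:

```shell
# Before 2.4: iterate over every imported pool by hand
for p in $(zpool list -H -o name); do
    zpool scrub "$p"
done

# With 2.4: one command per maintenance task
zpool scrub -a          # scrub every imported pool
zpool trim --all        # same for TRIM
zpool initialize -a     # and for initialization
```

This also removes a subtle failure mode of the loop approach, where a pool imported after the list was generated would be silently skipped.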
In addition, it is now possible to launch a zpool scrub limited to specific time ranges through the -S and -E options. This functionality is highly valued when you want to review only a window of time in which problems are suspected, or when you want to spread the cost of a scrub over several partial executions so as not to impact overall performance too much.
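A sketch of the idea, with the argument format assumed here as date strings (check zpool-scrub(8) on a 2.4 system for the exact syntax): scrub only the data written during a suspect window, for example while a cable was known to be flaky.

```shell
# Scrub only blocks written between the given start (-S) and end (-E)
# points, instead of walking the entire pool
zpool scrub -S "2025-01-10 00:00" -E "2025-01-12 00:00" tank
```

Run over successive windows, this also lets a full scrub be split into several cheaper partial passes.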
Another relevant new feature is the addition of zpool prefetch -t brt to preload the Block Reference Table (the block cloning table) into memory. This allows better exploitation of the block cloning functionality introduced in previous versions, reducing latency when accessing the internal structures involved in this feature.
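This mirrors the existing prefetch support for the dedup table; a sketch against a hypothetical pool tank:

```shell
# Already available: warm up the deduplication table
zpool prefetch -t ddt tank

# New in 2.4: warm up the Block Reference Table used by block cloning,
# so the first clone-heavy operations don't pay the cold-read penalty
zpool prefetch -t brt tank
```

This is typically useful right after importing a pool, before clone-heavy workloads start.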
Permissions, renamed tools, and improvements to dedup and block cloning
Among the small but significant improvements that refine the experience, OpenZFS 2.4 adds a new permission, send:encrypted, designed to provide more granular control over who can send encrypted data. This works well for teams with a separation of responsibilities between those who manage snapshots, those who handle replication, and those who have access to encryption keys.
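As a sketch of that separation of duties, assuming a hypothetical user backup and dataset tank/secure, the permission is delegated with the standard zfs allow mechanism:

```shell
# Let the backup operator send encrypted (raw) streams, and nothing more
zfs allow backup send:encrypted tank/secure

# The operator can then replicate the data in raw encrypted form
# without ever holding the encryption key
zfs send --raw tank/secure@snap | ssh backuphost zfs receive vault/secure
```

Since the stream travels raw, the receiving side stores the data still encrypted and cannot read it either.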
Traditional utilities such as arc_summary and arcstat were also renamed, becoming zarcsummary and zarcstat. This change helps avoid name conflicts and makes it clearer that these are tools associated with ZFS, which is useful on systems with multiple components that expose similar commands.
Internally, the 2.4 series accumulates new optimizations and fixes for both deduplication and block cloning. Data structures are refined, edge cases are corrected, and better access patterns are sought so that the impact on memory and CPU becomes more manageable. These changes aren't directly visible to the user, but they result in more stable behavior and fewer surprises under complex workloads.
Gang blocks, ashift, slow child vdevs, and special topologies
OpenZFS 2.4 also incorporates a battery of improvements and fixes for gang blocks, an internal mechanism designed to handle blocks that cannot be allocated conventionally. Although most users don't interact with them directly, any failure in this part of the code can have serious consequences, so the numerous fixes and optimizations included are good news for the overall robustness of the system.
In parallel, the handling of ashift, the parameter that defines the minimum allocation unit aligned with the physical sector size of the device, has been improved. Better ashift management reduces the chance of writing more data than necessary to disks with large sectors and helps maintain acceptable performance levels throughout the pool's lifespan.
Another interesting new feature is that child vdevs behaving abnormally slowly can be temporarily "benched." Instead of dragging down the performance of the entire system, they can be sidelined for a while, which is very useful when disks are starting to fail, drives are experiencing intermittent problems, or the hardware in an environment is inconsistent.
Finally, topology constraints on special and deduplication vdevs have been relaxed, allowing greater flexibility when designing pools with advanced configurations. This enables better integration of fast devices for metadata, deduplication tables, the ZIL, and other sensitive elements without running into overly rigid limitations in the layout definition.
OpenZFS 2.3.4: Maintenance, initial zfs rewrite and consolidation
To fully understand the leap that 2.4 represents, it's worth taking a quick look at OpenZFS 2.3.4, a maintenance version that appeared shortly before and laid some of the foundations for what has later been consolidated in the new main branch.
Version 2.3.4 arrived two months after 2.3.3 with a very strong focus on robustness and compatibility. It extended Linux kernel support up to version 6.16, maintaining the minimum at 4.18, and confirmed compatibility with FreeBSD from version 13.3 onwards, including the upcoming 15.0. In other words, it was already preparing the groundwork for coexisting with modern base systems without sacrificing stability.
This specific revision saw the debut of the initial version of the zfs rewrite command, designed precisely to relocate data without changing its logical content and without resorting to more cumbersome strategies like copy/rename or send/receive with dataset renaming. The goal was to offer a tool capable of rebalancing a pool after adding vdevs, reducing fragmentation of randomly written files, or applying new storage properties to existing data.
Compared to traditional alternatives, zfs rewrite is faster because it avoids moving the data through user space. Furthermore, in datasets with sync=always it improves performance because, since the data isn't modified logically, no additional writes are triggered in the ZIL. All of this without touching the mtime or other metadata visible to applications, which minimizes the impact on the software running on top.
Version 2.3.4 also provided various FreeBSD-specific adjustments, packaging improvements, and a set of minor fixes that polished some corners of the code. It wasn't a version intended to introduce disruptive changes, but rather to fine-tune stability before jumping to the 2.4 branch with a larger package of new features.
OpenZFS 2.4 RC1, RC2, RC4: testing, feedback, and community discussion
Before the 2.4 series was declared stable, the project released several release candidates (RC1, RC2, RC4) with the goal of allowing advanced users and developers to test them and report problems. These release candidates already included virtually all the features we've discussed: default quotas, cacheless I/O as a fallback, unified allocation throttling, encryption improvements, ZIL in special vdevs, special_small_blocks extensions, new permissions, tool renaming, and much more.
The RC1 and RC2 notes emphasized the importance of the community testing the builds and sending feedback via GitHub, and even included commands to easily list changes relative to the reference branch (using git cherry to compare zfs-2.3-release with the various RCs). The message was clear: the goal was to test the code in real-world environments before labeling it "stable".
However, the appearance of a specific RC (for example, 2.4.0-RC4) in a version of FreeBSD marked as RELEASE, such as 15.0, raised some eyebrows. Some users wondered why an OpenZFS release candidate had been included in a version of the operating system considered stable, instead of resorting to a previous, already established branch. This choice generated some discontent among those who prefer that the file system on which their data rests be based strictly on final releases.
The doubts revolved around the durability of that decision: if someone installs FreeBSD 15.0 with OpenZFS 2.4.0-RC4 and then doesn't follow the -CURRENT branch, there's a concern about being "stuck" for several months with a release candidate until a minor revision or a new point in the series arrives. There was also concern that future releases like 15.1 would integrate another RC (for example, a hypothetical 2.4.1-RC3) instead of a final version.
Behind this debate lie different ways of understanding what "release candidate" means in a context as sensitive as a file system. For some people, an RC is practically a stable version, only needing minor tweaks. For others, however, it is code that shouldn't be used as the foundation of a system marked as RELEASE and should be reserved for those who closely follow development branches.
In any case, the RCs fulfilled their mission as a testing ground: they allowed bugs to be detected, details to be adjusted, and the stable 2.4 release to arrive with much more confidence. Those who prioritize safety above all else still have the option of remaining on previous branches like 2.3.x until they deem 2.4 sufficiently mature for production.
Everything that OpenZFS 2.4 brings is built on the robustness the project gained with the 2.3 series and its maintenance updates, combining kernel compatibility improvements, new tools such as zfs rewrite, adjustments to deduplication and block cloning, encryption optimizations, internal changes to gang blocks and ashift, and a range of new management options. While some controversy has arisen regarding the use of release candidates on certain operating systems, the stable 2.4 release offers a significant leap forward for those who want to get more out of ZFS on Linux and FreeBSD without sacrificing its established integrity and resilience guarantees.