commit: c73bfddb71a7ddae22704584f9c4a6af31642109
parent 92d21df94b7dd7e401b02d7da489aaac3621c2e7
Author: Haelwenn (lanodan) Monnier <contact@hacktivis.me>
Date: Wed, 30 Jul 2025 11:57:56 +0200
oasis 0028dcc601
Diffstat:
83 files changed, 23713 insertions(+), 0 deletions(-)
diff --git a/bin/zed b/bin/zed
Binary files differ.
diff --git a/bin/zfs b/bin/zfs
Binary files differ.
diff --git a/bin/zpool b/bin/zpool
Binary files differ.
diff --git a/bin/zstream b/bin/zstream
Binary files differ.
diff --git a/bin/zstreamdump b/bin/zstreamdump
@@ -0,0 +1 @@
+zstream
\ No newline at end of file
diff --git a/share/man/man4/zfs.4 b/share/man/man4/zfs.4
@@ -0,0 +1,2872 @@
+.\"
+.\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
+.\" Copyright (c) 2019, 2021 by Delphix. All rights reserved.
+.\" Copyright (c) 2019 Datto Inc.
+.\" Copyright (c) 2023, 2024 Klara, Inc.
+.\" The contents of this file are subject to the terms of the Common Development
+.\" and Distribution License (the "License"). You may not use this file except
+.\" in compliance with the License. You can obtain a copy of the license at
+.\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0.
+.\"
+.\" See the License for the specific language governing permissions and
+.\" limitations under the License. When distributing Covered Code, include this
+.\" CDDL HEADER in each file and include the License file at
+.\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
+.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
+.\" own identifying information:
+.\" Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" Copyright (c) 2024, Klara, Inc.
+.\"
+.Dd November 1, 2024
+.Dt ZFS 4
+.Os
+.
+.Sh NAME
+.Nm zfs
+.Nd tuning of the ZFS kernel module
+.
+.Sh DESCRIPTION
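+On Linux, many of these parameters can be inspected and changed at runtime
+through files under
+.Pa /sys/module/zfs/parameters ,
+or set persistently as module options.
+The following is an illustrative sketch, not a recommendation of values:
+.Bd -literal -offset indent
+# Read the current value of a tunable.
+cat /sys/module/zfs/parameters/zfs_arc_max
+# Change it for the running system.
+echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max
+# Apply it on every module load (file name is an example).
+echo "options zfs zfs_arc_max=17179869184" >> /etc/modprobe.d/zfs.conf
+.Ed
+.Pp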
+The ZFS module supports these parameters:
+.Bl -tag -width Ds
+.It Sy dbuf_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64
+Maximum size in bytes of the dbuf cache.
+The target size is the MIN of this value and
+.No 1/2^ Ns Sy dbuf_cache_shift Pq 1/32nd
+of the target ARC size.
+The behavior of the dbuf cache and its associated settings
+can be observed via the
+.Pa /proc/spl/kstat/zfs/dbufstats
+kstat.
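+.Pp
+For example, the current and target cache sizes can be checked with a
+command like the following (exact field names may vary between versions):
+.Bd -literal -offset indent
+grep -E 'cache_(size|target)_bytes' /proc/spl/kstat/zfs/dbufstats
+.Ed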
+.
+.It Sy dbuf_metadata_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64
+Maximum size in bytes of the metadata dbuf cache.
+The target size is the MIN of this value and
+.No 1/2^ Ns Sy dbuf_metadata_cache_shift Pq 1/64th
+of the target ARC size.
+The behavior of the metadata dbuf cache and its associated settings
+can be observed via the
+.Pa /proc/spl/kstat/zfs/dbufstats
+kstat.
+.
+.It Sy dbuf_cache_hiwater_pct Ns = Ns Sy 10 Ns % Pq uint
+The percentage over
+.Sy dbuf_cache_max_bytes
+when dbufs must be evicted directly.
+.
+.It Sy dbuf_cache_lowater_pct Ns = Ns Sy 10 Ns % Pq uint
+The percentage below
+.Sy dbuf_cache_max_bytes
+when the evict thread stops evicting dbufs.
+.
+.It Sy dbuf_cache_shift Ns = Ns Sy 5 Pq uint
+Set the size of the dbuf cache
+.Pq Sy dbuf_cache_max_bytes
+to a log2 fraction of the target ARC size.
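+For example, with a target ARC size of 16 GiB and the default shift of
+.Sy 5 ,
+the dbuf cache is targeted at
+.Em 512 MiB Pq 16 GiB / 2^5 .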
+.
+.It Sy dbuf_metadata_cache_shift Ns = Ns Sy 6 Pq uint
+Set the size of the dbuf metadata cache
+.Pq Sy dbuf_metadata_cache_max_bytes
+to a log2 fraction of the target ARC size.
+.
+.It Sy dbuf_mutex_cache_shift Ns = Ns Sy 0 Pq uint
+Set the size of the mutex array for the dbuf cache.
+When set to
+.Sy 0
+the array is dynamically sized based on total system memory.
+.
+.It Sy dmu_object_alloc_chunk_shift Ns = Ns Sy 7 Po 128 Pc Pq uint
+Number of dnode slots allocated in a single operation, as a power of 2.
+The default value minimizes lock contention for the bulk operation performed.
+.
+.It Sy dmu_ddt_copies Ns = Ns Sy 3 Pq uint
+Controls the number of copies stored for DeDup Table
+.Pq DDT
+objects.
+Reducing the number of copies to 1 from the previous default of 3
+can reduce the write inflation caused by deduplication.
+This assumes redundancy for this data is provided by the vdev layer.
+If the DDT is damaged, space may be leaked
+.Pq not freed
+when the DDT can not report the correct reference count.
+.
+.It Sy dmu_prefetch_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint
+Limit the amount prefetched with one call to this many bytes.
+This helps to limit the amount of memory that can be used by prefetching.
+.
+.It Sy ignore_hole_birth Pq int
+Alias for
+.Sy send_holes_without_birth_time .
+.
+.It Sy l2arc_feed_again Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Turbo L2ARC warm-up.
+When the L2ARC is cold the fill interval will be set as fast as possible.
+.
+.It Sy l2arc_feed_min_ms Ns = Ns Sy 200 Pq u64
+Min feed interval in milliseconds.
+Requires
+.Sy l2arc_feed_again Ns = Ns Ar 1
+and only applies while the L2ARC is warming up.
+.
+.It Sy l2arc_feed_secs Ns = Ns Sy 1 Pq u64
+Seconds between L2ARC writing.
+.
+.It Sy l2arc_headroom Ns = Ns Sy 8 Pq u64
+How far through the ARC lists to search for L2ARC cacheable content,
+expressed as a multiplier of
+.Sy l2arc_write_max .
+ARC persistence across reboots can be achieved with persistent L2ARC
+by setting this parameter to
+.Sy 0 ,
+allowing the full length of ARC lists to be searched for cacheable content.
+.
+.It Sy l2arc_headroom_boost Ns = Ns Sy 200 Ns % Pq u64
+Scales
+.Sy l2arc_headroom
+by this percentage when L2ARC contents are being successfully compressed
+before writing.
+A value of
+.Sy 100
+disables this feature.
+.
+.It Sy l2arc_exclude_special Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Controls whether buffers present on special vdevs are eligible for caching
+into L2ARC.
+If set to 1, exclude dbufs on special vdevs from being cached to L2ARC.
+.
+.It Sy l2arc_mfuonly Ns = Ns Sy 0 Ns | Ns 1 Ns | Ns 2 Pq int
+Controls whether only MFU metadata and data are cached from ARC into L2ARC.
+This may be desired to avoid wasting space on L2ARC when reading/writing large
+amounts of data that are not expected to be accessed more than once.
+.Pp
+The default is 0,
+meaning both MRU and MFU data and metadata are cached.
+When turning off this feature (setting it to 0), some MRU buffers will
+still be present in ARC and eventually cached on L2ARC.
+.No If Sy l2arc_noprefetch Ns = Ns Sy 0 ,
+some prefetched buffers will be cached to L2ARC, and those might later
+transition to MRU, in which case the
+.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
+.Pp
+Setting it to 1 means to L2 cache only MFU data and metadata.
+.Pp
+Setting it to 2 means to L2 cache all metadata (MRU+MFU) but
+only MFU data (i.e. MRU data are not cached).
+This can be the right setting to cache as much metadata as possible,
+even with high data turnover.
+.Pp
+Regardless of
+.Sy l2arc_noprefetch ,
+some MFU buffers might be evicted from ARC,
+accessed later on as prefetches and transition to MRU as prefetches.
+If accessed again they are counted as MRU and the
+.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
+.Pp
+The ARC status of L2ARC buffers when they were first cached in
+L2ARC can be seen in the
+.Sy l2arc_mru_asize , Sy l2arc_mfu_asize , No and Sy l2arc_prefetch_asize
+arcstats when importing the pool or onlining a cache
+device if persistent L2ARC is enabled.
+.Pp
+The
+.Sy evict_l2_eligible_mru
+arcstat does not take into account if this option is enabled as the information
+provided by the
+.Sy evict_l2_eligible_m[rf]u
+arcstats can be used to decide if toggling this option is appropriate
+for the current workload.
+.
+.It Sy l2arc_meta_percent Ns = Ns Sy 33 Ns % Pq uint
+Percent of ARC size allowed for L2ARC-only headers.
+Since L2ARC buffers are not evicted on memory pressure,
+too many headers on a system with an irrationally large L2ARC
+can render it slow or unusable.
+This parameter limits L2ARC writes and rebuilds to achieve the target.
+.
+.It Sy l2arc_trim_ahead Ns = Ns Sy 0 Ns % Pq u64
+Trims ahead of the current write size
+.Pq Sy l2arc_write_max
+on L2ARC devices by this percentage of write size if we have filled the device.
+If set to
+.Sy 100
+we TRIM twice the space required to accommodate upcoming writes.
+A minimum of
+.Sy 64 MiB
+will be trimmed.
+It also enables TRIM of the whole L2ARC device upon creation
+or addition to an existing pool or if the header of the device is
+invalid upon importing a pool or onlining a cache device.
+A value of
+.Sy 0
+disables TRIM on L2ARC altogether and is the default as it can put significant
+stress on the underlying storage devices.
+This will vary depending on how well the specific device handles these commands.
+.
+.It Sy l2arc_noprefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Do not write buffers to L2ARC if they were prefetched but not used by
+applications.
+In case there are prefetched buffers in L2ARC and this option
+is later set, we do not read the prefetched buffers from L2ARC.
+Unsetting this option is useful for caching sequential reads from the
+disks to L2ARC and serving those reads from L2ARC later on.
+This may be beneficial in case the L2ARC device is significantly faster
+in sequential reads than the disks of the pool.
+.Pp
+Use
+.Sy 1
+to disable and
+.Sy 0
+to enable caching/reading prefetches to/from L2ARC.
+.
+.It Sy l2arc_norw Ns = Ns Sy 0 Ns | Ns 1 Pq int
+No reads during writes.
+.
+.It Sy l2arc_write_boost Ns = Ns Sy 33554432 Ns B Po 32 MiB Pc Pq u64
+Cold L2ARC devices will have
+.Sy l2arc_write_max
+increased by this amount while they remain cold.
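+With the default
+.Sy l2arc_write_max , l2arc_write_boost , No and Sy l2arc_feed_secs ,
+a cold device may therefore be fed at up to
+.Em 64 MiB
+per second.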
+.
+.It Sy l2arc_write_max Ns = Ns Sy 33554432 Ns B Po 32 MiB Pc Pq u64
+Max write bytes per interval.
+.
+.It Sy l2arc_rebuild_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Rebuild the L2ARC when importing a pool (persistent L2ARC).
+This can be disabled if there are problems importing a pool
+or attaching an L2ARC device (e.g. the L2ARC device is slow
+in reading stored log metadata, or the metadata
+has become somehow fragmented/unusable).
+.
+.It Sy l2arc_rebuild_blocks_min_l2size Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
+Minimum size of an L2ARC device required in order to write log blocks in it.
+The log blocks are used upon importing the pool to rebuild the persistent L2ARC.
+.Pp
+For L2ARC devices less than 1 GiB, the amount of data
+.Fn l2arc_evict
+evicts is significant compared to the amount of restored L2ARC data.
+In this case, do not write log blocks in L2ARC in order not to waste space.
+.
+.It Sy metaslab_aliquot Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
+Metaslab granularity, in bytes.
+This is roughly similar to what would be referred to as the "stripe size"
+in traditional RAID arrays.
+In normal operation, ZFS will try to write this amount of data to each disk
+before moving on to the next top-level vdev.
+.
+.It Sy metaslab_bias_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Enable metaslab group biasing based on their vdevs' over- or under-utilization
+relative to the pool.
+.
+.It Sy metaslab_force_ganging Ns = Ns Sy 16777217 Ns B Po 16 MiB + 1 B Pc Pq u64
+Make some blocks above a certain size be gang blocks.
+This option is used by the test suite to facilitate testing.
+.
+.It Sy metaslab_force_ganging_pct Ns = Ns Sy 3 Ns % Pq uint
+For blocks that could be forced to be a gang block (due to
+.Sy metaslab_force_ganging ) ,
+force this many of them to be gang blocks.
+.
+.It Sy brt_zap_prefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Controls prefetching BRT records for blocks which are going to be cloned.
+.
+.It Sy brt_zap_default_bs Ns = Ns Sy 12 Po 4 KiB Pc Pq int
+Default BRT ZAP data block size as a power of 2.
+Note that changing this after creating a BRT on the pool will not affect
+existing BRTs, only newly created ones.
+.
+.It Sy brt_zap_default_ibs Ns = Ns Sy 12 Po 4 KiB Pc Pq int
+Default BRT ZAP indirect block size as a power of 2.
+Note that changing this after creating a BRT on the pool will not affect
+existing BRTs, only newly created ones.
+.
+.It Sy ddt_zap_default_bs Ns = Ns Sy 15 Po 32 KiB Pc Pq int
+Default DDT ZAP data block size as a power of 2.
+Note that changing this after creating a DDT on the pool will not affect
+existing DDTs, only newly created ones.
+.
+.It Sy ddt_zap_default_ibs Ns = Ns Sy 15 Po 32 KiB Pc Pq int
+Default DDT ZAP indirect block size as a power of 2.
+Note that changing this after creating a DDT on the pool will not affect
+existing DDTs, only newly created ones.
+.
+.It Sy zfs_default_bs Ns = Ns Sy 9 Po 512 B Pc Pq int
+Default dnode block size as a power of 2.
+.
+.It Sy zfs_default_ibs Ns = Ns Sy 17 Po 128 KiB Pc Pq int
+Default dnode indirect block size as a power of 2.
+.
+.It Sy zfs_dio_enabled Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Enable Direct I/O.
+If this setting is 0, then all I/O requests will be directed through the ARC
+acting as though the dataset property
+.Sy direct
+was set to
+.Sy disabled .
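+.Pp
+When enabled, Direct I/O behavior is selected per dataset via the
+.Sy direct
+property; an illustrative example:
+.Bd -literal -offset indent
+zfs set direct=standard pool/dataset
+.Ed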
+.
+.It Sy zfs_history_output_max Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
+When attempting to log an output nvlist of an ioctl in the on-disk history,
+the output will not be stored if it is larger than this size (in bytes).
+This must be less than
+.Sy DMU_MAX_ACCESS Pq 64 MiB .
+This applies primarily to
+.Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
+.
+.It Sy zfs_keep_log_spacemaps_at_export Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Prevent log spacemaps from being destroyed during pool exports and destroys.
+.
+.It Sy zfs_metaslab_segment_weight_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Enable/disable segment-based metaslab selection.
+.
+.It Sy zfs_metaslab_switch_threshold Ns = Ns Sy 2 Pq int
+When using segment-based metaslab selection, continue allocating
+from the active metaslab until this option's
+worth of buckets have been exhausted.
+.
+.It Sy metaslab_debug_load Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Load all metaslabs during pool import.
+.
+.It Sy metaslab_debug_unload Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Prevent metaslabs from being unloaded.
+.
+.It Sy metaslab_fragmentation_factor_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Enable use of the fragmentation metric in computing metaslab weights.
+.
+.It Sy metaslab_df_max_search Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
+Maximum distance to search forward from the last offset.
+Without this limit, fragmented pools can see
+.Em >100,000
+iterations and
+.Fn metaslab_block_picker
+becomes the performance limiting factor on high-performance storage.
+.Pp
+With the default setting of
+.Sy 16 MiB ,
+we typically see less than
+.Em 500
+iterations, even with very fragmented
+.Sy ashift Ns = Ns Sy 9
+pools.
+The maximum number of iterations possible is
+.Sy metaslab_df_max_search / 2^(ashift+1) .
+With the default setting of
+.Sy 16 MiB
+this is
+.Em 16*1024 Pq with Sy ashift Ns = Ns Sy 9
+or
+.Em 2*1024 Pq with Sy ashift Ns = Ns Sy 12 .
+.
+.It Sy metaslab_df_use_largest_segment Ns = Ns Sy 0 Ns | Ns 1 Pq int
+If not searching forward (due to
+.Sy metaslab_df_max_search , metaslab_df_free_pct ,
+.No or Sy metaslab_df_alloc_threshold ) ,
+this tunable controls which segment is used.
+If set, we will use the largest free segment.
+If unset, we will use a segment of at least the requested size.
+.
+.It Sy zfs_metaslab_max_size_cache_sec Ns = Ns Sy 3600 Ns s Po 1 hour Pc Pq u64
+When we unload a metaslab, we cache the size of the largest free chunk.
+We use that cached size to determine whether or not to load a metaslab
+for a given allocation.
+As more frees accumulate in that metaslab while it's unloaded,
+the cached max size becomes less and less accurate.
+After a number of seconds controlled by this tunable,
+we stop considering the cached max size and start
+considering only the histogram instead.
+.
+.It Sy zfs_metaslab_mem_limit Ns = Ns Sy 25 Ns % Pq uint
+When we are loading a new metaslab, we check the amount of memory being used
+to store metaslab range trees.
+If it is over a threshold, we attempt to unload the least recently used metaslab
+to prevent the system from clogging all of its memory with range trees.
+This tunable sets the percentage of total system memory that is the threshold.
+.
+.It Sy zfs_metaslab_try_hard_before_gang Ns = Ns Sy 0 Ns | Ns 1 Pq int
+.Bl -item -compact
+.It
+If unset, we will first try normal allocation.
+.It
+If that fails then we will do a gang allocation.
+.It
+If that fails then we will do a "try hard" gang allocation.
+.It
+If that fails then we will have a multi-layer gang block.
+.El
+.Pp
+.Bl -item -compact
+.It
+If set, we will first try normal allocation.
+.It
+If that fails then we will do a "try hard" allocation.
+.It
+If that fails we will do a gang allocation.
+.It
+If that fails we will do a "try hard" gang allocation.
+.It
+If that fails then we will have a multi-layer gang block.
+.El
+.
+.It Sy zfs_metaslab_find_max_tries Ns = Ns Sy 100 Pq uint
+When not trying hard, we only consider this number of the best metaslabs.
+This improves performance, especially when there are many metaslabs per vdev
+and the allocation can't actually be satisfied
+(so we would otherwise iterate all metaslabs).
+.
+.It Sy zfs_vdev_default_ms_count Ns = Ns Sy 200 Pq uint
+When a vdev is added, target this number of metaslabs per top-level vdev.
+.
+.It Sy zfs_vdev_default_ms_shift Ns = Ns Sy 29 Po 512 MiB Pc Pq uint
+Default lower limit for metaslab size.
+.
+.It Sy zfs_vdev_max_ms_shift Ns = Ns Sy 34 Po 16 GiB Pc Pq uint
+Default upper limit for metaslab size.
+.
+.It Sy zfs_vdev_max_auto_ashift Ns = Ns Sy 14 Pq uint
+Maximum ashift used when optimizing for logical \[->] physical sector size
+on new top-level vdevs.
+May be increased up to
+.Sy ASHIFT_MAX Po 16 Pc ,
+but this may negatively impact pool space efficiency.
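+For example,
+.Sy ashift Ns = Ns Sy 12
+corresponds to 4 KiB sectors, and the default cap of
+.Sy 14
+to 16 KiB sectors.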
+.
+.It Sy zfs_vdev_direct_write_verify Ns = Ns Sy Linux 1 | FreeBSD 0 Pq uint
+If non-zero, then a Direct I/O write's checksum will be verified every
+time the write is issued and before it is committed to the block pointer.
+In the event the checksum is not valid then the I/O operation will return EIO.
+This module parameter can be used to detect if the
+contents of the users buffer have changed in the process of doing a Direct I/O
+write.
+It can also help to identify if reported checksum errors are tied to Direct I/O
+writes.
+Each verify error causes a
+.Sy dio_verify_wr
+zevent.
+Direct I/O write checksum verify errors can be seen with
+.Nm zpool Cm status Fl d .
+The default value for this is 1 on Linux, but is 0 for
+.Fx
+because user pages can be placed under write protection in
+.Fx
+before the Direct I/O write is issued.
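+For example, accumulated Direct I/O verify errors for a pool can be
+inspected with (illustrative pool name):
+.Bd -literal -offset indent
+zpool status -d tank
+.Ed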
+.
+.It Sy zfs_vdev_min_auto_ashift Ns = Ns Sy ASHIFT_MIN Po 9 Pc Pq uint
+Minimum ashift used when creating new top-level vdevs.
+.
+.It Sy zfs_vdev_min_ms_count Ns = Ns Sy 16 Pq uint
+Minimum number of metaslabs to create in a top-level vdev.
+.
+.It Sy vdev_validate_skip Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Skip label validation steps during pool import.
+Changing is not recommended unless you know what you're doing
+and are recovering a damaged label.
+.
+.It Sy zfs_vdev_ms_count_limit Ns = Ns Sy 131072 Po 128k Pc Pq uint
+Practical upper limit of total metaslabs per top-level vdev.
+.
+.It Sy metaslab_preload_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Enable metaslab group preloading.
+.
+.It Sy metaslab_preload_limit Ns = Ns Sy 10 Pq uint
+Maximum number of metaslabs per group to preload.
+.
+.It Sy metaslab_preload_pct Ns = Ns Sy 50 Pq uint
+Percentage of CPUs to run a metaslab preload taskq.
+.
+.It Sy metaslab_lba_weighting_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Give more weight to metaslabs with lower LBAs,
+assuming they have greater bandwidth,
+as is typically the case on a modern constant angular velocity disk drive.
+.
+.It Sy metaslab_unload_delay Ns = Ns Sy 32 Pq uint
+After a metaslab is used, we keep it loaded for this many TXGs, to attempt to
+reduce unnecessary reloading.
+Note that both this many TXGs and
+.Sy metaslab_unload_delay_ms
+milliseconds must pass before unloading will occur.
+.
+.It Sy metaslab_unload_delay_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq uint
+After a metaslab is used, we keep it loaded for this many milliseconds,
+to attempt to reduce unnecessary reloading.
+Note that both this many milliseconds and
+.Sy metaslab_unload_delay
+TXGs must pass before unloading will occur.
+.
+.It Sy raidz_expand_max_copy_bytes Ns = Ns Sy 160MB Pq ulong
+Max amount of memory to use for RAID-Z expansion I/O.
+This limits how much I/O can be outstanding at once.
+.
+.It Sy raidz_expand_max_reflow_bytes Ns = Ns Sy 0 Pq ulong
+For testing, pause RAID-Z expansion when reflow amount reaches this value.
+.
+.It Sy raidz_io_aggregate_rows Ns = Ns Sy 4 Pq ulong
+For expanded RAID-Z, aggregate reads that have more rows than this.
+.
+.It Sy reference_history Ns = Ns Sy 3 Pq int
+Maximum reference holders being tracked when
+.Sy reference_tracking_enable
+is active.
+.
+.It Sy reference_tracking_enable Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Track reference holders to
+.Sy refcount_t
+objects (debug builds only).
+.
+.It Sy send_holes_without_birth_time Ns = Ns Sy 1 Ns | Ns 0 Pq int
+When set, the
+.Sy hole_birth
+optimization will not be used, and all holes will always be sent during a
+.Nm zfs Cm send .
+This is useful if you suspect your datasets are affected by a bug in
+.Sy hole_birth .
+.
+.It Sy spa_config_path Ns = Ns Pa /etc/zfs/zpool.cache Pq charp
+SPA config file.
+.
+.It Sy spa_asize_inflation Ns = Ns Sy 24 Pq uint
+Multiplication factor used to estimate actual disk consumption from the
+size of data being written.
+The default value is a worst case estimate,
+but lower values may be valid for a given pool depending on its configuration.
+Pool administrators who understand the factors involved
+may wish to specify a more realistic inflation factor,
+particularly if they operate close to quota or capacity limits.
+.
+.It Sy spa_load_print_vdev_tree Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Whether to print the vdev tree in the debugging message buffer during pool
+import.
+.
+.It Sy spa_load_verify_data Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Whether to traverse data blocks during an "extreme rewind"
+.Pq Fl X
+import.
+.Pp
+An extreme rewind import normally performs a full traversal of all
+blocks in the pool for verification.
+If this parameter is unset, the traversal skips non-metadata blocks.
+It can be toggled after the import has started,
+to stop or start the traversal of non-metadata blocks.
+.
+.It Sy spa_load_verify_metadata Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Whether to traverse blocks during an "extreme rewind"
+.Pq Fl X
+pool import.
+.Pp
+An extreme rewind import normally performs a full traversal of all
+blocks in the pool for verification.
+If this parameter is unset, the traversal is not performed.
+It can be toggled after the import has started, to stop or start the traversal.
+.
+.It Sy spa_load_verify_shift Ns = Ns Sy 4 Po 1/16th Pc Pq uint
+Sets the maximum number of bytes to consume during pool import to the log2
+fraction of the target ARC size.
+.
+.It Sy spa_slop_shift Ns = Ns Sy 5 Po 1/32nd Pc Pq int
+Normally, we don't allow the last
+.Sy 3.2% Pq Sy 1/2^spa_slop_shift
+of space in the pool to be consumed.
+This ensures that we don't run the pool completely out of space,
+due to unaccounted changes (e.g. to the MOS).
+It also limits the worst-case time to allocate space.
+If we have less than this amount of free space,
+most ZPL operations (e.g. write, create) will return
+.Sy ENOSPC .
+.
+.It Sy spa_num_allocators Ns = Ns Sy 4 Pq int
+Determines the number of block allocators to use per spa instance.
+Capped by the number of actual CPUs in the system via
+.Sy spa_cpus_per_allocator .
+.Pp
+Note that setting this value too high could result in performance
+degradation and/or excess fragmentation.
+The set value only applies to pools imported or created afterwards.
+.
+.It Sy spa_cpus_per_allocator Ns = Ns Sy 4 Pq int
+Determines the minimum number of CPUs in a system per block allocator
+in a spa instance.
+The set value only applies to pools imported or created afterwards.
+.
+.It Sy spa_upgrade_errlog_limit Ns = Ns Sy 0 Pq uint
+Limits the number of on-disk error log entries that will be converted to the
+new format when enabling the
+.Sy head_errlog
+feature.
+The default is to convert all log entries.
+.
+.It Sy vdev_removal_max_span Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
+During top-level vdev removal, chunks of data are copied from the vdev
+which may include free space in order to trade bandwidth for IOPS.
+This parameter determines the maximum span of free space, in bytes,
+which will be included as "unnecessary" data in a chunk of copied data.
+.Pp
+The default value here was chosen to align with
+.Sy zfs_vdev_read_gap_limit ,
+which is a similar concept when doing
+regular reads (but there's no reason it has to be the same).
+.
+.It Sy vdev_file_logical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
+Logical ashift for file-based devices.
+.
+.It Sy vdev_file_physical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
+Physical ashift for file-based devices.
+.
+.It Sy zap_iterate_prefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
+If set, when we start iterating over a ZAP object,
+prefetch the entire object (all leaf blocks).
+However, this is limited by
+.Sy dmu_prefetch_max .
+.
+.It Sy zap_micro_max_size Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq int
+Maximum micro ZAP size.
+A "micro" ZAP is upgraded to a "fat" ZAP once it grows beyond the specified
+size.
+Sizes higher than 128 KiB will be clamped to 128 KiB unless the
+.Sy large_microzap
+feature is enabled.
+.
+.It Sy zap_shrink_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
+If set, adjacent empty ZAP blocks will be collapsed, reducing disk space.
+.
+.It Sy zfetch_min_distance Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
+Min bytes to prefetch per stream.
+Prefetch distance starts from the demand access size and quickly grows to
+this value, doubling on each hit.
+After that it may grow further by 1/8 per hit, but only if some prefetches
+since the last time have not completed in time to satisfy the demand request,
+i.e. the prefetch depth did not cover the read latency or the pool got saturated.
+.
+.It Sy zfetch_max_distance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
+Max bytes to prefetch per stream.
+.
+.It Sy zfetch_max_idistance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
+Max bytes to prefetch indirects for per stream.
+.
+.It Sy zfetch_max_reorder Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
+Requests within this byte distance from the current prefetch stream position
+are considered parts of the stream, reordered due to parallel processing.
+Such requests do not advance the stream position immediately unless the
+.Sy zfetch_hole_shift
+fill threshold is reached, but are saved to fill holes in the stream later.
+.
+.It Sy zfetch_max_streams Ns = Ns Sy 8 Pq uint
+Max number of streams per zfetch (prefetch streams per file).
+.
+.It Sy zfetch_min_sec_reap Ns = Ns Sy 1 Pq uint
+Min time before an inactive prefetch stream can be reclaimed.
+.
+.It Sy zfetch_max_sec_reap Ns = Ns Sy 2 Pq uint
+Max time before an inactive prefetch stream can be deleted.
+.
+.It Sy zfs_abd_scatter_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Controls whether the ARC may use scatter/gather lists.
+When unset, all allocations are forced to be linear in kernel memory.
+Disabling can improve performance in some code paths
+at the expense of fragmented kernel memory.
+.
+.It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
+Maximum number of consecutive memory pages allocated in a single block for
+scatter/gather lists.
+.Pp
+The value of
+.Sy MAX_ORDER
+depends on kernel configuration.
+.
+.It Sy zfs_abd_scatter_min_size Ns = Ns Sy 1536 Ns B Po 1.5 KiB Pc Pq uint
+This is the minimum allocation size that will use scatter (page-based) ABDs.
+Smaller allocations will use linear ABDs.
+.
+.It Sy zfs_arc_dnode_limit Ns = Ns Sy 0 Ns B Pq u64
+When the number of bytes consumed by dnodes in the ARC exceeds this number of
+bytes, try to unpin some of it in response to demand for non-metadata.
+This value acts as a ceiling to the amount of dnode metadata, and defaults to
+.Sy 0 ,
+which indicates that a percentage based on
+.Sy zfs_arc_dnode_limit_percent
+of the ARC meta buffers may be used for dnodes.
+.
+.It Sy zfs_arc_dnode_limit_percent Ns = Ns Sy 10 Ns % Pq u64
+Percentage that can be consumed by dnodes of ARC meta buffers.
+.Pp
+See also
+.Sy zfs_arc_dnode_limit ,
+which serves a similar purpose but has a higher priority if nonzero.
+.
+.It Sy zfs_arc_dnode_reduce_percent Ns = Ns Sy 10 Ns % Pq u64
+Percentage of ARC dnodes to try to scan in response to demand for non-metadata
+when the number of bytes consumed by dnodes exceeds
+.Sy zfs_arc_dnode_limit .
+.
+.It Sy zfs_arc_average_blocksize Ns = Ns Sy 8192 Ns B Po 8 KiB Pc Pq uint
+The ARC's buffer hash table is sized based on the assumption of an average
+block size of this value.
+This works out to roughly 1 MiB of hash table per 1 GiB of physical memory
+with 8-byte pointers.
+For configurations with a known larger average block size,
+this value can be increased to reduce the memory footprint.
+.
+.It Sy zfs_arc_eviction_pct Ns = Ns Sy 200 Ns % Pq uint
+When
+.Fn arc_is_overflowing ,
+.Fn arc_get_data_impl
+waits for this percent of the requested amount of data to be evicted.
+For example, by default, for every
+.Em 2 KiB
+that's evicted,
+.Em 1 KiB
+of it may be "reused" by a new allocation.
+Since this is above
+.Sy 100 Ns % ,
+it ensures that progress is made towards getting
+.Sy arc_size No under Sy arc_c .
+Since this is finite, it ensures that allocations can still happen,
+even during the potentially long time that
+.Sy arc_size No is more than Sy arc_c .
+.
+.It Sy zfs_arc_evict_batch_limit Ns = Ns Sy 10 Pq uint
+Number of ARC headers to evict per sub-list before proceeding to another sub-list.
+This batch-style operation prevents entire sub-lists from being evicted at once
+but comes at a cost of additional unlocking and locking.
+.
+.It Sy zfs_arc_grow_retry Ns = Ns Sy 0 Ns s Pq uint
+If set to a non-zero value, it will replace the
+.Sy arc_grow_retry
+value with this value.
+The
+.Sy arc_grow_retry
+.No value Pq default Sy 5 Ns s
+is the number of seconds the ARC will wait before
+trying to resume growth after a memory pressure event.
+.
+.It Sy zfs_arc_lotsfree_percent Ns = Ns Sy 10 Ns % Pq int
+Throttle I/O when free system memory drops below this percentage of total
+system memory.
+Setting this value to
+.Sy 0
+will disable the throttle.
+.
+.It Sy zfs_arc_max Ns = Ns Sy 0 Ns B Pq u64
+Max size of ARC in bytes.
+If
+.Sy 0 ,
+then the max size of ARC is determined by the amount of system memory installed.
+The larger of
+.Sy all_system_memory No \- Sy 1 GiB
+and
+.Sy 5/8 No \(mu Sy all_system_memory
+will be used as the limit.
+This value must be at least
+.Sy 67108864 Ns B Pq 64 MiB .
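+For example, on a system with 8 GiB of memory the default limit is
+.Em 7 GiB ,
+the larger of
+.Em 7 GiB Pq 8 GiB \- 1 GiB
+and
+.Em 5 GiB Pq 5/8 \(mu 8 GiB .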
+.Pp
+This value can be changed dynamically, with some caveats.
+It cannot be set back to
+.Sy 0
+while running, and reducing it below the current ARC size will not cause
+the ARC to shrink without memory pressure to induce shrinking.
+.
+.It Sy zfs_arc_meta_balance Ns = Ns Sy 500 Pq uint
+Balance between metadata and data on ghost hits.
+Values above 100 increase metadata caching by proportionally reducing the
+effect of ghost data hits on the target data/metadata rate.
+.
+.It Sy zfs_arc_min Ns = Ns Sy 0 Ns B Pq u64
+Min size of ARC in bytes.
+.No If set to Sy 0 , arc_c_min
+will default to consuming the larger of
+.Sy 32 MiB
+and
+.Sy all_system_memory No / Sy 32 .
+.
+.It Sy zfs_arc_min_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 1s Pc Pq uint
+Minimum time prefetched blocks are locked in the ARC.
+.
+.It Sy zfs_arc_min_prescient_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 6s Pc Pq uint
+Minimum time "prescient prefetched" blocks are locked in the ARC.
+These blocks are meant to be prefetched fairly aggressively ahead of
+the code that may use them.
+.
+.It Sy zfs_arc_prune_task_threads Ns = Ns Sy 1 Pq int
+Number of arc_prune threads.
+.Fx
+does not need more than one.
+Linux may theoretically use one per mount point up to the number of CPUs,
+but that was not proven to be useful.
+.
+.It Sy zfs_max_missing_tvds Ns = Ns Sy 0 Pq int
+Number of missing top-level vdevs which will be allowed during
+pool import (only in read-only mode).
+.
+.It Sy zfs_max_nvlist_src_size Ns = Ns Sy 0 Pq u64
+Maximum size in bytes allowed to be passed as
+.Sy zc_nvlist_src_size
+for ioctls on
+.Pa /dev/zfs .
+This prevents a user from causing the kernel to allocate
+an excessive amount of memory.
+When the limit is exceeded, the ioctl fails with
+.Sy EINVAL
+and a description of the error is sent to the
+.Pa zfs-dbgmsg
+log.
+This parameter should not need to be touched under normal circumstances.
+If
+.Sy 0 ,
+equivalent to a quarter of the user-wired memory limit under
+.Fx
+and to
+.Sy 134217728 Ns B Pq 128 MiB
+under Linux.
+.
+.It Sy zfs_multilist_num_sublists Ns = Ns Sy 0 Pq uint
+To allow more fine-grained locking, each ARC state contains a series
+of lists for both data and metadata objects.
+Locking is performed at the level of these "sub-lists".
+This parameter controls the number of sub-lists per ARC state,
+and also applies to other uses of the multilist data structure.
+.Pp
+If
+.Sy 0 ,
+equivalent to the greater of the number of online CPUs and
+.Sy 4 .
+.
+.It Sy zfs_arc_overflow_shift Ns = Ns Sy 8 Pq int
+The ARC size is considered to be overflowing if it exceeds the current
+ARC target size
+.Pq Sy arc_c
+by thresholds determined by this parameter.
+Exceeding by
+.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No / Sy 2
+starts ARC reclamation process.
+If that appears insufficient, exceeding by
+.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No \(mu Sy 1.5
+blocks new buffer allocation until the reclaim thread catches up.
+Once started, the reclamation process continues until the ARC size returns
+below the target size.
+.Pp
+The default value of
+.Sy 8
+causes the ARC to start reclamation if it exceeds the target size by
+.Em 0.2%
+of the target size, and block allocations by
+.Em 0.6% .
+.
+.It Sy zfs_arc_shrink_shift Ns = Ns Sy 0 Pq uint
+If nonzero, this will update
+.Sy arc_shrink_shift Pq default Sy 7
+with the new value.
+.
+.It Sy zfs_arc_pc_percent Ns = Ns Sy 0 Ns % Po off Pc Pq uint
+Percent of pagecache to reclaim ARC to.
+.Pp
+This tunable allows the ZFS ARC to play more nicely
+with the kernel's LRU pagecache.
+It can guarantee that the ARC size won't collapse under scanning
+pressure on the pagecache, yet still allows the ARC to be reclaimed down to
+.Sy zfs_arc_min
+if necessary.
+This value is specified as percent of pagecache size (as measured by
+.Sy NR_FILE_PAGES ) ,
+where that percent may exceed
+.Sy 100 .
+This only operates during memory pressure/reclaim.
+.
+.It Sy zfs_arc_shrinker_limit Ns = Ns Sy 0 Pq int
+This is a limit on how many pages the ARC shrinker makes available for
+eviction in response to one page allocation attempt.
+Note that in practice, the kernel's shrinker can ask us to evict
+up to about four times this for one allocation attempt.
+To reduce OOM risk, this limit is applied for kswapd reclaims only.
+.Pp
+For example a value of
+.Sy 10000 Pq in practice, Em 160 MiB No per allocation attempt with 4 KiB pages
+limits the amount of time spent attempting to reclaim ARC memory to
+less than 100 ms per allocation attempt,
+even with a small average compressed block size of ~8 KiB.
+.Pp
+The parameter can be set to 0 (zero) to disable the limit,
+and only applies on Linux.
+.
+.It Sy zfs_arc_shrinker_seeks Ns = Ns Sy 2 Pq int
+Relative cost of ARC eviction on Linux, i.e. the number of seeks needed to
+restore an evicted page.
+Bigger values make the ARC more precious and evictions smaller, compared to
+other kernel subsystems.
+A value of 4 means parity with the page cache.
+.
+.It Sy zfs_arc_sys_free Ns = Ns Sy 0 Ns B Pq u64
+The target number of bytes the ARC should leave as free memory on the system.
+If zero, equivalent to the bigger of
+.Sy 512 KiB No and Sy all_system_memory/64 .
+.
+.It Sy zfs_autoimport_disable Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Disable pool import at module load by ignoring the cache file
+.Pq Sy spa_config_path .
+.
+.It Sy zfs_checksum_events_per_second Ns = Ns Sy 20 Ns /s Pq uint
+Rate limit checksum events to this many per second.
+Note that this should not be set below the ZED thresholds
+(currently 10 checksums over 10 seconds)
+or else the daemon may not trigger any action.
+.
+.It Sy zfs_commit_timeout_pct Ns = Ns Sy 10 Ns % Pq uint
+This controls the amount of time that a ZIL block (lwb) will remain "open"
+when it isn't "full", and it has a thread waiting for it to be committed to
+stable storage.
+The timeout is scaled based on a percentage of the last lwb
+latency to avoid significantly impacting the latency of each individual
+transaction record (itx).
+.
+.It Sy zfs_condense_indirect_commit_entry_delay_ms Ns = Ns Sy 0 Ns ms Pq int
+Vdev indirection layer (used for device removal) sleeps for this many
+milliseconds during mapping generation.
+Intended for use with the test suite to throttle vdev removal speed.
+.
+.It Sy zfs_condense_indirect_obsolete_pct Ns = Ns Sy 25 Ns % Pq uint
+Minimum percent of obsolete bytes in vdev mapping required to attempt to
+condense
+.Pq see Sy zfs_condense_indirect_vdevs_enable .
+Intended for use with the test suite
+to facilitate triggering condensing as needed.
+.
+.It Sy zfs_condense_indirect_vdevs_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Enable condensing indirect vdev mappings.
+When set, attempt to condense indirect vdev mappings
+if the mapping uses more than
+.Sy zfs_condense_min_mapping_bytes
+bytes of memory and if the obsolete space map object uses more than
+.Sy zfs_condense_max_obsolete_bytes
+bytes on-disk.
+The condensing process is an attempt to save memory by removing obsolete
+mappings.
+.
+.It Sy zfs_condense_max_obsolete_bytes Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
+Only attempt to condense indirect vdev mappings if the on-disk size
+of the obsolete space map object is greater than this number of bytes
+.Pq see Sy zfs_condense_indirect_vdevs_enable .
+.
+.It Sy zfs_condense_min_mapping_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq u64
+Minimum size vdev mapping to attempt to condense
+.Pq see Sy zfs_condense_indirect_vdevs_enable .
+.
+.It Sy zfs_dbgmsg_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Internally ZFS keeps a small log to facilitate debugging.
+The log is enabled by default, and can be disabled by unsetting this option.
+The contents of the log can be accessed by reading
+.Pa /proc/spl/kstat/zfs/dbgmsg .
+Writing
+.Sy 0
+to the file clears the log.
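+For example:
+.Bd -literal -offset indent
+cat /proc/spl/kstat/zfs/dbgmsg       # read the debug log
+echo 0 > /proc/spl/kstat/zfs/dbgmsg  # clear it
+.Ed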
+.Pp
+This setting does not influence debug prints due to
+.Sy zfs_flags .
+.
+.It Sy zfs_dbgmsg_maxsize Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
+Maximum size of the internal ZFS debug log.
+.
+.It Sy zfs_dbuf_state_index Ns = Ns Sy 0 Pq int
+Historically used for controlling what reporting was available under
+.Pa /proc/spl/kstat/zfs .
+No effect.
+.
+.It Sy zfs_deadman_checktime_ms Ns = Ns Sy 60000 Ns ms Po 1 min Pc Pq u64
+Check time in milliseconds.
+This defines the frequency at which we check for hung I/O requests
+and potentially invoke the
+.Sy zfs_deadman_failmode
+behavior.
+.
+.It Sy zfs_deadman_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
+When a pool sync operation takes longer than
+.Sy zfs_deadman_synctime_ms ,
+or when an individual I/O operation takes longer than
+.Sy zfs_deadman_ziotime_ms ,
+then the operation is considered to be "hung".
+If
+.Sy zfs_deadman_enabled
+is set, then the deadman behavior is invoked as described by
+.Sy zfs_deadman_failmode .
+By default, the deadman is enabled and set to
+.Sy wait
+which results in "hung" I/O operations only being logged.
+The deadman is automatically disabled when a pool gets suspended.
+.
+.It Sy zfs_deadman_events_per_second Ns = Ns Sy 1 Ns /s Pq int
+Rate limit deadman zevents (which report hung I/O operations) to this many per
+second.
+.
+.It Sy zfs_deadman_failmode Ns = Ns Sy wait Pq charp
+Controls the failure behavior when the deadman detects a "hung" I/O operation.
+Valid values are:
+.Bl -tag -compact -offset 4n -width "continue"
+.It Sy wait
+Wait for a "hung" operation to complete.
+For each "hung" operation a "deadman" event will be posted
+describing that operation.
+.It Sy continue
+Attempt to recover from a "hung" operation by re-dispatching it
+to the I/O pipeline if possible.
+.It Sy panic
+Panic the system.
+This can be used to facilitate automatic fail-over
+to a properly configured fail-over partner.
+.El
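+.Pp
+On Linux, for example, the failure mode can be changed at runtime with a
+command like:
+.Bd -literal -offset indent
+echo continue > /sys/module/zfs/parameters/zfs_deadman_failmode
+.Ed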
+.
+.It Sy zfs_deadman_synctime_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq u64
+Interval in milliseconds after which the deadman is triggered and also
+the interval after which a pool sync operation is considered to be "hung".
+Once this limit is exceeded the deadman will be invoked every
+.Sy zfs_deadman_checktime_ms
+milliseconds until the pool sync completes.
+.
+.It Sy zfs_deadman_ziotime_ms Ns = Ns Sy 300000 Ns ms Po 5 min Pc Pq u64
+Interval in milliseconds after which the deadman is triggered and an
+individual I/O operation is considered to be "hung".
+As long as the operation remains "hung",
+the deadman will be invoked every
+.Sy zfs_deadman_checktime_ms
+milliseconds until the operation completes.
+.
+.It Sy zfs_dedup_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Enable prefetching dedup-ed blocks which are going to be freed.
+.
+.It Sy zfs_dedup_log_flush_passes_max Ns = Ns Sy 8 Ns Pq uint
+Maximum number of dedup log flush passes (iterations) per transaction.
+.Pp
+At the start of each transaction, OpenZFS will estimate how many entries it
+needs to flush out to keep up with the change rate, taking the amount and time
+taken to flush on previous txgs into account (see
+.Sy zfs_dedup_log_flush_flow_rate_txgs ) .
+It will spread this amount into a number of passes.
+At each pass, it will use the amount already flushed and the total time taken
+by flushing and by other IO to recompute how much it should do for the remainder
+of the txg.
+.Pp
+Reducing the max number of passes will make flushing more aggressive, flushing
+out more entries on each pass.
+This can be faster, but also more likely to compete with other IO.
+Increasing the max number of passes will put fewer entries onto each pass,
+keeping the overhead of dedup changes to a minimum but possibly causing a large
+number of changes to be dumped on the last pass, which can blow out the txg
+sync time beyond
+.Sy zfs_txg_timeout .
+.
+.It Sy zfs_dedup_log_flush_min_time_ms Ns = Ns Sy 1000 Ns Pq uint
+Minimum time to spend on dedup log flush each transaction.
+.Pp
+At least this long will be spent flushing dedup log entries each transaction,
+up to
+.Sy zfs_txg_timeout .
+This occurs even if doing so would delay the transaction, that is, even if
+all other IO completes in less than this time.
+.
+.It Sy zfs_dedup_log_flush_entries_min Ns = Ns Sy 1000 Ns Pq uint
+Flush at least this many entries each transaction.
+.Pp
+OpenZFS will estimate how many entries it needs to flush each transaction to
+keep up with the ingest rate (see
+.Sy zfs_dedup_log_flush_flow_rate_txgs ) .
+This sets the minimum for that estimate.
+Raising it can force OpenZFS to flush more aggressively, keeping the log small
+and so reducing pool import times, but can make it less able to back off if
+log flushing would compete with other IO too much.
+.
+.It Sy zfs_dedup_log_flush_flow_rate_txgs Ns = Ns Sy 10 Ns Pq uint
+Number of transactions to use to compute the flow rate.
+.Pp
+OpenZFS will estimate how many entries it needs to flush each transaction by
+monitoring the number of entries changed (ingest rate), number of entries
+flushed (flush rate) and time spent flushing (flush time rate) and combining
+these into an overall "flow rate".
+It will use an exponential weighted moving average over some number of recent
+transactions to compute these rates.
+This sets the number of transactions to compute these averages over.
+Setting it higher can help to smooth out the flow rate in the face of spiky
+workloads, but will take longer for the flow rate to adjust to a sustained
+change in the ingest rate.
+.
+.It Sy zfs_dedup_log_txg_max Ns = Ns Sy 8 Ns Pq uint
+Max transactions before starting to flush dedup logs.
+.Pp
+OpenZFS maintains two dedup logs, one receiving new changes, one flushing.
+If there is nothing to flush, it will accumulate changes for no more than this
+many transactions before switching the logs and starting to flush entries out.
+.
+.It Sy zfs_dedup_log_mem_max Ns = Ns Sy 0 Ns Pq u64
+Max memory to use for dedup logs.
+.Pp
+OpenZFS will spend no more than this much memory on maintaining the in-memory
+dedup log.
+Flushing will begin when around half this amount is being spent on logs.
+The default value of
+.Sy 0
+will cause it to be set by
+.Sy zfs_dedup_log_mem_max_percent
+instead.
+.
+.It Sy zfs_dedup_log_mem_max_percent Ns = Ns Sy 1 Ns % Pq uint
+Max memory to use for dedup logs, as a percentage of total memory.
+.Pp
+If
+.Sy zfs_dedup_log_mem_max
+is not set, it will be initialized as a percentage of the total memory in the
+system.
+.
+.It Sy zfs_delay_min_dirty_percent Ns = Ns Sy 60 Ns % Pq uint
+Start to delay each transaction once there is this amount of dirty data,
+expressed as a percentage of
+.Sy zfs_dirty_data_max .
+This value should be at least
+.Sy zfs_vdev_async_write_active_max_dirty_percent .
+.No See Sx ZFS TRANSACTION DELAY .
+.
+.It Sy zfs_delay_scale Ns = Ns Sy 500000 Pq int
+This controls how quickly the transaction delay approaches infinity.
+Larger values cause longer delays for a given amount of dirty data.
+.Pp
+For the smoothest delay, this value should be about 1 billion divided
+by the maximum number of operations per second.
+This will smoothly handle between ten times and a tenth of this number.
+.No See Sx ZFS TRANSACTION DELAY .
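+.Pp
+For example, a pool that can sustain roughly 2000 operations per second
+would use
+.Sy 10^9/2000 No = Sy 500000 ,
+which is the default.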
+.Pp
+.Sy zfs_delay_scale No \(mu Sy zfs_dirty_data_max Em must No be smaller than Sy 2^64 .
+.
+.It Sy zfs_dio_write_verify_events_per_second Ns = Ns Sy 20 Ns /s Pq uint
+Rate limit Direct I/O write verify events to this many per second.
+.
+.It Sy zfs_disable_ivset_guid_check Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Disables requirement for IVset GUIDs to be present and match when doing a raw
+receive of encrypted datasets.
+Intended for users whose pools were created with
+OpenZFS pre-release versions and now have compatibility issues.
+.
+.It Sy zfs_key_max_salt_uses Ns = Ns Sy 400000000 Po 4*10^8 Pc Pq ulong
+Maximum number of uses of a single salt value before generating a new one for
+encrypted datasets.
+The default value is also the maximum.
+.
+.It Sy zfs_object_mutex_size Ns = Ns Sy 64 Pq uint
+Size of the znode hashtable used for holds.
+.Pp
+Due to the need to hold locks on objects that may not exist yet, kernel mutexes
+are not created per-object and instead a hashtable is used where collisions
+will result in objects waiting when there is not actually contention on the
+same object.
+.
+.It Sy zfs_slow_io_events_per_second Ns = Ns Sy 20 Ns /s Pq int
+Rate limit delay zevents (which report slow I/O operations) to this many per
+second.
+.
+.It Sy zfs_unflushed_max_mem_amt Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
+Upper-bound limit for unflushed metadata changes to be held by the
+log spacemap in memory, in bytes.
+.
+.It Sy zfs_unflushed_max_mem_ppm Ns = Ns Sy 1000 Ns ppm Po 0.1% Pc Pq u64
+Part of overall system memory that ZFS allows to be used
+for unflushed metadata changes by the log spacemap, in millionths.
+.
+.It Sy zfs_unflushed_log_block_max Ns = Ns Sy 131072 Po 128k Pc Pq u64
+Describes the maximum number of log spacemap blocks allowed for each pool.
+The default value means that the space in all the log spacemaps
+can add up to no more than
+.Sy 131072
+blocks (which means
+.Em 16 GiB
+of logical space before compression and ditto blocks,
+assuming that blocksize is
+.Em 128 KiB ) .
+.Pp
+This tunable is important because it involves a trade-off between import
+time after an unclean export and the frequency of flushing metaslabs.
+The higher this number is, the more log blocks we allow when the pool is
+active which means that we flush metaslabs less often and thus decrease
+the number of I/O operations for spacemap updates per TXG.
+At the same time though, that means that in the event of an unclean export,
+there will be more log spacemap blocks for us to read, inducing overhead
+in the import time of the pool.
+The lower the number, the more flushing occurs, destroying log blocks
+quicker as they become obsolete faster, which leaves fewer blocks
+to be read during import time after a crash.
+.Pp
+Each log spacemap block existing during pool import leads to approximately
+one extra logical I/O issued.
+This is the reason why this tunable is exposed in terms of blocks rather
+than space used.
+.
+.It Sy zfs_unflushed_log_block_min Ns = Ns Sy 1000 Pq u64
+If the number of metaslabs is small and our incoming rate is high,
+we could get into a situation that we are flushing all our metaslabs every TXG.
+Thus we always allow at least this many log blocks.
+.
+.It Sy zfs_unflushed_log_block_pct Ns = Ns Sy 400 Ns % Pq u64
+Tunable used to determine the number of blocks that can be used for
+the spacemap log, expressed as a percentage of the total number of
+unflushed metaslabs in the pool.
+.
+.It Sy zfs_unflushed_log_txg_max Ns = Ns Sy 1000 Pq u64
+Tunable limiting maximum time in TXGs any metaslab may remain unflushed.
+It effectively limits maximum number of unflushed per-TXG spacemap logs
+that need to be read after unclean pool export.
+.
+.It Sy zfs_unlink_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint
+When enabled, files will not be asynchronously removed from the list of pending
+unlinks and the space they consume will be leaked.
+Once this option has been disabled and the dataset is remounted,
+the pending unlinks will be processed and the freed space returned to the pool.
+This option is used by the test suite.
+.
+.It Sy zfs_delete_blocks Ns = Ns Sy 20480 Pq ulong
+This is used to define a large file for the purposes of deletion.
+Files containing more than
+.Sy zfs_delete_blocks
+blocks will be deleted asynchronously, while smaller files are deleted
+synchronously.
+Decreasing this value will reduce the time spent in an
+.Xr unlink 2
+system call, at the expense of a longer delay before the freed space is
+available.
+This only applies on Linux.
+.
+.It Sy zfs_dirty_data_max Ns = Pq int
+Determines the dirty space limit in bytes.
+Once this limit is exceeded, new writes are halted until space frees up.
+This parameter takes precedence over
+.Sy zfs_dirty_data_max_percent .
+.No See Sx ZFS TRANSACTION DELAY .
+.Pp
+Defaults to
+.Sy physical_ram/10 ,
+capped at
+.Sy zfs_dirty_data_max_max .
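+For example, on a machine with 64 GiB of physical RAM the
+.Sy physical_ram/10
+value of 6.4 GiB exceeds the default
+.Sy zfs_dirty_data_max_max
+of 4 GiB, so the limit is 4 GiB.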
+.
+.It Sy zfs_dirty_data_max_max Ns = Pq int
+Maximum allowable value of
+.Sy zfs_dirty_data_max ,
+expressed in bytes.
+This limit is only enforced at module load time, and will be ignored if
+.Sy zfs_dirty_data_max
+is later changed.
+This parameter takes precedence over
+.Sy zfs_dirty_data_max_max_percent .
+.No See Sx ZFS TRANSACTION DELAY .
+.Pp
+Defaults to
+.Sy min(physical_ram/4, 4GiB) ,
+or
+.Sy min(physical_ram/4, 1GiB)
+for 32-bit systems.
+.
+.It Sy zfs_dirty_data_max_max_percent Ns = Ns Sy 25 Ns % Pq uint
+Maximum allowable value of
+.Sy zfs_dirty_data_max ,
+expressed as a percentage of physical RAM.
+This limit is only enforced at module load time, and will be ignored if
+.Sy zfs_dirty_data_max
+is later changed.
+The parameter
+.Sy zfs_dirty_data_max_max
+takes precedence over this one.
+.No See Sx ZFS TRANSACTION DELAY .
+.
+.It Sy zfs_dirty_data_max_percent Ns = Ns Sy 10 Ns % Pq uint
+Determines the dirty space limit, expressed as a percentage of all memory.
+Once this limit is exceeded, new writes are halted until space frees up.
+The parameter
+.Sy zfs_dirty_data_max
+takes precedence over this one.
+.No See Sx ZFS TRANSACTION DELAY .
+.Pp
+Subject to
+.Sy zfs_dirty_data_max_max .
+.
+.It Sy zfs_dirty_data_sync_percent Ns = Ns Sy 20 Ns % Pq uint
+Start syncing out a transaction group if there's at least this much dirty data
+.Pq as a percentage of Sy zfs_dirty_data_max .
+This should be less than
+.Sy zfs_vdev_async_write_active_min_dirty_percent .
+.
+.It Sy zfs_wrlog_data_max Ns = Pq int
+The upper limit of write-transaction ZIL log data size in bytes.
+Write operations are throttled when approaching the limit until log data is
+cleared out after transaction group sync.
+Because of some overhead, it should be set at least 2 times the size of
+.Sy zfs_dirty_data_max
+.No to prevent harming normal write throughput .
+It also should be smaller than the size of the slog device if slog is present.
+.Pp
+Defaults to
+.Sy zfs_dirty_data_max*2 .
+.
+.It Sy zfs_fallocate_reserve_percent Ns = Ns Sy 110 Ns % Pq uint
+Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
+preallocated for a file in order to guarantee that later writes will not
+run out of space.
+Instead,
+.Xr fallocate 2
+space preallocation only checks that sufficient space is currently available
+in the pool or the user's project quota allocation,
+and then creates a sparse file of the requested size.
+The requested space is multiplied by
+.Sy zfs_fallocate_reserve_percent
+to allow additional space for indirect blocks and other internal metadata.
+Setting this to
+.Sy 0
+disables support for
+.Xr fallocate 2
+and causes it to return
+.Sy EOPNOTSUPP .
+.
+.It Sy zfs_fletcher_4_impl Ns = Ns Sy fastest Pq string
+Select a fletcher 4 implementation.
+.Pp
+Supported selectors are:
+.Sy fastest , scalar , sse2 , ssse3 , avx2 , avx512f , avx512bw ,
+.No and Sy aarch64_neon .
+All except
+.Sy fastest No and Sy scalar
+require instruction set extensions to be available,
+and will only appear if ZFS detects that they are present at runtime.
+If multiple implementations of fletcher 4 are available, the
+.Sy fastest
+will be chosen using a micro benchmark.
+Selecting
+.Sy scalar
+results in the original CPU-based calculation being used.
+Selecting any option other than
+.Sy fastest No or Sy scalar
+results in vector instructions
+from the respective CPU instruction set being used.
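+.Pp
+On Linux, the available implementations, their benchmark results, and the
+current selection can be inspected and changed at runtime; a sketch:
+.Bd -literal -offset indent
+cat /sys/module/zfs/parameters/zfs_fletcher_4_impl
+cat /proc/spl/kstat/zfs/fletcher_4_bench
+echo avx2 > /sys/module/zfs/parameters/zfs_fletcher_4_impl
+.Ed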
+.
+.It Sy zfs_bclone_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Enables access to the block cloning feature.
+If this setting is 0, then even if feature@block_cloning is enabled,
+using functions and system calls that attempt to clone blocks will act as
+though the feature is disabled.
+.
+.It Sy zfs_bclone_wait_dirty Ns = Ns Sy 0 Ns | Ns 1 Pq int
+When set to 1 the FICLONE and FICLONERANGE ioctls wait for dirty data to be
+written to disk.
+This allows the clone operation to reliably succeed when a file is
+modified and then immediately cloned.
+For small files this may be slower than making a copy of the file.
+Therefore, this setting defaults to 0 which causes a clone operation to
+immediately fail when encountering a dirty block.
+.
+.It Sy zfs_blake3_impl Ns = Ns Sy fastest Pq string
+Select a BLAKE3 implementation.
+.Pp
+Supported selectors are:
+.Sy cycle , fastest , generic , sse2 , sse41 , avx2 , avx512 .
+All except
+.Sy cycle , fastest No and Sy generic
+require instruction set extensions to be available,
+and will only appear if ZFS detects that they are present at runtime.
+If multiple implementations of BLAKE3 are available, the
+.Sy fastest
+will be chosen using a micro benchmark.
+You can see the benchmark results by reading this kstat file:
+.Pa /proc/spl/kstat/zfs/chksum_bench .
+.
+.It Sy zfs_free_bpobj_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Enable/disable the processing of the free_bpobj object.
+.
+.It Sy zfs_async_block_max_blocks Ns = Ns Sy UINT64_MAX Po unlimited Pc Pq u64
+Maximum number of blocks freed in a single TXG.
+.
+.It Sy zfs_max_async_dedup_frees Ns = Ns Sy 100000 Po 10^5 Pc Pq u64
+Maximum number of dedup blocks freed in a single TXG.
+.
+.It Sy zfs_vdev_async_read_max_active Ns = Ns Sy 3 Pq uint
+Maximum asynchronous read I/O operations active to each device.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_async_read_min_active Ns = Ns Sy 1 Pq uint
+Minimum asynchronous read I/O operations active to each device.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_async_write_active_max_dirty_percent Ns = Ns Sy 60 Ns % Pq uint
+When the pool has more than this much dirty data, use
+.Sy zfs_vdev_async_write_max_active
+to limit active async writes.
+If the dirty data is between the minimum and maximum,
+the active I/O limit is linearly interpolated.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_async_write_active_min_dirty_percent Ns = Ns Sy 30 Ns % Pq uint
+When the pool has less than this much dirty data, use
+.Sy zfs_vdev_async_write_min_active
+to limit active async writes.
+If the dirty data is between the minimum and maximum,
+the active I/O limit is linearly
+interpolated.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_async_write_max_active Ns = Ns Sy 10 Pq uint
+Maximum asynchronous write I/O operations active to each device.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_async_write_min_active Ns = Ns Sy 2 Pq uint
+Minimum asynchronous write I/O operations active to each device.
+.No See Sx ZFS I/O SCHEDULER .
+.Pp
+Lower values are associated with better latency on rotational media but poorer
+resilver performance.
+The default value of
+.Sy 2
+was chosen as a compromise.
+A value of
+.Sy 3
+has been shown to improve resilver performance further at a cost of
+further increasing latency.
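+.Pp
+For example, a running system could trade some latency for resilver
+throughput with (sysfs path assumed):
+.Bd -literal -compact
+# echo 3 > /sys/module/zfs/parameters/zfs_vdev_async_write_min_active
+.Ed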
+.
+.It Sy zfs_vdev_initializing_max_active Ns = Ns Sy 1 Pq uint
+Maximum initializing I/O operations active to each device.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_initializing_min_active Ns = Ns Sy 1 Pq uint
+Minimum initializing I/O operations active to each device.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_max_active Ns = Ns Sy 1000 Pq uint
+The maximum number of I/O operations active to each device.
+Ideally, this will be at least the sum of each queue's
+.Sy max_active .
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_open_timeout_ms Ns = Ns Sy 1000 Pq uint
+Timeout value to wait before determining a device is missing
+during import.
+This is helpful for transient missing paths due
+to links being briefly removed and recreated in response to
+udev events.
+.
+.It Sy zfs_vdev_rebuild_max_active Ns = Ns Sy 3 Pq uint
+Maximum sequential resilver I/O operations active to each device.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_rebuild_min_active Ns = Ns Sy 1 Pq uint
+Minimum sequential resilver I/O operations active to each device.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_removal_max_active Ns = Ns Sy 2 Pq uint
+Maximum removal I/O operations active to each device.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_removal_min_active Ns = Ns Sy 1 Pq uint
+Minimum removal I/O operations active to each device.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_scrub_max_active Ns = Ns Sy 2 Pq uint
+Maximum scrub I/O operations active to each device.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_scrub_min_active Ns = Ns Sy 1 Pq uint
+Minimum scrub I/O operations active to each device.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_sync_read_max_active Ns = Ns Sy 10 Pq uint
+Maximum synchronous read I/O operations active to each device.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_sync_read_min_active Ns = Ns Sy 10 Pq uint
+Minimum synchronous read I/O operations active to each device.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_sync_write_max_active Ns = Ns Sy 10 Pq uint
+Maximum synchronous write I/O operations active to each device.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_sync_write_min_active Ns = Ns Sy 10 Pq uint
+Minimum synchronous write I/O operations active to each device.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_trim_max_active Ns = Ns Sy 2 Pq uint
+Maximum trim/discard I/O operations active to each device.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_trim_min_active Ns = Ns Sy 1 Pq uint
+Minimum trim/discard I/O operations active to each device.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_nia_delay Ns = Ns Sy 5 Pq uint
+For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
+the number of concurrently-active I/O operations is limited to
+.Sy zfs_*_min_active ,
+unless the vdev is "idle".
+When there are no interactive I/O operations active (synchronous or otherwise),
+and
+.Sy zfs_vdev_nia_delay
+operations have completed since the last interactive operation,
+then the vdev is considered to be "idle",
+and the number of concurrently-active non-interactive operations is increased to
+.Sy zfs_*_max_active .
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_nia_credit Ns = Ns Sy 5 Pq uint
+Some HDDs tend to prioritize sequential I/O so strongly that concurrent
+random I/O latency reaches several seconds.
+On some HDDs this happens even if sequential I/O operations
+are submitted one at a time, and so setting
+.Sy zfs_*_max_active Ns = Ns Sy 1
+does not help.
+To prevent non-interactive I/O, like scrub,
+from monopolizing the device, no more than
+.Sy zfs_vdev_nia_credit
+operations can be sent
+while there are outstanding incomplete interactive operations.
+This enforced wait ensures the HDD services the interactive I/O
+within a reasonable amount of time.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_queue_depth_pct Ns = Ns Sy 1000 Ns % Pq uint
+Maximum number of queued allocations per top-level vdev expressed as
+a percentage of
+.Sy zfs_vdev_async_write_max_active ,
+which allows the system to detect devices that are more capable
+of handling allocations and to allocate more blocks to those devices.
+This allows for dynamic allocation distribution when devices are imbalanced,
+as fuller devices will tend to be slower than empty devices.
+.Pp
+Also see
+.Sy zio_dva_throttle_enabled .
+.
+.It Sy zfs_vdev_def_queue_depth Ns = Ns Sy 32 Pq uint
+Default queue depth for each vdev IO allocator.
+Higher values allow for better coalescing of sequential writes before sending
+them to the disk, but can increase transaction commit times.
+.
+.It Sy zfs_vdev_failfast_mask Ns = Ns Sy 1 Pq uint
+Defines if the driver should retire on a given error type.
+The following options may be bitwise-ored together:
+.TS
+box;
+lbz r l l .
+ Value Name Description
+_
+	1	Device	No driver retries on device errors.
+ 2 Transport No driver retries on transport errors.
+ 4 Driver No driver retries on driver errors.
+.TE
+.
+.It Sy zfs_vdev_disk_max_segs Ns = Ns Sy 0 Pq uint
+Maximum number of segments to add to a BIO (min 4).
+If this is higher than the maximum allowed by the device queue or the kernel
+itself, it will be clamped.
+Setting it to zero will cause the kernel's ideal size to be used.
+This parameter only applies on Linux.
+This parameter is ignored if
+.Sy zfs_vdev_disk_classic Ns = Ns Sy 1 .
+.
+.It Sy zfs_vdev_disk_classic Ns = Ns Sy 0 Ns | Ns 1 Pq uint
+If set to 1, OpenZFS will submit I/O to Linux using the method it used in 2.2
+and earlier.
+This "classic" method has known issues with highly fragmented IO requests and
+is slower on many workloads, but it has been in use for many years and is known
+to be very stable.
+If you set this parameter, please also open a bug report explaining why,
+including the workload involved and any error messages.
+.Pp
+This parameter and the classic submission method will be removed once we have
+total confidence in the new method.
+.Pp
+This parameter only applies on Linux, and can only be set at module load time.
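+.Pp
+Because it can only be set at module load time, a persistent setting would
+typically be placed in a modprobe configuration file, e.g.
+.Pa /etc/modprobe.d/zfs.conf :
+.Bd -literal -compact
+options zfs zfs_vdev_disk_classic=1
+.Ed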
+.
+.It Sy zfs_expire_snapshot Ns = Ns Sy 300 Ns s Pq int
+Time before expiring
+.Pa .zfs/snapshot .
+.
+.It Sy zfs_admin_snapshot Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Allow the creation, removal, or renaming of entries in the
+.Sy .zfs/snapshot
+directory to cause the creation, destruction, or renaming of snapshots.
+When enabled, this functionality works both locally and over NFS exports
+which have the
+.Em no_root_squash
+option set.
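+.Pp
+For example, once enabled, a snapshot of a hypothetical dataset mounted at
+.Pa /tank/fs
+can be created through the directory layer:
+.Bd -literal -compact
+# echo 1 > /sys/module/zfs/parameters/zfs_admin_snapshot
+# mkdir /tank/fs/.zfs/snapshot/mysnap
+.Ed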
+.
+.It Sy zfs_snapshot_no_setuid Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Whether to disable
+.Em setuid/setgid
+support for snapshot mounts triggered by access to the
+.Sy .zfs/snapshot
+directory by setting the
+.Em nosuid
+mount option.
+.
+.It Sy zfs_flags Ns = Ns Sy 0 Pq int
+Set additional debugging flags.
+The following flags may be bitwise-ored together:
+.TS
+box;
+lbz r l l .
+ Value Name Description
+_
+ 1 ZFS_DEBUG_DPRINTF Enable dprintf entries in the debug log.
+* 2 ZFS_DEBUG_DBUF_VERIFY Enable extra dbuf verifications.
+* 4 ZFS_DEBUG_DNODE_VERIFY Enable extra dnode verifications.
+ 8 ZFS_DEBUG_SNAPNAMES Enable snapshot name verification.
+* 16 ZFS_DEBUG_MODIFY Check for illegally modified ARC buffers.
+ 64 ZFS_DEBUG_ZIO_FREE Enable verification of block frees.
+ 128 ZFS_DEBUG_HISTOGRAM_VERIFY Enable extra spacemap histogram verifications.
+ 256 ZFS_DEBUG_METASLAB_VERIFY Verify space accounting on disk matches in-memory \fBrange_trees\fP.
+ 512 ZFS_DEBUG_SET_ERROR Enable \fBSET_ERROR\fP and dprintf entries in the debug log.
+ 1024 ZFS_DEBUG_INDIRECT_REMAP Verify split blocks created by device removal.
+ 2048 ZFS_DEBUG_TRIM Verify TRIM ranges are always within the allocatable range tree.
+ 4096 ZFS_DEBUG_LOG_SPACEMAP Verify that the log summary is consistent with the spacemap log
+ and enable \fBzfs_dbgmsgs\fP for metaslab loading and flushing.
+.TE
+.Sy \& * No Requires debug build .
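+.Pp
+The flags form a bitmask; for example, to enable dprintf entries, snapshot
+name verification, and SET_ERROR logging (sysfs path assumed):
+.Bd -literal -compact
+# echo $(( 1 | 8 | 512 )) > /sys/module/zfs/parameters/zfs_flags
+.Ed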
+.
+.It Sy zfs_btree_verify_intensity Ns = Ns Sy 0 Pq uint
+Enables btree verification.
+The following settings are cumulative:
+.TS
+box;
+lbz r l l .
+ Value Description
+_
+ 1 Verify height.
+ 2 Verify pointers from children to parent.
+ 3 Verify element counts.
+ 4 Verify element order. (expensive)
+* 5 Verify unused memory is poisoned. (expensive)
+.TE
+.Sy \& * No Requires debug build .
+.
+.It Sy zfs_free_leak_on_eio Ns = Ns Sy 0 Ns | Ns 1 Pq int
+If destroy encounters an
+.Sy EIO
+while reading metadata (e.g. indirect blocks),
+space referenced by the missing metadata cannot be freed.
+Normally this causes the background destroy to become "stalled",
+as it is unable to make forward progress.
+While in this stalled state, all remaining space to free
+from the error-encountering filesystem is "temporarily leaked".
+Set this flag to cause it to ignore the
+.Sy EIO ,
+permanently leak the space from indirect blocks that cannot be read,
+and continue to free everything else that it can.
+.Pp
+The default "stalling" behavior is useful if the storage partially
+fails (i.e. some but not all I/O operations fail), and then later recovers.
+In this case, we will be able to continue pool operations while it is
+partially failed, and when it recovers, we can continue to free the
+space, with no leaks.
+Note, however, that this case is actually fairly rare.
+.Pp
+Typically pools either
+.Bl -enum -compact -offset 4n -width "1."
+.It
+fail completely (but perhaps temporarily,
+e.g. due to a top-level vdev going offline), or
+.It
+have localized, permanent errors (e.g. disk returns the wrong data
+due to bit flip or firmware bug).
+.El
+In the former case, this setting does not matter because the
+pool will be suspended and the sync thread will not be able to make
+forward progress regardless.
+In the latter, because the error is permanent, the best we can do
+is leak the minimum amount of space,
+which is what setting this flag will do.
+It is therefore reasonable for this flag to normally be set,
+but we chose the more conservative approach of not setting it,
+so that there is no possibility of
+leaking space in the "partial temporary" failure case.
+.
+.It Sy zfs_free_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1s Pc Pq uint
+During a
+.Nm zfs Cm destroy
+operation using the
+.Sy async_destroy
+feature,
+a minimum of this much time will be spent working on freeing blocks per TXG.
+.
+.It Sy zfs_obsolete_min_time_ms Ns = Ns Sy 500 Ns ms Pq uint
+Similar to
+.Sy zfs_free_min_time_ms ,
+but for cleanup of old indirection records for removed vdevs.
+.
+.It Sy zfs_immediate_write_sz Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq s64
+Largest data block to write to the ZIL.
+Larger blocks will be treated as if the dataset being written to had the
+.Sy logbias Ns = Ns Sy throughput
+property set.
+.
+.It Sy zfs_initialize_value Ns = Ns Sy 16045690984833335022 Po 0xDEADBEEFDEADBEEE Pc Pq u64
+Pattern written to vdev free space by
+.Xr zpool-initialize 8 .
+.
+.It Sy zfs_initialize_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
+Size of writes used by
+.Xr zpool-initialize 8 .
+This option is used by the test suite.
+.
+.It Sy zfs_livelist_max_entries Ns = Ns Sy 500000 Po 5*10^5 Pc Pq u64
+The threshold size (in block pointers) at which we create a new sub-livelist.
+Larger sublists are more costly from a memory perspective but the fewer
+sublists there are, the lower the cost of insertion.
+.
+.It Sy zfs_livelist_min_percent_shared Ns = Ns Sy 75 Ns % Pq int
+If the amount of shared space between a snapshot and its clone drops below
+this threshold, the clone turns off the livelist and reverts to the old
+deletion method.
+This is in place because livelists no longer give us a benefit
+once a clone has been overwritten enough.
+.
+.It Sy zfs_livelist_condense_new_alloc Ns = Ns Sy 0 Pq int
+Incremented each time an extra ALLOC blkptr is added to a livelist entry while
+it is being condensed.
+This option is used by the test suite to track race conditions.
+.
+.It Sy zfs_livelist_condense_sync_cancel Ns = Ns Sy 0 Pq int
+Incremented each time livelist condensing is canceled while in
+.Fn spa_livelist_condense_sync .
+This option is used by the test suite to track race conditions.
+.
+.It Sy zfs_livelist_condense_sync_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int
+When set, the livelist condense process pauses indefinitely before
+executing the synctask \(em
+.Fn spa_livelist_condense_sync .
+This option is used by the test suite to trigger race conditions.
+.
+.It Sy zfs_livelist_condense_zthr_cancel Ns = Ns Sy 0 Pq int
+Incremented each time livelist condensing is canceled while in
+.Fn spa_livelist_condense_cb .
+This option is used by the test suite to track race conditions.
+.
+.It Sy zfs_livelist_condense_zthr_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int
+When set, the livelist condense process pauses indefinitely before
+executing the open context condensing work in
+.Fn spa_livelist_condense_cb .
+This option is used by the test suite to trigger race conditions.
+.
+.It Sy zfs_lua_max_instrlimit Ns = Ns Sy 100000000 Po 10^8 Pc Pq u64
+The maximum execution time limit that can be set for a ZFS channel program,
+specified as a number of Lua instructions.
+.
+.It Sy zfs_lua_max_memlimit Ns = Ns Sy 104857600 Po 100 MiB Pc Pq u64
+The maximum memory limit that can be set for a ZFS channel program, specified
+in bytes.
+.
+.It Sy zfs_max_dataset_nesting Ns = Ns Sy 50 Pq int
+The maximum depth of nested datasets.
+This value can be tuned temporarily to
+fix existing datasets that exceed the predefined limit.
+.
+.It Sy zfs_max_log_walking Ns = Ns Sy 5 Pq u64
+The number of past TXGs that the flushing algorithm of the log spacemap
+feature uses to estimate incoming log blocks.
+.
+.It Sy zfs_max_logsm_summary_length Ns = Ns Sy 10 Pq u64
+Maximum number of rows allowed in the summary of the spacemap log.
+.
+.It Sy zfs_max_recordsize Ns = Ns Sy 16777216 Po 16 MiB Pc Pq uint
+We currently support block sizes from
+.Em 512 Po 512 B Pc No to Em 16777216 Po 16 MiB Pc .
+The benefits of larger blocks, and thus larger I/O,
+need to be weighed against the cost of COWing a giant block to modify one byte.
+Additionally, very large blocks can have an impact on I/O latency,
+and also potentially on the memory allocator.
+Therefore, we formerly forbade creating blocks larger than 1 MiB.
+Larger blocks could be created by changing this tunable,
+and pools with larger blocks can always be imported and used,
+regardless of this setting.
+.Pp
+Note that it is still limited by default to
+.Ar 1 MiB
+on x86_32, because Linux's
+3/1 memory split doesn't leave much room for 16 MiB chunks.
+.
+.It Sy zfs_allow_redacted_dataset_mount Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Allow datasets received with redacted send/receive to be mounted.
+Normally disabled because these datasets may be missing key data.
+.
+.It Sy zfs_min_metaslabs_to_flush Ns = Ns Sy 1 Pq u64
+Minimum number of metaslabs to flush per dirty TXG.
+.
+.It Sy zfs_metaslab_fragmentation_threshold Ns = Ns Sy 70 Ns % Pq uint
+Allow metaslabs to keep their active state as long as their fragmentation
+percentage is no more than this value.
+An active metaslab that exceeds this threshold
+will no longer keep its active status, allowing better metaslabs to be selected.
+.
+.It Sy zfs_mg_fragmentation_threshold Ns = Ns Sy 95 Ns % Pq uint
+Metaslab groups are considered eligible for allocations if their
+fragmentation metric (measured as a percentage) is less than or equal to
+this value.
+If a metaslab group exceeds this threshold then it will be
+skipped unless all metaslab groups within the metaslab class have also
+crossed this threshold.
+.
+.It Sy zfs_mg_noalloc_threshold Ns = Ns Sy 0 Ns % Pq uint
+Defines a threshold at which metaslab groups should be eligible for allocations.
+The value is expressed as a percentage of free space
+beyond which a metaslab group is always eligible for allocations.
+If a metaslab group's free space is less than or equal to the
+threshold, the allocator will avoid allocating to that group
+unless all groups in the pool have reached the threshold.
+Once all groups have reached the threshold, all groups are allowed to accept
+allocations.
+The default value of
+.Sy 0
+disables the feature and causes all metaslab groups to be eligible for
+allocations.
+.Pp
+This parameter allows one to deal with pools having heavily imbalanced
+vdevs such as would be the case when a new vdev has been added.
+Setting the threshold to a non-zero percentage will stop allocations
+from being made to vdevs that aren't filled to the specified percentage
+and allow lesser filled vdevs to acquire more allocations than they
+otherwise would under the old
+.Sy zfs_mg_alloc_failures
+facility.
+.
+.It Sy zfs_ddt_data_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int
+If enabled, ZFS will place DDT data into the special allocation class.
+.
+.It Sy zfs_user_indirect_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int
+If enabled, ZFS will place user data indirect blocks
+into the special allocation class.
+.
+.It Sy zfs_multihost_history Ns = Ns Sy 0 Pq uint
+Historical statistics for this many latest multihost updates will be available
+in
+.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /multihost .
+.
+.It Sy zfs_multihost_interval Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq u64
+Used to control the frequency of multihost writes which are performed when the
+.Sy multihost
+pool property is on.
+This is one of the factors used to determine the
+length of the activity check during import.
+.Pp
+The multihost write period is
+.Sy zfs_multihost_interval No / Sy leaf-vdevs .
+On average a multihost write will be issued for each leaf vdev
+every
+.Sy zfs_multihost_interval
+milliseconds.
+In practice, the observed period can vary with the I/O load
+and this observed value is the delay which is stored in the uberblock.
+.
+.It Sy zfs_multihost_import_intervals Ns = Ns Sy 20 Pq uint
+Used to control the duration of the activity test on import.
+Smaller values of
+.Sy zfs_multihost_import_intervals
+will reduce the import time but increase
+the risk of failing to detect an active pool.
+The total activity check time is never allowed to drop below one second.
+.Pp
+On import the activity check waits a minimum amount of time determined by
+.Sy zfs_multihost_interval No \(mu Sy zfs_multihost_import_intervals ,
+or the same product computed on the host which last had the pool imported,
+whichever is greater.
+The activity check time may be further extended if the value of MMP
+delay found in the best uberblock indicates actual multihost updates happened
+at longer intervals than
+.Sy zfs_multihost_interval .
+A minimum of
+.Em 100 ms
+is enforced.
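+.Pp
+For example, with the default values the activity check on import waits
+at least 20 \(mu 1000 ms = 20 seconds, unless the host which last
+imported the pool used larger values.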
+.Pp
+.Sy 0 No is equivalent to Sy 1 .
+.
+.It Sy zfs_multihost_fail_intervals Ns = Ns Sy 10 Pq uint
+Controls the behavior of the pool when multihost write failures or delays are
+detected.
+.Pp
+When
+.Sy 0 ,
+multihost write failures or delays are ignored.
+The failures will still be reported to the ZED which, depending on
+its configuration, may take action such as suspending the pool or offlining a
+device.
+.Pp
+Otherwise, the pool will be suspended if
+.Sy zfs_multihost_fail_intervals No \(mu Sy zfs_multihost_interval
+milliseconds pass without a successful MMP write.
+This guarantees the activity test will see MMP writes if the pool is imported.
+.Sy 1 No is equivalent to Sy 2 ;
+this is necessary to prevent the pool from being suspended
+due to normal, small I/O latency variations.
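+.Pp
+For example, with the default values the pool will be suspended after
+10 \(mu 1000 ms = 10 seconds without a successful MMP write.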
+.
+.It Sy zfs_no_scrub_io Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Set to disable scrub I/O.
+This results in scrubs not actually scrubbing data and
+simply doing a metadata crawl of the pool instead.
+.
+.It Sy zfs_no_scrub_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Set to disable block prefetching for scrubs.
+.
+.It Sy zfs_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Disable cache flush operations on disks when writing.
+Setting this will cause pool corruption on power loss
+if a volatile out-of-order write cache is enabled.
+.
+.It Sy zfs_nopwrite_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Allow no-operation writes.
+The occurrence of nopwrites will further depend on other pool properties
+.Pq i.a. the checksumming and compression algorithms .
+.
+.It Sy zfs_dmu_offset_next_sync Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Enable forcing TXG sync to find holes.
+When enabled, this forces ZFS to sync data when
+.Sy SEEK_HOLE No or Sy SEEK_DATA
+flags are used, allowing holes in a file to be accurately reported.
+When disabled, holes will not be reported in recently dirtied files.
+.
+.It Sy zfs_pd_bytes_max Ns = Ns Sy 52428800 Ns B Po 50 MiB Pc Pq int
+The number of bytes which should be prefetched during a pool traversal, like
+.Nm zfs Cm send
+or other data crawling operations.
+.
+.It Sy zfs_traverse_indirect_prefetch_limit Ns = Ns Sy 32 Pq uint
+The number of blocks pointed to by an indirect (non-L0) block which should be
+prefetched during a pool traversal, like
+.Nm zfs Cm send
+or other data crawling operations.
+.
+.It Sy zfs_per_txg_dirty_frees_percent Ns = Ns Sy 30 Ns % Pq u64
+Control percentage of dirtied indirect blocks from frees allowed into one TXG.
+After this threshold is crossed, additional frees will wait until the next TXG.
+.Sy 0 No disables this throttle .
+.
+.It Sy zfs_prefetch_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Disable predictive prefetch.
+Note that it leaves "prescient" prefetch
+.Pq for, e.g., Nm zfs Cm send
+intact.
+Unlike predictive prefetch, prescient prefetch never issues I/O
+that ends up not being needed, so it can't hurt performance.
+.
+.It Sy zfs_qat_checksum_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Disable QAT hardware acceleration for SHA256 checksums.
+May be unset after the ZFS modules have been loaded to initialize the QAT
+hardware as long as support is compiled in and the QAT driver is present.
+.
+.It Sy zfs_qat_compress_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Disable QAT hardware acceleration for gzip compression.
+May be unset after the ZFS modules have been loaded to initialize the QAT
+hardware as long as support is compiled in and the QAT driver is present.
+.
+.It Sy zfs_qat_encrypt_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Disable QAT hardware acceleration for AES-GCM encryption.
+May be unset after the ZFS modules have been loaded to initialize the QAT
+hardware as long as support is compiled in and the QAT driver is present.
+.
+.It Sy zfs_vnops_read_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
+Bytes to read per chunk.
+.
+.It Sy zfs_read_history Ns = Ns Sy 0 Pq uint
+Historical statistics for this many latest reads will be available in
+.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /reads .
+.
+.It Sy zfs_read_history_hits Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Include cache hits in read history.
+.
+.It Sy zfs_rebuild_max_segment Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
+Maximum read segment size to issue when sequentially resilvering a
+top-level vdev.
+.
+.It Sy zfs_rebuild_scrub_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Automatically start a pool scrub when the last active sequential resilver
+completes in order to verify the checksums of all blocks which have been
+resilvered.
+This is enabled by default and strongly recommended.
+.
+.It Sy zfs_rebuild_vdev_limit Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq u64
+Maximum amount of I/O that can be concurrently issued for a sequential
+resilver per leaf device, given in bytes.
+.
+.It Sy zfs_reconstruct_indirect_combinations_max Ns = Ns Sy 4096 Pq int
+If an indirect split block contains more than this many possible unique
+combinations when being reconstructed, consider it too computationally
+expensive to check them all.
+Instead, try at most this many randomly selected
+combinations each time the block is accessed.
+This allows all segment copies to participate fairly
+in the reconstruction when all combinations
+cannot be checked and prevents repeated use of one bad copy.
+.
+.It Sy zfs_recover Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Set to attempt to recover from fatal errors.
+This should only be used as a last resort,
+as it typically results in leaked space, or worse.
+.
+.It Sy zfs_removal_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Ignore hard I/O errors during device removal.
+When set, if a device encounters a hard I/O error during the removal process,
+the removal will not be cancelled.
+This can result in a normally recoverable block becoming permanently damaged
+and is hence not recommended.
+This should only be used as a last resort when the
+pool cannot be returned to a healthy state prior to removing the device.
+.
+.It Sy zfs_removal_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint
+This is used by the test suite so that it can ensure that certain actions
+happen while in the middle of a removal.
+.
+.It Sy zfs_remove_max_segment Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
+The largest contiguous segment that we will attempt to allocate when removing
+a device.
+If there is a performance problem with attempting to allocate large blocks,
+consider decreasing this.
+The default value is also the maximum.
+.
+.It Sy zfs_resilver_disable_defer Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Ignore the
+.Sy resilver_defer
+feature, causing an operation that would start a resilver to
+immediately restart the one in progress.
+.
+.It Sy zfs_resilver_defer_percent Ns = Ns Sy 10 Ns % Pq uint
+If the ongoing resilver progress is below this threshold, a new resilver will
+restart from scratch instead of being deferred after the current one finishes,
+even if the
+.Sy resilver_defer
+feature is enabled.
+.
+.It Sy zfs_resilver_min_time_ms Ns = Ns Sy 3000 Ns ms Po 3 s Pc Pq uint
+Resilvers are processed by the sync thread.
+While resilvering, it will spend at least this much time
+working on a resilver between TXG flushes.
+.
+.It Sy zfs_scan_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int
+If set, remove the DTL (dirty time list) upon completion of a pool scan (scrub),
+even if there were unrepairable errors.
+Intended to be used during pool repair or recovery to
+stop resilvering when the pool is next imported.
+.
+.It Sy zfs_scrub_after_expand Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Automatically start a pool scrub after a RAIDZ expansion completes
+in order to verify the checksums of all blocks which have been
+copied during the expansion.
+This is enabled by default and strongly recommended.
+.
+.It Sy zfs_scrub_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq uint
+Scrubs are processed by the sync thread.
+While scrubbing, it will spend at least this much time
+working on a scrub between TXG flushes.
+.
+.It Sy zfs_scrub_error_blocks_per_txg Ns = Ns Sy 4096 Pq uint
+Error blocks to be scrubbed in one TXG.
+.
+.It Sy zfs_scan_checkpoint_intval Ns = Ns Sy 7200 Ns s Po 2 hours Pc Pq uint
+To preserve progress across reboots, the sequential scan algorithm periodically
+needs to stop metadata scanning and issue all the verification I/O to disk.
+The frequency of this flushing is determined by this tunable.
+.
+.It Sy zfs_scan_fill_weight Ns = Ns Sy 3 Pq uint
+This tunable affects how scrub and resilver I/O segments are ordered.
+A higher number indicates that we care more about how filled in a segment is,
+while a lower number indicates we care more about the size of the extent without
+considering the gaps within a segment.
+This value is only tunable upon module insertion.
+Changing the value afterwards will have no effect on scrub or resilver
+performance.
+.
+.It Sy zfs_scan_issue_strategy Ns = Ns Sy 0 Pq uint
+Determines the order that data will be verified while scrubbing or resilvering:
+.Bl -tag -compact -offset 4n -width "a"
+.It Sy 1
+Data will be verified as sequentially as possible, given the
+amount of memory reserved for scrubbing
+.Pq see Sy zfs_scan_mem_lim_fact .
+This may improve scrub performance if the pool's data is very fragmented.
+.It Sy 2
+The largest mostly-contiguous chunk of found data will be verified first.
+By deferring scrubbing of small segments, we may later find adjacent data
+to coalesce and increase the segment size.
+.It Sy 0
+.No Use strategy Sy 1 No during normal verification
+.No and strategy Sy 2 No while taking a checkpoint .
+.El
+.
+.It Sy zfs_scan_legacy Ns = Ns Sy 0 Ns | Ns 1 Pq int
+If unset, indicates that scrubs and resilvers will gather metadata in
+memory before issuing sequential I/O.
+Otherwise indicates that the legacy algorithm will be used,
+where I/O is initiated as soon as it is discovered.
+Unsetting will not affect scrubs or resilvers that are already in progress.
+.
+.It Sy zfs_scan_max_ext_gap Ns = Ns Sy 2097152 Ns B Po 2 MiB Pc Pq int
+Sets the largest gap in bytes between scrub/resilver I/O operations
+that will still be considered sequential for sorting purposes.
+Changing this value will not
+affect scrubs or resilvers that are already in progress.
+.
+.It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
+Maximum fraction of RAM used for I/O sorting by sequential scan algorithm.
+This tunable determines the hard limit for I/O sorting memory usage.
+When the hard limit is reached we stop scanning metadata and start issuing
+data verification I/O.
+This is done until we get below the soft limit.
+.
+.It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
+The fraction of the hard limit used to determine the soft limit for I/O sorting
+by the sequential scan algorithm.
+When we cross this limit from below, no action is taken.
+When we cross this limit from above, it is because we are issuing
+verification I/O.
+In this case (unless the metadata scan is done) we stop issuing verification I/O
+and start scanning metadata again until we get to the hard limit.
+.
+.It Sy zfs_scan_report_txgs Ns = Ns Sy 0 Ns | Ns 1 Pq uint
+When reporting resilver throughput and estimated completion time use the
+performance observed over roughly the last
+.Sy zfs_scan_report_txgs
+TXGs.
+When set to zero, performance is calculated over the time between checkpoints.
+.
+.It Sy zfs_scan_strict_mem_lim Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Enforce tight memory limits on pool scans when a sequential scan is in progress.
+When disabled, the memory limit may be exceeded by fast disks.
+.
+.It Sy zfs_scan_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Freezes a scrub/resilver in progress without actually pausing it.
+Intended for testing/debugging.
+.
+.It Sy zfs_scan_vdev_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
+Maximum amount of data that can be concurrently issued for scrubs and
+resilvers per leaf device, given in bytes.
+.
+.It Sy zfs_send_corrupt_data Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Allow sending of corrupt data (ignore read/checksum errors when sending).
+.
+.It Sy zfs_send_unmodified_spill_blocks Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Include unmodified spill blocks in the send stream.
+Under certain circumstances, previous versions of ZFS could incorrectly
+remove the spill block from an existing object.
+Including unmodified copies of the spill blocks creates a backwards-compatible
+stream which will recreate a spill block if it was incorrectly removed.
+.
+.It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
+The fill fraction of the
+.Nm zfs Cm send
+internal queues.
+The fill fraction controls the timing with which internal threads are woken up.
+.
+.It Sy zfs_send_no_prefetch_queue_length Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
+The maximum number of bytes allowed in
+.Nm zfs Cm send Ns 's
+internal queues.
+.
+.It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
+The fill fraction of the
+.Nm zfs Cm send
+prefetch queue.
+The fill fraction controls the timing with which internal threads are woken up.
+.
+.It Sy zfs_send_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
+The maximum number of bytes that will be prefetched by
+.Nm zfs Cm send .
+This value must be at least twice the maximum block size in use.
+.
+.It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
+The fill fraction of the
+.Nm zfs Cm receive
+queue.
+The fill fraction controls the timing with which internal threads are woken up.
+.
+.It Sy zfs_recv_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
+The maximum number of bytes allowed in the
+.Nm zfs Cm receive
+queue.
+This value must be at least twice the maximum block size in use.
+.
+.It Sy zfs_recv_write_batch_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
+The maximum amount of data, in bytes, that
+.Nm zfs Cm receive
+will write in one DMU transaction.
+This is the uncompressed size, even when receiving a compressed send stream.
+This setting will not reduce the write size below a single block.
+Capped at a maximum of
+.Sy 32 MiB .
+.
+.It Sy zfs_recv_best_effort_corrective Ns = Ns Sy 0 Pq int
+When this variable is set to non-zero, a corrective receive:
+.Bl -enum -compact -offset 4n -width "1."
+.It
+Does not enforce the restriction of source & destination snapshot GUIDs
+matching.
+.It
+If there is an error during healing, the healing receive is not
+terminated; instead, it moves on to the next record.
+.El
+.
+.It Sy zfs_override_estimate_recordsize Ns = Ns Sy 0 Ns | Ns 1 Pq uint
+Setting this variable overrides the default logic for estimating block
+sizes when doing a
+.Nm zfs Cm send .
+The default heuristic is that the average block size
+will be the current recordsize.
+Override this value if most data in your dataset is not of that size
+and you require accurate zfs send size estimates.
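+.Pp
+The current estimate can be checked with a dry-run send of a hypothetical
+snapshot, e.g.:
+.Bd -literal -compact
+# zfs send -nv tank/fs@snap
+.Ed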
+.
+.It Sy zfs_sync_pass_deferred_free Ns = Ns Sy 2 Pq uint
+Flushing of data to disk is done in passes.
+Defer frees starting in this pass.
+.
+.It Sy zfs_spa_discard_memory_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
+Maximum memory used for prefetching a checkpoint's space map on each
+vdev while discarding the checkpoint.
+.
+.It Sy zfs_special_class_metadata_reserve_pct Ns = Ns Sy 25 Ns % Pq uint
+Only allow small data blocks to be allocated on the special and dedup vdev
+types when the available free space percentage on these vdevs exceeds this
+value.
+This ensures reserved space is available for pool metadata as the
+special vdevs approach capacity.
+.
+.It Sy zfs_sync_pass_dont_compress Ns = Ns Sy 8 Pq uint
+Starting in this sync pass, disable compression (including of metadata).
+With the default setting, in practice, we don't have this many sync passes,
+so this has no effect.
+.Pp
+The original intent was that disabling compression would help the sync passes
+to converge.
+However, in practice, disabling compression increases
+the average number of sync passes; because when we turn compression off,
+many blocks' size will change, and thus we have to re-allocate
+(not overwrite) them.
+It also increases the number of
+.Em 128 KiB
+allocations (e.g. for indirect blocks and spacemaps)
+because these will not be compressed.
+The
+.Em 128 KiB
+allocations are especially detrimental to performance
+on highly fragmented systems, which may have very few free segments of this
+size,
+and may need to load new metaslabs to satisfy these allocations.
+.
+.It Sy zfs_sync_pass_rewrite Ns = Ns Sy 2 Pq uint
+Rewrite new block pointers starting in this pass.
+.
+.It Sy zfs_trim_extent_bytes_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint
+Maximum size of TRIM commands.
+Larger ranges will be split into chunks no larger than this value before
+issuing.
+.
+.It Sy zfs_trim_extent_bytes_min Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
+Minimum size of TRIM commands.
+TRIM ranges smaller than this will be skipped,
+unless they're part of a larger range which was chunked.
+This is done because it's common for these small TRIMs
+to negatively impact overall performance.
+.
+.It Sy zfs_trim_metaslab_skip Ns = Ns Sy 0 Ns | Ns 1 Pq uint
+Skip uninitialized metaslabs during the TRIM process.
+This option is useful for pools constructed from large thinly-provisioned
+devices where TRIM operations are slow.
+As a pool ages, an increasing fraction of the pool's metaslabs
+will be initialized, progressively degrading the usefulness of this option.
+This setting is stored when starting a manual TRIM and will
+persist for the duration of the requested TRIM.
+.
+.It Sy zfs_trim_queue_limit Ns = Ns Sy 10 Pq uint
+Maximum number of queued TRIMs outstanding per leaf vdev.
+The number of concurrent TRIM commands issued to the device is controlled by
+.Sy zfs_vdev_trim_min_active No and Sy zfs_vdev_trim_max_active .
+.
+.It Sy zfs_trim_txg_batch Ns = Ns Sy 32 Pq uint
+The number of transaction groups' worth of frees which should be aggregated
+before TRIM operations are issued to the device.
+This setting represents a trade-off between issuing larger,
+more efficient TRIM operations and the delay
+before the recently trimmed space is available for use by the device.
+.Pp
+Increasing this value will allow frees to be aggregated for a longer time.
+This will result in larger TRIM operations and potentially increased memory
+usage.
+Decreasing this value will have the opposite effect.
+The default of
+.Sy 32
+was determined to be a reasonable compromise.
+.
+.It Sy zfs_txg_history Ns = Ns Sy 100 Pq uint
+Historical statistics for this many latest TXGs will be available in
+.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /TXGs .
+.
+.It Sy zfs_txg_timeout Ns = Ns Sy 5 Ns s Pq uint
+Flush dirty data to disk at least once every this many seconds (maximum TXG
+duration).
+.
+.It Sy zfs_vdev_aggregation_limit Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
+Max vdev I/O aggregation size.
+.
+.It Sy zfs_vdev_aggregation_limit_non_rotating Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
+Max vdev I/O aggregation size for non-rotating media.
+.
+.It Sy zfs_vdev_mirror_rotating_inc Ns = Ns Sy 0 Pq int
+A number by which the balancing algorithm increments the load calculation,
+for the purpose of selecting the least busy mirror member, when an I/O
+operation immediately follows its predecessor on rotational vdevs.
+.
+.It Sy zfs_vdev_mirror_rotating_seek_inc Ns = Ns Sy 5 Pq int
+A number by which the balancing algorithm increments the load calculation for
+the purpose of selecting the least busy mirror member when an I/O operation
+lacks locality as defined by
+.Sy zfs_vdev_mirror_rotating_seek_offset .
+Operations within this offset that do not immediately follow the previous
+operation incur half of this increment.
+.
+.It Sy zfs_vdev_mirror_rotating_seek_offset Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq int
+The maximum distance from the last queued I/O operation within which
+the balancing algorithm considers an operation to have locality.
+.No See Sx ZFS I/O SCHEDULER .
+.
+.It Sy zfs_vdev_mirror_non_rotating_inc Ns = Ns Sy 0 Pq int
+A number by which the balancing algorithm increments the load calculation for
+the purpose of selecting the least busy mirror member on non-rotational vdevs
+when I/O operations do not immediately follow one another.
+.
+.It Sy zfs_vdev_mirror_non_rotating_seek_inc Ns = Ns Sy 1 Pq int
+A number by which the balancing algorithm increments the load calculation for
+the purpose of selecting the least busy mirror member when an I/O operation
+lacks locality as defined by
+.Sy zfs_vdev_mirror_rotating_seek_offset .
+Operations within this offset that do not immediately follow the previous
+operation incur half of this increment.
+.
+.It Sy zfs_vdev_read_gap_limit Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
+Aggregate read I/O operations if the on-disk gap between them is within this
+threshold.
+.
+.It Sy zfs_vdev_write_gap_limit Ns = Ns Sy 4096 Ns B Po 4 KiB Pc Pq uint
+Aggregate write I/O operations if the on-disk gap between them is within this
+threshold.
+.
+.It Sy zfs_vdev_raidz_impl Ns = Ns Sy fastest Pq string
+Select the raidz parity implementation to use.
+.Pp
+Variants that don't depend on CPU-specific features
+may be selected on module load, as they are supported on all systems.
+The remaining options may only be set after the module is loaded,
+as they are available only if the implementations are compiled in
+and supported on the running system.
+.Pp
+Once the module is loaded,
+.Pa /sys/module/zfs/parameters/zfs_vdev_raidz_impl
+will show the available options,
+with the currently selected one enclosed in square brackets.
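+.Pp
+For example, illustrative output of the form
+.Ql [fastest] original scalar sse2 ssse3 avx2
+would indicate that
+.Sy fastest
+is currently selected; the exact set of options shown depends on the
+running system.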
+.Pp
+.TS
+lb l l .
+fastest selected by built-in benchmark
+original original implementation
+scalar scalar implementation
+sse2 SSE2 instruction set 64-bit x86
+ssse3 SSSE3 instruction set 64-bit x86
+avx2 AVX2 instruction set 64-bit x86
+avx512f AVX512F instruction set 64-bit x86
+avx512bw AVX512F & AVX512BW instruction sets 64-bit x86
+aarch64_neon NEON Aarch64/64-bit ARMv8
+aarch64_neonx2 NEON with more unrolling Aarch64/64-bit ARMv8
+powerpc_altivec Altivec PowerPC
+.TE
+.
+.It Sy zfs_vdev_scheduler Pq charp
+.Sy DEPRECATED .
+Prints warning to kernel log for compatibility.
+.
+.It Sy zfs_zevent_len_max Ns = Ns Sy 512 Pq uint
+Max event queue length.
+Events in the queue can be viewed with
+.Xr zpool-events 8 .
+.
+.It Sy zfs_zevent_retain_max Ns = Ns Sy 2000 Pq int
+Maximum recent zevent records to retain for duplicate checking.
+Setting this to
+.Sy 0
+disables duplicate detection.
+.
+.It Sy zfs_zevent_retain_expire_secs Ns = Ns Sy 900 Ns s Po 15 min Pc Pq int
+Lifespan for a recent ereport that was retained for duplicate checking.
+.
+.It Sy zfs_zil_clean_taskq_maxalloc Ns = Ns Sy 1048576 Pq int
+The maximum number of taskq entries that are allowed to be cached.
+When this limit is exceeded transaction records (itxs)
+will be cleaned synchronously.
+.
+.It Sy zfs_zil_clean_taskq_minalloc Ns = Ns Sy 1024 Pq int
+The number of taskq entries that are pre-populated when the taskq is first
+created and are immediately available for use.
+.
+.It Sy zfs_zil_clean_taskq_nthr_pct Ns = Ns Sy 100 Ns % Pq int
+This controls the number of threads used by
+.Sy dp_zil_clean_taskq .
+The default value of
+.Sy 100%
+will create a maximum of one thread per CPU.
+.
+.It Sy zil_maxblocksize Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
+This sets the maximum block size used by the ZIL.
+On very fragmented pools, lowering this
+.Pq typically to Sy 36 KiB
+can improve performance.
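+.Pp
+For example, to apply the suggested
+.Em 36 KiB
+limit on a running system (sysfs path assumed):
+.Bd -literal -compact
+# echo $(( 36 * 1024 )) > /sys/module/zfs/parameters/zil_maxblocksize
+.Ed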
+.
+.It Sy zil_maxcopied Ns = Ns Sy 7680 Ns B Po 7.5 KiB Pc Pq uint
+This sets the maximum number of write bytes logged via WR_COPIED.
+It tunes a trade-off between an additional memory copy (and possibly worse
+log space efficiency) and additional range lock/unlock operations.
+.
+.It Sy zil_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Disable the cache flush commands that are normally sent to disk by
+the ZIL after an LWB write has completed.
+Setting this will cause ZIL corruption on power loss
+if a volatile out-of-order write cache is enabled.
+.
+.It Sy zil_replay_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Disable intent logging replay.
+Can be disabled for recovery from corrupted ZIL.
+.
+.It Sy zil_slog_bulk Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq u64
+Limit SLOG write size per commit executed with synchronous priority.
+Any writes above that will be executed with lower (asynchronous) priority
+to limit potential SLOG device abuse by a single active ZIL writer.
+.
+.It Sy zfs_zil_saxattr Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Setting this tunable to zero disables ZIL logging of new
+.Sy xattr Ns = Ns Sy sa
+records if the
+.Sy org.openzfs:zilsaxattr
+feature is enabled on the pool.
+This would only be necessary to work around bugs in the ZIL logging or replay
+code for this record type.
+The tunable has no effect if the feature is disabled.
+.
+.It Sy zfs_embedded_slog_min_ms Ns = Ns Sy 64 Pq uint
+Usually, one metaslab from each normal-class vdev is dedicated for use by
+the ZIL to log synchronous writes.
+However, if there are fewer than
+.Sy zfs_embedded_slog_min_ms
+metaslabs in the vdev, this functionality is disabled.
+This ensures that we don't set aside an unreasonable amount of space for the
+ZIL.
+.
+.It Sy zstd_earlyabort_pass Ns = Ns Sy 1 Pq uint
+Whether the heuristic for detecting incompressible data with zstd levels >= 3,
+using LZ4 and zstd-1 passes, is enabled.
+.
+.It Sy zstd_abort_size Ns = Ns Sy 131072 Pq uint
+Minimum uncompressed size (inclusive) of a record before the early abort
+heuristic will be attempted.
+.
+.It Sy zio_deadman_log_all Ns = Ns Sy 0 Ns | Ns 1 Pq int
+If non-zero, the zio deadman will produce debugging messages
+.Pq see Sy zfs_dbgmsg_enable
+for all zios, rather than only for leaf zios possessing a vdev.
+This is meant to be used by developers to gain
+diagnostic information for hang conditions which don't involve a mutex
+or other locking primitive: typically conditions in which a thread in
+the zio pipeline is looping indefinitely.
+.
+.It Sy zio_slow_io_ms Ns = Ns Sy 30000 Ns ms Po 30 s Pc Pq int
+When an I/O operation takes more than this much time to complete,
+it's marked as slow.
+Each slow operation causes a delay zevent.
+Slow I/O counters can be seen with
+.Nm zpool Cm status Fl s .
+.
+.It Sy zio_dva_throttle_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
+Throttle block allocations in the I/O pipeline.
+This allows for dynamic allocation distribution when devices are imbalanced.
+When enabled, the maximum number of pending allocations per top-level vdev
+is limited by
+.Sy zfs_vdev_queue_depth_pct .
+.
+.It Sy zfs_xattr_compat Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Control the naming scheme used when setting new xattrs in the user namespace.
+If
+.Sy 0
+.Pq the default on Linux ,
+user namespace xattr names are prefixed with the namespace, to be backwards
+compatible with previous versions of ZFS on Linux.
+If
+.Sy 1
+.Pq the default on Fx ,
+user namespace xattr names are not prefixed, to be backwards compatible with
+previous versions of ZFS on illumos and
+.Fx .
+.Pp
+Either naming scheme can be read on this and future versions of ZFS, regardless
+of this tunable, but legacy ZFS on illumos or
+.Fx
+is unable to read user namespace xattrs written in the Linux format, and
+legacy versions of ZFS on Linux are unable to read user namespace xattrs written
+in the legacy ZFS format.
+.Pp
+An existing xattr with the alternate naming scheme is removed when overwriting
+the xattr so as to not accumulate duplicates.
+.
+.It Sy zio_requeue_io_start_cut_in_line Ns = Ns Sy 0 Ns | Ns 1 Pq int
+Prioritize requeued I/O.
+.
+.It Sy zio_taskq_batch_pct Ns = Ns Sy 80 Ns % Pq uint
+Percentage of online CPUs which will run a worker thread for I/O.
+These workers are responsible for I/O work such as compression, encryption,
+checksum and parity calculations.
+A fractional number of CPUs will be rounded down.
+.Pp
+The default value of
+.Sy 80%
+was chosen to avoid using all CPUs, which can result in
+latency issues and inconsistent application performance,
+especially when slower compression and/or checksumming is enabled.
+Changes only apply to pools imported or created afterwards.
+.
+.It Sy zio_taskq_batch_tpq Ns = Ns Sy 0 Pq uint
+Number of worker threads per taskq.
+Higher values improve I/O ordering and CPU utilization,
+while lower values reduce lock contention.
+Changes only apply to pools imported or created afterwards.
+.Pp
+If
+.Sy 0 ,
+generate a system-dependent value close to 6 threads per taskq.
+.
+.It Sy zio_taskq_write_tpq Ns = Ns Sy 16 Pq uint
+Determines the minimum number of threads per write issue taskq.
+Higher values improve CPU utilization at high throughput,
+while lower values reduce taskq lock contention at high IOPS.
+Changes only apply to pools imported or created afterwards.
+.
+.It Sy zio_taskq_read Ns = Ns Sy fixed,1,8 null scale null Pq charp
+Set the queue and thread configuration for the I/O read queues.
+This is an advanced debugging parameter.
+Don't change this unless you understand what it does.
+Changes only apply to pools imported or created afterwards.
+.
+.It Sy zio_taskq_write Ns = Ns Sy sync null scale null Pq charp
+Set the queue and thread configuration for the I/O write queues.
+This is an advanced debugging parameter.
+Don't change this unless you understand what it does.
+Changes only apply to pools imported or created afterwards.
+.
+.It Sy zvol_inhibit_dev Ns = Ns Sy 0 Ns | Ns 1 Pq uint
+Do not create zvol device nodes.
+This may slightly improve startup time on
+systems with a very large number of zvols.
+.
+.It Sy zvol_major Ns = Ns Sy 230 Pq uint
+Major number for zvol block devices.
+.
+.It Sy zvol_max_discard_blocks Ns = Ns Sy 16384 Pq long
+Discard (TRIM) operations on zvols will be done in batches of this
+many blocks, where block size is determined by the
+.Sy volblocksize
+property of a zvol.
+.
+.It Sy zvol_prefetch_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
+When adding a zvol to the system, prefetch this many bytes
+from the start and end of the volume.
+Prefetching these regions of the volume is desirable,
+because they are likely to be accessed immediately by
+.Xr blkid 8
+or the kernel partitioner.
+.
+.It Sy zvol_request_sync Ns = Ns Sy 0 Ns | Ns 1 Pq uint
+When processing I/O requests for a zvol, submit them synchronously.
+This effectively limits the queue depth to
+.Em 1
+for each I/O submitter.
+When unset, requests are handled asynchronously by a thread pool.
+The number of requests which can be handled concurrently is controlled by
+.Sy zvol_threads .
+.Sy zvol_request_sync
+is ignored when running on a kernel that supports block multiqueue
+.Pq Li blk-mq .
+.
+.It Sy zvol_num_taskqs Ns = Ns Sy 0 Pq uint
+Number of zvol taskqs.
+If
+.Sy 0
+(the default) then scaling is done internally to prefer 6 threads per taskq.
+This only applies on Linux.
+.
+.It Sy zvol_threads Ns = Ns Sy 0 Pq uint
+The number of system-wide threads to use for processing zvol block I/Os.
+If
+.Sy 0
+(the default) then internally set
+.Sy zvol_threads
+to the number of CPUs present or 32 (whichever is greater).
+.
+.It Sy zvol_blk_mq_threads Ns = Ns Sy 0 Pq uint
+The number of threads per zvol to use for queuing I/O requests.
+This parameter will only appear if your kernel supports
+.Li blk-mq
+and is only read and assigned to a zvol at zvol load time.
+If
+.Sy 0
+(the default) then internally set
+.Sy zvol_blk_mq_threads
+to the number of CPUs present.
+.
+.It Sy zvol_use_blk_mq Ns = Ns Sy 0 Ns | Ns 1 Pq uint
+Set to
+.Sy 1
+to use the
+.Li blk-mq
+API for zvols.
+Set to
+.Sy 0
+(the default) to use the legacy zvol APIs.
+This setting can give better or worse zvol performance depending on
+the workload.
+This parameter will only appear if your kernel supports
+.Li blk-mq
+and is only read and assigned to a zvol at zvol load time.
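+.Pp
+Since the value is only read at zvol load time, it would typically be set
+from a modprobe configuration file before the module is loaded, e.g.:
+.Bd -literal -compact
+options zfs zvol_use_blk_mq=1
+.Ed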
+.
+.It Sy zvol_blk_mq_blocks_per_thread Ns = Ns Sy 8 Pq uint
+If
+.Sy zvol_use_blk_mq
+is enabled, then process this number of
+.Sy volblocksize Ns -sized blocks per zvol thread.
+This tunable can be used to favor better performance for zvol reads (lower
+values) or writes (higher values).
+If set to
+.Sy 0 ,
+then the zvol layer will process the maximum number of blocks
+per thread that it can.
+This parameter will only appear if your kernel supports
+.Li blk-mq
+and is only applied at each zvol's load time.
+.
+.It Sy zvol_blk_mq_queue_depth Ns = Ns Sy 0 Pq uint
+The queue_depth value for the zvol
+.Li blk-mq
+interface.
+This parameter will only appear if your kernel supports
+.Li blk-mq
+and is only applied at each zvol's load time.
+If
+.Sy 0
+(the default) then use the kernel's default queue depth.
+Values are clamped to the kernel's
+.Dv BLKDEV_MIN_RQ
+and
+.Dv BLKDEV_MAX_RQ Ns / Ns Dv BLKDEV_DEFAULT_RQ
+limits.
+.
+.It Sy zvol_volmode Ns = Ns Sy 1 Pq uint
+Defines the behaviour of zvol block devices when
+.Sy volmode Ns = Ns Sy default :
+.Bl -tag -compact -offset 4n -width "a"
+.It Sy 1
+.No equivalent to Sy full
+.It Sy 2
+.No equivalent to Sy dev
+.It Sy 3
+.No equivalent to Sy none
+.El
+.
+.It Sy zvol_enforce_quotas Ns = Ns Sy 0 Ns | Ns 1 Pq uint
+Enable strict ZVOL quota enforcement.
+Strict quota enforcement may have a performance impact.
+.El
+.
+.Sh ZFS I/O SCHEDULER
+ZFS issues I/O operations to leaf vdevs to satisfy and complete I/O requests.
+The scheduler determines when and in what order those operations are issued.
+The scheduler divides operations into five I/O classes,
+prioritized in the following order: sync read, sync write, async read,
+async write, and scrub/resilver.
+Each queue defines the minimum and maximum number of concurrent operations
+that may be issued to the device.
+In addition, the device has an aggregate maximum,
+.Sy zfs_vdev_max_active .
+Note that the sum of the per-queue minima must not exceed the aggregate maximum.
+If the sum of the per-queue maxima exceeds the aggregate maximum,
+then the number of active operations may reach
+.Sy zfs_vdev_max_active ,
+in which case no further operations will be issued,
+regardless of whether all per-queue minima have been met.
+.Pp
+For many physical devices, throughput increases with the number of
+concurrent operations, but latency typically suffers.
+Furthermore, physical devices typically have a limit
+at which more concurrent operations have no
+effect on throughput or can actually cause it to decrease.
+.Pp
+The scheduler selects the next operation to issue by first looking for an
+I/O class whose minimum has not been satisfied.
+Once all are satisfied and the aggregate maximum has not been hit,
+the scheduler looks for classes whose maximum has not been satisfied.
+Iteration through the I/O classes is done in the order specified above.
+No further operations are issued
+if the aggregate maximum number of concurrent operations has been hit,
+or if there are no operations queued for an I/O class that has not hit its
+maximum.
+Every time an I/O operation is queued or an operation completes,
+the scheduler looks for new operations to issue.
+.Pp
+In general, smaller
+.Sy max_active Ns s
+will lead to lower latency of synchronous operations.
+Larger
+.Sy max_active Ns s
+may lead to higher overall throughput, depending on underlying storage.
+.Pp
+The ratio of the queues'
+.Sy max_active Ns s
+determines the balance of performance between reads, writes, and scrubs.
+For example, increasing
+.Sy zfs_vdev_scrub_max_active
+will cause the scrub or resilver to complete more quickly,
+but reads and writes to have higher latency and lower throughput.
+.Pp
+All I/O classes have a fixed maximum number of outstanding operations,
+except for the async write class.
+Asynchronous writes represent the data that is committed to stable storage
+during the syncing stage for transaction groups.
+Transaction groups enter the syncing state periodically,
+so the number of queued async writes will quickly burst up
+and then bleed down to zero.
+Rather than servicing them as quickly as possible,
+the I/O scheduler changes the maximum number of active async write operations
+according to the amount of dirty data in the pool.
+Since both throughput and latency typically increase with the number of
+concurrent operations issued to physical devices, reducing the
+burstiness in the number of simultaneous operations also stabilizes the
+response time of operations from other queues, in particular synchronous ones.
+In broad strokes, the I/O scheduler will issue more concurrent operations
+from the async write queue as there is more dirty data in the pool.
+.
+.Ss Async Writes
+The number of concurrent operations issued for the async write I/O class
+follows a piece-wise linear function defined by a few adjustable points:
+.Bd -literal
+ | o---------| <-- \fBzfs_vdev_async_write_max_active\fP
+ ^ | /^ |
+ | | / | |
+active | / | |
+ I/O | / | |
+count | / | |
+ | / | |
+ |-------o | | <-- \fBzfs_vdev_async_write_min_active\fP
+ 0|_______^______|_________|
+ 0% | | 100% of \fBzfs_dirty_data_max\fP
+ | |
+ | `-- \fBzfs_vdev_async_write_active_max_dirty_percent\fP
+ `--------- \fBzfs_vdev_async_write_active_min_dirty_percent\fP
+.Ed
+.Pp
+Until the amount of dirty data exceeds a minimum percentage of the dirty
+data allowed in the pool, the I/O scheduler will limit the number of
+concurrent operations to the minimum.
+As that threshold is crossed, the number of concurrent operations issued
+increases linearly to the maximum at the specified maximum percentage
+of the dirty data allowed in the pool.
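+.Pp
+For example, with the default values
+.Pq minimum 2, maximum 10, thresholds 30% and 60% ,
+a pool sitting at 45% of
+.Sy zfs_dirty_data_max
+would be allowed
+2 + (45 \- 30)/(60 \- 30) \(mu (10 \- 2) = 6
+concurrent async write operations.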
+.Pp
+Ideally, the amount of dirty data on a busy pool will stay in the sloped
+part of the function between
+.Sy zfs_vdev_async_write_active_min_dirty_percent
+and
+.Sy zfs_vdev_async_write_active_max_dirty_percent .
+If it exceeds the maximum percentage,
+this indicates that the rate of incoming data is
+greater than the rate that the backend storage can handle.
+In this case, we must further throttle incoming writes,
+as described in the next section.
+.
+.Sh ZFS TRANSACTION DELAY
+We delay transactions when we've determined that the backend storage
+isn't able to accommodate the rate of incoming writes.
+.Pp
+If there is already a transaction waiting, we delay relative to when
+that transaction will finish waiting.
+This way the calculated delay time
+is independent of the number of threads concurrently executing transactions.
+.Pp
+If we are the only waiter, we wait relative to when the transaction started,
+rather than the current time.
+This credits the transaction for "time already served",
+e.g. reading indirect blocks.
+.Pp
+The minimum time for a transaction to take is calculated as
+.D1 min_time = min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy dirty Pc , 100ms)
+where
+.Sy dirty
+is the amount of dirty data in the pool,
+.Sy max
+is
+.Sy zfs_dirty_data_max ,
+and
+.Sy min
+is the amount of dirty data at which delaying begins
+.Pq Sy zfs_delay_min_dirty_percent No of Sy zfs_dirty_data_max .
+.Pp
+The delay has two degrees of freedom that can be adjusted via tunables.
+The percentage of dirty data at which we start to delay is defined by
+.Sy zfs_delay_min_dirty_percent .
+This should typically be at or above
+.Sy zfs_vdev_async_write_active_max_dirty_percent ,
+so that we only start to delay after writing at full speed
+has failed to keep up with the incoming write rate.
+The scale of the curve is defined by
+.Sy zfs_delay_scale .
+Roughly speaking, this variable determines the amount of delay at the midpoint
+of the curve.
+.Bd -literal
+delay
+ 10ms +-------------------------------------------------------------*+
+ | *|
+ 9ms + *+
+ | *|
+ 8ms + *+
+ | * |
+ 7ms + * +
+ | * |
+ 6ms + * +
+ | * |
+ 5ms + * +
+ | * |
+ 4ms + * +
+ | * |
+ 3ms + * +
+ | * |
+ 2ms + (midpoint) * +
+ | | ** |
+ 1ms + v *** +
+ | \fBzfs_delay_scale\fP ----------> ******** |
+ 0 +-------------------------------------*********----------------+
+ 0% <- \fBzfs_dirty_data_max\fP -> 100%
+.Ed
+.Pp
+Note that, since the delay is added to the outstanding time remaining on the
+most recent transaction, it is effectively the inverse of IOPS.
+Here, the midpoint of
+.Em 500 us
+translates to
+.Em 2000 IOPS .
+The shape of the curve
+was chosen such that small changes in the amount of accumulated dirty data
+in the first three quarters of the curve yield relatively small differences
+in the amount of delay.
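+.Pp
+For example, assuming the default
+.Sy zfs_delay_scale
+of 500,000 ns, a pool sitting exactly at the midpoint of the curve
+.Pq Sy dirty No halfway between Sy min No and Sy max
+incurs:
+.Bd -literal -compact
+min_time = 500000 ns * (dirty - min) / (max - dirty)
+         = 500000 ns * 1
+         = 500 us per transaction, i.e. 2000 IOPS
+.Ed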
+.Pp
+The effects can be easier to understand when the amount of delay is
+represented on a logarithmic scale:
+.Bd -literal
+delay
+100ms +-------------------------------------------------------------++
+ + +
+ | |
+ + *+
+ 10ms + *+
+ + ** +
+ | (midpoint) ** |
+ + | ** +
+ 1ms + v **** +
+ + \fBzfs_delay_scale\fP ----------> ***** +
+ | **** |
+ + **** +
+100us + ** +
+ + * +
+ | * |
+ + * +
+ 10us + * +
+ + +
+ | |
+ + +
+ +--------------------------------------------------------------+
+ 0% <- \fBzfs_dirty_data_max\fP -> 100%
+.Ed
+.Pp
+Note here that only as the amount of dirty data approaches its limit does
+the delay start to increase rapidly.
+The goal of a properly tuned system should be to keep the amount of dirty data
+out of that range by first ensuring that the appropriate limits are set
+for the I/O scheduler to reach optimal throughput on the back-end storage,
+and then by changing the value of
+.Sy zfs_delay_scale
+to increase the steepness of the curve.
diff --git a/share/man/man7/zfsconcepts.7 b/share/man/man7/zfsconcepts.7
@@ -0,0 +1,245 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\" Copyright 2023 Klara, Inc.
+.\"
+.Dd October 6, 2023
+.Dt ZFSCONCEPTS 7
+.Os
+.
+.Sh NAME
+.Nm zfsconcepts
+.Nd overview of ZFS concepts
+.
+.Sh DESCRIPTION
+.Ss ZFS File System Hierarchy
+A ZFS storage pool is a logical collection of devices that provide space for
+datasets.
+A storage pool is also the root of the ZFS file system hierarchy.
+.Pp
+The root of the pool can be accessed as a file system and supports the usual
+file system operations, such as mounting and unmounting, taking snapshots,
+and setting properties.
+The physical storage characteristics, however, are managed by the
+.Xr zpool 8
+command.
+.Pp
+See
+.Xr zpool 8
+for more information on creating and administering pools.
+.Ss Snapshots
+A snapshot is a read-only copy of a file system or volume.
+Snapshots can be created extremely quickly, and initially consume no additional
+space within the pool.
+As data within the active dataset changes, the snapshot consumes more space,
+because it continues to reference the old data that is no longer shared with
+the active dataset.
+.Pp
+Snapshots can have arbitrary names.
+Snapshots of volumes can be cloned or rolled back;
+visibility is determined by the
+.Sy snapdev
+property of the parent volume.
+.Pp
+File system snapshots can be accessed under the
+.Pa .zfs/snapshot
+directory in the root of the file system.
+Snapshots are automatically mounted on demand and may be unmounted at regular
+intervals.
+The availability and visibility of the
+.Pa .zfs
+directory can be controlled by the
+.Sy snapdir
+property.
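+.Pp
+For example, assuming a hypothetical file system
+.Em tank/home
+mounted at
+.Pa /tank/home :
+.Bd -literal -compact
+# zfs snapshot tank/home@monday
+# ls /tank/home/.zfs/snapshot
+monday
+.Ed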
+.Ss Bookmarks
+A bookmark is like a snapshot, a read-only copy of a file system or volume.
+Bookmarks can be created extremely quickly, compared to snapshots, and they
+consume no additional space within the pool.
+Bookmarks can also have arbitrary names, much like snapshots.
+.Pp
+Unlike snapshots, bookmarks cannot be accessed through the filesystem in any
+way.
+From a storage standpoint a bookmark just provides a way to reference
+when a snapshot was created as a distinct object.
+Bookmarks are initially tied to a snapshot, not the filesystem or volume,
+and they will survive if the snapshot itself is destroyed.
+Since they are very lightweight, there is little incentive to destroy them.
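+.Pp
+For example, a bookmark of a hypothetical snapshot can serve as the
+incremental source for a send after the snapshot itself has been destroyed:
+.Bd -literal -compact
+# zfs bookmark tank/home@monday tank/home#monday
+# zfs destroy tank/home@monday
+# zfs send -i tank/home#monday tank/home@tuesday
+.Ed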
+.Ss Clones
+A clone is a writable volume or file system whose initial contents are the same
+as another dataset.
+As with snapshots, creating a clone is nearly instantaneous, and initially
+consumes no additional space.
+.Pp
+Clones can only be created from a snapshot.
+When a snapshot is cloned, it creates an implicit dependency between the parent
+and child.
+Even though the clone is created somewhere else in the dataset hierarchy, the
+original snapshot cannot be destroyed as long as a clone exists.
+The
+.Sy origin
+property exposes this dependency, and the
+.Cm destroy
+command lists any such dependencies, if they exist.
+.Pp
+The clone parent-child dependency relationship can be reversed by using the
+.Cm promote
+subcommand.
+This causes the
+.Qq origin
+file system to become a clone of the specified file system, which makes it
+possible to destroy the file system that the clone was created from.
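+.Pp
+For example, using hypothetical dataset names:
+.Bd -literal -compact
+# zfs clone tank/home@monday tank/scratch
+# zfs promote tank/scratch
+.Ed
+.Pp
+After the promotion,
+.Em tank/home
+becomes a clone of a snapshot of
+.Em tank/scratch
+and can then be destroyed if desired.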
+.Ss "Mount Points"
+Creating a ZFS file system is a simple operation, so file systems are likely
+to be numerous on any given system.
+To cope with this, ZFS automatically manages mounting and unmounting file
+systems without the need to edit the
+.Pa /etc/fstab
+file.
+All automatically managed file systems are mounted by ZFS at boot time.
+.Pp
+By default, file systems are mounted under
+.Pa /path ,
+where
+.Ar path
+is the name of the file system in the ZFS namespace.
+Directories are created and destroyed as needed.
+.Pp
+A file system can also have a mount point set in the
+.Sy mountpoint
+property.
+This directory is created as needed, and ZFS automatically mounts the file
+system when the
+.Nm zfs Cm mount Fl a
+command is invoked
+.Po without editing
+.Pa /etc/fstab
+.Pc .
+The
+.Sy mountpoint
+property can be inherited, so if
+.Em pool/home
+has a mount point of
+.Pa /export/stuff ,
+then
+.Em pool/home/user
+automatically inherits a mount point of
+.Pa /export/stuff/user .
+.Pp
+A file system
+.Sy mountpoint
+property of
+.Sy none
+prevents the file system from being mounted.
+.Pp
+If needed, ZFS file systems can also be managed with traditional tools
+.Po
+.Nm mount ,
+.Nm umount ,
+.Pa /etc/fstab
+.Pc .
+If a file system's mount point is set to
+.Sy legacy ,
+ZFS makes no attempt to manage the file system, and the administrator is
+responsible for mounting and unmounting the file system.
+Because pools must
+be imported before a legacy mount can succeed, administrators should ensure
+that legacy mounts are only attempted after the zpool import process
+finishes at boot time.
+For example, on machines using systemd, the mount option
+.Pp
+.Nm x-systemd.requires=zfs-import.target
+.Pp
+will ensure that the pool import completes before systemd attempts to mount
+the filesystem.
+See
+.Xr systemd.mount 5
+for details.
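+.Pp
+For example, a hypothetical legacy-mounted dataset
+.Em tank/home
+could be listed in
+.Pa /etc/fstab
+as:
+.Bd -literal -compact
+tank/home  /home  zfs  defaults,x-systemd.requires=zfs-import.target  0 0
+.Ed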
+.Ss Deduplication
+Deduplication is the process of removing redundant data at the block level,
+reducing the total amount of data stored.
+If a file system has the
+.Sy dedup
+property enabled, duplicate data blocks are removed synchronously.
+The result
+is that only unique data is stored and common components are shared among files.
+.Pp
+Deduplicating data is a very resource-intensive operation.
+It is generally recommended that you have at least 1.25 GiB of RAM
+per 1 TiB of storage when you enable deduplication.
+Calculating the exact requirement depends heavily
+on the type of data stored in the pool.
+.Pp
+Enabling deduplication on an improperly-designed system can result in
+performance issues (slow I/O and administrative operations).
+It can potentially lead to problems importing a pool due to memory exhaustion.
+Deduplication can consume significant processing power (CPU) and memory as well
+as generate additional disk I/O.
+.Pp
+Before creating a pool with deduplication enabled, ensure that you have planned
+your hardware requirements appropriately and implemented appropriate recovery
+practices, such as regular backups.
+Consider using the
+.Sy compression
+property as a less resource-intensive alternative.
+.Ss Block cloning
+Block cloning is a facility that allows a file (or parts of a file) to be
+.Qq cloned ,
+that is, a shallow copy made where the existing data blocks are referenced
+rather than copied.
+Later modifications to the data will cause a copy of the data block to be taken
+and that copy modified.
+This facility is used to implement
+.Qq reflinks
+or
+.Qq file-level copy-on-write .
+.Pp
+Cloned blocks are tracked in a special on-disk structure called the Block
+Reference Table
+.Po BRT
+.Pc .
+Unlike deduplication, this table has minimal overhead, so it can be left
+enabled at all times.
+.Pp
+Also unlike deduplication, cloning must be requested by a user program.
+Many common file copying programs, including newer versions of
+.Nm /bin/cp ,
+will try to create clones automatically.
+Look for
+.Qq clone ,
+.Qq dedupe
+or
+.Qq reflink
+in the documentation for more information.
+.Pp
+There are some limitations to block cloning.
+Only whole blocks can be cloned, and blocks cannot be cloned if they are not
+yet written to disk, if they are encrypted, or if the source and destination
+.Sy recordsize
+properties differ.
+The OS may add additional restrictions;
+for example, most versions of Linux will not allow clones across datasets.
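+.Pp
+For example, with a sufficiently recent GNU coreutils, a clone of a
+hypothetical file can be requested explicitly:
+.Dl # Nm cp Fl -reflink Ns = Ns Sy always Pa bigfile Pa bigfile.clone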
diff --git a/share/man/man7/zfsprops.7 b/share/man/man7/zfsprops.7
@@ -0,0 +1,2242 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2011, Pawel Jakub Dawidek <pjd@FreeBSD.org>
+.\" Copyright (c) 2012, Glen Barber <gjb@FreeBSD.org>
+.\" Copyright (c) 2012, Bryan Drewery <bdrewery@FreeBSD.org>
+.\" Copyright (c) 2013, Steven Hartland <smh@FreeBSD.org>
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright (c) 2016 Nexenta Systems, Inc. All Rights Reserved.
+.\" Copyright (c) 2014, Xin LI <delphij@FreeBSD.org>
+.\" Copyright (c) 2014-2015, The FreeBSD Foundation, All Rights Reserved.
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\" Copyright (c) 2019, Kjeld Schouten-Lebbing
+.\" Copyright (c) 2022 Hewlett Packard Enterprise Development LP.
+.\"
+.Dd June 29, 2024
+.Dt ZFSPROPS 7
+.Os
+.
+.Sh NAME
+.Nm zfsprops
+.Nd native and user-defined properties of ZFS datasets
+.
+.Sh DESCRIPTION
+Properties are divided into two types, native properties and user-defined
+.Po or
+.Qq user
+.Pc
+properties.
+Native properties either export internal statistics or control ZFS behavior.
+In addition, native properties are either editable or read-only.
+User properties have no effect on ZFS behavior, but you can use them to annotate
+datasets in a way that is meaningful in your environment.
+For more information about user properties, see the
+.Sx User Properties
+section, below.
+.
+.Ss Native Properties
+Every dataset has a set of properties that export statistics about the dataset
+as well as control various behaviors.
+Properties are inherited from the parent unless overridden by the child.
+Some properties apply only to certain types of datasets
+.Pq file systems, volumes, or snapshots .
+.Pp
+The values of numeric properties can be specified using human-readable suffixes
+.Po for example,
+.Sy k ,
+.Sy KB ,
+.Sy M ,
+.Sy Gb ,
+and so forth, up to
+.Sy Z
+for zettabyte
+.Pc .
+The following are all valid
+.Pq and equal
+specifications:
+.Li 1536M ,
+.Li 1.5g ,
+.Li 1.50GB .
+.Pp
+The values of non-numeric properties are case sensitive and must be lowercase,
+except for
+.Sy mountpoint ,
+.Sy sharenfs ,
+and
+.Sy sharesmb .
+.Pp
+The following native properties consist of read-only statistics about the
+dataset.
+These properties can be neither set, nor inherited.
+Native properties apply to all dataset types unless otherwise noted.
+.Bl -tag -width "usedbyrefreservation"
+.It Sy available
+The amount of space available to the dataset and all its children, assuming that
+there is no other activity in the pool.
+Because space is shared within a pool, availability can be limited by any number
+of factors, including physical pool size, quotas, reservations, or other
+datasets within the pool.
+.Pp
+This property can also be referred to by its shortened column name,
+.Sy avail .
+.It Sy compressratio
+For non-snapshots, the compression ratio achieved for the
+.Sy used
+space of this dataset, expressed as a multiplier.
+The
+.Sy used
+property includes descendant datasets, and, for clones, does not include the
+space shared with the origin snapshot.
+For snapshots, the
+.Sy compressratio
+is the same as the
+.Sy refcompressratio
+property.
+Compression can be turned on by running:
+.Nm zfs Cm set Sy compression Ns = Ns Sy on Ar dataset .
+The default value is
+.Sy off .
+.It Sy createtxg
+The transaction group (txg) in which the dataset was created.
+Bookmarks have the same
+.Sy createtxg
+as the snapshot they are initially tied to.
+This property is suitable for ordering a list of snapshots,
+e.g. for incremental send and receive.
+.It Sy creation
+The time this dataset was created.
+.It Sy clones
+For snapshots, this property is a comma-separated list of filesystems or volumes
+which are clones of this snapshot.
+The clones'
+.Sy origin
+property is this snapshot.
+If the
+.Sy clones
+property is not empty, then this snapshot cannot be destroyed
+.Po even with the
+.Fl r
+or
+.Fl f
+options
+.Pc .
+The roles of origin and clone can be swapped by promoting the clone with the
+.Nm zfs Cm promote
+command.
+.It Sy defer_destroy
+This property is
+.Sy on
+if the snapshot has been marked for deferred destroy by using the
+.Nm zfs Cm destroy Fl d
+command.
+Otherwise, the property is
+.Sy off .
+.It Sy encryptionroot
+For encrypted datasets, indicates where the dataset is currently inheriting its
+encryption key from.
+Loading or unloading a key for the
+.Sy encryptionroot
+will implicitly load / unload the key for any inheriting datasets (see
+.Nm zfs Cm load-key
+and
+.Nm zfs Cm unload-key
+for details).
+Clones will always share an
+encryption key with their origin.
+See the
+.Sx Encryption
+section of
+.Xr zfs-load-key 8
+for details.
+.It Sy filesystem_count
+The total number of filesystems and volumes that exist under this location in
+the dataset tree.
+This value is only available when a
+.Sy filesystem_limit
+has been set somewhere in the tree under which the dataset resides.
+.It Sy keystatus
+Indicates if an encryption key is currently loaded into ZFS.
+The possible values are
+.Sy none ,
+.Sy available ,
+and
+.Sy unavailable .
+See
+.Nm zfs Cm load-key
+and
+.Nm zfs Cm unload-key .
+.It Sy guid
+The 64-bit GUID of this dataset or bookmark, which does not change over its
+entire lifetime.
+When a snapshot is sent to another pool, the received snapshot has the same
+GUID.
+Thus, the
+.Sy guid
+is suitable to identify a snapshot across pools.
+.It Sy logicalreferenced
+The amount of space that is
+.Qq logically
+accessible by this dataset.
+See the
+.Sy referenced
+property.
+The logical space ignores the effect of the
+.Sy compression
+and
+.Sy copies
+properties, giving a quantity closer to the amount of data that applications
+see.
+However, it does include space consumed by metadata.
+.Pp
+This property can also be referred to by its shortened column name,
+.Sy lrefer .
+.It Sy logicalused
+The amount of space that is
+.Qq logically
+consumed by this dataset and all its descendents.
+See the
+.Sy used
+property.
+The logical space ignores the effect of the
+.Sy compression
+and
+.Sy copies
+properties, giving a quantity closer to the amount of data that applications
+see.
+However, it does include space consumed by metadata.
+.Pp
+This property can also be referred to by its shortened column name,
+.Sy lused .
+.It Sy mounted
+For file systems, indicates whether the file system is currently mounted.
+This property can be either
+.Sy yes
+or
+.Sy no .
+.It Sy objsetid
+A unique identifier for this dataset within the pool.
+Unlike the dataset's
+.Sy guid , No the Sy objsetid
+of a dataset is not transferred to other pools when the snapshot is copied
+with a send/receive operation.
+The
+.Sy objsetid
+can be reused (for a new dataset) after the dataset is deleted.
+.It Sy origin
+For cloned file systems or volumes, the snapshot from which the clone was
+created.
+See also the
+.Sy clones
+property.
+.It Sy receive_resume_token
+For filesystems or volumes which have saved partially-completed state from
+.Nm zfs Cm receive Fl s ,
+this opaque token can be provided to
+.Nm zfs Cm send Fl t
+to resume and complete the
+.Nm zfs Cm receive .
+.It Sy redact_snaps
+For bookmarks, this is the list of snapshot guids the bookmark contains a
+redaction
+list for.
+For snapshots, this is the list of snapshot guids the snapshot is redacted with
+respect to.
+.It Sy referenced
+The amount of data that is accessible by this dataset, which may or may not be
+shared with other datasets in the pool.
+When a snapshot or clone is created, it initially references the same amount of
+space as the file system or snapshot it was created from, since its contents are
+identical.
+.Pp
+This property can also be referred to by its shortened column name,
+.Sy refer .
+.It Sy refcompressratio
+The compression ratio achieved for the
+.Sy referenced
+space of this dataset, expressed as a multiplier.
+See also the
+.Sy compressratio
+property.
+.It Sy snapshot_count
+The total number of snapshots that exist under this location in the dataset
+tree.
+This value is only available when a
+.Sy snapshot_limit
+has been set somewhere in the tree under which the dataset resides.
+.It Sy type
+The type of dataset:
+.Sy filesystem ,
+.Sy volume ,
+.Sy snapshot ,
+or
+.Sy bookmark .
+.It Sy used
+The amount of space consumed by this dataset and all its descendents.
+This is the value that is checked against this dataset's quota and reservation.
+The space used does not include this dataset's reservation, but does take into
+account the reservations of any descendent datasets.
+The amount of space that a dataset consumes from its parent, as well as the
+amount of space that is freed if this dataset is recursively destroyed, is the
+greater of its space used and its reservation.
+.Pp
+The used space of a snapshot
+.Po see the
+.Sx Snapshots
+section of
+.Xr zfsconcepts 7
+.Pc
+is space that is referenced exclusively by this snapshot.
+If this snapshot is destroyed, the amount of
+.Sy used
+space will be freed.
+Space that is shared by multiple snapshots isn't accounted for in this metric.
+When a snapshot is destroyed, space that was previously shared with this
+snapshot can become unique to snapshots adjacent to it, thus changing the used
+space of those snapshots.
+The used space of the latest snapshot can also be affected by changes in the
+file system.
+Note that the
+.Sy used
+space of a snapshot is a subset of the
+.Sy written
+space of the snapshot.
+.Pp
+The amount of space used, available, or referenced does not take into account
+pending changes.
+Pending changes are generally accounted for within a few seconds.
+Committing a change to a disk using
+.Xr fsync 2
+or
+.Sy O_SYNC
+does not necessarily guarantee that the space usage information is updated
+immediately.
+.It Sy usedby*
+The
+.Sy usedby*
+properties decompose the
+.Sy used
+properties into the various reasons that space is used.
+Specifically,
+.Sy used No =
+.Sy usedbychildren No +
+.Sy usedbydataset No +
+.Sy usedbyrefreservation No +
+.Sy usedbysnapshots .
+These properties are only available for datasets created on
+.Nm zpool
+.Qo version 13 Qc
+pools.
+.It Sy usedbychildren
+The amount of space used by children of this dataset, which would be freed if
+all the dataset's children were destroyed.
+.It Sy usedbydataset
+The amount of space used by this dataset itself, which would be freed if the
+dataset were destroyed
+.Po after first removing any
+.Sy refreservation
+and destroying any necessary snapshots or descendents
+.Pc .
+.It Sy usedbyrefreservation
+The amount of space used by a
+.Sy refreservation
+set on this dataset, which would be freed if the
+.Sy refreservation
+was removed.
+.It Sy usedbysnapshots
+The amount of space consumed by snapshots of this dataset.
+In particular, it is the amount of space that would be freed if all of this
+dataset's snapshots were destroyed.
+Note that this is not simply the sum of the snapshots'
+.Sy used
+properties because space can be shared by multiple snapshots.
+.It Sy userused Ns @ Ns Ar user
+The amount of space consumed by the specified user in this dataset.
+Space is charged to the owner of each file, as displayed by
+.Nm ls Fl l .
+The amount of space charged is displayed by
+.Nm du No and Nm ls Fl s .
+See the
+.Nm zfs Cm userspace
+command for more information.
+.Pp
+Unprivileged users can access only their own space usage.
+The root user, or a user who has been granted the
+.Sy userused
+privilege with
+.Nm zfs Cm allow ,
+can access everyone's usage.
+.Pp
+The
+.Sy userused Ns @ Ns Ar …
+properties are not displayed by
+.Nm zfs Cm get Sy all .
+The user's name must be appended after the
+.Sy @
+symbol, using one of the following forms:
+.Bl -bullet -compact -offset 4n
+.It
+POSIX name
+.Pq Qq joe
+.It
+POSIX numeric ID
+.Pq Qq 789
+.It
+SID name
+.Pq Qq joe.smith@mydomain
+.It
+SID numeric ID
+.Pq Qq S-1-123-456-789
+.El
+.Pp
+Files created on Linux always have POSIX owners.
+.It Sy userobjused Ns @ Ns Ar user
+The
+.Sy userobjused
+property is similar to
+.Sy userused
+but instead it counts the number of objects consumed by a user.
+This property counts all objects allocated on behalf of the user,
+so it may differ from the results of system tools such as
+.Nm df Fl i .
+.Pp
+When the property
+.Sy xattr Ns = Ns Sy on
+is set on a file system, additional objects will be created per file to store
+extended attributes.
+These additional objects are reflected in the
+.Sy userobjused
+value and are counted against the user's
+.Sy userobjquota .
+When a file system is configured to use
+.Sy xattr Ns = Ns Sy sa
+no additional internal objects are normally required.
+.It Sy userrefs
+This property is set to the number of user holds on this snapshot.
+User holds are set by using the
+.Nm zfs Cm hold
+command.
+.It Sy groupused Ns @ Ns Ar group
+The amount of space consumed by the specified group in this dataset.
+Space is charged to the group of each file, as displayed by
+.Nm ls Fl l .
+See the
+.Sy userused Ns @ Ns Ar user
+property for more information.
+.Pp
+Unprivileged users can only access their own groups' space usage.
+The root user, or a user who has been granted the
+.Sy groupused
+privilege with
+.Nm zfs Cm allow ,
+can access all groups' usage.
+.It Sy groupobjused Ns @ Ns Ar group
+The number of objects consumed by the specified group in this dataset.
+Multiple objects may be charged to the group for each file when extended
+attributes are in use.
+See the
+.Sy userobjused Ns @ Ns Ar user
+property for more information.
+.Pp
+Unprivileged users can only access their own groups' space usage.
+The root user, or a user who has been granted the
+.Sy groupobjused
+privilege with
+.Nm zfs Cm allow ,
+can access all groups' usage.
+.It Sy projectused Ns @ Ns Ar project
+The amount of space consumed by the specified project in this dataset.
+The project is identified via the project identifier (ID), which is a
+numerical attribute of an object.
+An object inherits the project ID from its parent object at creation time if
+the parent has the project ID inheritance flag set; that flag can be set and
+changed via
+.Nm chattr Fl /+P
+or
+.Nm zfs project Fl s .
+A privileged user can set and change an object's project ID via
+.Nm chattr Fl p
+or
+.Nm zfs project Fl s
+at any time.
+Space is charged to the project of each file, as displayed by
+.Nm lsattr Fl p
+or
+.Nm zfs project .
+See the
+.Sy userused Ns @ Ns Ar user
+property for more information.
+.Pp
+The root user, or a user who has been granted the
+.Sy projectused
+privilege with
+.Nm zfs allow ,
+can access all projects' usage.
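+.Pp
+For example, a hypothetical directory tree could be assigned project ID 42
+and its usage queried afterwards:
+.Bd -literal -compact
+# zfs project -s -p 42 -r /tank/data/webapp
+# zfs get projectused@42 tank/data
+.Ed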
+.It Sy projectobjused Ns @ Ns Ar project
+The
+.Sy projectobjused
+is similar to
+.Sy projectused
+but instead it counts the number of objects consumed by the project.
+When the property
+.Sy xattr Ns = Ns Sy on
+is set on a file system, ZFS will create additional objects per file to store
+extended attributes.
+These additional objects are reflected in the
+.Sy projectobjused
+value and are counted against the project's
+.Sy projectobjquota .
+When a filesystem is configured to use
+.Sy xattr Ns = Ns Sy sa
+no additional internal objects are required.
+See the
+.Sy userobjused Ns @ Ns Ar user
+property for more information.
+.Pp
+The root user, or a user who has been granted the
+.Sy projectobjused
+privilege with
+.Nm zfs allow ,
+can access all projects' objects usage.
+.It Sy snapshots_changed
+Provides a mechanism to quickly determine whether the snapshot list has
+changed without having to mount a dataset or iterate the snapshot list.
+Specifies the time at which a snapshot for a dataset was last
+created or deleted.
+.Pp
+This allows consumers to be more efficient in how often they query snapshots.
+The property is persistent across mount and unmount operations only if the
+.Sy extensible_dataset
+feature is enabled.
+.It Sy volblocksize
+For volumes, specifies the block size of the volume.
+The
+.Sy blocksize
+cannot be changed once the volume has been written, so it should be set at
+volume creation time.
+The default
+.Sy blocksize
+for volumes is 16 Kbytes.
+Any power of 2 from 512 bytes to 128 Kbytes is valid.
+.Pp
+This property can also be referred to by its shortened column name,
+.Sy volblock .
+.It Sy written
+The amount of space
+.Sy referenced
+by this dataset, that was written since the previous snapshot
+.Pq i.e. that is not referenced by the previous snapshot .
+.It Sy written Ns @ Ns Ar snapshot
+The amount of
+.Sy referenced
+space written to this dataset since the specified snapshot.
+This is the space that is referenced by this dataset but was not referenced by
+the specified snapshot.
+.Pp
+The
+.Ar snapshot
+may be specified as a short snapshot name
+.Pq just the part after the Sy @ ,
+in which case it will be interpreted as a snapshot in the same filesystem as
+this dataset.
+The
+.Ar snapshot
+may be a full snapshot name
+.Pq Ar filesystem Ns @ Ns Ar snapshot ,
+which for clones may be a snapshot in the origin's filesystem
+.Pq or the origin of the origin's filesystem, etc.
+.El
+.Pp
+The following native properties can be used to change the behavior of a ZFS
+dataset.
+.Bl -tag -width ""
+.It Xo
+.Sy aclinherit Ns = Ns Sy discard Ns | Ns Sy noallow Ns | Ns
+.Sy restricted Ns | Ns Sy passthrough Ns | Ns Sy passthrough-x
+.Xc
+Controls how ACEs are inherited when files and directories are created.
+.Bl -tag -compact -offset 4n -width "passthrough-x"
+.It Sy discard
+does not inherit any ACEs.
+.It Sy noallow
+only inherits inheritable ACEs that specify
+.Qq deny
+permissions.
+.It Sy restricted
+default, removes the
+.Sy write_acl
+and
+.Sy write_owner
+permissions when the ACE is inherited.
+.It Sy passthrough
+inherits all inheritable ACEs without any modifications.
+.It Sy passthrough-x
+same meaning as
+.Sy passthrough ,
+except that the
+.Sy owner@ , group@ , No and Sy everyone@
+ACEs inherit the execute permission only if the file creation mode also requests
+the execute bit.
+.El
+.Pp
+When the property value is set to
+.Sy passthrough ,
+files are created with a mode determined by the inheritable ACEs.
+If no inheritable ACEs exist that affect the mode, then the mode is set in
+accordance to the requested mode from the application.
+.Pp
+The
+.Sy aclinherit
+property does not apply to POSIX ACLs.
+.It Xo
+.Sy aclmode Ns = Ns Sy discard Ns | Ns Sy groupmask Ns | Ns
+.Sy passthrough Ns | Ns Sy restricted Ns
+.Xc
+Controls how an ACL is modified during
+.Xr chmod 2
+and how inherited ACEs are modified by the file creation mode:
+.Bl -tag -compact -offset 4n -width "passthrough"
+.It Sy discard
+default, deletes all
+.Sy ACEs
+except for those representing
+the mode of the file or directory requested by
+.Xr chmod 2 .
+.It Sy groupmask
+reduces permissions granted in all
+.Sy ALLOW
+entries found in the
+.Sy ACL
+such that they are no greater than the group permissions specified by
+.Xr chmod 2 .
+.It Sy passthrough
+indicates that no changes are made to the ACL other than creating or updating
+the necessary ACL entries to represent the new mode of the file or directory.
+.It Sy restricted
+will cause the
+.Xr chmod 2
+operation to return an error when used on any file or directory which has
+a non-trivial ACL whose entries cannot be represented by a mode.
+.Xr chmod 2
+is required to change the set user ID, set group ID, or sticky bits on a file
+or directory, as they do not have equivalent ACL entries.
+In order to use
+.Xr chmod 2
+on a file or directory with a non-trivial ACL when
+.Sy aclmode
+is set to
+.Sy restricted ,
+you must first remove all ACL entries which do not represent the current mode.
+.El
+.It Sy acltype Ns = Ns Sy off Ns | Ns Sy nfsv4 Ns | Ns Sy posix
+Controls whether ACLs are enabled and if so what type of ACL to use.
+When this property is set to a type of ACL not supported by the current
+platform, the behavior is the same as if it were set to
+.Sy off .
+.Bl -tag -compact -offset 4n -width "posixacl"
+.It Sy off
+default on Linux; when a file system has the
+.Sy acltype
+property set to off, ACLs are disabled.
+.It Sy noacl
+an alias for
+.Sy off
+.It Sy nfsv4
+default on
+.Fx ,
+indicates that NFSv4-style ZFS ACLs should be used.
+These ACLs can be managed with the
+.Xr getfacl 1
+and
+.Xr setfacl 1
+commands.
+The
+.Sy nfsv4
+ZFS ACL type is not yet supported on Linux.
+.It Sy posix
+indicates POSIX ACLs should be used.
+POSIX ACLs are specific to Linux and are not functional on other platforms.
+POSIX ACLs are stored as an extended
+attribute and therefore will not overwrite any existing NFSv4 ACLs which
+may be set.
+.It Sy posixacl
+an alias for
+.Sy posix
+.El
+.Pp
+To obtain the best performance when setting
+.Sy posix ,
+users are strongly encouraged to set the
+.Sy xattr Ns = Ns Sy sa
+property.
+This will result in the POSIX ACL being stored more efficiently on disk.
+But as a consequence, all new extended attributes will only be
+accessible from OpenZFS implementations which support the
+.Sy xattr Ns = Ns Sy sa
+property.
+See the
+.Sy xattr
+property for more details.
+.It Sy atime Ns = Ns Sy on Ns | Ns Sy off
+Controls whether the access time for files is updated when they are read.
+Turning this property off avoids producing write traffic when reading files and
+can result in significant performance gains, though it might confuse mailers
+and other similar utilities.
+The values
+.Sy on
+and
+.Sy off
+are equivalent to the
+.Sy atime
+and
+.Sy noatime
+mount options.
+The default value is
+.Sy on .
+See also
+.Sy relatime
+below.
+.It Sy canmount Ns = Ns Sy on Ns | Ns Sy off Ns | Ns Sy noauto
+If this property is set to
+.Sy off ,
+the file system cannot be mounted, and is ignored by
+.Nm zfs Cm mount Fl a .
+Setting this property to
+.Sy off
+is similar to setting the
+.Sy mountpoint
+property to
+.Sy none ,
+except that the dataset still has a normal
+.Sy mountpoint
+property, which can be inherited.
+Setting this property to
+.Sy off
+allows datasets to be used solely as a mechanism to inherit properties.
+One example of setting
+.Sy canmount Ns = Ns Sy off
+is to have two datasets with the same
+.Sy mountpoint ,
+so that the children of both datasets appear in the same directory, but might
+have different inherited characteristics.
+.Pp
+When set to
+.Sy noauto ,
+a dataset can only be mounted and unmounted explicitly.
+The dataset is not mounted automatically when the dataset is created or
+imported, nor is it mounted by the
+.Nm zfs Cm mount Fl a
+command or unmounted by the
+.Nm zfs Cm unmount Fl a
+command.
+.Pp
+This property is not inherited.
+.It Xo
+.Sy checksum Ns = Ns Sy on Ns | Ns Sy off Ns | Ns Sy fletcher2 Ns | Ns
+.Sy fletcher4 Ns | Ns Sy sha256 Ns | Ns Sy noparity Ns | Ns
+.Sy sha512 Ns | Ns Sy skein Ns | Ns Sy edonr Ns | Ns Sy blake3
+.Xc
+Controls the checksum used to verify data integrity.
+The default value is
+.Sy on ,
+which automatically selects an appropriate algorithm
+.Po currently,
+.Sy fletcher4 ,
+but this may change in future releases
+.Pc .
+The value
+.Sy off
+disables integrity checking on user data.
+The value
+.Sy noparity
+not only disables integrity but also disables maintaining parity for user data.
+This setting is used internally by a dump device residing on a RAID-Z pool and
+should not be used by any other dataset.
+Disabling checksums is
+.Em NOT
+a recommended practice.
+.Pp
+The
+.Sy sha512 ,
+.Sy skein ,
+.Sy edonr ,
+and
+.Sy blake3
+checksum algorithms require enabling the appropriate features on the pool.
+.Pp
+Please see
+.Xr zpool-features 7
+for more information on these algorithms.
+.Pp
+Changing this property affects only newly-written data.
+.It Xo
+.Sy compression Ns = Ns Sy on Ns | Ns Sy off Ns | Ns Sy gzip Ns | Ns
+.Sy gzip- Ns Ar N Ns | Ns Sy lz4 Ns | Ns Sy lzjb Ns | Ns Sy zle Ns | Ns Sy zstd Ns | Ns
+.Sy zstd- Ns Ar N Ns | Ns Sy zstd-fast Ns | Ns Sy zstd-fast- Ns Ar N
+.Xc
+Controls the compression algorithm used for this dataset.
+.Pp
+When set to
+.Sy on
+(the default), indicates that the current default compression algorithm should
+be used.
+The default balances compression and decompression speed with compression
+ratio, and is expected to work well on a wide variety of workloads.
+Unlike all other settings for this property,
+.Sy on
+does not select a fixed compression type.
+As new compression algorithms are added to ZFS and enabled on a pool, the
+default compression algorithm may change.
+The current default compression algorithm is either
+.Sy lzjb
+or, if the
+.Sy lz4_compress
+feature is enabled,
+.Sy lz4 .
+.Pp
+The
+.Sy lz4
+compression algorithm is a high-performance replacement for the
+.Sy lzjb
+algorithm.
+It features significantly faster compression and decompression, as well as a
+moderately higher compression ratio than
+.Sy lzjb ,
+but can only be used on pools with the
+.Sy lz4_compress
+feature set to
+.Sy enabled .
+See
+.Xr zpool-features 7
+for details on ZFS feature flags and the
+.Sy lz4_compress
+feature.
+.Pp
+The
+.Sy lzjb
+compression algorithm is optimized for performance while providing decent data
+compression.
+.Pp
+The
+.Sy gzip
+compression algorithm uses the same compression as the
+.Xr gzip 1
+command.
+You can specify the
+.Sy gzip
+level by using the value
+.Sy gzip- Ns Ar N ,
+where
+.Ar N
+is an integer from 1
+.Pq fastest
+to 9
+.Pq best compression ratio .
+Currently,
+.Sy gzip
+is equivalent to
+.Sy gzip-6
+.Po which is also the default for
+.Xr gzip 1
+.Pc .
+.Pp
+The
+.Sy zstd
+compression algorithm provides both high compression ratios and good
+performance.
+You can specify the
+.Sy zstd
+level by using the value
+.Sy zstd- Ns Ar N ,
+where
+.Ar N
+is an integer from 1
+.Pq fastest
+to 19
+.Pq best compression ratio .
+.Sy zstd
+is equivalent to
+.Sy zstd-3 .
+.Pp
+Faster speeds at the cost of the compression ratio can be requested by
+setting a negative
+.Sy zstd
+level.
+This is done using
+.Sy zstd-fast- Ns Ar N ,
+where
+.Ar N
+is an integer in
+.Bq Sy 1 Ns - Ns Sy 10 , 20 , 30 , No … , Sy 100 , 500 , 1000
+which maps to a negative
+.Sy zstd
+level.
+The higher the value of
+.Ar N ,
+the more negative the mapped level and the faster the compression \(em
+.Sy zstd-fast-1000
+provides the fastest compression and the lowest compression ratio.
+.Sy zstd-fast
+is equivalent to
+.Sy zstd-fast- Ns Ar 1 .
+.Pp
+The
+.Sy zle
+compression algorithm compresses runs of zeros.
+.Pp
+This property can also be referred to by its shortened column name
+.Sy compress .
+Changing this property affects only newly-written data.
+.Pp
+When any setting except
+.Sy off
+is selected, compression will explicitly check for blocks consisting of only
+zeroes (the NUL byte).
+When a zero-filled block is detected, it is stored as
+a hole and not compressed using the indicated compression algorithm.
+.Pp
+All blocks are allocated as a whole number of sectors
+.Pq chunks of 2^ Ns Sy ashift No bytes , e.g . Sy 512B No or Sy 4KB .
+Compression may result in a non-sector-aligned size, which will be rounded up
+to a whole number of sectors.
+If compression saves less than one whole sector,
+the block will be stored uncompressed.
+Therefore, blocks whose logical size is a small number of sectors will
+experience less compression
+(e.g. for
+.Sy recordsize Ns = Ns Sy 16K
+with
+.Sy 4K
+sectors, which have 4 sectors per block,
+compression needs to save at least 25% to actually save space on disk).
+.Pp
+There is also a
+.Sy 12.5%
+default compression threshold in addition to the sector rounding:
+blocks that compression shrinks by less than that are stored uncompressed.
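+.Pp
+For example, hypothetical datasets could trade CPU time for compression
+ratio, or the reverse:
+.Bd -literal -compact
+# zfs set compression=zstd-19 tank/archive
+# zfs set compression=zstd-fast-10 tank/scratch
+# zfs get -r compressratio tank
+.Ed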
+.It Xo
+.Sy context Ns = Ns Sy none Ns | Ns
+.Ar SELinux-User : Ns Ar SELinux-Role : Ns Ar SELinux-Type : Ns Ar Sensitivity-Level
+.Xc
+This flag sets the SELinux context for all files in the file system under
+a mount point for that file system.
+See
+.Xr selinux 8
+for more information.
+.It Xo
+.Sy fscontext Ns = Ns Sy none Ns | Ns
+.Ar SELinux-User : Ns Ar SELinux-Role : Ns Ar SELinux-Type : Ns Ar Sensitivity-Level
+.Xc
+This flag sets the SELinux context for the file system being mounted.
+See
+.Xr selinux 8
+for more information.
+.It Xo
+.Sy defcontext Ns = Ns Sy none Ns | Ns
+.Ar SELinux-User : Ns Ar SELinux-Role : Ns Ar SELinux-Type : Ns Ar Sensitivity-Level
+.Xc
+This flag sets the SELinux default context for unlabeled files.
+See
+.Xr selinux 8
+for more information.
+.It Xo
+.Sy rootcontext Ns = Ns Sy none Ns | Ns
+.Ar SELinux-User : Ns Ar SELinux-Role : Ns Ar SELinux-Type : Ns Ar Sensitivity-Level
+.Xc
+This flag sets the SELinux context for the root inode of the file system.
+See
+.Xr selinux 8
+for more information.
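+.Pp
+For example, a hypothetical dataset shared via Samba could be labeled by
+setting:
+.Dl # Nm zfs Cm set Sy context Ns = Ns Ar system_u:object_r:samba_share_t:s0 Ar tank/share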
+.It Sy copies Ns = Ns Sy 1 Ns | Ns Sy 2 Ns | Ns Sy 3
+Controls the number of copies of data stored for this dataset.
+These copies are in addition to any redundancy provided by the pool, for
+example, mirroring or RAID-Z.
+The copies are stored on different disks, if possible.
+The space used by multiple copies is charged to the associated file and dataset,
+changing the
+.Sy used
+property and counting against quotas and reservations.
+.Pp
+Changing this property only affects newly-written data.
+Therefore, set this property at file system creation time by using the
+.Fl o Sy copies Ns = Ns Ar N
+option.
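+.Pp
+For example, to create a hypothetical file system with two copies of every
+block:
+.Dl # Nm zfs Cm create Fl o Sy copies Ns = Ns Ar 2 Ar tank/photos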
+.Pp
+Remember that ZFS will not import a pool with a missing top-level vdev.
+Do
+.Em NOT
+create, for example, a two-disk striped pool and set
+.Sy copies Ns = Ns Ar 2
+on some datasets thinking you have set up redundancy for them.
+When a disk fails you will not be able to import the pool
+and will have lost all of your data.
+.Pp
+Encrypted datasets may not have
+.Sy copies Ns = Ns Ar 3
+since the implementation stores some encryption metadata where the third copy
+would normally be.
+.It Sy devices Ns = Ns Sy on Ns | Ns Sy off
+Controls whether device nodes can be opened on this file system.
+The default value is
+.Sy on .
+The values
+.Sy on
+and
+.Sy off
+are equivalent to the
+.Sy dev
+and
+.Sy nodev
+mount options.
+.It Xo
+.Sy dedup Ns = Ns Sy off Ns | Ns Sy on Ns | Ns Sy verify Ns | Ns
+.Sy sha256 Ns Oo , Ns Sy verify Oc Ns | Ns Sy sha512 Ns Oo , Ns Sy verify Oc Ns | Ns Sy skein Ns Oo , Ns Sy verify Oc Ns | Ns
+.Sy edonr , Ns Sy verify Ns | Ns Sy blake3 Ns Oo , Ns Sy verify Oc Ns
+.Xc
+Configures deduplication for a dataset.
+The default value is
+.Sy off .
+The default deduplication checksum is
+.Sy sha256
+(this may change in the future).
+When
+.Sy dedup
+is enabled, the checksum defined here overrides the
+.Sy checksum
+property.
+Setting the value to
+.Sy verify
+has the same effect as the setting
+.Sy sha256 , Ns Sy verify .
+.Pp
+If set to
+.Sy verify ,
+ZFS will do a byte-to-byte comparison in case of two blocks having the same
+signature to make sure the block contents are identical.
+Specifying
+.Sy verify
+is mandatory for the
+.Sy edonr
+algorithm.
+.Pp
+Unless necessary, deduplication should
+.Em not
+be enabled on a system.
+See the
+.Sx Deduplication
+section of
+.Xr zfsconcepts 7 .
+.It Xo
+.Sy direct Ns = Ns Sy disabled Ns | Ns Sy standard Ns | Ns Sy always
+.Xc
+Controls the behavior of Direct I/O requests
+.Pq e.g. Dv O_DIRECT .
+The
+.Sy standard
+behavior for Direct I/O requests is to bypass the ARC when possible.
+These requests will not be cached and performance will be limited by the
+raw speed of the underlying disks
+.Pq this is the default .
+.Sy always
+causes every properly aligned read or write to be treated as a direct request.
+.Sy disabled
+causes the O_DIRECT flag to be silently ignored and all direct requests will
+be handled by the ARC.
+This is the default behavior for OpenZFS 2.2 and prior releases.
+.Pp
+Bypassing the ARC requires that a direct request be correctly aligned.
+For write requests the starting offset and size of the request must be
+.Sy recordsize Ns
+-aligned, if not then the unaligned portion of the request will be silently
+redirected through the ARC.
+For read requests there is no
+.Sy recordsize
+alignment restriction on either the starting offset or size.
+All direct requests must use a page-aligned memory buffer and the request
+size must be a multiple of the page size or an error is returned.
+.Pp
+Concurrently mixing buffered and direct requests to overlapping regions of
+a file can decrease performance.
+However, the resulting file will always be coherent.
+For example, a direct read after a buffered write will return the data
+from the buffered write.
+Furthermore, if an application uses
+.Xr mmap 2
+based file access, then in order to maintain coherency all direct requests
+are converted to buffered requests while the file is mapped.
+Currently Direct I/O is not supported with zvols.
+If dedup is enabled on a dataset, Direct I/O writes will not check for
+deduplication.
+Deduplication and Direct I/O writes are currently incompatible.
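+.Pp
+For example, a direct read of a hypothetical file can be requested from the
+shell with GNU
+.Xr dd 1 ,
+which opens the input with O_DIRECT:
+.Dl # Nm dd Sy if= Ns Pa /tank/data/file Sy of=/dev/null bs=1M iflag=direct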
+.It Xo
+.Sy dnodesize Ns = Ns Sy legacy Ns | Ns Sy auto Ns | Ns Sy 1k Ns | Ns
+.Sy 2k Ns | Ns Sy 4k Ns | Ns Sy 8k Ns | Ns Sy 16k
+.Xc
+Specifies a compatibility mode or literal value for the size of dnodes in the
+file system.
+The default value is
+.Sy legacy .
+Setting this property to a value other than
+.Sy legacy No requires the Sy large_dnode No pool feature to be enabled .
+.Pp
+Consider setting
+.Sy dnodesize
+to
+.Sy auto
+if the dataset uses the
+.Sy xattr Ns = Ns Sy sa
+property setting and the workload makes heavy use of extended attributes.
+This
+may be applicable to SELinux-enabled systems, Lustre servers, and Samba
+servers, for example.
+Literal values are supported for cases where the optimal
+size is known in advance and for performance testing.
+.Pp
+Leave
+.Sy dnodesize
+set to
+.Sy legacy
+if you need to receive a send stream of this dataset on a pool that doesn't
+enable the
+.Sy large_dnode
+feature, or if you need to import this pool on a system that doesn't support the
+.Sy large_dnode No feature .
+.Pp
+This property can also be referred to by its shortened column name,
+.Sy dnsize .
+.It Xo
+.Sy encryption Ns = Ns Sy off Ns | Ns Sy on Ns | Ns Sy aes-128-ccm Ns | Ns
+.Sy aes-192-ccm Ns | Ns Sy aes-256-ccm Ns | Ns Sy aes-128-gcm Ns | Ns
+.Sy aes-192-gcm Ns | Ns Sy aes-256-gcm
+.Xc
+Controls the encryption cipher suite (block cipher, key length, and mode) used
+for this dataset.
+Requires the
+.Sy encryption
+feature to be enabled on the pool.
+Requires a
+.Sy keyformat
+to be set at dataset creation time.
+.Pp
+Selecting
+.Sy encryption Ns = Ns Sy on
+when creating a dataset indicates that the default encryption suite will be
+selected, which is currently
+.Sy aes-256-gcm .
+In order to provide consistent data protection, encryption must be specified at
+dataset creation time and it cannot be changed afterwards.
+.Pp
+For more details and caveats about encryption see the
+.Sx Encryption
+section of
+.Xr zfs-load-key 8 .
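+.Pp
+For example, an encrypted file system can be created on a hypothetical pool
+.Em tank
+with:
+.Dl # Nm zfs Cm create Fl o Sy encryption Ns = Ns Sy on Fl o Sy keyformat Ns = Ns Sy passphrase Ar tank/secret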
+.It Sy keyformat Ns = Ns Sy raw Ns | Ns Sy hex Ns | Ns Sy passphrase
+Controls what format the user's encryption key will be provided as.
+This property is only set when the dataset is encrypted.
+.Pp
+Raw keys and hex keys must be 32 bytes long (regardless of the chosen
+encryption suite) and must be randomly generated.
+A raw key can be generated with the following command:
+.Dl # Nm dd Sy if=/dev/urandom bs=32 count=1 Sy of= Ns Pa /path/to/output/key
+.Pp
+Passphrases must be between 8 and 512 bytes long and will be processed through
+PBKDF2 before being used (see the
+.Sy pbkdf2iters
+property).
+Even though the encryption suite cannot be changed after dataset creation,
+the keyformat can be with
+.Nm zfs Cm change-key .
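+.Pp
+For example, to switch a hypothetical encrypted dataset from a passphrase to
+a hex key:
+.Dl # Nm zfs Cm change-key Fl o Sy keyformat Ns = Ns Sy hex Ar tank/secret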
+.It Xo
+.Sy keylocation Ns = Ns Sy prompt Ns | Ns Sy file:// Ns Ar /absolute/file/path Ns | Ns Sy https:// Ns Ar address Ns | Ns Sy http:// Ns Ar address
+.Xc
+Controls where the user's encryption key will be loaded from by default for
+commands such as
+.Nm zfs Cm load-key
+and
+.Nm zfs Cm mount Fl l .
+This property is only set for encrypted datasets which are encryption roots.
+If unspecified, the default is
+.Sy prompt .
+.Pp
+Even though the encryption suite cannot be changed after dataset creation, the
+keylocation can be with either
+.Nm zfs Cm set
+or
+.Nm zfs Cm change-key .
+If
+.Sy prompt
+is selected ZFS will ask for the key at the command prompt when it is required
+to access the encrypted data (see
+.Nm zfs Cm load-key
+for details).
+This setting will also allow the key to be passed in via the standard input
+stream,
+but users should be careful not to place keys which should be kept secret on
+the command line.
+If a file URI is selected, the key will be loaded from the
+specified absolute file path.
+If an HTTPS or HTTP URL is selected, the key will be retrieved via GET using
+.Xr fetch 3 ,
+libcurl, or nothing, depending on compile-time configuration and run-time
+availability.
+The
+.Sy SSL_CA_CERT_FILE
+environment variable can be set to specify the location
+of the concatenated certificate store.
+The
+.Sy SSL_CA_CERT_PATH
+environment variable can be set to override the location
+of the directory containing the certificate authority bundle.
+The
+.Sy SSL_CLIENT_CERT_FILE
+and
+.Sy SSL_CLIENT_KEY_FILE
+environment variables can be set to configure the path
+to the client certificate and its key.
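+.Pp
+For example, to have a hypothetical encryption root load its key from a file
+instead of prompting:
+.Dl # Nm zfs Cm set Sy keylocation Ns = Ns Sy file:// Ns Pa /etc/zfs/keys/secret.key Ar tank/secret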
+.It Sy pbkdf2iters Ns = Ns Ar iterations
+Controls the number of PBKDF2 iterations that a
+.Sy passphrase
+encryption key should be run through when processing it into an encryption key.
+This property is only defined when encryption is enabled and a keyformat of
+.Sy passphrase
+is selected.
+The goal of PBKDF2 is to significantly increase the
+computational difficulty needed to brute force a user's passphrase.
+This is accomplished by forcing the attacker to run each passphrase through a
+computationally expensive hashing function many times before they arrive at the
+resulting key.
+A user who actually knows the passphrase will only have to pay this cost once.
+As CPUs become better at processing, this number should be
+raised to ensure that a brute force attack is still not possible.
+The current default is
+.Sy 350000
+and the minimum is
+.Sy 100000 .
+This property may be changed with
+.Nm zfs Cm change-key .
+.It Sy exec Ns = Ns Sy on Ns | Ns Sy off
+Controls whether processes can be executed from within this file system.
+The default value is
+.Sy on .
+The values
+.Sy on
+and
+.Sy off
+are equivalent to the
+.Sy exec
+and
+.Sy noexec
+mount options.
+.It Sy volthreading Ns = Ns Sy on Ns | Ns Sy off
+Controls internal zvol threading.
+The value
+.Sy off
+disables zvol threading, and zvol relies on application threads.
+The default value is
+.Sy on ,
+which enables threading within a zvol.
+Please note that this property will be overridden by the
+.Sy zvol_request_sync
+module parameter.
+This property is only applicable to Linux.
+.It Sy filesystem_limit Ns = Ns Ar count Ns | Ns Sy none
+Limits the number of filesystems and volumes that can exist under this point in
+the dataset tree.
+The limit is not enforced if the user is allowed to change the limit.
+Setting a
+.Sy filesystem_limit
+on
+a descendent of a filesystem that already has a
+.Sy filesystem_limit
+does not override the ancestor's
+.Sy filesystem_limit ,
+but rather imposes an additional limit.
+This feature must be enabled to be used
+.Po see
+.Xr zpool-features 7
+.Pc .
+.It Sy special_small_blocks Ns = Ns Ar size
+This value represents the threshold block size for including small file
+blocks into the special allocation class.
+Blocks smaller than or equal to this
+value will be assigned to the special allocation class while greater blocks
+will be assigned to the regular class.
+Valid values are zero or a power of two from 512 up to 1048576 (1 MiB).
+The default size is 0, which means no small file blocks
+will be allocated in the special class.
+.Pp
+Before setting this property, a special class vdev must be added to the
+pool.
+See
+.Xr zpoolconcepts 7
+for more details on the special allocation class.
+.It Sy mountpoint Ns = Ns Pa path Ns | Ns Sy none Ns | Ns Sy legacy
+Controls the mount point used for this file system.
+See the
+.Sx Mount Points
+section of
+.Xr zfsconcepts 7
+for more information on how this property is used.
+.Pp
+When the
+.Sy mountpoint
+property is changed for a file system, the file system and any children that
+inherit the mount point are unmounted.
+If the new value is
+.Sy legacy ,
+then they remain unmounted.
+Otherwise, they are automatically remounted in the new location if the property
+was previously
+.Sy legacy
+or
+.Sy none .
+In addition, any shared file systems are unshared and shared in the new
+location.
+.Pp
+When the
+.Sy mountpoint
+property is set with
+.Nm zfs Cm set Fl u ,
+the
+.Sy mountpoint
+property is updated, but the dataset is not mounted or unmounted and remains
+as it was before.
+.It Sy nbmand Ns = Ns Sy on Ns | Ns Sy off
+Controls whether the file system should be mounted with
+.Sy nbmand
+.Pq Non-blocking mandatory locks .
+Changes to this property only take effect when the file system is umounted and
+remounted.
+This was only supported by Linux prior to 5.15, and was buggy there,
+and is not supported by
+.Fx .
+On Solaris it's used for SMB clients.
+.It Sy overlay Ns = Ns Sy on Ns | Ns Sy off
+Allow mounting on a busy directory or a directory which already contains
+files or directories.
+This is the default mount behavior for Linux and
+.Fx
+file systems.
+On these platforms the property is
+.Sy on
+by default.
+Set to
+.Sy off
+to disable overlay mounts for consistency with OpenZFS on other platforms.
+.It Sy primarycache Ns = Ns Sy all Ns | Ns Sy none Ns | Ns Sy metadata
+Controls what is cached in the primary cache
+.Pq ARC .
+If this property is set to
+.Sy all ,
+then both user data and metadata is cached.
+If this property is set to
+.Sy none ,
+then neither user data nor metadata is cached.
+If this property is set to
+.Sy metadata ,
+then only metadata is cached.
+The default value is
+.Sy all .
+.It Sy quota Ns = Ns Ar size Ns | Ns Sy none
+Limits the amount of space a dataset and its descendents can consume.
+This property enforces a hard limit on the amount of space used.
+This includes all space consumed by descendents, including file systems and
+snapshots.
+Setting a quota on a descendent of a dataset that already has a quota does not
+override the ancestor's quota, but rather imposes an additional limit.
+.Pp
+Quotas cannot be set on volumes, as the
+.Sy volsize
+property acts as an implicit quota.
+.It Sy snapshot_limit Ns = Ns Ar count Ns | Ns Sy none
+Limits the number of snapshots that can be created on a dataset and its
+descendents.
+Setting a
+.Sy snapshot_limit
+on a descendent of a dataset that already has a
+.Sy snapshot_limit
+does not override the ancestor's
+.Sy snapshot_limit ,
+but rather imposes an additional limit.
+The limit is not enforced if the user is allowed to change the limit.
+For example, this means that recursive snapshots taken from the global zone are
+counted against each delegated dataset within a zone.
+This feature must be enabled to be used
+.Po see
+.Xr zpool-features 7
+.Pc .
+.It Sy userquota@ Ns Ar user Ns = Ns Ar size Ns | Ns Sy none
+Limits the amount of space consumed by the specified user.
+User space consumption is identified by the
+.Sy userspace@ Ns Ar user
+property.
+.Pp
+Enforcement of user quotas may be delayed by several seconds.
+This delay means that a user might exceed their quota before the system notices
+that they are over quota and begins to refuse additional writes with the
+.Er EDQUOT
+error message.
+See the
+.Nm zfs Cm userspace
+command for more information.
+.Pp
+Unprivileged users can only access their own quota.
+The root user, or a user who has been granted the
+.Sy userquota
+privilege with
+.Nm zfs Cm allow ,
+can get and set everyone's quota.
+.Pp
+This property is not available on volumes, on file systems before version 4, or
+on pools before version 15.
+The
+.Sy userquota@ Ns Ar …
+properties are not displayed by
+.Nm zfs Cm get Sy all .
+The user's name must be appended after the
+.Sy @
+symbol, using one of the following forms:
+.Bl -bullet -compact -offset 4n
+.It
+POSIX name
+.Pq Qq joe
+.It
+POSIX numeric ID
+.Pq Qq 789
+.It
+SID name
+.Pq Qq joe.smith@mydomain
+.It
+SID numeric ID
+.Pq Qq S-1-123-456-789
+.El
+.Pp
+Files created on Linux always have POSIX owners.
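+.Pp
+For example, using the POSIX name form on a hypothetical dataset
+.Ar tank/home :
+.Bd -literal -compact -offset 4n
+.No example# Nm zfs Cm set Sy userquota@joe Ns = Ns Ar 50G Ar tank/home
+.No example# Nm zfs Cm get Sy userquota@joe Ar tank/home
+.Ed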
+.It Sy userobjquota@ Ns Ar user Ns = Ns Ar size Ns | Ns Sy none
+The
+.Sy userobjquota
+is similar to
+.Sy userquota
+but it limits the number of objects a user can create.
+Please refer to
+.Sy userobjused
+for more information about how objects are counted.
+.It Sy groupquota@ Ns Ar group Ns = Ns Ar size Ns | Ns Sy none
+Limits the amount of space consumed by the specified group.
+Group space consumption is identified by the
+.Sy groupused@ Ns Ar group
+property.
+.Pp
+Unprivileged users can access only their own groups' space usage.
+The root user, or a user who has been granted the
+.Sy groupquota
+privilege with
+.Nm zfs Cm allow ,
+can get and set all groups' quotas.
+.It Sy groupobjquota@ Ns Ar group Ns = Ns Ar size Ns | Ns Sy none
+The
+.Sy groupobjquota
+is similar to
+.Sy groupquota
+but it limits the number of objects a group can consume.
+Please refer to
+.Sy userobjused
+for more information about how objects are counted.
+.It Sy projectquota@ Ns Ar project Ns = Ns Ar size Ns | Ns Sy none
+Limits the amount of space consumed by the specified project.
+Project space consumption is identified by the
+.Sy projectused@ Ns Ar project
+property.
+Please refer to
+.Sy projectused
+for more information about how a project is identified and set or changed.
+.Pp
+The root user, or a user who has been granted the
+.Sy projectquota
+privilege with
+.Nm zfs Cm allow ,
+can access all projects' quotas.
+.It Sy projectobjquota@ Ns Ar project Ns = Ns Ar size Ns | Ns Sy none
+The
+.Sy projectobjquota
+is similar to
+.Sy projectquota
+but it limits the number of objects a project can consume.
+Please refer to
+.Sy userobjused
+for more information about how objects are counted.
+.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
+Controls whether this dataset can be modified.
+The default value is
+.Sy off .
+The values
+.Sy on
+and
+.Sy off
+are equivalent to the
+.Sy ro
+and
+.Sy rw
+mount options.
+.Pp
+This property can also be referred to by its shortened column name,
+.Sy rdonly .
+.It Sy recordsize Ns = Ns Ar size
+Specifies a suggested block size for files in the file system.
+This property is designed solely for use with database workloads that access
+files in fixed-size records.
+ZFS automatically tunes block sizes according to internal algorithms optimized
+for typical access patterns.
+.Pp
+For databases that create very large files but access them in small random
+chunks, these algorithms may be suboptimal.
+Specifying a
+.Sy recordsize
+greater than or equal to the record size of the database can result in
+significant performance gains.
+Use of this property for general purpose file systems is strongly discouraged,
+and may adversely affect performance.
+.Pp
+The size specified must be a power of two greater than or equal to
+.Ar 512 B
+and less than or equal to
+.Ar 128 KiB .
+If the
+.Sy large_blocks
+feature is enabled on the pool, the size may be up to
+.Ar 16 MiB .
+See
+.Xr zpool-features 7
+for details on ZFS feature flags.
+.Pp
+However, blocks larger than
+.Ar 1 MiB
+can have an impact on I/O latency (e.g. tying up a spinning disk for
+~300 ms), and also potentially on the memory allocator.
+.Pp
+Note that the maximum size is still limited by default to
+.Ar 1 MiB
+on x86_32; see the
+.Sy zfs_max_recordsize
+module parameter.
+.Pp
+Changing the file system's
+.Sy recordsize
+affects only files created afterward; existing files are unaffected.
+.Pp
+This property can also be referred to by its shortened column name,
+.Sy recsize .
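+.Pp
+For example, to match a database that performs 16 KiB random I/O, on a
+hypothetical dataset
+.Ar tank/db :
+.Bd -literal -compact -offset 4n
+.No example# Nm zfs Cm set Sy recordsize Ns = Ns Ar 16K Ar tank/db
+.Ed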
+.It Sy redundant_metadata Ns = Ns Sy all Ns | Ns Sy most Ns | Ns Sy some Ns | Ns Sy none
+Controls what types of metadata are stored redundantly.
+ZFS stores an extra copy of metadata, so that if a single block is corrupted,
+the amount of user data lost is limited.
+This extra copy is in addition to any redundancy provided at the pool level
+.Pq e.g. by mirroring or RAID-Z ,
+and is in addition to an extra copy specified by the
+.Sy copies
+property
+.Pq up to a total of 3 copies .
+For example if the pool is mirrored,
+.Sy copies Ns = Ns 2 ,
+and
+.Sy redundant_metadata Ns = Ns Sy most ,
+then ZFS stores 6 copies of most metadata, and 4 copies of data and some
+metadata.
+.Pp
+When set to
+.Sy all ,
+ZFS stores an extra copy of all metadata.
+If a single on-disk block is corrupt, at worst a single block of user data
+.Po which is
+.Sy recordsize
+bytes long
+.Pc
+can be lost.
+.Pp
+When set to
+.Sy most ,
+ZFS stores an extra copy of most types of metadata.
+This can improve performance of random writes, because less metadata must be
+written.
+In practice, at worst about 1000 blocks
+.Po of
+.Sy recordsize
+bytes each
+.Pc
+of user data can be lost if a single on-disk block is corrupt.
+The exact behavior of which metadata blocks are stored redundantly may change in
+future releases.
+.Pp
+When set to
+.Sy some ,
+ZFS stores an extra copy of only critical metadata.
+This can improve file create performance since less metadata
+needs to be written.
+If a single on-disk block is corrupt, at worst a single user file can be lost.
+.Pp
+When set to
+.Sy none ,
+ZFS does not store any copies of metadata redundantly.
+If a single on-disk block is corrupt, an entire dataset can be lost.
+.Pp
+The default value is
+.Sy all .
+.It Sy refquota Ns = Ns Ar size Ns | Ns Sy none
+Limits the amount of space a dataset can consume.
+This property enforces a hard limit on the amount of space used.
+This hard limit does not include space used by descendents, including file
+systems and snapshots.
+.It Sy refreservation Ns = Ns Ar size Ns | Ns Sy none Ns | Ns Sy auto
+The minimum amount of space guaranteed to a dataset, not including its
+descendents.
+When the amount of space used is below this value, the dataset is treated as if
+it were taking up the amount of space specified by
+.Sy refreservation .
+The
+.Sy refreservation
+reservation is accounted for in the parent datasets' space used, and counts
+against the parent datasets' quotas and reservations.
+.Pp
+If
+.Sy refreservation
+is set, a snapshot is only allowed if there is enough free pool space outside of
+this reservation to accommodate the current number of
+.Qq referenced
+bytes in the dataset.
+.Pp
+If
+.Sy refreservation
+is set to
+.Sy auto ,
+a volume is thick provisioned
+.Po or
+.Qq not sparse
+.Pc .
+.Sy refreservation Ns = Ns Sy auto
+is only supported on volumes.
+See
+.Sy volsize
+in the
+.Sx Native Properties
+section for more information about sparse volumes.
+.Pp
+This property can also be referred to by its shortened column name,
+.Sy refreserv .
+.It Sy relatime Ns = Ns Sy on Ns | Ns Sy off
+Controls the manner in which the access time is updated when
+.Sy atime Ns = Ns Sy on
+is set.
+Turning this property on causes the access time to be updated relative
+to the modify or change time.
+Access time is only updated if the previous
+access time was earlier than the current modify or change time or if the
+existing access time hasn't been updated within the past 24 hours.
+The default value is
+.Sy on .
+The values
+.Sy on
+and
+.Sy off
+are equivalent to the
+.Sy relatime
+and
+.Sy norelatime
+mount options.
+.It Sy reservation Ns = Ns Ar size Ns | Ns Sy none
+The minimum amount of space guaranteed to a dataset and its descendants.
+When the amount of space used is below this value, the dataset is treated as if
+it were taking up the amount of space specified by its reservation.
+Reservations are accounted for in the parent datasets' space used, and count
+against the parent datasets' quotas and reservations.
+.Pp
+This property can also be referred to by its shortened column name,
+.Sy reserv .
+.It Sy secondarycache Ns = Ns Sy all Ns | Ns Sy none Ns | Ns Sy metadata
+Controls what is cached in the secondary cache
+.Pq L2ARC .
+If this property is set to
+.Sy all ,
+then both user data and metadata are cached.
+If this property is set to
+.Sy none ,
+then neither user data nor metadata is cached.
+If this property is set to
+.Sy metadata ,
+then only metadata is cached.
+The default value is
+.Sy all .
+.It Sy prefetch Ns = Ns Sy all Ns | Ns Sy none Ns | Ns Sy metadata
+Controls what speculative prefetch does.
+If this property is set to
+.Sy all ,
+then both user data and metadata are prefetched.
+If this property is set to
+.Sy none ,
+then neither user data nor metadata are prefetched.
+If this property is set to
+.Sy metadata ,
+then only metadata are prefetched.
+The default value is
+.Sy all .
+.Pp
+Please note that the module parameter
+.Sy zfs_prefetch_disable Ns = Ns Sy 1
+can be used to disable speculative prefetch entirely, bypassing anything
+this property does.
+.It Sy setuid Ns = Ns Sy on Ns | Ns Sy off
+Controls whether the setuid bit is respected for the file system.
+The default value is
+.Sy on .
+The values
+.Sy on
+and
+.Sy off
+are equivalent to the
+.Sy suid
+and
+.Sy nosuid
+mount options.
+.It Sy sharesmb Ns = Ns Sy on Ns | Ns Sy off Ns | Ns Ar opts
+Controls whether the file system is shared by using
+.Sy Samba USERSHARES ,
+and what options are to be used.
+The file system is automatically shared and unshared with the
+.Nm zfs Cm share
+and
+.Nm zfs Cm unshare
+commands.
+If the property is set to
+.Sy on ,
+the
+.Xr net 8
+command is invoked to create a
+.Sy USERSHARE .
+.Pp
+Because SMB shares require a resource name, a unique resource name is
+constructed from the dataset name.
+The constructed name is a copy of the dataset name, except that any
+characters in the dataset name which would be invalid in the resource name
+are replaced with underscore (_) characters.
+Linux does not currently support additional options which might be available
+on Solaris.
+.Pp
+If the
+.Sy sharesmb
+property is set to
+.Sy off ,
+the file systems are unshared.
+.Pp
+The share is created with the ACL (Access Control List) "Everyone:F" ("F"
+stands for "full permissions", i.e. read and write permissions) and no guest
+access (which means Samba must be able to authenticate a real user \(em
+.Xr passwd 5 Ns / Ns Xr shadow 5 Ns - ,
+LDAP- or
+.Xr smbpasswd 5 Ns -based )
+by default.
+This means that any additional access control
+.Pq e.g. denying specific users access
+must be done on the underlying file system.
+.Pp
+When the
+.Sy sharesmb
+property is updated with
+.Nm zfs Cm set Fl u ,
+the property is set to the desired value, but the operation to share,
+reshare, or unshare the dataset is not performed.
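+.Pp
+For example, on a hypothetical dataset
+.Ar tank/smbshare :
+.Bd -literal -compact -offset 4n
+.No example# Nm zfs Cm set Sy sharesmb Ns = Ns Sy on Ar tank/smbshare
+.Ed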
+.It Sy sharenfs Ns = Ns Sy on Ns | Ns Sy off Ns | Ns Ar opts
+Controls whether the file system is shared via NFS, and what options are to be
+used.
+A file system with a
+.Sy sharenfs
+property of
+.Sy off
+is managed with the
+.Xr exportfs 8
+command and entries in the
+.Pa /etc/exports
+file.
+Otherwise, the file system is automatically shared and unshared with the
+.Nm zfs Cm share
+and
+.Nm zfs Cm unshare
+commands.
+If the property is set to
+.Sy on ,
+the dataset is shared using the default options:
+.Dl sec=sys,rw,crossmnt,no_subtree_check
+.Pp
+Please note that the options are comma-separated, unlike those found in
+.Xr exports 5 .
+This is done to negate the need for quoting, as well as to make parsing
+with scripts easier.
+.Pp
+For
+.Fx ,
+there may be multiple sets of options separated by semicolons.
+Each set of options must apply to different hosts or networks and each
+set of options will create a separate line for
+.Xr exports 5 .
+Any semicolon separated option set that consists entirely of whitespace
+will be ignored.
+This use of semicolons is only for
+.Fx
+at this time.
+.Pp
+See
+.Xr exports 5
+for the meaning of the default options.
+Otherwise, the
+.Xr exportfs 8
+command is invoked with options equivalent to the contents of this property.
+.Pp
+When the
+.Sy sharenfs
+property is changed for a dataset, the dataset and any children inheriting the
+property are re-shared with the new options, only if the property was previously
+.Sy off ,
+or if they were shared before the property was changed.
+If the new property is
+.Sy off ,
+the file systems are unshared.
+.Pp
+When the
+.Sy sharenfs
+property is updated with
+.Nm zfs Cm set Fl u ,
+the property is set to the desired value, but the operation to share,
+reshare, or unshare the dataset is not performed.
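+.Pp
+For example, to share a hypothetical dataset
+.Ar tank/export
+with the default options, or read-only with otherwise equivalent options:
+.Bd -literal -compact -offset 4n
+.No example# Nm zfs Cm set Sy sharenfs Ns = Ns Sy on Ar tank/export
+.No example# Nm zfs Cm set Sy sharenfs Ns = Ns Ar sec=sys,ro,crossmnt,no_subtree_check Ar tank/export
+.Ed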
+.It Sy logbias Ns = Ns Sy latency Ns | Ns Sy throughput
+Provide a hint to ZFS about handling of synchronous requests in this dataset.
+If
+.Sy logbias
+is set to
+.Sy latency
+.Pq the default ,
+ZFS will use pool log devices
+.Pq if configured
+to handle the requests at low latency.
+If
+.Sy logbias
+is set to
+.Sy throughput ,
+ZFS will not use configured pool log devices.
+ZFS will instead optimize synchronous operations for global pool throughput and
+efficient use of resources.
+.It Sy snapdev Ns = Ns Sy hidden Ns | Ns Sy visible
+Controls whether the volume snapshot devices under
+.Pa /dev/zvol/ Ns Aq Ar pool
+are hidden or visible.
+The default value is
+.Sy hidden .
+.It Sy snapdir Ns = Ns Sy disabled Ns | Ns Sy hidden Ns | Ns Sy visible
+Controls whether the
+.Pa .zfs
+directory is disabled, hidden or visible in the root of the file system as
+discussed in the
+.Sx Snapshots
+section of
+.Xr zfsconcepts 7 .
+The default value is
+.Sy hidden .
+.It Sy sync Ns = Ns Sy standard Ns | Ns Sy always Ns | Ns Sy disabled
+Controls the behavior of synchronous requests
+.Pq e.g. fsync, O_DSYNC .
+.Sy standard
+is the POSIX-specified behavior of ensuring all synchronous requests
+are written to stable storage and all devices are flushed to ensure
+data is not cached by device controllers
+.Pq this is the default .
+.Sy always
+causes every file system transaction to be written and flushed before its
+system call returns.
+This has a large performance penalty.
+.Sy disabled
+disables synchronous requests.
+File system transactions are only committed to stable storage periodically.
+This option will give the highest performance.
+However, it is very dangerous as ZFS would be ignoring the synchronous
+transaction demands of applications such as databases or NFS.
+Administrators should only use this option when the risks are understood.
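+.Pp
+For example, to force every transaction to stable storage on a hypothetical
+dataset
+.Ar tank/db :
+.Bd -literal -compact -offset 4n
+.No example# Nm zfs Cm set Sy sync Ns = Ns Sy always Ar tank/db
+.Ed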
+.It Sy version Ns = Ns Ar N Ns | Ns Sy current
+The on-disk version of this file system, which is independent of the pool
+version.
+This property can only be set to later supported versions.
+See the
+.Nm zfs Cm upgrade
+command.
+.It Sy volsize Ns = Ns Ar size
+For volumes, specifies the logical size of the volume.
+By default, creating a volume establishes a reservation of equal size.
+For storage pools with a version number of 9 or higher, a
+.Sy refreservation
+is set instead.
+Any changes to
+.Sy volsize
+are reflected in an equivalent change to the reservation
+.Pq or Sy refreservation .
+The
+.Sy volsize
+can only be set to a multiple of
+.Sy volblocksize ,
+and cannot be zero.
+.Pp
+The reservation is kept equal to the volume's logical size to prevent unexpected
+behavior for consumers.
+Without the reservation, the volume could run out of space, resulting in
+undefined behavior or data corruption, depending on how the volume is used.
+These effects can also occur when the volume size is changed while it is in use
+.Pq particularly when shrinking the size .
+Extreme care should be used when adjusting the volume size.
+.Pp
+Though not recommended, a
+.Qq sparse volume
+.Po also known as
+.Qq thin provisioned
+.Pc
+can be created by specifying the
+.Fl s
+option to the
+.Nm zfs Cm create Fl V
+command, or by changing the value of the
+.Sy refreservation
+property
+.Po or
+.Sy reservation
+property on pool version 8 or earlier
+.Pc
+after the volume has been created.
+A
+.Qq sparse volume
+is a volume where the value of
+.Sy refreservation
+is less than the size of the volume plus the space required to store its
+metadata.
+Consequently, writes to a sparse volume can fail with
+.Er ENOSPC
+when the pool is low on space.
+For a sparse volume, changes to
+.Sy volsize
+are not reflected in the
+.Sy refreservation .
+A volume that is not sparse is said to be
+.Qq thick provisioned .
+A sparse volume can become thick provisioned by setting
+.Sy refreservation
+to
+.Sy auto .
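+.Pp
+For example, a sparse volume can be created and later converted to thick
+provisioning; the name and size here are illustrative:
+.Bd -literal -compact -offset 4n
+.No example# Nm zfs Cm create Fl s Fl V Ar 100G Ar tank/vol
+.No example# Nm zfs Cm set Sy refreservation Ns = Ns Sy auto Ar tank/vol
+.Ed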
+.It Sy volmode Ns = Ns Sy default Ns | Ns Sy full Ns | Ns Sy geom Ns | Ns Sy dev Ns | Ns Sy none
+This property specifies how volumes should be exposed to the OS.
+Setting it to
+.Sy full
+exposes volumes as fully fledged block devices, providing maximal
+functionality.
+The value
+.Sy geom
+is just an alias for
+.Sy full
+and is kept for compatibility.
+Setting it to
+.Sy dev
+hides its partitions.
+Volumes with the property set to
+.Sy none
+are not exposed outside ZFS, but can still be snapshotted, cloned, and
+replicated, which can make them suitable for backup purposes.
+The value
+.Sy default
+means that volume exposure is controlled by the system-wide tunable
+.Sy zvol_volmode ,
+where
+.Sy full ,
+.Sy dev
+and
+.Sy none
+are encoded as 1, 2 and 3 respectively.
+The default value is
+.Sy full .
+.It Sy vscan Ns = Ns Sy on Ns | Ns Sy off
+Controls whether regular files should be scanned for viruses when a file is
+opened and closed.
+In addition to enabling this property, the virus scan service must also be
+enabled for virus scanning to occur.
+The default value is
+.Sy off .
+This property is not used by OpenZFS.
+.It Sy xattr Ns = Ns Sy on Ns | Ns Sy off Ns | Ns Sy dir Ns | Ns Sy sa
+Controls whether extended attributes are enabled for this file system.
+Two styles of extended attributes are supported: either directory-based
+or system-attribute-based.
+.Pp
+Directory-based extended attributes can be enabled by setting the value to
+.Sy dir .
+This style of extended attribute imposes no practical limit
+on either the size or number of attributes which can be set on a file,
+although under Linux the
+.Xr getxattr 2
+and
+.Xr setxattr 2
+system calls limit the maximum size to
+.Sy 64K .
+This is the most compatible
+style of extended attribute and is supported by all ZFS implementations.
+.Pp
+System-attribute-based xattrs can be enabled by setting the value to
+.Sy sa ,
+which is the default and is equivalent to
+.Sy on .
+The key advantage of this type of xattr is improved performance.
+Storing extended attributes as system attributes
+significantly decreases the amount of disk I/O required.
+Up to
+.Sy 64K
+of data may be stored per-file in the space reserved for system attributes.
+If there is not enough space available for an extended attribute
+then it will be automatically written as a directory-based xattr.
+System-attribute-based extended attributes are not accessible
+on platforms which do not support the
+.Sy xattr Ns = Ns Sy sa
+feature.
+OpenZFS supports
+.Sy xattr Ns = Ns Sy sa
+on both
+.Fx
+and Linux.
+.Pp
+The use of system-attribute-based xattrs is strongly encouraged for users of
+SELinux or POSIX ACLs.
+Both of these features heavily rely on extended
+attributes and benefit significantly from the reduced access time.
+.Pp
+The values
+.Sy on
+and
+.Sy off
+are equivalent to the
+.Sy xattr
+and
+.Sy noxattr
+mount options.
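+.Pp
+For example, to switch a hypothetical dataset
+.Ar tank/fs
+to system-attribute-based xattrs:
+.Bd -literal -compact -offset 4n
+.No example# Nm zfs Cm set Sy xattr Ns = Ns Sy sa Ar tank/fs
+.Ed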
+.It Sy jailed Ns = Ns Sy off Ns | Ns Sy on
+Controls whether the dataset is managed from a jail.
+See
+.Xr zfs-jail 8
+for more information.
+Jails are a
+.Fx
+feature and this property is not available on other platforms.
+.It Sy zoned Ns = Ns Sy off Ns | Ns Sy on
+Controls whether the dataset is managed from a non-global zone or namespace.
+See
+.Xr zfs-zone 8
+for more information.
+Zoning is a Linux feature and this property is not available on other
+platforms.
+.El
+.Pp
+The following three properties cannot be changed after the file system is
+created, and therefore should be set when the file system is created.
+If the properties are not set with the
+.Nm zfs Cm create
+or
+.Nm zpool Cm create
+commands, these properties are inherited from the parent dataset.
+If the parent dataset lacks these properties due to having been created prior to
+these features being supported, the new file system will have the default values
+for these properties.
+.Bl -tag -width ""
+.It Xo
+.Sy casesensitivity Ns = Ns Sy sensitive Ns | Ns
+.Sy insensitive Ns | Ns Sy mixed
+.Xc
+Indicates whether the file name matching algorithm used by the file system
+should be case-sensitive, case-insensitive, or allow a combination of both
+styles of matching.
+The default value for the
+.Sy casesensitivity
+property is
+.Sy sensitive .
+Traditionally,
+.Ux
+and POSIX file systems have case-sensitive file names.
+.Pp
+The
+.Sy mixed
+value for the
+.Sy casesensitivity
+property indicates that the file system can support requests for both
+case-sensitive and case-insensitive matching behavior.
+Currently, case-insensitive matching behavior on a file system that supports
+mixed behavior is limited to the SMB server product.
+For more information about the
+.Sy mixed
+value behavior, see the "ZFS Administration Guide".
+.It Xo
+.Sy normalization Ns = Ns Sy none Ns | Ns Sy formC Ns | Ns
+.Sy formD Ns | Ns Sy formKC Ns | Ns Sy formKD
+.Xc
+Indicates whether the file system should perform a
+.Sy unicode
+normalization of file names whenever two file names are compared, and which
+normalization algorithm should be used.
+File names are always stored unmodified; names are normalized as part of any
+comparison process.
+If this property is set to a legal value other than
+.Sy none ,
+and the
+.Sy utf8only
+property was left unspecified, the
+.Sy utf8only
+property is automatically set to
+.Sy on .
+The default value of the
+.Sy normalization
+property is
+.Sy none .
+This property cannot be changed after the file system is created.
+.It Sy utf8only Ns = Ns Sy on Ns | Ns Sy off
+Indicates whether the file system should reject file names that include
+characters that are not present in the
+.Sy UTF-8
+character code set.
+If this property is explicitly set to
+.Sy off ,
+the normalization property must either not be explicitly set or be set to
+.Sy none .
+The default value for the
+.Sy utf8only
+property is
+.Sy off .
+This property cannot be changed after the file system is created.
+.El
+.Pp
+The
+.Sy casesensitivity ,
+.Sy normalization ,
+and
+.Sy utf8only
+properties are also new permissions that can be assigned to non-privileged users
+by using the ZFS delegated administration feature.
+.
+.Ss Temporary Mount Point Properties
+When a file system is mounted, either through
+.Xr mount 8
+for legacy mounts or the
+.Nm zfs Cm mount
+command for normal file systems, its mount options are set according to its
+properties.
+The correlation between properties and mount options is as follows:
+.Bl -tag -compact -offset Ds -width "rootcontext="
+.It Sy atime
+atime/noatime
+.It Sy canmount
+auto/noauto
+.It Sy devices
+dev/nodev
+.It Sy exec
+exec/noexec
+.It Sy readonly
+ro/rw
+.It Sy relatime
+relatime/norelatime
+.It Sy setuid
+suid/nosuid
+.It Sy xattr
+xattr/noxattr
+.It Sy nbmand
+mand/nomand
+.It Sy context Ns =
+context=
+.It Sy fscontext Ns =
+fscontext=
+.It Sy defcontext Ns =
+defcontext=
+.It Sy rootcontext Ns =
+rootcontext=
+.El
+.Pp
+In addition, these options can be set on a per-mount basis using the
+.Fl o
+option, without affecting the property that is stored on disk.
+The values specified on the command line override the values stored in the
+dataset.
+The
+.Sy nosuid
+option is an alias for
+.Sy nodevices , Ns Sy nosetuid .
+These properties are reported as
+.Qq temporary
+by the
+.Nm zfs Cm get
+command.
+If the properties are changed while the dataset is mounted, the new setting
+overrides any temporary settings.
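+.Pp
+For example, to mount a hypothetical dataset
+.Ar tank/data
+read-only for one session without changing the stored
+.Sy readonly
+property:
+.Bd -literal -compact -offset 4n
+.No example# Nm zfs Cm mount Fl o Sy ro Ar tank/data
+.Ed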
+.
+.Ss User Properties
+In addition to the standard native properties, ZFS supports arbitrary user
+properties.
+User properties have no effect on ZFS behavior, but applications or
+administrators can use them to annotate datasets
+.Pq file systems, volumes, and snapshots .
+.Pp
+User property names must contain a colon
+.Pq Qq Sy \&:
+character to distinguish them from native properties.
+They may contain lowercase letters, numbers, and the following punctuation
+characters: colon
+.Pq Qq Sy \&: ,
+dash
+.Pq Qq Sy - ,
+period
+.Pq Qq Sy \&. ,
+and underscore
+.Pq Qq Sy _ .
+The expected convention is that the property name is divided into two portions
+such as
+.Ar module : Ns Ar property ,
+but this namespace is not enforced by ZFS.
+User property names can be at most 256 characters, and cannot begin with a dash
+.Pq Qq Sy - .
+.Pp
+When making programmatic use of user properties, it is strongly suggested to use
+a reversed DNS domain name for the
+.Ar module
+component of property names to reduce the chance that two
+independently-developed packages use the same property name for different
+purposes.
+.Pp
+The values of user properties are arbitrary strings, are always inherited, and
+are never validated.
+All of the commands that operate on properties
+.Po Nm zfs Cm list ,
+.Nm zfs Cm get ,
+.Nm zfs Cm set ,
+and so forth
+.Pc
+can be used to manipulate both native properties and user properties.
+Use the
+.Nm zfs Cm inherit
+command to clear a user property.
+If the property is not defined in any parent dataset, it is removed entirely.
+Property values are limited to 8192 bytes.
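+.Pp
+For example, following the reversed DNS convention with an illustrative
+property name and dataset:
+.Bd -literal -compact -offset 4n
+.No example# Nm zfs Cm set Ar com.example:department Ns = Ns Ar 12345 Ar tank/accounting
+.No example# Nm zfs Cm inherit Ar com.example:department Ar tank/accounting
+.Ed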
diff --git a/share/man/man7/zpool-features.7 b/share/man/man7/zpool-features.7
@@ -0,0 +1,1060 @@
+.\"
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" The contents of this file are subject to the terms of the Common Development
+.\" and Distribution License (the "License"). You may not use this file except
+.\" in compliance with the License. You can obtain a copy of the license at
+.\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0.
+.\"
+.\" See the License for the specific language governing permissions and
+.\" limitations under the License. When distributing Covered Code, include this
+.\" CDDL HEADER in each file and include the License file at
+.\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
+.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
+.\" own identifying information:
+.\" Portions Copyright [yyyy] [name of copyright owner]
+.\" Copyright (c) 2019, 2023, 2024, Klara, Inc.
+.\" Copyright (c) 2019, Allan Jude
+.\" Copyright (c) 2021, Colm Buckley <colm@tuatha.org>
+.\"
+.Dd October 2, 2024
+.Dt ZPOOL-FEATURES 7
+.Os
+.
+.Sh NAME
+.Nm zpool-features
+.Nd description of ZFS pool features
+.
+.Sh DESCRIPTION
+ZFS pool on-disk format versions are specified via
+.Dq features
+which replace the old on-disk format numbers
+.Pq the last supported on-disk format number is 28 .
+To enable a feature on a pool use the
+.Nm zpool Cm upgrade
+command, or set the
+.Sy feature Ns @ Ns Ar feature-name
+property to
+.Sy enabled .
+Please also see the
+.Sx Compatibility feature sets
+section for information on how sets of features may be enabled together.
+.Pp
+The pool format does not affect file system version compatibility or the ability
+to send file systems between pools.
+.Pp
+Since most features can be enabled independently of each other, the on-disk
+format of the pool is specified by the set of all features marked as
+.Sy active
+on the pool.
+If the pool was created by another software version
+this set may include unsupported features.
+.
+.Ss Identifying features
+Every feature has a GUID of the form
+.Ar com.example : Ns Ar feature-name .
+The reversed DNS name ensures that the feature's GUID is unique across all ZFS
+implementations.
+When unsupported features are encountered on a pool they will
+be identified by their GUIDs.
+Refer to the documentation for the ZFS
+implementation that created the pool for information about those features.
+.Pp
+Each supported feature also has a short name.
+By convention a feature's short name is the portion of its GUID which follows
+the
+.Sq \&:
+.Po
+i.e.
+.Ar com.example : Ns Ar feature-name
+would have the short name
+.Ar feature-name
+.Pc ,
+however a feature's short name may differ across ZFS implementations if
+following the convention would result in name conflicts.
+.
+.Ss Feature states
+Features can be in one of three states:
+.Bl -tag -width "disabled"
+.It Sy active
+This feature's on-disk format changes are in effect on the pool.
+Support for this feature is required to import the pool in read-write mode.
+If this feature is not read-only compatible,
+support is also required to import the pool in read-only mode
+.Pq see Sx Read-only compatibility .
+.It Sy enabled
+An administrator has marked this feature as enabled on the pool, but the
+feature's on-disk format changes have not been made yet.
+The pool can still be imported by software that does not support this feature,
+but changes may be made to the on-disk format at any time
+which will move the feature to the
+.Sy active
+state.
+Some features may support returning to the
+.Sy enabled
+state after becoming
+.Sy active .
+See feature-specific documentation for details.
+.It Sy disabled
+This feature's on-disk format changes have not been made and will not be made
+unless an administrator moves the feature to the
+.Sy enabled
+state.
+Features cannot be disabled once they have been enabled.
+.El
+.Pp
+The state of supported features is exposed through pool properties of the form
+.Sy feature Ns @ Ns Ar short-name .
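+.Pp
+For example, the state of a single feature can be inspected, or the feature
+enabled, as follows
+.Pq the pool name Ar tank No is illustrative :
+.Bd -literal -compact -offset 4n
+.No example# Nm zpool Cm get Sy feature@async_destroy Ar tank
+.No example# Nm zpool Cm set Sy feature@async_destroy Ns = Ns Sy enabled Ar tank
+.Ed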
+.
+.Ss Read-only compatibility
+Some features may make on-disk format changes that do not interfere with other
+software's ability to read from the pool.
+These features are referred to as
+.Dq read-only compatible .
+If all unsupported features on a pool are read-only compatible,
+the pool can be imported in read-only mode by setting the
+.Sy readonly
+property during import
+.Po see
+.Xr zpool-import 8
+for details on importing pools
+.Pc .
+.
+.Ss Unsupported features
+For each unsupported feature enabled on an imported pool, a pool property
+named
+.Sy unsupported Ns @ Ns Ar feature-name
+will indicate why the import was allowed despite the unsupported feature.
+Possible values for this property are:
+.Bl -tag -width "readonly"
+.It Sy inactive
+The feature is in the
+.Sy enabled
+state and therefore the pool's on-disk
+format is still compatible with software that does not support this feature.
+.It Sy readonly
+The feature is read-only compatible and the pool has been imported in
+read-only mode.
+.El
+.
+.Ss Feature dependencies
+Some features depend on other features being enabled in order to function.
+Enabling a feature will automatically enable any features it depends on.
+.
+.Ss Compatibility feature sets
+It is sometimes necessary for a pool to maintain compatibility with a
+specific on-disk format, by enabling and disabling particular features.
+The
+.Sy compatibility
+feature facilitates this by allowing feature sets to be read from text files.
+When set to
+.Sy off
+.Pq the default ,
+compatibility feature sets are disabled
+.Pq i.e. all features are enabled ;
+when set to
+.Sy legacy ,
+no features are enabled.
+When set to a comma-separated list of filenames
+.Po
+each filename may either be an absolute path, or relative to
+.Pa /etc/zfs/compatibility.d
+or
+.Pa /usr/share/zfs/compatibility.d
+.Pc ,
+the lists of requested features are read from those files,
+separated by whitespace and/or commas.
+Only features present in all files are enabled.
+.Pp
+Simple sanity checks are applied to the files:
+they must be between 1 B and 16 KiB in size, and must end with a newline
+character.
+.Pp
+The requested features are applied when a pool is created using
+.Nm zpool Cm create Fl o Sy compatibility Ns = Ns Ar …
+and also control which features are enabled when using
+.Nm zpool Cm upgrade .
+.Nm zpool Cm status
+will not show a warning about disabled features which are not part
+of the requested feature set.
+.Pp
+The special value
+.Sy legacy
+prevents any features from being enabled, either via
+.Nm zpool Cm upgrade
+or
+.Nm zpool Cm set Sy feature Ns @ Ns Ar feature-name Ns = Ns Sy enabled .
+This setting also prevents pools from being upgraded to newer on-disk versions.
+This is a safety measure to prevent new features from being
+accidentally enabled, breaking compatibility.
+.Pp
+By convention, compatibility files in
+.Pa /usr/share/zfs/compatibility.d
+are provided by the distribution, and include feature sets
+supported by important versions of popular distributions, and feature
+sets commonly supported at the start of each year.
+Compatibility files in
+.Pa /etc/zfs/compatibility.d ,
+if present, will take precedence over files with the same name in
+.Pa /usr/share/zfs/compatibility.d .
+.Pp
+If an unrecognized feature is found in these files, an error message will
+be shown.
+If the unrecognized feature is in a file in
+.Pa /etc/zfs/compatibility.d ,
+this is treated as an error and processing will stop.
+If the unrecognized feature is under
+.Pa /usr/share/zfs/compatibility.d ,
+this is treated as a warning and processing will continue.
+This difference is to allow distributions to include features
+which might not be recognized by the currently-installed binaries.
+.Pp
+Compatibility files may include comments:
+any text from
+.Sq #
+to the end of the line is ignored.
+.Pp
+.Sy Example :
+.Bd -literal -compact -offset 4n
+.No example# Nm cat Pa /usr/share/zfs/compatibility.d/grub2
+# Features which are supported by GRUB2 versions from v2.12 onwards.
+allocation_classes
+async_destroy
+block_cloning
+bookmarks
+device_rebuild
+embedded_data
+empty_bpobj
+enabled_txg
+extensible_dataset
+filesystem_limits
+hole_birth
+large_blocks
+livelist
+log_spacemap
+lz4_compress
+project_quota
+resilver_defer
+spacemap_histogram
+spacemap_v2
+userobj_accounting
+zilsaxattr
+zpool_checkpoint
+
+.No example# Nm cat Pa /usr/share/zfs/compatibility.d/grub2-2.06
+# Features which are supported by GRUB2 versions prior to v2.12.
+#
+# GRUB is not able to detect a ZFS pool if a snapshot of the top-level boot
+# pool is created. This issue is observed with GRUB versions before v2.12 if
+# extensible_dataset feature is enabled on ZFS boot pool.
+#
+# This file lists all read-only compatible features except
+# extensible_dataset and any other feature that depends on it.
+#
+allocation_classes
+async_destroy
+block_cloning
+device_rebuild
+embedded_data
+empty_bpobj
+enabled_txg
+hole_birth
+log_spacemap
+lz4_compress
+resilver_defer
+spacemap_histogram
+spacemap_v2
+zpool_checkpoint
+
+.No example# Nm zpool Cm create Fl o Sy compatibility Ns = Ns Ar grub2 Ar bootpool Ar vdev
+.Ed
+.Pp
+See
+.Xr zpool-create 8
+and
+.Xr zpool-upgrade 8
+for more information on how these commands are affected by feature sets.
+.
+.de feature
+.It Sy \\$2
+.Bl -tag -compact -width "READ-ONLY COMPATIBLE"
+.It GUID
+.Sy \\$1:\\$2
+.if !"\\$4"" \{\
+.It DEPENDENCIES
+\fB\\$4\fP\c
+.if !"\\$5"" , \fB\\$5\fP\c
+.if !"\\$6"" , \fB\\$6\fP\c
+.if !"\\$7"" , \fB\\$7\fP\c
+.if !"\\$8"" , \fB\\$8\fP\c
+.if !"\\$9"" , \fB\\$9\fP\c
+.\}
+.It READ-ONLY COMPATIBLE
+\\$3
+.El
+.Pp
+..
+.
+.ds instant-never \
+.No This feature becomes Sy active No as soon as it is enabled \
+and will never return to being Sy enabled .
+.
+.ds remount-upgrade \
+.No Each filesystem will be upgraded automatically when remounted, \
+or when a new file is created under that filesystem. \
+The upgrade can also be triggered on filesystems via \
+Nm zfs Cm set Sy version Ns = Ns Sy current Ar fs . \
+No The upgrade process runs in the background and may take a while to complete \
+for filesystems containing large amounts of files .
+.
+.de checksum-spiel
+When the
+.Sy \\$1
+feature is set to
+.Sy enabled ,
+the administrator can turn on the
+.Sy \\$1
+checksum on any dataset using
+.Nm zfs Cm set Sy checksum Ns = Ns Sy \\$1 Ar dset
+.Po see Xr zfs-set 8 Pc .
+This feature becomes
+.Sy active
+once a
+.Sy checksum
+property has been set to
+.Sy \\$1 ,
+and will return to being
+.Sy enabled
+once all filesystems that have ever had their checksum set to
+.Sy \\$1
+are destroyed.
+..
+.
+.Sh FEATURES
+The following features are supported on this system:
+.Bl -tag -width Ds
+.feature org.zfsonlinux allocation_classes yes
+This feature enables support for separate allocation classes.
+.Pp
+This feature becomes
+.Sy active
+when a dedicated allocation class vdev
+.Pq dedup or special
+is created with the
+.Nm zpool Cm create No or Nm zpool Cm add No commands .
+With device removal, it can be returned to the
+.Sy enabled
+state if all the dedicated allocation class vdevs are removed.
+.
+.feature com.delphix async_destroy yes
+Destroying a file system requires traversing all of its data in order to
+return its used space to the pool.
+Without
+.Sy async_destroy ,
+the file system is not fully removed until all space has been reclaimed.
+If the destroy operation is interrupted by a reboot or power outage,
+the next attempt to open the pool will need to complete the destroy
+operation synchronously.
+.Pp
+When
+.Sy async_destroy
+is enabled, the file system's data will be reclaimed by a background process,
+allowing the destroy operation to complete
+without traversing the entire file system.
+The background process is able to resume
+interrupted destroys after the pool has been opened, eliminating the need
+to finish interrupted destroys as part of the open operation.
+The amount of space remaining to be reclaimed by the background process
+is available through the
+.Sy freeing
+property.
+.Pp
+This feature is only
+.Sy active
+while
+.Sy freeing
+is non-zero.
+.
+.feature org.openzfs blake3 no extensible_dataset
+This feature enables the use of the BLAKE3 hash algorithm for checksum and
+dedup.
+BLAKE3 is a secure hash algorithm focused on high performance.
+.Pp
+.checksum-spiel blake3
+.
+.feature com.fudosecurity block_cloning yes
+When this feature is enabled ZFS will use block cloning for operations like
+.Xr copy_file_range 2 .
+Block cloning allows multiple references to a single block to be created.
+It is much faster than copying the data (as the actual data is neither read nor
+written) and takes no additional space.
+Blocks can be cloned across datasets under some conditions (like equal
+.Nm recordsize ,
+the same master encryption key, etc.).
+ZFS tries its best to clone across datasets including encrypted ones.
+This is limited for various (nontrivial) reasons depending on the OS
+and/or ZFS internals.
+.Pp
+This feature becomes
+.Sy active
+when the first block is cloned.
+When the last cloned block is freed, it goes back to the
+.Sy enabled
+state.
+.
+.feature com.delphix bookmarks yes extensible_dataset
+This feature enables use of the
+.Nm zfs Cm bookmark
+command.
+.Pp
+This feature is
+.Sy active
+while any bookmarks exist in the pool.
+All bookmarks in the pool can be listed by running
+.Nm zfs Cm list Fl t Sy bookmark Fl r Ar poolname .
+.
+.feature com.datto bookmark_v2 no bookmark extensible_dataset
+This feature enables the creation and management of larger bookmarks which are
+needed for other features in ZFS.
+.Pp
+This feature becomes
+.Sy active
+when a v2 bookmark is created and will be returned to the
+.Sy enabled
+state when all v2 bookmarks are destroyed.
+.
+.feature com.delphix bookmark_written no bookmark extensible_dataset bookmark_v2
+This feature enables additional bookmark accounting fields, enabling the
+.Sy written Ns # Ns Ar bookmark
+property
+.Pq space written since a bookmark
+and estimates of send stream sizes for incrementals from bookmarks.
+.Pp
+This feature becomes
+.Sy active
+when a bookmark is created and will be
+returned to the
+.Sy enabled
+state when all bookmarks with these fields are destroyed.
+.
+.feature org.openzfs device_rebuild yes
+This feature enables the ability for the
+.Nm zpool Cm attach
+and
+.Nm zpool Cm replace
+commands to perform sequential reconstruction
+.Pq instead of healing reconstruction
+when resilvering.
+.Pp
+Sequential reconstruction resilvers a device in LBA order without immediately
+verifying the checksums.
+Once complete, a scrub is started, which then verifies the checksums.
+This approach allows full redundancy to be restored to the pool
+in the minimum amount of time.
+This two-phase approach will take longer than a healing resilver
+when the time to verify the checksums is included.
+However, unless there is additional pool damage,
+no checksum errors should be reported by the scrub.
+This feature is incompatible with raidz configurations.
+.Pp
+This feature becomes
+.Sy active
+while a sequential resilver is in progress, and returns to
+.Sy enabled
+when the resilver completes.
+.
+.feature com.delphix device_removal no
+This feature enables the
+.Nm zpool Cm remove
+command to remove top-level vdevs,
+evacuating them to reduce the total size of the pool.
+.Pp
+This feature becomes
+.Sy active
+when the
+.Nm zpool Cm remove
+command is used
+on a top-level vdev, and will never return to being
+.Sy enabled .
+.
+.feature org.openzfs draid no
+This feature enables use of the
+.Sy draid
+vdev type.
+dRAID is a variant of RAID-Z which provides integrated distributed
+hot spares that allow faster resilvering while retaining the benefits of RAID-Z.
+Data, parity, and spare space are organized in redundancy groups
+and distributed evenly over all of the devices.
+.Pp
+This feature becomes
+.Sy active
+when creating a pool which uses the
+.Sy draid
+vdev type, or when adding a new
+.Sy draid
+vdev to an existing pool.
+.
+.feature org.illumos edonr no extensible_dataset
+This feature enables the use of the Edon-R hash algorithm for checksum,
+including for nopwrite
+.Po if compression is also enabled, an overwrite of
+a block whose checksum matches the data being written will be ignored
+.Pc .
+In an abundance of caution, Edon-R requires verification when used with
+dedup:
+.Nm zfs Cm set Sy dedup Ns = Ns Sy edonr , Ns Sy verify
+.Po see Xr zfs-set 8 Pc .
+.Pp
+Edon-R is a very high-performance hash algorithm that was part
+of the NIST SHA-3 competition.
+It provides extremely high hash performance
+.Pq over 350% faster than SHA-256 ,
+but was not selected because of its unsuitability
+as a general purpose secure hash algorithm.
+This implementation utilizes the new salted checksumming functionality
+in ZFS, which means that the checksum is pre-seeded with a secret
+256-bit random key
+.Pq stored on the pool
+before being fed the data block to be checksummed.
+Thus the produced checksums are unique to a given pool,
+preventing hash collision attacks on systems with dedup.
+.Pp
+.checksum-spiel edonr
+.
+.feature com.delphix embedded_data no
+This feature improves the performance and compression ratio of
+highly-compressible blocks.
+Blocks whose contents can compress to 112 bytes
+or smaller can take advantage of this feature.
+.Pp
+When this feature is enabled, the contents of highly-compressible blocks are
+stored in the block
+.Dq pointer
+itself
+.Po a misnomer in this case, as it contains
+the compressed data, rather than a pointer to its location on disk
+.Pc .
+Thus the space of the block
+.Pq one sector, typically 512 B or 4 KiB
+is saved, and no additional I/O is needed to read and write the data block.
+.
+\*[instant-never]
+.
+.feature com.delphix empty_bpobj yes
+This feature increases the performance of creating and using a large
+number of snapshots of a single filesystem or volume, and also reduces
+the disk space required.
+.Pp
+When there are many snapshots, each snapshot uses many Block Pointer
+Objects
+.Pq bpobjs
+to track blocks associated with that snapshot.
+However, in common use cases, most of these bpobjs are empty.
+This feature allows us to create each bpobj on-demand,
+thus eliminating the empty bpobjs.
+.Pp
+This feature is
+.Sy active
+while there are any filesystems, volumes,
+or snapshots which were created after enabling this feature.
+.
+.feature com.delphix enabled_txg yes
+Once this feature is enabled, ZFS records the transaction group number
+in which new features are enabled.
+This has no user-visible impact, but other features may depend on this feature.
+.Pp
+This feature becomes
+.Sy active
+as soon as it is enabled and will never return to being
+.Sy enabled .
+.
+.feature com.datto encryption no bookmark_v2 extensible_dataset
+This feature enables the creation and management of natively encrypted datasets.
+.Pp
+This feature becomes
+.Sy active
+when an encrypted dataset is created and will be returned to the
+.Sy enabled
+state when all datasets that use this feature are destroyed.
+.
+.feature com.klarasystems fast_dedup yes
+This feature allows more advanced deduplication features to be enabled on new
+dedup tables.
+.Pp
+This feature will be
+.Sy active
+when the first deduplicated block is written after a new dedup table is created
+(ie after a new pool creation, or new checksum used on a dataset with
+.Sy dedup
+enabled).
+It will be returned to the
+.Sy enabled
+state when all deduplicated blocks using it are freed.
+.
+.feature com.delphix extensible_dataset no
+This feature allows more flexible use of internal ZFS data structures,
+and exists for other features to depend on.
+.Pp
+This feature will be
+.Sy active
+when the first dependent feature uses it, and will be returned to the
+.Sy enabled
+state when all datasets that use this feature are destroyed.
+.
+.feature com.joyent filesystem_limits yes extensible_dataset
+This feature enables filesystem and snapshot limits.
+These limits can be used to control how many filesystems and/or snapshots
+can be created at the point in the tree on which the limits are set.
+.Pp
+This feature is
+.Sy active
+once either of the limit properties has been set on a dataset
+and will never return to being
+.Sy enabled .
+.
+.feature com.delphix head_errlog no
+This feature enables the upgraded version of errlog, which required an on-disk
+error log format change.
+Now the error log of each head dataset is stored separately in the zap object
+and keyed by the head id.
+With this feature enabled, every dataset affected by an error block is listed
+in the output of
+.Nm zpool Cm status .
+For encrypted filesystems with unloaded keys, snapshots and clones cannot be
+checked for errors; an "access denied" error is reported for them instead.
+.Pp
+\*[instant-never]
+.
+.feature com.delphix hole_birth no enabled_txg
+This feature has/had bugs, the result of which is that, if you do a
+.Nm zfs Cm send Fl i
+.Pq or Fl R , No since it uses Fl i
+from an affected dataset, the receiving party will not see any checksum
+or other errors, but the resulting destination snapshot
+will not match the source.
+Its use by
+.Nm zfs Cm send Fl i
+has been disabled by default
+.Po
+see
+.Sy send_holes_without_birth_time
+in
+.Xr zfs 4
+.Pc .
+.Pp
+This feature improves performance of incremental sends
+.Pq Nm zfs Cm send Fl i
+and receives for objects with many holes.
+The most common case of hole-filled objects is zvols.
+.Pp
+An incremental send stream from snapshot
+.Sy A No to snapshot Sy B
+contains information about every block that changed between
+.Sy A No and Sy B .
+Blocks which did not change between those snapshots can be
+identified and omitted from the stream using a piece of metadata called
+the
+.Dq block birth time ,
+but birth times are not recorded for holes
+.Pq blocks filled only with zeroes .
+Since holes created after
+.Sy A No cannot be distinguished from holes created before Sy A ,
+information about every hole in the entire filesystem or zvol
+is included in the send stream.
+.Pp
+For workloads where holes are rare this is not a problem.
+However, when incrementally replicating filesystems or zvols with many holes
+.Pq for example a zvol formatted with another filesystem
+a lot of time will be spent sending and receiving unnecessary information
+about holes that already exist on the receiving side.
+.Pp
+Once the
+.Sy hole_birth
+feature has been enabled the block birth times
+of all new holes will be recorded.
+Incremental sends between snapshots created after this feature is enabled
+will use this new metadata to avoid sending information about holes that
+already exist on the receiving side.
+.Pp
+\*[instant-never]
+.
+.feature org.open-zfs large_blocks no extensible_dataset
+This feature allows the record size on a dataset to be set larger than 128 KiB.
+.Pp
+This feature becomes
+.Sy active
+once a dataset contains a file with a block size larger than 128 KiB,
+and will return to being
+.Sy enabled
+once all filesystems that have ever had their recordsize larger than 128 KiB
+are destroyed.
+.
+.feature org.zfsonlinux large_dnode no extensible_dataset
+This feature allows the size of dnodes in a dataset to be set larger than 512 B.
+.
+This feature becomes
+.Sy active
+once a dataset contains an object with a dnode larger than 512 B,
+which occurs as a result of setting the
+.Sy dnodesize
+dataset property to a value other than
+.Sy legacy .
+The feature will return to being
+.Sy enabled
+once all filesystems that have ever contained a dnode larger than 512 B
+are destroyed.
+Large dnodes allow more data to be stored in the bonus buffer,
+thus potentially improving performance by avoiding the use of spill blocks.
+.
+.feature com.klarasystems large_microzap yes extensible_dataset large_blocks
+This feature allows "micro" ZAPs to grow larger than 128 KiB without being
+upgraded to "fat" ZAPs.
+.Pp
+This feature becomes
+.Sy active
+the first time a micro ZAP grows larger than 128 KiB.
+It will only be returned to the
+.Sy enabled
+state when all datasets that ever had a large micro ZAP are destroyed.
+.Pp
+Note that even when this feature is enabled, micro ZAPs cannot grow larger
+than 128 KiB without also changing the
+.Sy zap_micro_max_size
+module parameter.
+See
+.Xr zfs 4 .
+.
+.feature com.delphix livelist yes extensible_dataset
+This feature allows clones to be deleted faster than the traditional method
+when a large number of random/sparse writes have been made to the clone.
+All blocks allocated and freed after a clone is created are tracked by
+the clone's livelist, which is referenced during the deletion of the clone.
+The feature is activated when a clone is created and remains
+.Sy active
+until all clones have been destroyed.
+.
+.feature com.delphix log_spacemap yes com.delphix:spacemap_v2
+This feature improves performance for heavily-fragmented pools,
+especially when workloads are heavy in random-writes.
+It does so by logging all the metaslab changes on a single spacemap every TXG
+instead of scattering multiple writes to all the metaslab spacemaps.
+.Pp
+\*[instant-never]
+.
+.feature org.zfsonlinux longname no extensible_dataset
+This feature allows creating files and directories with names up to 1023 bytes
+in length.
+A new dataset property
+.Sy longname
+is also introduced to toggle longname support for each dataset individually.
+This property can be disabled even if the dataset contains longname files.
+In that case, new files with long names cannot be created, but existing
+longname files can still be looked up.
+.Pp
+This feature becomes
+.Sy active
+when a file with a name longer than 255 bytes is created in a dataset,
+and returns to
+being
+.Sy enabled
+when all such datasets are destroyed.
+.
+.feature org.illumos lz4_compress no
+.Sy lz4
+is a high-performance real-time compression algorithm that
+features significantly faster compression and decompression as well as a
+higher compression ratio than the older
+.Sy lzjb
+compression.
+Typically,
+.Sy lz4
+compression is approximately 50% faster on compressible data and 200% faster
+on incompressible data than
+.Sy lzjb .
+It is also approximately 80% faster on decompression,
+while giving approximately a 10% better compression ratio.
+.Pp
+When the
+.Sy lz4_compress
+feature is set to
+.Sy enabled ,
+the administrator can turn on
+.Sy lz4
+compression on any dataset on the pool using the
+.Xr zfs-set 8
+command.
+All newly written metadata will be compressed with the
+.Sy lz4
+algorithm.
+.Pp
+\*[instant-never]
+.
+.feature com.joyent multi_vdev_crash_dump no
+This feature allows a dump device to be configured with a pool comprised
+of multiple vdevs.
+Those vdevs may be arranged in any mirrored or raidz configuration.
+.Pp
+When the
+.Sy multi_vdev_crash_dump
+feature is set to
+.Sy enabled ,
+the administrator can use
+.Xr dumpadm 8
+to configure a dump device on a pool comprised of multiple vdevs.
+.Pp
+Under
+.Fx
+and Linux this feature is unused, but registered for compatibility.
+New pools created on these systems will have the feature
+.Sy enabled
+but will never transition to
+.Sy active ,
+as this functionality is not required for crash dump support.
+Existing pools where this feature is
+.Sy active
+can be imported.
+.
+.feature com.delphix obsolete_counts yes device_removal
+This feature is an enhancement of
+.Sy device_removal ,
+which will over time reduce the memory used to track removed devices.
+When indirect blocks are freed or remapped,
+we note that their part of the indirect mapping is
+.Dq obsolete
+– no longer needed.
+.Pp
+This feature becomes
+.Sy active
+when the
+.Nm zpool Cm remove
+command is used on a top-level vdev, and will never return to being
+.Sy enabled .
+.
+.feature org.zfsonlinux project_quota yes extensible_dataset
+This feature allows administrators to account space and object usage
+against a project identifier
+.Pq ID .
+.Pp
+The project ID is an object-based attribute.
+When upgrading an existing filesystem,
+objects without a project ID will be assigned a zero project ID.
+When this feature is enabled, newly created objects inherit
+their parent directories' project ID if the parent's inherit flag is set
+.Pq via Nm chattr Sy [+-]P No or Nm zfs Cm project Fl s Ns | Ns Fl C .
+Otherwise, the new object's project ID will be zero.
+An object's project ID can be changed at any time by the owner
+.Pq or privileged user
+via
+.Nm chattr Fl p Ar prjid
+or
+.Nm zfs Cm project Fl p Ar prjid .
+.Pp
+This feature will become
+.Sy active
+as soon as it is enabled and will never return to being
+.Sy enabled .
+\*[remount-upgrade]
+.
+.feature org.openzfs raidz_expansion no none
+This feature enables the
+.Nm zpool Cm attach
+subcommand to attach a new device to a RAID-Z group, expanding the total
+amount of usable space in the pool.
+See
+.Xr zpool-attach 8 .
+.
+.feature com.delphix redaction_bookmarks no bookmarks extensible_dataset
+This feature enables the use of redacted
+.Nm zfs Cm send Ns s ,
+which create redaction bookmarks storing the list of blocks
+redacted by the send that created them.
+For more information about redacted sends, see
+.Xr zfs-send 8 .
+.
+.feature com.delphix redacted_datasets no extensible_dataset
+This feature enables the receiving of redacted
+.Nm zfs Cm send
+streams, which create redacted datasets when received.
+These datasets are missing some of their blocks,
+and so cannot be safely mounted, and their contents cannot be safely read.
+For more information about redacted receives, see
+.Xr zfs-send 8 .
+.
+.feature com.delphix redaction_list_spill no redaction_bookmarks
+This feature enables the redaction list created by zfs redact to store
+many more entries.
+It becomes
+.Sy active
+when a redaction list is created with more than 36 entries,
+and returns to being
+.Sy enabled
+when no long redaction lists remain in the pool.
+For more information about redacted sends, see
+.Xr zfs-send 8 .
+.
+.feature com.datto resilver_defer yes
+This feature allows ZFS to postpone new resilvers if an existing one is already
+in progress.
+Without this feature, any new resilvers will cause the currently
+running one to be immediately restarted from the beginning.
+.Pp
+This feature becomes
+.Sy active
+once a resilver has been deferred, and returns to being
+.Sy enabled
+when the deferred resilver begins.
+.
+.feature org.illumos sha512 no extensible_dataset
+This feature enables the use of the SHA-512/256 truncated hash algorithm
+.Pq FIPS 180-4
+for checksum and dedup.
+The native 64-bit arithmetic of SHA-512 provides an approximate 50%
+performance boost over SHA-256 on 64-bit hardware
+and is thus a good minimum-change replacement candidate
+for systems where hash performance is important,
+but these systems cannot for whatever reason utilize the faster
+.Sy skein No and Sy edonr
+algorithms.
+.Pp
+.checksum-spiel sha512
+.
+.feature org.illumos skein no extensible_dataset
+This feature enables the use of the Skein hash algorithm for checksum and dedup.
+Skein is a high-performance secure hash algorithm that was a
+finalist in the NIST SHA-3 competition.
+It provides a very high security margin and high performance on 64-bit hardware
+.Pq 80% faster than SHA-256 .
+This implementation also utilizes the new salted checksumming
+functionality in ZFS, which means that the checksum is pre-seeded with a
+secret 256-bit random key
+.Pq stored on the pool
+before being fed the data block to be checksummed.
+Thus the produced checksums are unique to a given pool,
+preventing hash collision attacks on systems with dedup.
+.Pp
+.checksum-spiel skein
+.
+.feature com.delphix spacemap_histogram yes
+This feature allows ZFS to maintain more information about how free space
+is organized within the pool.
+If this feature is
+.Sy enabled ,
+it will be activated when a new space map object is created, or
+an existing space map is upgraded to the new format,
+and never returns to being
+.Sy enabled .
+.
+.feature com.delphix spacemap_v2 yes
+This feature enables the use of the new space map encoding which
+consists of two words
+.Pq instead of one
+whenever it is advantageous.
+The new encoding allows space maps to represent large regions of
+space more efficiently on-disk while also increasing their maximum
+addressable offset.
+.Pp
+This feature becomes
+.Sy active
+once it is
+.Sy enabled ,
+and never returns to being
+.Sy enabled .
+.
+.feature org.zfsonlinux userobj_accounting yes extensible_dataset
+This feature allows administrators to account the object usage information
+by user and group.
+.Pp
+\*[instant-never]
+\*[remount-upgrade]
+.
+.feature com.klarasystems vdev_zaps_v2 no
+This feature creates a ZAP object for the root vdev.
+.Pp
+This feature becomes
+.Sy active
+after the next
+.Nm zpool Cm import
+or
+.Nm zpool Cm reguid .
+.Pp
+Properties can be retrieved or set on the root vdev using
+.Nm zpool Cm get
+and
+.Nm zpool Cm set
+with
+.Sy root
+as the vdev name which is an alias for
+.Sy root-0 .
+.
+.feature org.openzfs zilsaxattr yes extensible_dataset
+This feature enables
+.Sy xattr Ns = Ns Sy sa
+extended attribute logging in the ZIL.
+If enabled, extended attribute changes
+.Pq both Sy xattrdir Ns = Ns Sy dir No and Sy xattr Ns = Ns Sy sa
+are guaranteed to be durable if either the dataset had
+.Sy sync Ns = Ns Sy always
+set at the time the changes were made, or
+.Xr sync 2
+is called on the dataset after the changes were made.
+.Pp
+This feature becomes
+.Sy active
+when a ZIL is created for at least one dataset and will be returned to the
+.Sy enabled
+state when it is destroyed for all datasets that use this feature.
+.
+.feature com.delphix zpool_checkpoint yes
+This feature enables the
+.Nm zpool Cm checkpoint
+command that can checkpoint the state of the pool
+at the time it was issued and later rewind back to it or discard it.
+.Pp
+This feature becomes
+.Sy active
+when the
+.Nm zpool Cm checkpoint
+command is used to checkpoint the pool.
+The feature will only return back to being
+.Sy enabled
+when the pool is rewound or the checkpoint has been discarded.
+.
+.feature org.freebsd zstd_compress no extensible_dataset
+.Sy zstd
+is a high-performance compression algorithm that features a
+combination of high compression ratios and high speed.
+Compared to
+.Sy gzip ,
+.Sy zstd
+offers slightly better compression at much higher speeds.
+Compared to
+.Sy lz4 ,
+.Sy zstd
+offers much better compression while being only modestly slower.
+Typically,
+.Sy zstd
+compression speed ranges from 250 to 500 MB/s per thread
+and decompression speed is over 1 GB/s per thread.
+.Pp
+When the
+.Sy zstd
+feature is set to
+.Sy enabled ,
+the administrator can turn on
+.Sy zstd
+compression of any dataset using
+.Nm zfs Cm set Sy compress Ns = Ns Sy zstd Ar dset
+.Po see Xr zfs-set 8 Pc .
+This feature becomes
+.Sy active
+once a
+.Sy compress
+property has been set to
+.Sy zstd ,
+and will return to being
+.Sy enabled
+once all filesystems that have ever had their
+.Sy compress
+property set to
+.Sy zstd
+are destroyed.
+.El
+.
+.Sh SEE ALSO
+.Xr zfs 8 ,
+.Xr zpool 8
diff --git a/share/man/man7/zpoolconcepts.7 b/share/man/man7/zpoolconcepts.7
@@ -0,0 +1,512 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd April 7, 2023
+.Dt ZPOOLCONCEPTS 7
+.Os
+.
+.Sh NAME
+.Nm zpoolconcepts
+.Nd overview of ZFS storage pools
+.
+.Sh DESCRIPTION
+.Ss Virtual Devices (vdevs)
+A "virtual device" describes a single device or a collection of devices,
+organized according to certain performance and fault characteristics.
+The following virtual devices are supported:
+.Bl -tag -width "special"
+.It Sy disk
+A block device, typically located under
+.Pa /dev .
+ZFS can use individual slices or partitions, though the recommended mode of
+operation is to use whole disks.
+A disk can be specified by a full path, or it can be a shorthand name
+.Po the relative portion of the path under
+.Pa /dev
+.Pc .
+A whole disk can be specified by omitting the slice or partition designation.
+For example,
+.Pa sda
+is equivalent to
+.Pa /dev/sda .
+When given a whole disk, ZFS automatically labels the disk, if necessary.
+.It Sy file
+A regular file.
+The use of files as a backing store is strongly discouraged.
+It is designed primarily for experimental purposes, as the fault tolerance of a
+file is only as good as the file system on which it resides.
+A file must be specified by a full path.
+.It Sy mirror
+A mirror of two or more devices.
+Data is replicated in an identical fashion across all components of a mirror.
+A mirror with
+.Em N No disks of size Em X No can hold Em X No bytes and can withstand Em N-1
+devices failing, without losing data.
+.It Sy raidz , raidz1 , raidz2 , raidz3
+A distributed-parity layout, similar to RAID-5/6, with improved distribution of
+parity, and which does not suffer from the RAID-5/6
+.Qq write hole ,
+.Pq in which data and parity become inconsistent after a power loss .
+Data and parity are striped across all disks within a raidz group, though not
+necessarily in a consistent stripe width.
+.Pp
+A raidz group can have single, double, or triple parity, meaning that the
+raidz group can sustain one, two, or three failures, respectively, without
+losing any data.
+The
+.Sy raidz1
+vdev type specifies a single-parity raidz group; the
+.Sy raidz2
+vdev type specifies a double-parity raidz group; and the
+.Sy raidz3
+vdev type specifies a triple-parity raidz group.
+The
+.Sy raidz
+vdev type is an alias for
+.Sy raidz1 .
+.Pp
+A raidz group with
+.Em N No disks of size Em X No with Em P No parity disks can hold approximately
+.Em (N-P)*X No bytes and can withstand Em P No devices failing without losing data .
+The minimum number of devices in a raidz group is one more than the number of
+parity disks.
+The recommended number is between 3 and 9 to help increase performance.
+.It Sy draid , draid1 , draid2 , draid3
+A variant of raidz that provides integrated distributed hot spares, allowing
+for faster resilvering, while retaining the benefits of raidz.
+A dRAID vdev is constructed from multiple internal raidz groups, each with
+.Em D No data devices and Em P No parity devices .
+These groups are distributed over all of the children in order to fully
+utilize the available disk performance.
+.Pp
+Unlike raidz, dRAID uses a fixed stripe width (padding as necessary with
+zeros) to allow fully sequential resilvering.
+This fixed stripe width significantly affects both usable capacity and IOPS.
+For example, with the default
+.Em D=8 No and Em 4 KiB No disk sectors, the minimum allocation size is Em 32 KiB .
+If using compression, this relatively large allocation size can reduce the
+effective compression ratio.
+When using ZFS volumes (zvols) and dRAID, the default of the
+.Sy volblocksize
+property is increased to account for the allocation size.
+If a dRAID pool will hold a significant amount of small blocks, it is
+recommended to also add a mirrored
+.Sy special
+vdev to store those blocks.
+.Pp
+With regard to I/O, performance is similar to raidz since, for any read, all
+.Em D No data disks must be accessed .
+Delivered random IOPS can be reasonably approximated as
+.Sy floor((N-S)/(D+P))*single_drive_IOPS .
+.Pp
+Like raidz, a dRAID can have single-, double-, or triple-parity.
+The
+.Sy draid1 ,
+.Sy draid2 ,
+and
+.Sy draid3
+types can be used to specify the parity level.
+The
+.Sy draid
+vdev type is an alias for
+.Sy draid1 .
+.Pp
+A dRAID with
+.Em N No disks of size Em X , D No data disks per redundancy group , Em P
+.No parity level, and Em S No distributed hot spares can hold approximately
+.Em (N-S)*(D/(D+P))*X No bytes and can withstand Em P
+devices failing without losing data.
+.It Sy draid Ns Oo Ar parity Oc Ns Oo Sy \&: Ns Ar data Ns Sy d Oc Ns Oo Sy \&: Ns Ar children Ns Sy c Oc Ns Oo Sy \&: Ns Ar spares Ns Sy s Oc
+A non-default dRAID configuration can be specified by appending one or more
+of the following optional arguments to the
+.Sy draid
+keyword, as in the example following this list:
+.Bl -tag -compact -width "children"
+.It Ar parity
+The parity level (1-3).
+.It Ar data
+The number of data devices per redundancy group.
+In general, a smaller value of
+.Em D No will increase IOPS, improve the compression ratio ,
+and speed up resilvering at the expense of total usable capacity.
+Defaults to
+.Em 8 , No unless Em N-P-S No is less than Em 8 .
+.It Ar children
+The expected number of children.
+Useful as a cross-check when listing a large number of devices.
+An error is returned when the provided number of children differs.
+.It Ar spares
+The number of distributed hot spares.
+Defaults to zero.
+.El
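+.Pp
+For example, a double-parity dRAID with 4 data disks per redundancy group,
+13 children, and one distributed spare could be created with
+.Pq disk names are illustrative :
+.Dl # Nm zpool Cm create Ar pool Sy draid2:4d:13c:1s Ar sda No … Ar sdm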
+.It Sy spare
+A pseudo-vdev which keeps track of available hot spares for a pool.
+For more information, see the
+.Sx Hot Spares
+section.
+.It Sy log
+A separate intent log device.
+If more than one log device is specified, then writes are load-balanced between
+devices.
+Log devices can be mirrored.
+However, raidz vdev types are not supported for the intent log.
+For more information, see the
+.Sx Intent Log
+section.
+.It Sy dedup
+A device solely dedicated for deduplication tables.
+The redundancy of this device should match the redundancy of the other normal
+devices in the pool.
+If more than one dedup device is specified, then
+allocations are load-balanced between those devices.
+.It Sy special
+A device dedicated solely for allocating various kinds of internal metadata,
+and optionally small file blocks.
+The redundancy of this device should match the redundancy of the other normal
+devices in the pool.
+If more than one special device is specified, then
+allocations are load-balanced between those devices.
+.Pp
+For more information on special allocations, see the
+.Sx Special Allocation Class
+section.
+.It Sy cache
+A device used to cache storage pool data.
+A cache device cannot be configured as a mirror or raidz group.
+For more information, see the
+.Sx Cache Devices
+section.
+.El
+.Pp
+Virtual devices cannot be nested arbitrarily.
+A mirror, raidz or draid virtual device can only be created with files or disks.
+Mirrors of mirrors or other such combinations are not allowed.
+.Pp
+A pool can have any number of virtual devices at the top of the configuration
+.Po known as
+.Qq root vdevs
+.Pc .
+Data is dynamically distributed across all top-level devices to balance data
+among devices.
+As new virtual devices are added, ZFS automatically places data on the newly
+available devices.
+.Pp
+Virtual devices are specified one at a time on the command line,
+separated by whitespace.
+Keywords like
+.Sy mirror No and Sy raidz
+are used to distinguish where a group ends and another begins.
+For example, the following creates a pool with two root vdevs,
+each a mirror of two disks:
+.Dl # Nm zpool Cm create Ar mypool Sy mirror Ar sda sdb Sy mirror Ar sdc sdd
+.
+.Ss Device Failure and Recovery
+ZFS supports a rich set of mechanisms for handling device failure and data
+corruption.
+All metadata and data is checksummed, and ZFS automatically repairs bad data
+from a good copy, when corruption is detected.
+.Pp
+In order to take advantage of these features, a pool must make use of some form
+of redundancy, using either mirrored or raidz groups.
+While ZFS supports running in a non-redundant configuration, where each root
+vdev is simply a disk or file, this is strongly discouraged.
+A single case of bit corruption can render some or all of your data unavailable.
+.Pp
+A pool's health status is described by one of three states:
+.Sy online , degraded , No or Sy faulted .
+An online pool has all devices operating normally.
+A degraded pool is one in which one or more devices have failed, but the data is
+still available due to a redundant configuration.
+A faulted pool has corrupted metadata, or one or more faulted devices, and
+insufficient replicas to continue functioning.
+.Pp
+The health of the top-level vdev, such as a mirror or raidz device,
+is potentially impacted by the state of its associated vdevs
+or component devices.
+A top-level vdev or component device is in one of the following states:
+.Bl -tag -width "DEGRADED"
+.It Sy DEGRADED
+One or more top-level vdevs is in the degraded state because one or more
+component devices are offline.
+Sufficient replicas exist to continue functioning.
+.Pp
+One or more component devices is in the degraded or faulted state, but
+sufficient replicas exist to continue functioning.
+The underlying conditions are as follows:
+.Bl -bullet -compact
+.It
+The number of checksum errors or slow I/Os exceeds acceptable levels and the
+device is degraded as an indication that something may be wrong.
+ZFS continues to use the device as necessary.
+.It
+The number of I/O errors exceeds acceptable levels.
+The device could not be marked as faulted because there are insufficient
+replicas to continue functioning.
+.El
+.It Sy FAULTED
+One or more top-level vdevs is in the faulted state because one or more
+component devices are offline.
+Insufficient replicas exist to continue functioning.
+.Pp
+One or more component devices is in the faulted state, and insufficient
+replicas exist to continue functioning.
+The underlying conditions are as follows:
+.Bl -bullet -compact
+.It
+The device could be opened, but the contents did not match expected values.
+.It
+The number of I/O errors exceeds acceptable levels and the device is faulted to
+prevent further use of the device.
+.El
+.It Sy OFFLINE
+The device was explicitly taken offline by the
+.Nm zpool Cm offline
+command.
+.It Sy ONLINE
+The device is online and functioning.
+.It Sy REMOVED
+The device was physically removed while the system was running.
+Device removal detection is hardware-dependent and may not be supported on all
+platforms.
+.It Sy UNAVAIL
+The device could not be opened.
+If a pool is imported when a device was unavailable, then the device will be
+identified by a unique identifier instead of its path since the path was never
+correct in the first place.
+.El
+.Pp
+Checksum errors represent events where a disk returned data that was expected
+to be correct, but was not.
+In other words, these are instances of silent data corruption.
+The checksum errors are reported in
+.Nm zpool Cm status
+and
+.Nm zpool Cm events .
+When a block is stored redundantly, a damaged block may be reconstructed
+(e.g. from raidz parity or a mirrored copy).
+In this case, ZFS reports the checksum error against the disks that contained
+damaged data.
+If a block cannot be reconstructed (e.g. due to 3 disks being damaged
+in a raidz2 group), it is not possible to determine which disks were silently
+corrupted.
+In this case, checksum errors are reported for all disks on which the block
+is stored.
+.Pp
+If a device is removed and later re-attached to the system,
+ZFS attempts to bring the device online automatically.
+Device attachment detection is hardware-dependent
+and might not be supported on all platforms.
+.
+.Ss Hot Spares
+ZFS allows devices to be associated with pools as
+.Qq hot spares .
+These devices are not actively used in the pool; when an active device
+fails, it is automatically replaced by a hot spare.
+To create a pool with hot spares, specify a
+.Sy spare
+vdev with any number of devices.
+For example,
+.Dl # Nm zpool Cm create Ar pool Sy mirror Ar sda sdb Sy spare Ar sdc sdd
+.Pp
+Spares can be shared across multiple pools, and can be added with the
+.Nm zpool Cm add
+command and removed with the
+.Nm zpool Cm remove
+command.
+Once a spare replacement is initiated, a new
+.Sy spare
+vdev is created within the configuration that will remain there until the
+original device is replaced.
+At this point, the hot spare becomes available again if another device fails.
+.Pp
+If a pool has a shared spare that is currently being used, the pool cannot be
+exported, since other pools may use this shared spare, which may lead to
+potential data corruption.
+.Pp
+Shared spares add some risk.
+If the pools are imported on different hosts,
+and both pools suffer a device failure at the same time,
+both could attempt to use the same spare simultaneously.
+This may not be detected, resulting in data corruption.
+.Pp
+An in-progress spare replacement can be cancelled by detaching the hot spare.
+If the original faulted device is detached, then the hot spare assumes its
+place in the configuration, and is removed from the spare list of all active
+pools.
+.Pp
+The
+.Sy draid
+vdev type provides distributed hot spares.
+These hot spares are named after the dRAID vdev they're a part of
+.Po Sy draid1 Ns - Ns Ar 2 Ns - Ns Ar 3 No specifies spare Ar 3 No of vdev Ar 2 ,
+.No which is a single parity dRAID Pc
+and may only be used by that dRAID vdev.
+Otherwise, they behave the same as normal hot spares.
+.Pp
+Spares cannot replace log devices.
+.
+.Ss Intent Log
+The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
+transactions.
+For instance, databases often require their transactions to be on stable storage
+devices when returning from a system call.
+NFS and other applications can also use
+.Xr fsync 2
+to ensure data stability.
+By default, the intent log is allocated from blocks within the main pool.
+However, it might be possible to get better performance using separate intent
+log devices such as NVRAM or a dedicated disk.
+For example:
+.Dl # Nm zpool Cm create Ar pool sda sdb Sy log Ar sdc
+.Pp
+Multiple log devices can also be specified, and they can be mirrored.
+See the
+.Sx EXAMPLES
+section for an example of mirroring multiple log devices.
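+.Pp
+For instance, a mirrored log pair can be requested at pool creation
+.Pq disk names are illustrative :
+.Dl # Nm zpool Cm create Ar pool sda sdb Sy log mirror Ar sdc sdd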
+.Pp
+Log devices can be added, replaced, attached, detached, and removed.
+In addition, log devices are imported and exported as part of the pool
+that contains them.
+Mirrored devices can be removed by specifying the top-level mirror vdev.
+.
+.Ss Cache Devices
+Devices can be added to a storage pool as
+.Qq cache devices .
+These devices provide an additional layer of caching between main memory and
+disk.
+For read-heavy workloads, where the working set size is much larger than what
+can be cached in main memory, using cache devices allows much more of this
+working set to be served from low latency media.
+Using cache devices provides the greatest performance improvement for random
+read-workloads of mostly static content.
+.Pp
+To create a pool with cache devices, specify a
+.Sy cache
+vdev with any number of devices.
+For example:
+.Dl # Nm zpool Cm create Ar pool sda sdb Sy cache Ar sdc sdd
+.Pp
+Cache devices cannot be mirrored or part of a raidz configuration.
+If a read error is encountered on a cache device, that read I/O is reissued to
+the original storage pool device, which might be part of a mirrored or raidz
+configuration.
+.Pp
+The content of the cache devices is persistent across reboots and is
+restored asynchronously in L2ARC when the pool is imported (persistent L2ARC).
+This can be disabled by setting
+.Sy l2arc_rebuild_enabled Ns = Ns Sy 0 .
+For cache devices smaller than
+.Em 1 GiB ,
+ZFS does not write the metadata structures
+required for rebuilding the L2ARC, to conserve space.
+This can be changed with
+.Sy l2arc_rebuild_blocks_min_l2size .
+The cache device header
+.Pq Em 512 B
+is updated even if no metadata structures are written.
+Setting
+.Sy l2arc_headroom Ns = Ns Sy 0
+will result in scanning the full-length ARC lists for cacheable content to be
+written in L2ARC (persistent ARC).
+If a cache device is added with
+.Nm zpool Cm add ,
+its label and header will be overwritten and its contents will not be
+restored in L2ARC, even if the device was previously part of the pool.
+If a cache device is onlined with
+.Nm zpool Cm online ,
+its contents will be restored in L2ARC.
+This is useful in case of memory pressure,
+where the contents of the cache device are not fully restored in L2ARC.
+The user can off- and online the cache device when there is less memory
+pressure, to fully restore its contents to L2ARC.
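+.Pp
+On Linux, these module parameters can be changed at runtime; for example:
+.Dl # echo 0 > /sys/module/zfs/parameters/l2arc_rebuild_enabled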
+.
+.Ss Pool checkpoint
+Before starting critical procedures that include destructive actions
+.Pq like Nm zfs Cm destroy ,
+an administrator can checkpoint the pool's state and, in the case of a
+mistake or failure, rewind the entire pool back to the checkpoint.
+Otherwise, the checkpoint can be discarded when the procedure has completed
+successfully.
+.Pp
+A pool checkpoint can be thought of as a pool-wide snapshot and should be used
+with care as it contains every part of the pool's state, from properties to vdev
+configuration.
+Thus, certain operations are not allowed while a pool has a checkpoint.
+Specifically, vdev removal/attach/detach, mirror splitting, and
+changing the pool's GUID.
+Adding a new vdev is supported, but in the case of a rewind it will have to be
+added again.
+Finally, users of this feature should keep in mind that scrubs in a pool that
+has a checkpoint do not repair checkpointed data.
+.Pp
+To create a checkpoint for a pool:
+.Dl # Nm zpool Cm checkpoint Ar pool
+.Pp
+To later rewind to its checkpointed state, you need to first export it and
+then rewind it during import:
+.Dl # Nm zpool Cm export Ar pool
+.Dl # Nm zpool Cm import Fl -rewind-to-checkpoint Ar pool
+.Pp
+To discard the checkpoint from a pool:
+.Dl # Nm zpool Cm checkpoint Fl d Ar pool
+.Pp
+Dataset reservations (controlled by the
+.Sy reservation No and Sy refreservation
+properties) may be unenforceable while a checkpoint exists, because the
+checkpoint is allowed to consume the dataset's reservation.
+Finally, data that is part of the checkpoint but has been freed in the
+current state of the pool won't be scanned during a scrub.
+.
+.Ss Special Allocation Class
+Allocations in the special class are dedicated to specific block types.
+By default, this includes all metadata, the indirect blocks of user data, and
+any deduplication tables.
+The class can also be provisioned to accept small file blocks.
+.Pp
+A pool must always have at least one normal
+.Pq non- Ns Sy dedup Ns /- Ns Sy special
+vdev before
+other devices can be assigned to the special class.
+If the
+.Sy special
+class becomes full, then allocations intended for it
+will spill back into the normal class.
+.Pp
+Deduplication tables can be excluded from the special class by unsetting the
+.Sy zfs_ddt_data_is_special
+ZFS module parameter.
+.Pp
+Inclusion of small file blocks in the special class is opt-in.
+Each dataset can control the size of small file blocks allowed
+in the special class by setting the
+.Sy special_small_blocks
+property to nonzero.
+See
+.Xr zfsprops 7
+for more info on this property.
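+.Pp
+For example, to route file blocks of up to 32 KiB to the special class
+for one dataset
+.Pq the dataset name is illustrative :
+.Dl # Nm zfs Cm set Sy special_small_blocks Ns = Ns Ar 32K Ar pool/fs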
diff --git a/share/man/man7/zpoolprops.7 b/share/man/man7/zpoolprops.7
@@ -0,0 +1,526 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\" Copyright (c) 2021, Colm Buckley <colm@tuatha.org>
+.\" Copyright (c) 2023, Klara Inc.
+.\"
+.Dd November 18, 2024
+.Dt ZPOOLPROPS 7
+.Os
+.
+.Sh NAME
+.Nm zpoolprops
+.Nd properties of ZFS storage pools
+.
+.Sh DESCRIPTION
+Each pool has several properties associated with it.
+Some properties are read-only statistics while others are configurable and
+change the behavior of the pool.
+.Pp
+User properties have no effect on ZFS behavior.
+Use them to annotate pools in a way that is meaningful in your environment.
+For more information about user properties, see the
+.Sx User Properties
+section.
+.Pp
+The following are read-only properties:
+.Bl -tag -width "unsupported@guid"
+.It Sy allocated
+Amount of storage used within the pool.
+See
+.Sy fragmentation
+and
+.Sy free
+for more information.
+.It Sy bcloneratio
+The ratio of the total amount of storage that would be required to store all
+the cloned blocks without cloning to the actual storage used.
+The
+.Sy bcloneratio
+property is calculated as:
+.Pp
+.Sy ( ( bclonesaved + bcloneused ) * 100 ) / bcloneused
+.It Sy bclonesaved
+The amount of additional storage that would be required if block cloning
+was not used.
+.It Sy bcloneused
+The amount of storage used by cloned blocks.
+.It Sy capacity
+Percentage of pool space used.
+This property can also be referred to by its shortened column name,
+.Sy cap .
+.It Sy dedupcached
+Total size of the deduplication table currently loaded into the ARC.
+See
+.Xr zpool-prefetch 8 .
+.It Sy dedup_table_size
+Total on-disk size of the deduplication table.
+.It Sy expandsize
+Amount of uninitialized space within the pool or device that can be used to
+increase the total capacity of the pool.
+On whole-disk vdevs, this is the space beyond the end of the GPT –
+typically occurring when a LUN is dynamically expanded
+or a disk replaced with a larger one.
+On partition vdevs, this is the space appended to the partition after it was
+added to the pool – most likely by resizing it in-place.
+The space can be claimed for the pool by bringing it online with
+.Sy autoexpand=on
+or using
+.Nm zpool Cm online Fl e .
+.It Sy fragmentation
+The amount of fragmentation in the pool.
+As the amount of space
+.Sy allocated
+increases, it becomes more difficult to locate
+.Sy free
+space.
+This may result in lower write performance compared to pools with more
+unfragmented free space.
+.It Sy free
+The amount of free space available in the pool.
+By contrast, the
+.Xr zfs 8
+.Sy available
+property describes how much new data can be written to ZFS filesystems/volumes.
+The zpool
+.Sy free
+property is not generally useful for this purpose, and can be substantially more
+than the zfs
+.Sy available
+space.
+This discrepancy is due to several factors, including raidz parity;
+zfs reservation, quota, refreservation, and refquota properties; and space set
+aside by
+.Sy spa_slop_shift
+(see
+.Xr zfs 4
+for more information).
+.It Sy freeing
+After a file system or snapshot is destroyed, the space it was using is
+returned to the pool asynchronously.
+.Sy freeing
+is the amount of space remaining to be reclaimed.
+Over time
+.Sy freeing
+will decrease while
+.Sy free
+increases.
+.It Sy guid
+A unique identifier for the pool.
+.It Sy health
+The current health of the pool.
+Health can be one of
+.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
+.It Sy last_scrubbed_txg
+Indicates the transaction group (TXG) up to which the most recent scrub
+operation has checked and repaired the dataset.
+This provides insight into the data integrity status of the pool at
+a specific point in time.
+.Xr zpool-scrub 8
+can utilize this property to scan only data that has changed since the last
+scrub completed, when given the
+.Fl C
+flag.
+This property is not updated when performing an error scrub with the
+.Fl e
+flag.
+.It Sy leaked
+Space not released while
+.Sy freeing
+due to corruption, now permanently leaked into the pool.
+.It Sy load_guid
+A unique identifier for the pool.
+Unlike the
+.Sy guid
+property, this identifier is generated every time the pool is loaded
+(i.e. it does not persist across imports/exports)
+and never changes while the pool is loaded
+(even if a
+.Sy reguid
+operation takes place).
+.It Sy size
+Total size of the storage pool.
+.It Sy unsupported@ Ns Em guid
+Information about unsupported features that are enabled on the pool.
+See
+.Xr zpool-features 7
+for details.
+.El
+.Pp
+The space usage properties report actual physical space available to the
+storage pool.
+The physical space can be different from the total amount of space that any
+contained datasets can actually use.
+The amount of space used in a raidz configuration depends on the characteristics
+of the data being written.
+In addition, ZFS reserves some space for internal accounting that the
+.Xr zfs 8
+command takes into account, but the
+.Nm
+command does not.
+For non-full pools of a reasonable size, these effects should be invisible.
+For small pools, or pools that are close to being completely full, these
+discrepancies may become more noticeable.
+.Pp
+The following property can be set at creation time and import time:
+.Bl -tag -width Ds
+.It Sy altroot
+Alternate root directory.
+If set, this directory is prepended to any mount points within the pool.
+This can be used when examining an unknown pool where the mount points cannot be
+trusted, or in an alternate boot environment, where the typical paths are not
+valid.
+.Sy altroot
+is not a persistent property.
+It is valid only while the system is up.
+Setting
+.Sy altroot
+defaults to using
+.Sy cachefile Ns = Ns Sy none ,
+though this may be overridden using an explicit setting.
+.El
+.Pp
+The following property can be set only at import time:
+.Bl -tag -width Ds
+.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
+If set to
+.Sy on ,
+the pool will be imported in read-only mode.
+This property can also be referred to by its shortened column name,
+.Sy rdonly .
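+.Pp
+For example:
+.Dl # Nm zpool Cm import Fl o Sy readonly Ns = Ns Sy on Ar pool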
+.El
+.Pp
+The following properties can be set at creation time and import time, and later
+changed with the
+.Nm zpool Cm set
+command:
+.Bl -tag -width Ds
+.It Sy ashift Ns = Ns Ar ashift
+Pool sector size exponent, to the power of
+.Sy 2
+(internally referred to as
+.Sy ashift ) .
+Values from 9 to 16, inclusive, are valid; also, the
+value 0 (the default) means to auto-detect using the kernel's block
+layer and a ZFS internal exception list.
+I/O operations will be aligned to the specified size boundaries.
+Additionally, the minimum (disk)
+write size will be set to the specified size, so this represents a
+space/performance trade-off.
+For optimal performance, the pool sector size should be greater than
+or equal to the sector size of the underlying disks.
+The typical case for setting this property is when
+performance is important and the underlying disks use 4KiB sectors but
+report 512B sectors to the OS (for compatibility reasons); in that
+case, set
+.Sy ashift Ns = Ns Sy 12
+(which is
+.Sy 1<<12 No = Sy 4096 ) .
+When set, this property is
+used as the default hint value in subsequent vdev operations (add,
+attach and replace).
+Changing this value will not modify any existing
+vdev, not even on disk replacement; however it can be used, for
+instance, to replace a dying 512B-sector disk with a newer 4KiB-sector
+device: this will probably result in bad performance but at the
+same time could prevent loss of data.
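+.Pp
+For example, to create a pool on disks with 4KiB physical sectors that
+report 512B sectors
+.Pq disk names are illustrative :
+.Dl # Nm zpool Cm create Fl o Sy ashift Ns = Ns Sy 12 Ar pool sda sdb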
+.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
+Controls automatic pool expansion when the underlying LUN is grown.
+If set to
+.Sy on ,
+the pool will be resized according to the size of the expanded device.
+If the device is part of a mirror or raidz then all devices within that
+mirror/raidz group must be expanded before the new space is made available to
+the pool.
+The default behavior is
+.Sy off .
+This property can also be referred to by its shortened column name,
+.Sy expand .
+.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
+Controls automatic device replacement.
+If set to
+.Sy off ,
+device replacement must be initiated by the administrator by using the
+.Nm zpool Cm replace
+command.
+If set to
+.Sy on ,
+any new device, found in the same physical location as a device that previously
+belonged to the pool, is automatically formatted and replaced.
+The default behavior is
+.Sy off .
+This property can also be referred to by its shortened column name,
+.Sy replace .
+Autoreplace can also be used with virtual disks (like device
+mapper) provided that you use the /dev/disk/by-vdev paths setup by
+vdev_id.conf.
+See the
+.Xr vdev_id 8
+manual page for more details.
+Autoreplace and autoonline require the ZFS Event Daemon be configured and
+running.
+See the
+.Xr zed 8
+manual page for more details.
+.It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
+When set to
+.Sy on
+space which has been recently freed, and is no longer allocated by the pool,
+will be periodically trimmed.
+This allows block device vdevs which support
+BLKDISCARD, such as SSDs, or file vdevs on which the underlying file system
+supports hole-punching, to reclaim unused blocks.
+The default value for this property is
+.Sy off .
+.Pp
+Automatic TRIM does not immediately reclaim blocks after a free.
+Instead, it will optimistically delay allowing smaller ranges to be aggregated
+into a few larger ones.
+These can then be issued more efficiently to the storage.
+TRIM on L2ARC devices is enabled by setting
+.Sy l2arc_trim_ahead > 0 .
+.Pp
+Be aware that automatic trimming of recently freed data blocks can put
+significant stress on the underlying storage devices.
+This will vary depending on how well the specific device handles these commands.
+For lower-end devices it is often possible to achieve most of the benefits
+of automatic trimming by running an on-demand (manual) TRIM periodically
+using the
+.Nm zpool Cm trim
+command.
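+.Pp
+For example:
+.Dl # Nm zpool Cm trim Ar pool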
+.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns Op / Ns Ar dataset
+Identifies the default bootable dataset for the root pool.
+This property is expected to be set mainly by the installation and upgrade
+programs.
+Not all Linux distribution boot processes use the bootfs property.
+.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
+Controls the location of where the pool configuration is cached.
+Discovering all pools on system startup requires a cached copy of the
+configuration data that is stored on the root file system.
+All pools in this cache are automatically imported when the system boots.
+Some environments, such as install and clustering, need to cache this
+information in a different location so that pools are not automatically
+imported.
+Setting this property caches the pool configuration in a different location that
+can later be imported with
+.Nm zpool Cm import Fl c .
+Setting it to the value
+.Sy none
+creates a temporary pool that is never cached, and the
+.Qq
+.Pq empty string
+uses the default location.
+.Pp
+Multiple pools can share the same cache file.
+Because the kernel destroys and recreates this file when pools are added and
+removed, care should be taken when attempting to access this file.
+When the last pool using a
+.Sy cachefile
+is exported or destroyed, the file will be empty.
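+.Pp
+For example, pools recorded in an alternate cache file can later be
+imported with
+.Pq the path is illustrative :
+.Dl # Nm zpool Cm import Fl c Pa /tmp/cachefile Ar pool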
+.It Sy comment Ns = Ns Ar text
+A text string consisting of printable ASCII characters that will be stored
+such that it is available even if the pool becomes faulted.
+An administrator can provide additional information about a pool using this
+property.
+.It Sy compatibility Ns = Ns Sy off Ns | Ns Sy legacy Ns | Ns Ar file Ns Oo , Ns Ar file Oc Ns …
+Specifies that the pool maintain compatibility with specific feature sets.
+When set to
+.Sy off
+(or unset) compatibility is disabled (all features may be enabled); when set to
+.Sy legacy
+no features may be enabled.
+When set to a comma-separated list of filenames
+(each filename may either be an absolute path, or relative to
+.Pa /etc/zfs/compatibility.d
+or
+.Pa /usr/share/zfs/compatibility.d )
+the lists of requested features are read from those files, separated by
+whitespace and/or commas.
+Only features present in all files may be enabled.
+.Pp
+See
+.Xr zpool-features 7 ,
+.Xr zpool-create 8
+and
+.Xr zpool-upgrade 8
+for more information on the operation of compatibility feature sets.
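+.Pp
+For example, to constrain a pool to a shipped feature set
+.Pq the file name is illustrative and must exist in a directory listed above :
+.Dl # Nm zpool Cm set Sy compatibility Ns = Ns Ar openzfs-2.1-linux Ar pool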
+.It Sy dedup_table_quota Ns = Ns Ar number Ns | Ns Sy none Ns | Ns Sy auto
+This property sets a limit on the on-disk size of the pool's dedup table.
+Entries will not be added to the dedup table once this size is reached;
+if a dedup table already exists and is larger than this size, its entries
+will not be removed as part of setting this property.
+Existing entries will still have their reference counts updated.
+.Pp
+The actual size limit of the table may be above or below the quota,
+depending on the actual on-disk size of the entries (which may be
+approximated for purposes of calculating the quota).
+That is, setting a quota size of 1M may result in the maximum size being
+slightly below, or slightly above, that value.
+Set to
+.Sy none
+to disable.
+In automatic mode, which is the default, the size of a dedicated dedup vdev
+is used as the quota limit.
+.Pp
+The
+.Sy dedup_table_quota
+property works for both legacy and fast dedup tables.
+.It Sy dedupditto Ns = Ns Ar number
+This property is deprecated and no longer has any effect.
+.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
+Controls whether a non-privileged user is granted access based on the dataset
+permissions defined on the dataset.
+See
+.Xr zfs 8
+for more information on ZFS delegated administration.
+.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
+Controls the system behavior in the event of catastrophic pool failure.
+This condition is typically a result of a loss of connectivity to the underlying
+storage device(s) or a failure of all devices within the pool.
+The behavior of such an event is determined as follows:
+.Bl -tag -width "continue"
+.It Sy wait
+Blocks all I/O access until the device connectivity is recovered and the errors
+are cleared with
+.Nm zpool Cm clear .
+This is the default behavior.
+.It Sy continue
+Returns
+.Er EIO
+to any new write I/O requests but allows reads to any of the remaining healthy
+devices.
+Any write requests that have yet to be committed to disk would be blocked.
+.It Sy panic
+Prints out a message to the console and generates a system crash dump.
+.El
+.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
+The value of this property is the current state of
+.Ar feature_name .
+The only valid value when setting this property is
+.Sy enabled
+which moves
+.Ar feature_name
+to the enabled state.
+See
+.Xr zpool-features 7
+for details on feature states.
+.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
+Controls whether information about snapshots associated with this pool is
+output when
+.Nm zfs Cm list
+is run without the
+.Fl t
+option.
+The default value is
+.Sy off .
+This property can also be referred to by its shortened name,
+.Sy listsnaps .
+.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
+Controls whether a pool activity check should be performed during
+.Nm zpool Cm import .
+When a pool is determined to be active it cannot be imported, even with the
+.Fl f
+option.
+This property is intended to be used in failover configurations
+where multiple hosts have access to a pool on shared storage.
+.Pp
+Multihost provides protection on import only.
+It does not protect against an
+individual device being used in multiple pools, regardless of the type of vdev.
+See the discussion under
+.Nm zpool Cm create .
+.Pp
+When this property is on, periodic writes to storage occur to show the pool is
+in use.
+See
+.Sy zfs_multihost_interval
+in the
+.Xr zfs 4
+manual page.
+In order to enable this property each host must set a unique hostid.
+See
+.Xr genhostid 1 ,
+.Xr zgenhostid 8 ,
+and
+.Xr spl 4
+for additional details.
+The default value is
+.Sy off .
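+.Pp
+A minimal setup might generate a unique hostid on each host and then
+enable the activity check
+.Pq the pool name is illustrative :
+.Dl # Nm zgenhostid
+.Dl # Nm zpool Cm set Sy multihost Ns = Ns Sy on Ar pool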
+.It Sy version Ns = Ns Ar version
+The current on-disk version of the pool.
+This can be increased, but never decreased.
+The preferred method of updating pools is with the
+.Nm zpool Cm upgrade
+command, though this property can be used when a specific version is needed for
+backwards compatibility.
+Once feature flags are enabled on a pool this property will no longer have a
+value.
+.El
+.
+.Ss User Properties
+In addition to the standard native properties, ZFS supports arbitrary user
+properties.
+User properties have no effect on ZFS behavior, but applications or
+administrators can use them to annotate pools.
+.Pp
+User property names must contain a colon
+.Pq Qq Sy \&:
+character to distinguish them from native properties.
+They may contain lowercase letters, numbers, and the following punctuation
+characters: colon
+.Pq Qq Sy \&: ,
+dash
+.Pq Qq Sy - ,
+period
+.Pq Qq Sy \&. ,
+and underscore
+.Pq Qq Sy _ .
+The expected convention is that the property name is divided into two portions
+such as
+.Ar module : Ns Ar property ,
+but this namespace is not enforced by ZFS.
+User property names can be at most 255 characters, and cannot begin with a dash
+.Pq Qq Sy - .
+.Pp
+When making programmatic use of user properties, it is strongly suggested to use
+a reversed DNS domain name for the
+.Ar module
+component of property names to reduce the chance that two
+independently-developed packages use the same property name for different
+purposes.
+.Pp
+The values of user properties are arbitrary strings and
+are never validated.
+All of the commands that operate on properties
+.Po Nm zpool Cm list ,
+.Nm zpool Cm get ,
+.Nm zpool Cm set ,
+and so forth
+.Pc
+can be used to manipulate both native properties and user properties.
+Use
+.Nm zpool Cm set Ar name Ns =
+to clear a user property.
+Property values are limited to 8192 bytes.
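+.Pp
+For example, to set and then clear a user property
+.Pq the property name is illustrative :
+.Dl # Nm zpool Cm set Ar com.example:owner Ns = Ns Ar alice Ar pool
+.Dl # Nm zpool Cm set Ar com.example:owner Ns = Ar pool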
diff --git a/share/man/man8/zed.8 b/share/man/man8/zed.8
@@ -0,0 +1,273 @@
+.\"
+.\" This file is part of the ZFS Event Daemon (ZED).
+.\" Developed at Lawrence Livermore National Laboratory (LLNL-CODE-403049).
+.\" Copyright (C) 2013-2014 Lawrence Livermore National Security, LLC.
+.\" Refer to the OpenZFS git commit log for authoritative copyright attribution.
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License Version 1.0 (CDDL-1.0).
+.\" You can obtain a copy of the license from the top-level file
+.\" "OPENSOLARIS.LICENSE" or at <http://opensource.org/licenses/CDDL-1.0>.
+.\" You may not use this file except in compliance with the license.
+.\"
+.\" Developed at Lawrence Livermore National Laboratory (LLNL-CODE-403049)
+.\"
+.Dd May 26, 2021
+.Dt ZED 8
+.Os
+.
+.Sh NAME
+.Nm ZED
+.Nd ZFS Event Daemon
+.Sh SYNOPSIS
+.Nm
+.Op Fl fFhILMvVZ
+.Op Fl d Ar zedletdir
+.Op Fl p Ar pidfile
+.Op Fl P Ar path
+.Op Fl s Ar statefile
+.Op Fl j Ar jobs
+.Op Fl b Ar buflen
+.
+.Sh DESCRIPTION
+The
+.Nm
+(ZFS Event Daemon) monitors events generated by the ZFS kernel
+module.
+When a zevent (ZFS Event) is posted, the
+.Nm
+will run any ZEDLETs (ZFS Event Daemon Linkage for Executable Tasks)
+that have been enabled for the corresponding zevent class.
+.
+.Sh OPTIONS
+.Bl -tag -width "-h"
+.It Fl h
+Display a summary of the command-line options.
+.It Fl L
+Display license information.
+.It Fl V
+Display version information.
+.It Fl v
+Be verbose.
+.It Fl f
+Force the daemon to run if at all possible, disabling security checks and
+throwing caution to the wind.
+Not recommended for use in production.
+.It Fl F
+Don't daemonise: remain attached to the controlling terminal
+and log to the standard I/O streams.
+.It Fl M
+Lock all current and future pages in the virtual memory address space.
+This may help the daemon remain responsive when the system is under heavy
+memory pressure.
+.It Fl I
+Request that the daemon idle rather than exit when the kernel modules are not
+loaded.
+Processing of events will start, or resume, when the kernel modules are
+(re)loaded.
+Under Linux the kernel modules cannot be unloaded while the daemon is running.
+.It Fl Z
+Zero the daemon's state, thereby allowing zevents still within the kernel
+to be reprocessed.
+.It Fl d Ar zedletdir
+Read the enabled ZEDLETs from the specified directory.
+.It Fl p Ar pidfile
+Write the daemon's process ID to the specified file.
+.It Fl P Ar path
+Custom
+.Ev $PATH
+for zedlets to use.
+Normally zedlets run in a locked-down environment, with hardcoded paths to the
+ZFS commands
+.Pq Ev $ZFS , $ZPOOL , $ZED , … ,
+and a hard-coded
+.Ev $PATH .
+This is done for security reasons.
+However, the ZFS test suite uses a custom PATH for its ZFS commands, and passes
+it to
+.Nm
+with
+.Fl P .
+In short,
+.Fl P
+is only to be used by the ZFS test suite; never use
+it in production!
+.It Fl s Ar statefile
+Write the daemon's state to the specified file.
+.It Fl j Ar jobs
+Allow at most
+.Ar jobs
+ZEDLETs to run concurrently,
+delaying execution of new ones until they finish.
+Defaults to
+.Sy 16 .
+.It Fl b Ar buflen
+Cap kernel event buffer growth to
+.Ar buflen
+entries.
+This buffer is grown when the daemon misses an event, but results in
+unreclaimable memory use in the kernel.
+A value of
+.Sy 0
+removes the cap.
+Defaults to
+.Sy 1048576 .
+.El
+.Sh ZEVENTS
+A zevent consists of a list of nvpairs (name/value pairs).
+Each zevent contains an EID (Event IDentifier) that uniquely identifies it
+throughout
+the lifetime of the loaded ZFS kernel module; this EID is a monotonically
+increasing integer that resets to 1 each time the kernel module is loaded.
+Each zevent also contains a class string that identifies the type of event.
+For brevity, a subclass string is defined that omits the leading components
+of the class string.
+Additional nvpairs exist to provide event details.
+.Pp
+The kernel maintains a list of recent zevents that can be viewed (along with
+their associated lists of nvpairs) using the
+.Nm zpool Cm events Fl v
+command.
+.
+.Sh CONFIGURATION
+ZEDLETs to be invoked in response to zevents are located in the
+.Em enabled-zedlets
+directory
+.Pq Ar zedletdir .
+These can be symlinked or copied from the
+.Em installed-zedlets
+directory; symlinks allow for automatic updates
+from the installed ZEDLETs, whereas copies preserve local modifications.
+As a security measure, since ownership change is a privileged operation,
+ZEDLETs must be owned by root.
+They must have execute permissions for the user,
+but they must not have write permissions for group or other.
+Dotfiles are ignored.
+.Pp
+ZEDLETs are named after the zevent class for which they should be invoked.
+In particular, a ZEDLET will be invoked for a given zevent if either its
+class or subclass string is a prefix of its filename (and is followed by
+a non-alphabetic character).
+As a special case, the prefix
+.Sy all
+matches all zevents.
+Multiple ZEDLETs may be invoked for a given zevent.
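+For example, a ZEDLET named
+.Pa statechange-notify.sh
+is invoked for zevents whose subclass is
+.Sy statechange .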
+.
+.Sh ZEDLETS
+ZEDLETs are executables invoked by the ZED in response to a given zevent.
+They should be written under the presumption they can be invoked concurrently,
+and they should use appropriate locking to access any shared resources.
+Common variables used by ZEDLETs can be stored in the default rc file which
+is sourced by scripts; these variables should be prefixed with
+.Sy ZED_ .
+.Pp
+The zevent nvpairs are passed to ZEDLETs as environment variables.
+Each nvpair name is converted to an environment variable in the following
+manner:
+.Bl -enum -compact
+.It
+it is prefixed with
+.Sy ZEVENT_ ,
+.It
+it is converted to uppercase, and
+.It
+each non-alphanumeric character is converted to an underscore.
+.El
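+.Pp
+For example, an nvpair named
+.Dq pool_guid
+becomes the environment variable
+.Sy ZEVENT_POOL_GUID .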
+.Pp
+Some additional environment variables have been defined to present certain
+nvpair values in a more convenient form.
+An incomplete list of zevent environment variables is as follows:
+.Bl -tag -compact -width "ZEVENT_TIME_STRING"
+.It Sy ZEVENT_EID
+The Event IDentifier.
+.It Sy ZEVENT_CLASS
+The zevent class string.
+.It Sy ZEVENT_SUBCLASS
+The zevent subclass string.
+.It Sy ZEVENT_TIME
+The time at which the zevent was posted as
+.Dq Em seconds nanoseconds
+since the Epoch.
+.It Sy ZEVENT_TIME_SECS
+The
+.Em seconds
+component of
+.Sy ZEVENT_TIME .
+.It Sy ZEVENT_TIME_NSECS
+The
+.Em nanoseconds
+component of
+.Sy ZEVENT_TIME .
+.It Sy ZEVENT_TIME_STRING
+An almost-RFC3339-compliant string for
+.Sy ZEVENT_TIME .
+.El
+.Pp
+Additionally, the following ZED & ZFS variables are defined:
+.Bl -tag -compact -width "ZEVENT_TIME_STRING"
+.It Sy ZED_PID
+The daemon's process ID.
+.It Sy ZED_ZEDLET_DIR
+The daemon's current
+.Em enabled-zedlets
+directory.
+.It Sy ZFS_ALIAS
+The alias
+.Pq Dq Em name Ns - Ns Em version Ns - Ns Em release
+string of the ZFS distribution the daemon is part of.
+.It Sy ZFS_VERSION
+The ZFS version the daemon is part of.
+.It Sy ZFS_RELEASE
+The ZFS release the daemon is part of.
+.El
+.Pp
+ZEDLETs may need to call other ZFS commands.
+The installation paths of the following executables are defined as environment
+variables:
+.Sy ZDB ,
+.Sy ZED ,
+.Sy ZFS ,
+.Sy ZINJECT ,
+and
+.Sy ZPOOL .
+These variables may be overridden in the rc file.
+.
+.Sh FILES
+.Bl -tag -width "-c"
+.It Pa /etc/zfs/zed.d
+The default directory for enabled ZEDLETs.
+.It Pa /etc/zfs/zed.d/zed.rc
+The default rc file for common variables used by ZEDLETs.
+.It Pa /libexec/zfs/zed.d
+The default directory for installed ZEDLETs.
+.It Pa /run/zed.pid
+The default file containing the daemon's process ID.
+.It Pa /run/zed.state
+The default file containing the daemon's state.
+.El
+.
+.Sh SIGNALS
+.Bl -tag -width "-c"
+.It Sy SIGHUP
+Reconfigure the daemon and rescan the directory for enabled ZEDLETs.
+.It Sy SIGTERM , SIGINT
+Terminate the daemon.
+.El
+.
+.Sh SEE ALSO
+.Xr zfs 8 ,
+.Xr zpool 8 ,
+.Xr zpool-events 8
+.
+.Sh NOTES
+The
+.Nm
+requires root privileges.
+.Pp
+Do not taunt the
+.Nm .
+.
+.Sh BUGS
+ZEDLETs are unable to return state/status information to the kernel.
+.Pp
+Internationalization support via gettext has not been added.
diff --git a/share/man/man8/zfs-allow.8 b/share/man/man8/zfs-allow.8
@@ -0,0 +1,492 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd March 16, 2022
+.Dt ZFS-ALLOW 8
+.Os
+.
+.Sh NAME
+.Nm zfs-allow
+.Nd delegate ZFS administration permissions to unprivileged users
+.Sh SYNOPSIS
+.Nm zfs
+.Cm allow
+.Op Fl dglu
+.Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns …
+.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns …
+.Ar filesystem Ns | Ns Ar volume
+.Nm zfs
+.Cm allow
+.Op Fl dl
+.Fl e Ns | Ns Sy everyone
+.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns …
+.Ar filesystem Ns | Ns Ar volume
+.Nm zfs
+.Cm allow
+.Fl c
+.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns …
+.Ar filesystem Ns | Ns Ar volume
+.Nm zfs
+.Cm allow
+.Fl s No @ Ns Ar setname
+.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns …
+.Ar filesystem Ns | Ns Ar volume
+.Nm zfs
+.Cm unallow
+.Op Fl dglru
+.Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns …
+.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar volume
+.Nm zfs
+.Cm unallow
+.Op Fl dlr
+.Fl e Ns | Ns Sy everyone
+.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar volume
+.Nm zfs
+.Cm unallow
+.Op Fl r
+.Fl c
+.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar volume
+.Nm zfs
+.Cm unallow
+.Op Fl r
+.Fl s No @ Ns Ar setname
+.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar volume
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm allow
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+Displays permissions that have been delegated on the specified filesystem or
+volume.
+See the other forms of
+.Nm zfs Cm allow
+for more information.
+.Pp
+Delegations are supported under Linux with the exception of
+.Sy mount ,
+.Sy unmount ,
+.Sy mountpoint ,
+.Sy canmount ,
+.Sy rename ,
+and
+.Sy share .
+These permissions cannot be delegated because the Linux
+.Xr mount 8
+command restricts modifications of the global namespace to the root user.
+.It Xo
+.Nm zfs
+.Cm allow
+.Op Fl dglu
+.Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns …
+.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns …
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+.It Xo
+.Nm zfs
+.Cm allow
+.Op Fl dl
+.Fl e Ns | Ns Sy everyone
+.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns …
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+Delegates ZFS administration permission for the file systems to non-privileged
+users.
+.Bl -tag -width "-d"
+.It Fl d
+Allow only for the descendent file systems.
+.It Fl e Ns | Ns Sy everyone
+Specifies that the permissions be delegated to everyone.
+.It Fl g Ar group Ns Oo , Ns Ar group Oc Ns …
+Explicitly specify that permissions are delegated to the group.
+.It Fl l
+Allow
+.Qq locally
+only for the specified file system.
+.It Fl u Ar user Ns Oo , Ns Ar user Oc Ns …
+Explicitly specify that permissions are delegated to the user.
+.It Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns …
+Specifies to whom the permissions are delegated.
+Multiple entities can be specified as a comma-separated list.
+If neither of the
+.Fl gu
+options is specified, then the argument is interpreted preferentially as the
+keyword
+.Sy everyone ,
+then as a user name, and lastly as a group name.
+To specify a user or group named
+.Qq everyone ,
+use the
+.Fl g
+or
+.Fl u
+options.
+To specify a group with the same name as a user, use the
+.Fl g
+option.
+.It Xo
+.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns …
+.Xc
+The permissions to delegate.
+Multiple permissions may be specified as a comma-separated list.
+Permission names are the same as ZFS subcommand and property names.
+See the property list below.
+Property set names, which begin with
+.Sy @ ,
+may be specified.
+See the
+.Fl s
+form below for details.
+.El
+.Pp
+If neither of the
+.Fl dl
+options is specified, or both are, then the permissions are allowed for the
+file system or volume, and all of its descendents.
+.Pp
+Permissions are generally the ability to use a ZFS subcommand or change a ZFS
+property.
+The following permissions are available:
+.TS
+l l l .
+NAME TYPE NOTES
+_ _ _
+allow subcommand Must also have the permission that is being allowed
+bookmark subcommand
+clone subcommand Must also have the \fBcreate\fR ability and \fBmount\fR ability in the origin file system
+create subcommand Must also have the \fBmount\fR ability. Must also have the \fBrefreservation\fR ability to create a non-sparse volume.
+destroy subcommand Must also have the \fBmount\fR ability
+diff subcommand Allows lookup of paths within a dataset given an object number, and the ability to create snapshots necessary to \fBzfs diff\fR.
+hold subcommand Allows adding a user hold to a snapshot
+load-key subcommand Allows loading and unloading of encryption key (see \fBzfs load-key\fR and \fBzfs unload-key\fR).
+change-key subcommand Allows changing an encryption key via \fBzfs change-key\fR.
+mount subcommand Allows mounting/umounting ZFS datasets
+promote subcommand Must also have the \fBmount\fR and \fBpromote\fR ability in the origin file system
+receive subcommand Must also have the \fBmount\fR and \fBcreate\fR ability
+release subcommand Allows releasing a user hold which might destroy the snapshot
+rename subcommand Must also have the \fBmount\fR and \fBcreate\fR ability in the new parent
+rollback subcommand Must also have the \fBmount\fR ability
+send subcommand
+share subcommand Allows sharing file systems over NFS or SMB protocols
+snapshot subcommand Must also have the \fBmount\fR ability
+
+groupquota other Allows accessing any \fBgroupquota@\fI…\fR property
+groupobjquota other Allows accessing any \fBgroupobjquota@\fI…\fR property
+groupused other Allows reading any \fBgroupused@\fI…\fR property
+groupobjused other Allows reading any \fBgroupobjused@\fI…\fR property
+userprop other Allows changing any user property
+userquota other Allows accessing any \fBuserquota@\fI…\fR property
+userobjquota other Allows accessing any \fBuserobjquota@\fI…\fR property
+userused other Allows reading any \fBuserused@\fI…\fR property
+userobjused other Allows reading any \fBuserobjused@\fI…\fR property
+projectobjquota other Allows accessing any \fBprojectobjquota@\fI…\fR property
+projectquota other Allows accessing any \fBprojectquota@\fI…\fR property
+projectobjused other Allows reading any \fBprojectobjused@\fI…\fR property
+projectused other Allows reading any \fBprojectused@\fI…\fR property
+
+aclinherit property
+aclmode property
+acltype property
+atime property
+canmount property
+casesensitivity property
+checksum property
+compression property
+context property
+copies property
+dedup property
+defcontext property
+devices property
+dnodesize property
+encryption property
+exec property
+filesystem_limit property
+fscontext property
+keyformat property
+keylocation property
+logbias property
+mlslabel property
+mountpoint property
+nbmand property
+normalization property
+overlay property
+pbkdf2iters property
+primarycache property
+quota property
+readonly property
+recordsize property
+redundant_metadata property
+refquota property
+refreservation property
+relatime property
+reservation property
+rootcontext property
+secondarycache property
+setuid property
+sharenfs property
+sharesmb property
+snapdev property
+snapdir property
+snapshot_limit property
+special_small_blocks property
+sync property
+utf8only property
+version property
+volblocksize property
+volmode property
+volsize property
+vscan property
+xattr property
+zoned property
+.TE
+.It Xo
+.Nm zfs
+.Cm allow
+.Fl c
+.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns …
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+Sets
+.Qq create time
+permissions.
+These permissions are granted
+.Pq locally
+to the creator of any newly-created descendent file system.
+.It Xo
+.Nm zfs
+.Cm allow
+.Fl s No @ Ns Ar setname
+.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns …
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+Defines or adds permissions to a permission set.
+The set can be used by other
+.Nm zfs Cm allow
+commands for the specified file system and its descendents.
+Sets are evaluated dynamically, so changes to a set are immediately reflected.
+Permission sets follow the same naming restrictions as ZFS file systems, but the
+name must begin with
+.Sy @ ,
+and can be no more than 64 characters long.
+.It Xo
+.Nm zfs
+.Cm unallow
+.Op Fl dglru
+.Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns …
+.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+.It Xo
+.Nm zfs
+.Cm unallow
+.Op Fl dlr
+.Fl e Ns | Ns Sy everyone
+.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+.It Xo
+.Nm zfs
+.Cm unallow
+.Op Fl r
+.Fl c
+.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+Removes permissions that were granted with the
+.Nm zfs Cm allow
+command.
+No permissions are explicitly denied, so other permissions granted are still in
+effect; for example, a permission granted by an ancestor remains in force.
+If no permissions are specified, then all permissions for the specified
+.Ar user ,
+.Ar group ,
+or
+.Sy everyone
+are removed.
+Specifying
+.Sy everyone
+.Po or using the
+.Fl e
+option
+.Pc
+only removes the permissions that were granted to everyone, not all permissions
+for every user and group.
+See the
+.Nm zfs Cm allow
+command for a description of the
+.Fl ldugec
+options.
+.Bl -tag -width "-r"
+.It Fl r
+Recursively remove the permissions from this file system and all descendents.
+.El
+.It Xo
+.Nm zfs
+.Cm unallow
+.Op Fl r
+.Fl s No @ Ns Ar setname
+.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+Removes permissions from a permission set.
+If no permissions are specified, then all permissions are removed, thus removing
+the set entirely.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 17, 18, 19, 20 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Delegating ZFS Administration Permissions on a ZFS Dataset
+The following example shows how to set permissions so that user
+.Ar cindys
+can create, destroy, mount, and take snapshots on
+.Ar tank/cindys .
+The permissions on
+.Ar tank/cindys
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm allow Sy cindys create , Ns Sy destroy , Ns Sy mount , Ns Sy snapshot Ar tank/cindys
+.No # Nm zfs Cm allow Ar tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+ user cindys create,destroy,mount,snapshot
+.Ed
+.Pp
+Because the
+.Ar tank/cindys
+mount point permission is set to 755 by default, user
+.Ar cindys
+will be unable to mount file systems under
+.Ar tank/cindys .
+Add an ACE similar to the following syntax to provide mount point access:
+.Dl # Cm chmod No A+user : Ns Ar cindys Ns :add_subdirectory:allow Ar /tank/cindys
+.
+.Ss Example 2 : No Delegating Create Time Permissions on a ZFS Dataset
+The following example shows how to grant anyone in the group
+.Ar staff
+permission to create file systems in
+.Ar tank/users .
+This syntax also allows staff members to destroy their own file systems, but not
+destroy anyone else's file system.
+The permissions on
+.Ar tank/users
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm allow Ar staff Sy create , Ns Sy mount Ar tank/users
+.No # Nm zfs Cm allow Fl c Sy destroy Ar tank/users
+.No # Nm zfs Cm allow Ar tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+ destroy
+Local+Descendent permissions:
+ group staff create,mount
+.Ed
+.
+.Ss Example 3 : No Defining and Granting a Permission Set on a ZFS Dataset
+The following example shows how to define and grant a permission set on the
+.Ar tank/users
+file system.
+The permissions on
+.Ar tank/users
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm allow Fl s No @ Ns Ar pset Sy create , Ns Sy destroy , Ns Sy snapshot , Ns Sy mount Ar tank/users
+.No # Nm zfs Cm allow Ar staff No @ Ns Ar pset tank/users
+.No # Nm zfs Cm allow Ar tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+ @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+ group staff @pset
+.Ed
+.
+.Ss Example 4 : No Delegating Property Permissions on a ZFS Dataset
+The following example shows how to grant the ability to set quotas and
+reservations
+on the
+.Ar users/home
+file system.
+The permissions on
+.Ar users/home
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm allow Ar cindys Sy quota , Ns Sy reservation Ar users/home
+.No # Nm zfs Cm allow Ar users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+ user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME PROPERTY VALUE SOURCE
+users/home/marks quota 10G local
+.Ed
+.
+.Ss Example 5 : No Removing ZFS Delegated Permissions on a ZFS Dataset
+The following example shows how to remove the snapshot permission from the
+.Ar staff
+group on the
+.Sy tank/users
+file system.
+The permissions on
+.Sy tank/users
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm unallow Ar staff Sy snapshot Ar tank/users
+.No # Nm zfs Cm allow Ar tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+ @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+ group staff @pset
+.Ed
diff --git a/share/man/man8/zfs-bookmark.8 b/share/man/man8/zfs-bookmark.8
@@ -0,0 +1,75 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\" Copyright (c) 2019, 2020 by Christian Schwarz. All Rights Reserved.
+.\"
+.Dd May 12, 2022
+.Dt ZFS-BOOKMARK 8
+.Os
+.
+.Sh NAME
+.Nm zfs-bookmark
+.Nd create bookmark of ZFS snapshot
+.Sh SYNOPSIS
+.Nm zfs
+.Cm bookmark
+.Ar snapshot Ns | Ns Ar bookmark
+.Ar newbookmark
+.
+.Sh DESCRIPTION
+Creates a new bookmark of the given snapshot or bookmark.
+Bookmarks mark the point in time when the snapshot was created, and can be used
+as the incremental source for a
+.Nm zfs Cm send .
+.Pp
+When creating a bookmark from an existing redaction bookmark, the resulting
+bookmark is
+.Em not
+a redaction bookmark.
+.Pp
+This feature must be enabled to be used.
+See
+.Xr zpool-features 7
+for details on ZFS feature flags and the
+.Sy bookmarks
+feature.
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 23 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Creating a Bookmark
+The following example creates a bookmark to a snapshot.
+This bookmark can then be used instead of a snapshot in send streams.
+.Dl # Nm zfs Cm bookmark Ar rpool Ns @ Ns Ar snapshot rpool Ns # Ns Ar bookmark
+.
+.Sh SEE ALSO
+.Xr zfs-destroy 8 ,
+.Xr zfs-send 8 ,
+.Xr zfs-snapshot 8
diff --git a/share/man/man8/zfs-change-key.8 b/share/man/man8/zfs-change-key.8
@@ -0,0 +1,304 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd January 13, 2020
+.Dt ZFS-LOAD-KEY 8
+.Os
+.
+.Sh NAME
+.Nm zfs-load-key
+.Nd load, unload, or change encryption key of ZFS dataset
+.Sh SYNOPSIS
+.Nm zfs
+.Cm load-key
+.Op Fl nr
+.Op Fl L Ar keylocation
+.Fl a Ns | Ns Ar filesystem
+.Nm zfs
+.Cm unload-key
+.Op Fl r
+.Fl a Ns | Ns Ar filesystem
+.Nm zfs
+.Cm change-key
+.Op Fl l
+.Op Fl o Ar keylocation Ns = Ns Ar value
+.Op Fl o Ar keyformat Ns = Ns Ar value
+.Op Fl o Ar pbkdf2iters Ns = Ns Ar value
+.Ar filesystem
+.Nm zfs
+.Cm change-key
+.Fl i
+.Op Fl l
+.Ar filesystem
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm load-key
+.Op Fl nr
+.Op Fl L Ar keylocation
+.Fl a Ns | Ns Ar filesystem
+.Xc
+Load the key for
+.Ar filesystem ,
+allowing it and all children that inherit the
+.Sy keylocation
+property to be accessed.
+The key will be expected in the format specified by the
+.Sy keyformat
+and location specified by the
+.Sy keylocation
+property.
+Note that if the
+.Sy keylocation
+is set to
+.Sy prompt
+the terminal will interactively wait for the key to be entered.
+Loading a key will not automatically mount the dataset.
+If that functionality is desired,
+.Nm zfs Cm mount Fl l
+will ask for the key and mount the dataset
+.Po
+see
+.Xr zfs-mount 8
+.Pc .
+Once the key is loaded the
+.Sy keystatus
+property will become
+.Sy available .
+.Bl -tag -width "-r"
+.It Fl r
+Recursively loads the keys for the specified filesystem and all descendent
+encryption roots.
+.It Fl a
+Loads the keys for all encryption roots in all imported pools.
+.It Fl n
+Do a dry-run
+.Pq Qq No-op
+.Cm load-key .
+This will cause
+.Nm zfs
+to simply check that the provided key is correct.
+This command may be run even if the key is already loaded.
+.It Fl L Ar keylocation
+Use
+.Ar keylocation
+instead of the
+.Sy keylocation
+property.
+This will not change the value of the property on the dataset.
+Note that if used with either
+.Fl r
+or
+.Fl a ,
+.Ar keylocation
+may only be given as
+.Sy prompt .
+.El
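+.Pp
+For example, assuming a hypothetical encrypted dataset
+.Ar tank/secure ,
+the key can be verified with a dry-run before actually loading it:
+.Dl # Nm zfs Cm load-key Fl n Ar tank/secure
+.Dl # Nm zfs Cm load-key Ar tank/secure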
+.It Xo
+.Nm zfs
+.Cm unload-key
+.Op Fl r
+.Fl a Ns | Ns Ar filesystem
+.Xc
+Unloads a key from ZFS, removing the ability to access the dataset and all of
+its children that inherit the
+.Sy keylocation
+property.
+This requires that the dataset is not currently open or mounted.
+Once the key is unloaded the
+.Sy keystatus
+property will become
+.Sy unavailable .
+.Bl -tag -width "-r"
+.It Fl r
+Recursively unloads the keys for the specified filesystem and all descendent
+encryption roots.
+.It Fl a
+Unloads the keys for all encryption roots in all imported pools.
+.El
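+.Pp
+For example, to unload the keys for a hypothetical encryption root and all
+descendent encryption roots once their datasets are unmounted:
+.Dl # Nm zfs Cm unload-key Fl r Ar tank/secure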
+.It Xo
+.Nm zfs
+.Cm change-key
+.Op Fl l
+.Op Fl o Ar keylocation Ns = Ns Ar value
+.Op Fl o Ar keyformat Ns = Ns Ar value
+.Op Fl o Ar pbkdf2iters Ns = Ns Ar value
+.Ar filesystem
+.Xc
+.It Xo
+.Nm zfs
+.Cm change-key
+.Fl i
+.Op Fl l
+.Ar filesystem
+.Xc
+Changes the user's key (e.g. a passphrase) used to access a dataset.
+This command requires that the existing key for the dataset is already loaded.
+This command may also be used to change the
+.Sy keylocation ,
+.Sy keyformat ,
+and
+.Sy pbkdf2iters
+properties as needed.
+If the dataset was not previously an encryption root it will become one.
+Alternatively, the
+.Fl i
+flag may be provided to cause an encryption root to inherit the parent's key
+instead.
+.Pp
+If the user's key is compromised,
+.Nm zfs Cm change-key
+does not necessarily protect existing or newly-written data from attack.
+Newly-written data will continue to be encrypted with the same master key as
+the existing data.
+The master key is compromised if an attacker obtains a
+user key and the corresponding wrapped master key.
+Currently,
+.Nm zfs Cm change-key
+does not overwrite the previous wrapped master key on disk, so it is
+accessible via forensic analysis for an indeterminate length of time.
+.Pp
+In the event of a master key compromise, ideally the drives should be securely
+erased to remove all the old data (which is readable using the compromised
+master key), a new pool created, and the data copied back.
+This can be approximated in place by creating new datasets, copying the data
+.Pq e.g. using Nm zfs Cm send | Nm zfs Cm recv ,
+and then clearing the free space with
+.Nm zpool Cm trim Fl -secure
+if supported by your hardware, otherwise
+.Nm zpool Cm initialize .
+.Bl -tag -width "-r"
+.It Fl l
+Ensures the key is loaded before attempting to change the key.
+This is effectively equivalent to running
+.Nm zfs Cm load-key Ar filesystem ; Nm zfs Cm change-key Ar filesystem .
+.It Fl o Ar property Ns = Ns Ar value
+Allows the user to set encryption key properties
+.Pq Sy keyformat , keylocation , No and Sy pbkdf2iters
+while changing the key.
+This is the only way to alter
+.Sy keyformat
+and
+.Sy pbkdf2iters
+after the dataset has been created.
+.It Fl i
+Indicates that zfs should make
+.Ar filesystem
+inherit the key of its parent.
+Note that this command can only be run on an encryption root
+that has an encrypted parent.
+.El
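+.Pp
+For example, assuming a hypothetical encryption root
+.Ar tank/secure ,
+its passphrase can be changed, or a child encryption root made to inherit the
+parent's key:
+.Dl # Nm zfs Cm change-key Fl l Ar tank/secure
+.Dl # Nm zfs Cm change-key Fl i l Ar tank/secure/child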
+.El
+.Ss Encryption
+Enabling the
+.Sy encryption
+feature allows for the creation of encrypted filesystems and volumes.
+ZFS will encrypt file and volume data, file attributes, ACLs, permission bits,
+directory listings, FUID mappings, and
+.Sy userused Ns / Ns Sy groupused
+data.
+ZFS will not encrypt metadata related to the pool structure, including
+dataset and snapshot names, dataset hierarchy, properties, file size, file
+holes, and deduplication tables (though the deduplicated data itself is
+encrypted).
+.Pp
+Key rotation is managed by ZFS.
+Changing the user's key (e.g. a passphrase)
+does not require re-encrypting the entire dataset.
+Datasets can be scrubbed,
+resilvered, renamed, and deleted without the encryption keys being loaded (see
+the
+.Cm load-key
+subcommand for more info on key loading).
+.Pp
+Creating an encrypted dataset requires specifying the
+.Sy encryption No and Sy keyformat
+properties at creation time, along with an optional
+.Sy keylocation No and Sy pbkdf2iters .
+After entering an encryption key, the
+created dataset will become an encryption root.
+Any descendant datasets will
+inherit their encryption key from the encryption root by default, meaning that
+loading, unloading, or changing the key for the encryption root will implicitly
+do the same for all inheriting datasets.
+If this inheritance is not desired, simply supply a
+.Sy keyformat
+when creating the child dataset or use
+.Nm zfs Cm change-key
+to break an existing relationship, creating a new encryption root on the child.
+Note that the child's
+.Sy keyformat
+may match that of the parent while still creating a new encryption root, and
+that changing the
+.Sy encryption
+property alone does not create a new encryption root; this would simply use a
+different cipher suite with the same key as its encryption root.
+The one exception is that clones will always use their origin's encryption key.
+As a result of this exception, some encryption-related properties
+.Pq namely Sy keystatus , keyformat , keylocation , No and Sy pbkdf2iters
+do not inherit like other ZFS properties and instead use the value determined
+by their encryption root.
+Encryption root inheritance can be tracked via the read-only
+.Sy encryptionroot
+property.
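+.Pp
+For example, using hypothetical dataset names, the following creates an
+encryption root and then a child with its own key
+.Pq a new encryption root :
+.Dl # Nm zfs Cm create Fl o Sy encryption Ns = Ns Sy on Fl o Sy keyformat Ns = Ns Sy passphrase Ar tank/secure
+.Dl # Nm zfs Cm create Fl o Sy keyformat Ns = Ns Sy passphrase Ar tank/secure/child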
+.Pp
+Encryption changes the behavior of a few ZFS
+operations.
+Encryption is applied after compression, so compression ratios are preserved.
+Normally checksums in ZFS are 256 bits long, but for encrypted data
+the checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from
+the encryption suite, which provides additional protection against maliciously
+altered data.
+Deduplication is still possible with encryption enabled but for security,
+datasets will only deduplicate against themselves, their snapshots,
+and their clones.
+.Pp
+There are a few limitations on encrypted datasets.
+Encrypted data cannot be embedded via the
+.Sy embedded_data
+feature.
+Encrypted datasets may not have
+.Sy copies Ns = Ns Em 3
+since the implementation stores some encryption metadata where the third copy
+would normally be.
+Since compression is applied before encryption, datasets may
+be vulnerable to a CRIME-like attack if applications accessing the data allow
+for it.
+Deduplication with encryption will leak information about which blocks
+are equivalent in a dataset and will incur an extra CPU cost for each block
+written.
+.
+.Sh SEE ALSO
+.Xr zfsprops 7 ,
+.Xr zfs-create 8 ,
+.Xr zfs-set 8
diff --git a/share/man/man8/zfs-clone.8 b/share/man/man8/zfs-clone.8
@@ -0,0 +1,96 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd March 16, 2022
+.Dt ZFS-CLONE 8
+.Os
+.
+.Sh NAME
+.Nm zfs-clone
+.Nd clone snapshot of ZFS dataset
+.Sh SYNOPSIS
+.Nm zfs
+.Cm clone
+.Op Fl p
+.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
+.Ar snapshot Ar filesystem Ns | Ns Ar volume
+.
+.Sh DESCRIPTION
+See the
+.Sx Clones
+section of
+.Xr zfsconcepts 7
+for details.
+The target dataset can be located anywhere in the ZFS hierarchy,
+and is created as the same type as the original.
+.Bl -tag -width Ds
+.It Fl o Ar property Ns = Ns Ar value
+Sets the specified property; see
+.Nm zfs Cm create
+for details.
+.It Fl p
+Creates all the non-existing parent datasets.
+Datasets created in this manner are automatically mounted according to the
+.Sy mountpoint
+property inherited from their parent.
+If the target filesystem or volume already exists, the operation completes
+successfully.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 9, 10 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Creating a ZFS Clone
+The following command creates a writable file system whose initial contents are
+the same as
+.Ar pool/home/bob@yesterday .
+.Dl # Nm zfs Cm clone Ar pool/home/bob@yesterday pool/clone
+.
+.Ss Example 2 : No Promoting a ZFS Clone
+The following commands illustrate how to test out changes to a file system, and
+then replace the original file system with the changed one, using clones, clone
+promotion, and renaming:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm create Ar pool/project/production
+ populate /pool/project/production with data
+.No # Nm zfs Cm snapshot Ar pool/project/production Ns @ Ns Ar today
+.No # Nm zfs Cm clone Ar pool/project/production@today pool/project/beta
+ make changes to /pool/project/beta and test them
+.No # Nm zfs Cm promote Ar pool/project/beta
+.No # Nm zfs Cm rename Ar pool/project/production pool/project/legacy
+.No # Nm zfs Cm rename Ar pool/project/beta pool/project/production
+ once the legacy version is no longer needed, it can be destroyed
+.No # Nm zfs Cm destroy Ar pool/project/legacy
+.Ed
+.
+.Sh SEE ALSO
+.Xr zfs-promote 8 ,
+.Xr zfs-snapshot 8
diff --git a/share/man/man8/zfs-create.8 b/share/man/man8/zfs-create.8
@@ -0,0 +1,279 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd March 16, 2022
+.Dt ZFS-CREATE 8
+.Os
+.
+.Sh NAME
+.Nm zfs-create
+.Nd create ZFS dataset
+.Sh SYNOPSIS
+.Nm zfs
+.Cm create
+.Op Fl Pnpuv
+.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
+.Ar filesystem
+.Nm zfs
+.Cm create
+.Op Fl ps
+.Op Fl b Ar blocksize
+.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
+.Fl V Ar size Ar volume
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm create
+.Op Fl Pnpuv
+.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
+.Ar filesystem
+.Xc
+Creates a new ZFS file system.
+The file system is automatically mounted according to the
+.Sy mountpoint
+property inherited from the parent, unless the
+.Fl u
+option is used.
+.Bl -tag -width "-o"
+.It Fl o Ar property Ns = Ns Ar value
+Sets the specified property as if the command
+.Nm zfs Cm set Ar property Ns = Ns Ar value
+was invoked at the same time the dataset was created.
+Any editable ZFS property can also be set at creation time.
+Multiple
+.Fl o
+options can be specified.
+An error results if the same property is specified in multiple
+.Fl o
+options.
+.It Fl p
+Creates all the non-existing parent datasets.
+Datasets created in this manner are automatically mounted according to the
+.Sy mountpoint
+property inherited from their parent.
+Any property specified on the command line using the
+.Fl o
+option is ignored.
+If the target filesystem already exists, the operation completes successfully.
+.It Fl n
+Do a dry-run
+.Pq Qq No-op
+creation.
+No datasets will be created.
+This is useful in conjunction with the
+.Fl v
+or
+.Fl P
+flags to validate properties that are passed via
+.Fl o
+options and those implied by other options.
+The actual dataset creation can still fail due to insufficient privileges or
+available capacity.
+.It Fl P
+Print machine-parsable verbose information about the created dataset, as shown
+in the example following this option list.
+Each line of output contains a key and one or two values, all separated by tabs.
+The
+.Sy create_ancestors
+and
+.Sy create
+keys have
+.Em filesystem
+as their only value.
+The
+.Sy create_ancestors
+key only appears if the
+.Fl p
+option is used.
+The
+.Sy property
+key has two values: a property name and that property's value.
+The
+.Sy property
+key may appear zero or more times, once for each property that will be set local
+to
+.Em filesystem
+due to the use of the
+.Fl o
+option.
+.It Fl u
+Do not mount the newly created file system.
+.It Fl v
+Print verbose information about the created dataset.
+.El
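+.Pp
+For example, a dry-run creation with machine-parsable output, using
+hypothetical names, might produce output similar to
+.Pq fields tab-separated :
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm create Fl Pnp o Sy compression Ns = Ns Sy on Ar tank/home/user
+create_ancestors	tank/home
+create	tank/home/user
+property	compression	on
+.Ed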
+.It Xo
+.Nm zfs
+.Cm create
+.Op Fl ps
+.Op Fl b Ar blocksize
+.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
+.Fl V Ar size Ar volume
+.Xc
+Creates a volume of the given size.
+The volume is exported as a block device in
+.Pa /dev/zvol/path ,
+where
+.Em path
+is the name of the volume in the ZFS namespace.
+The size represents the logical size as exported by the device.
+By default, a reservation of equal size is created.
+.Pp
+.Ar size
+is automatically rounded up to the nearest multiple of the
+.Sy blocksize .
+.Bl -tag -width "-b"
+.It Fl b Ar blocksize
+Equivalent to
+.Fl o Sy volblocksize Ns = Ns Ar blocksize .
+If this option is specified in conjunction with
+.Fl o Sy volblocksize ,
+the resulting behavior is undefined.
+.It Fl o Ar property Ns = Ns Ar value
+Sets the specified property as if the
+.Nm zfs Cm set Ar property Ns = Ns Ar value
+command was invoked at the same time the dataset was created.
+Any editable ZFS property can also be set at creation time.
+Multiple
+.Fl o
+options can be specified.
+An error results if the same property is specified in multiple
+.Fl o
+options.
+.It Fl p
+Creates all the non-existing parent datasets.
+Datasets created in this manner are automatically mounted according to the
+.Sy mountpoint
+property inherited from their parent.
+Any property specified on the command line using the
+.Fl o
+option is ignored.
+If the target filesystem already exists, the operation completes successfully.
+.It Fl s
+Creates a sparse volume with no reservation.
+See
+.Sy volsize
+in the
+.Em Native Properties
+section of
+.Xr zfsprops 7
+for more information about sparse volumes.
+.It Fl n
+Do a dry-run
+.Pq Qq No-op
+creation.
+No datasets will be created.
+This is useful in conjunction with the
+.Fl v
+or
+.Fl P
+flags to validate properties that are passed via
+.Fl o
+options and those implied by other options.
+The actual dataset creation can still fail due to insufficient privileges or
+available capacity.
+.It Fl P
+Print machine-parsable verbose information about the created dataset.
+Each line of output contains a key and one or two values, all separated by tabs.
+The
+.Sy create_ancestors
+and
+.Sy create
+keys have
+.Em volume
+as their only value.
+The
+.Sy create_ancestors
+key only appears if the
+.Fl p
+option is used.
+The
+.Sy property
+key has two values: a property name and that property's value.
+The
+.Sy property
+key may appear zero or more times, once for each property that will be set local
+to
+.Em volume
+due to the use of the
+.Fl b
+or
+.Fl o
+options, as well as
+.Sy refreservation
+if the volume is not sparse.
+.It Fl v
+Print verbose information about the created dataset.
+.El
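+.Pp
+For example, to create a sparse 100 GB volume with a hypothetical name:
+.Dl # Nm zfs Cm create Fl s V Ar 100G tank/vol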
+.El
+.Ss ZFS for Swap
+Swapping to a ZFS volume is prone to deadlock and is not recommended;
+see the OpenZFS FAQ.
+.Pp
+Swapping to a file on a ZFS filesystem is not supported.
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 1, 10 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Creating a ZFS File System Hierarchy
+The following commands create a file system named
+.Ar pool/home
+and a file system named
+.Ar pool/home/bob .
+The mount point
+.Pa /export/home
+is set for the parent file system, and is automatically inherited by the child
+file system.
+.Dl # Nm zfs Cm create Ar pool/home
+.Dl # Nm zfs Cm set Sy mountpoint Ns = Ns Ar /export/home pool/home
+.Dl # Nm zfs Cm create Ar pool/home/bob
+.
+.Ss Example 2 : No Promoting a ZFS Clone
+The following commands illustrate how to test out changes to a file system, and
+then replace the original file system with the changed one, using clones, clone
+promotion, and renaming:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm create Ar pool/project/production
+ populate /pool/project/production with data
+.No # Nm zfs Cm snapshot Ar pool/project/production Ns @ Ns Ar today
+.No # Nm zfs Cm clone Ar pool/project/production@today pool/project/beta
+ make changes to /pool/project/beta and test them
+.No # Nm zfs Cm promote Ar pool/project/beta
+.No # Nm zfs Cm rename Ar pool/project/production pool/project/legacy
+.No # Nm zfs Cm rename Ar pool/project/beta pool/project/production
+ once the legacy version is no longer needed, it can be destroyed
+.No # Nm zfs Cm destroy Ar pool/project/legacy
+.Ed
+.
+.Sh SEE ALSO
+.Xr zfs-destroy 8 ,
+.Xr zfs-list 8 ,
+.Xr zpool-create 8
diff --git a/share/man/man8/zfs-destroy.8 b/share/man/man8/zfs-destroy.8
@@ -0,0 +1,226 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd March 16, 2022
+.Dt ZFS-DESTROY 8
+.Os
+.
+.Sh NAME
+.Nm zfs-destroy
+.Nd destroy ZFS dataset, snapshots, or bookmark
+.Sh SYNOPSIS
+.Nm zfs
+.Cm destroy
+.Op Fl Rfnprv
+.Ar filesystem Ns | Ns Ar volume
+.Nm zfs
+.Cm destroy
+.Op Fl Rdnprv
+.Ar filesystem Ns | Ns Ar volume Ns @ Ns Ar snap Ns
+.Oo % Ns Ar snap Ns Oo , Ns Ar snap Ns Oo % Ns Ar snap Oc Oc Oc Ns …
+.Nm zfs
+.Cm destroy
+.Ar filesystem Ns | Ns Ar volume Ns # Ns Ar bookmark
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm destroy
+.Op Fl Rfnprv
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+Destroys the given dataset.
+By default, the command unshares any file systems that are currently shared,
+unmounts any file systems that are currently mounted, and refuses to destroy a
+dataset that has active dependents
+.Pq children or clones .
+.Bl -tag -width "-R"
+.It Fl R
+Recursively destroy all dependents, including cloned file systems outside the
+target hierarchy.
+.It Fl f
+Forcibly unmount file systems.
+This option has no effect on non-file systems or unmounted file systems.
+.It Fl n
+Do a dry-run
+.Pq Qq No-op
+deletion.
+No data will be deleted.
+This is useful in conjunction with the
+.Fl v
+or
+.Fl p
+flags to determine what data would be deleted.
+.It Fl p
+Print machine-parsable verbose information about the deleted data.
+.It Fl r
+Recursively destroy all children.
+.It Fl v
+Print verbose information about the deleted data.
+.El
+.Pp
+Extreme care should be taken when applying either the
+.Fl r
+or the
+.Fl R
+options, as they can destroy large portions of a pool and cause unexpected
+behavior for mounted file systems in use.
+.It Xo
+.Nm zfs
+.Cm destroy
+.Op Fl Rdnprv
+.Ar filesystem Ns | Ns Ar volume Ns @ Ns Ar snap Ns
+.Oo % Ns Ar snap Ns Oo , Ns Ar snap Ns Oo % Ns Ar snap Oc Oc Oc Ns …
+.Xc
+The given snapshots are destroyed immediately if and only if the
+.Nm zfs Cm destroy
+command without the
+.Fl d
+option would have destroyed them.
+Such immediate destruction would occur, for example, if the snapshot had no
+clones and the user-initiated reference count were zero.
+.Pp
+If a snapshot does not qualify for immediate destruction, it is marked for
+deferred deletion.
+In this state, it exists as a usable, visible snapshot until both of the
+preconditions listed above are met, at which point it is destroyed.
+.Pp
+An inclusive range of snapshots may be specified by separating the first and
+last snapshots with a percent sign.
+The first and/or last snapshots may be left blank, in which case the
+filesystem's oldest or newest snapshot will be implied.
+.Pp
+Multiple snapshots
+.Pq or ranges of snapshots
+of the same filesystem or volume may be specified in a comma-separated list of
+snapshots.
+Only the snapshot's short name
+.Po the part after the
+.Sy @
+.Pc
+should be specified when using a range or comma-separated list to identify
+multiple snapshots.
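+.Pp
+For example, using hypothetical snapshot names, an inclusive range or a
+comma-separated list of snapshots can be destroyed with:
+.Dl # Nm zfs Cm destroy Ar pool/home Ns @ Ns Ar monday Ns % Ns Ar friday
+.Dl # Nm zfs Cm destroy Ar pool/home Ns @ Ns Ar monday Ns , Ns Ar friday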
+.Bl -tag -width "-R"
+.It Fl R
+Recursively destroy all clones of these snapshots, including the clones,
+snapshots, and children.
+If this flag is specified, the
+.Fl d
+flag will have no effect.
+.It Fl d
+Destroy immediately.
+If a snapshot cannot be destroyed now, mark it for deferred destruction.
+.It Fl n
+Do a dry-run
+.Pq Qq No-op
+deletion.
+No data will be deleted.
+This is useful in conjunction with the
+.Fl p
+or
+.Fl v
+flags to determine what data would be deleted.
+.It Fl p
+Print machine-parsable verbose information about the deleted data.
+.It Fl r
+Destroy
+.Pq or mark for deferred deletion
+all snapshots with this name in descendent file systems.
+.It Fl v
+Print verbose information about the deleted data.
+.Pp
+Extreme care should be taken when applying either the
+.Fl r
+or the
+.Fl R
+options, as they can destroy large portions of a pool and cause unexpected
+behavior for mounted file systems in use.
+.El
+.It Xo
+.Nm zfs
+.Cm destroy
+.Ar filesystem Ns | Ns Ar volume Ns # Ns Ar bookmark
+.Xc
+The given bookmark is destroyed.
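+.Pp
+For example, to destroy a hypothetical bookmark:
+.Dl # Nm zfs Cm destroy Ar pool/home Ns # Ns Ar january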
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 3, 10, 15 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Creating and Destroying Multiple Snapshots
+The following command creates snapshots named
+.Ar yesterday No of Ar pool/home
+and all of its descendent file systems.
+Each snapshot is mounted on demand in the
+.Pa .zfs/snapshot
+directory at the root of its file system.
+The second command destroys the newly created snapshots.
+.Dl # Nm zfs Cm snapshot Fl r Ar pool/home Ns @ Ns Ar yesterday
+.Dl # Nm zfs Cm destroy Fl r Ar pool/home Ns @ Ns Ar yesterday
+.
+.Ss Example 2 : No Promoting a ZFS Clone
+The following commands illustrate how to test out changes to a file system, and
+then replace the original file system with the changed one, using clones, clone
+promotion, and renaming:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm create Ar pool/project/production
+ populate /pool/project/production with data
+.No # Nm zfs Cm snapshot Ar pool/project/production Ns @ Ns Ar today
+.No # Nm zfs Cm clone Ar pool/project/production@today pool/project/beta
+ make changes to /pool/project/beta and test them
+.No # Nm zfs Cm promote Ar pool/project/beta
+.No # Nm zfs Cm rename Ar pool/project/production pool/project/legacy
+.No # Nm zfs Cm rename Ar pool/project/beta pool/project/production
+ once the legacy version is no longer needed, it can be destroyed
+.No # Nm zfs Cm destroy Ar pool/project/legacy
+.Ed
+.
+.Ss Example 3 : No Performing a Rolling Snapshot
+The following example shows how to maintain a history of snapshots with a
+consistent naming scheme.
+To keep a week's worth of snapshots, the user destroys the oldest snapshot,
+renames the remaining snapshots, and then creates a new snapshot, as follows:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm destroy Fl r Ar pool/users@7daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@6daysago No @ Ns Ar 7daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@5daysago No @ Ns Ar 6daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@4daysago No @ Ns Ar 5daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@3daysago No @ Ns Ar 4daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@2daysago No @ Ns Ar 3daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@yesterday No @ Ns Ar 2daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@today No @ Ns Ar yesterday
+.No # Nm zfs Cm snapshot Fl r Ar pool/users Ns @ Ns Ar today
+.Ed
+.
+.Sh SEE ALSO
+.Xr zfs-create 8 ,
+.Xr zfs-hold 8
diff --git a/share/man/man8/zfs-diff.8 b/share/man/man8/zfs-diff.8
@@ -0,0 +1,121 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd March 16, 2022
+.Dt ZFS-DIFF 8
+.Os
+.
+.Sh NAME
+.Nm zfs-diff
+.Nd show difference between ZFS snapshots
+.Sh SYNOPSIS
+.Nm zfs
+.Cm diff
+.Op Fl FHth
+.Ar snapshot Ar snapshot Ns | Ns Ar filesystem
+.
+.Sh DESCRIPTION
+Display the difference between a snapshot of a given filesystem and another
+snapshot of that filesystem from a later time or the current contents of the
+filesystem.
+The first column is a character indicating the type of change; the other columns
+indicate the pathname, the new pathname
+.Pq in case of rename ,
+the change in link count, and optionally the file type and/or change time.
+The types of change are:
+.Bl -tag -compact -offset Ds -width "M"
+.It Sy -
+The path has been removed
+.It Sy +
+The path has been created
+.It Sy M
+The path has been modified
+.It Sy R
+The path has been renamed
+.El
+.Bl -tag -width "-F"
+.It Fl F
+Display an indication of the type of file, in a manner similar to the
+.Fl F
+option of
+.Xr ls 1 .
+.Bl -tag -compact -offset 2n -width "B"
+.It Sy B
+Block device
+.It Sy C
+Character device
+.It Sy /
+Directory
+.It Sy >
+Door
+.It Sy |\&
+Named pipe
+.It Sy @
+Symbolic link
+.It Sy P
+Event port
+.It Sy =
+Socket
+.It Sy F
+Regular file
+.El
+.It Fl H
+Give more parsable tab-separated output, without header lines and without
+arrows.
+.It Fl t
+Display the path's inode change time as the first column of output.
+.It Fl h
+Do not
+.Sy \e0 Ns Ar ooo Ns -escape
+non-ASCII paths.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 22 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Showing the Differences Between a Snapshot and a ZFS Dataset
+The following example shows how to see what has changed between a prior
+snapshot of a ZFS dataset and its current state.
+The
+.Fl F
+option is used to indicate type information for the files affected.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm diff Fl F Ar tank/test@before tank/test
+M / /tank/test/
+M F /tank/test/linked (+1)
+R F /tank/test/oldname -> /tank/test/newname
+- F /tank/test/deleted
++ F /tank/test/created
+M F /tank/test/modified
+.Ed
+.
+.Sh SEE ALSO
+.Xr zfs-snapshot 8
diff --git a/share/man/man8/zfs-get.8 b/share/man/man8/zfs-get.8
@@ -0,0 +1,376 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd April 20, 2024
+.Dt ZFS-SET 8
+.Os
+.
+.Sh NAME
+.Nm zfs-set
+.Nd set properties on ZFS datasets
+.Sh SYNOPSIS
+.Nm zfs
+.Cm set
+.Op Fl u
+.Ar property Ns = Ns Ar value Oo Ar property Ns = Ns Ar value Oc Ns …
+.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns …
+.Nm zfs
+.Cm get
+.Op Fl r Ns | Ns Fl d Ar depth
+.Op Fl Hp
+.Op Fl j Op Ar --json-int
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar source Ns Oo , Ns Ar source Oc Ns … Oc
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Cm all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns …
+.Oo Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns | Ns Ar bookmark Oc Ns …
+.Nm zfs
+.Cm inherit
+.Op Fl rS
+.Ar property Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns …
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm set
+.Op Fl u
+.Ar property Ns = Ns Ar value Oo Ar property Ns = Ns Ar value Oc Ns …
+.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns …
+.Xc
+Only some properties can be edited.
+See
+.Xr zfsprops 7
+for more information on what properties can be set and acceptable
+values.
+Numeric values can be specified as exact values, or in a human-readable form
+with a suffix of
+.Sy B , K , M , G , T , P , E , Z
+.Po for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes,
+or zettabytes, respectively
+.Pc .
+User properties can be set on snapshots.
+For more information, see the
+.Em User Properties
+section of
+.Xr zfsprops 7 .
+.Bl -tag -width "-u"
+.It Fl u
+Update the mountpoint, sharenfs, or sharesmb property, but do not mount or
+share the dataset.
+.El
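+.Pp
+For example, to change the mount point of a hypothetical dataset without
+remounting it:
+.Dl # Nm zfs Cm set Fl u Sy mountpoint Ns = Ns Ar /newhome pool/home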
+.It Xo
+.Nm zfs
+.Cm get
+.Op Fl r Ns | Ns Fl d Ar depth
+.Op Fl Hp
+.Op Fl j Op Ar --json-int
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar source Ns Oo , Ns Ar source Oc Ns … Oc
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Cm all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns …
+.Oo Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns | Ns Ar bookmark Oc Ns …
+.Xc
+Displays properties for the given datasets.
+If no datasets are specified, then the command displays properties for all
+datasets on the system.
+For each property, the following columns are displayed:
+.Bl -tag -compact -offset 4n -width "property"
+.It Sy name
+Dataset name
+.It Sy property
+Property name
+.It Sy value
+Property value
+.It Sy source
+Property source
+.Sy local , default , inherited , temporary , received , No or Sy - Pq none .
+.El
+.Pp
+All columns are displayed by default, though this can be controlled by using the
+.Fl o
+option.
+This command takes a comma-separated list of properties as described in the
+.Sx Native Properties
+and
+.Sx User Properties
+sections of
+.Xr zfsprops 7 .
+.Pp
+The value
+.Sy all
+can be used to display all properties that apply to the given dataset's type
+.Pq Sy filesystem , volume , snapshot , No or Sy bookmark .
+.Bl -tag -width "-s source"
+.It Fl j , -json Op Ar --json-int
+Display the output in JSON format.
+Specify
+.Sy --json-int
+to display numbers in integer format instead of strings for JSON output.
+.It Fl H
+Display output in a form more easily parsed by scripts.
+Any headers are omitted, and fields are explicitly separated by a single tab
+instead of an arbitrary amount of space.
+.It Fl d Ar depth
+Recursively display any children of the dataset, limiting the recursion to
+.Ar depth .
+A depth of
+.Sy 1
+will display only the dataset and its direct children.
+.It Fl o Ar field
+A comma-separated list of columns to display, defaults to
+.Sy name , Ns Sy property , Ns Sy value , Ns Sy source .
+.It Fl p
+Display numbers in parsable
+.Pq exact
+values.
+.It Fl r
+Recursively display properties for any children.
+.It Fl s Ar source
+A comma-separated list of sources to display.
+Properties coming from a source other than those in this list are ignored.
+Each source must be one of the following:
+.Sy local , default , inherited , temporary , received , No or Sy none .
+The default value is all sources.
+.It Fl t Ar type
+A comma-separated list of types to display, where
+.Ar type
+is one of
+.Sy filesystem , snapshot , volume , bookmark , No or Sy all .
+.Sy fs ,
+.Sy snap ,
+or
+.Sy vol
+can be used as aliases for
+.Sy filesystem ,
+.Sy snapshot ,
+or
+.Sy volume .
+.El
+.It Xo
+.Nm zfs
+.Cm inherit
+.Op Fl rS
+.Ar property Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns …
+.Xc
+Clears the specified property, causing it to be inherited from an ancestor,
+restored to default if no ancestor has the property set, or with the
+.Fl S
+option reverted to the received value if one exists.
+See
+.Xr zfsprops 7
+for a listing of default values, and details on which properties can be
+inherited.
+.Bl -tag -width "-r"
+.It Fl r
+Recursively inherit the given property for all children.
+.It Fl S
+Revert the property to the received value, if one exists;
+otherwise, for non-inheritable properties, to the default;
+otherwise, operate as if the
+.Fl S
+option was not specified.
+.El
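+.Pp
+For example, to revert the
+.Sy compression
+property of a hypothetical replicated dataset to its received value:
+.Dl # Nm zfs Cm inherit Fl S Sy compression Ar backup/home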
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 1, 4, 6, 7, 11, 14, 16 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Creating a ZFS File System Hierarchy
+The following commands create a file system named
+.Ar pool/home
+and a file system named
+.Ar pool/home/bob .
+The mount point
+.Pa /export/home
+is set for the parent file system, and is automatically inherited by the child
+file system.
+.Dl # Nm zfs Cm create Ar pool/home
+.Dl # Nm zfs Cm set Sy mountpoint Ns = Ns Ar /export/home pool/home
+.Dl # Nm zfs Cm create Ar pool/home/bob
+.
+.Ss Example 2 : No Disabling and Enabling File System Compression
+The following command disables the
+.Sy compression
+property for all file systems under
+.Ar pool/home .
+The next command explicitly enables
+.Sy compression
+for
+.Ar pool/home/anne .
+.Dl # Nm zfs Cm set Sy compression Ns = Ns Sy off Ar pool/home
+.Dl # Nm zfs Cm set Sy compression Ns = Ns Sy on Ar pool/home/anne
+.
+.Ss Example 3 : No Setting a Quota on a ZFS File System
+The following command sets a quota of 50 Gbytes for
+.Ar pool/home/bob :
+.Dl # Nm zfs Cm set Sy quota Ns = Ns Ar 50G pool/home/bob
+.
+.Ss Example 4 : No Listing ZFS Properties
+The following command lists all properties for
+.Ar pool/home/bob :
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm get Sy all Ar pool/home/bob
+NAME PROPERTY VALUE SOURCE
+pool/home/bob type filesystem -
+pool/home/bob creation Tue Jul 21 15:53 2009 -
+pool/home/bob used 21K -
+pool/home/bob available 20.0G -
+pool/home/bob referenced 21K -
+pool/home/bob compressratio 1.00x -
+pool/home/bob mounted yes -
+pool/home/bob quota 20G local
+pool/home/bob reservation none default
+pool/home/bob recordsize 128K default
+pool/home/bob mountpoint /pool/home/bob default
+pool/home/bob sharenfs off default
+pool/home/bob checksum on default
+pool/home/bob compression on local
+pool/home/bob atime on default
+pool/home/bob devices on default
+pool/home/bob exec on default
+pool/home/bob setuid on default
+pool/home/bob readonly off default
+pool/home/bob zoned off default
+pool/home/bob snapdir hidden default
+pool/home/bob acltype off default
+pool/home/bob aclmode discard default
+pool/home/bob aclinherit restricted default
+pool/home/bob canmount on default
+pool/home/bob xattr on default
+pool/home/bob copies 1 default
+pool/home/bob version 4 -
+pool/home/bob utf8only off -
+pool/home/bob normalization none -
+pool/home/bob casesensitivity sensitive -
+pool/home/bob vscan off default
+pool/home/bob nbmand off default
+pool/home/bob sharesmb off default
+pool/home/bob refquota none default
+pool/home/bob refreservation none default
+pool/home/bob primarycache all default
+pool/home/bob secondarycache all default
+pool/home/bob usedbysnapshots 0 -
+pool/home/bob usedbydataset 21K -
+pool/home/bob usedbychildren 0 -
+pool/home/bob usedbyrefreservation 0 -
+.Ed
+.Pp
+The following command gets a single property value:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm get Fl H o Sy value compression Ar pool/home/bob
+on
+.Ed
+.Pp
+The following command gets a single property value recursively in JSON format:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm get Fl j Fl r Sy mountpoint Ar pool/home | Nm jq
+{
+ "output_version": {
+ "command": "zfs get",
+ "vers_major": 0,
+ "vers_minor": 1
+ },
+ "datasets": {
+ "pool/home": {
+ "name": "pool/home",
+ "type": "FILESYSTEM",
+ "pool": "pool",
+ "createtxg": "10",
+ "properties": {
+ "mountpoint": {
+ "value": "/pool/home",
+ "source": {
+ "type": "DEFAULT",
+ "data": "-"
+ }
+ }
+ }
+ },
+ "pool/home/bob": {
+ "name": "pool/home/bob",
+ "type": "FILESYSTEM",
+ "pool": "pool",
+ "createtxg": "1176",
+ "properties": {
+ "mountpoint": {
+ "value": "/pool/home/bob",
+ "source": {
+ "type": "DEFAULT",
+ "data": "-"
+ }
+ }
+ }
+ }
+ }
+}
+.Ed
+.Pp
+The following command lists all properties with local settings for
+.Ar pool/home/bob :
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm get Fl r s Sy local Fl o Sy name , Ns Sy property , Ns Sy value all Ar pool/home/bob
+NAME PROPERTY VALUE
+pool/home/bob quota 20G
+pool/home/bob compression on
+.Ed
+.
+.Ss Example 5 : No Inheriting ZFS Properties
+The following command causes
+.Ar pool/home/bob No and Ar pool/home/anne
+to inherit the
+.Sy checksum
+property from their parent.
+.Dl # Nm zfs Cm inherit Sy checksum Ar pool/home/bob pool/home/anne
+.
+.Ss Example 6 : No Setting User Properties
+The following example sets the user-defined
+.Ar com.example : Ns Ar department
+property for a dataset:
+.Dl # Nm zfs Cm set Ar com.example : Ns Ar department Ns = Ns Ar 12345 tank/accounting
+.
+.Ss Example 7 : No Setting sharenfs Property Options on a ZFS File System
+The following commands show how to set
+.Sy sharenfs
+property options to enable read-write
+access for a set of IP addresses and to enable root access for system
+.Qq neo
+on the
+.Ar tank/home
+file system:
+.Dl # Nm zfs Cm set Sy sharenfs Ns = Ns ' Ns Ar rw Ns =@123.123.0.0/16:[::1],root= Ns Ar neo Ns ' tank/home
+.Pp
+If you are using DNS for host name resolution,
+specify the fully-qualified hostname.
+.
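+.Ss Example 8 : No Updating a Property Without Remounting
+As an illustrative use of the
+.Fl u
+flag, the following command records a new
+.Sy mountpoint
+value for a dataset without mounting or sharing it:
+.Dl # Nm zfs Cm set Fl u Sy mountpoint Ns = Ns Ar /export/home pool/home
+.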
+.Sh SEE ALSO
+.Xr zfsprops 7 ,
+.Xr zfs-list 8
diff --git a/share/man/man8/zfs-groupspace.8 b/share/man/man8/zfs-groupspace.8
@@ -0,0 +1,188 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd June 30, 2019
+.Dt ZFS-USERSPACE 8
+.Os
+.
+.Sh NAME
+.Nm zfs-userspace
+.Nd display space and quotas of ZFS dataset
+.Sh SYNOPSIS
+.Nm zfs
+.Cm userspace
+.Op Fl Hinp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
+.Nm zfs
+.Cm groupspace
+.Op Fl Hinp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
+.Nm zfs
+.Cm projectspace
+.Op Fl Hp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm userspace
+.Op Fl Hinp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
+.Xc
+Displays space consumed by, and quotas on, each user in the specified
+filesystem,
+snapshot, or path.
+If a path is given, the filesystem that contains that path will be used.
+This corresponds to the
+.Sy userused@ Ns Em user ,
+.Sy userobjused@ Ns Em user ,
+.Sy userquota@ Ns Em user ,
+and
+.Sy userobjquota@ Ns Em user
+properties.
+.Bl -tag -width "-S field"
+.It Fl H
+Do not print headers; use tab-delimited output.
+.It Fl S Ar field
+Sort by this field in reverse order.
+See
+.Fl s .
+.It Fl i
+Translate SID to POSIX ID.
+The POSIX ID may be ephemeral if no mapping exists.
+Normal POSIX interfaces
+.Pq like Xr stat 2 , Nm ls Fl l
+perform this translation, so the
+.Fl i
+option allows the output from
+.Nm zfs Cm userspace
+to be compared directly with those utilities.
+However,
+.Fl i
+may lead to confusion if some files were created by an SMB user before a
+SMB-to-POSIX name mapping was established.
+In such a case, some files will be owned by the SMB entity and some by the POSIX
+entity.
+However, the
+.Fl i
+option will report that the POSIX entity has the total usage and quota for both.
+.It Fl n
+Print numeric ID instead of user/group name.
+.It Fl o Ar field Ns Oo , Ns Ar field Oc Ns …
+Display only the specified fields from the following set:
+.Sy type ,
+.Sy name ,
+.Sy used ,
+.Sy quota .
+The default is to display all fields.
+.It Fl p
+Use exact
+.Pq parsable
+numeric output.
+.It Fl s Ar field
+Sort output by this field.
+The
+.Fl s
+and
+.Fl S
+flags may be specified multiple times to sort first by one field, then by
+another.
+The default is
+.Fl s Sy type Fl s Sy name .
+.It Fl t Ar type Ns Oo , Ns Ar type Oc Ns …
+Print only the specified types from the following set:
+.Sy all ,
+.Sy posixuser ,
+.Sy smbuser ,
+.Sy posixgroup ,
+.Sy smbgroup .
+The default is
+.Fl t Sy posixuser , Ns Sy smbuser .
+The default can be changed to include group types.
+.El
+.It Xo
+.Nm zfs
+.Cm groupspace
+.Op Fl Hinp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar snapshot
+.Xc
+Displays space consumed by, and quotas on, each group in the specified
+filesystem or snapshot.
+This subcommand is identical to
+.Cm userspace ,
+except that the default types to display are
+.Fl t Sy posixgroup , Ns Sy smbgroup .
+.It Xo
+.Nm zfs
+.Cm projectspace
+.Op Fl Hp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
+.Xc
+Displays space consumed by, and quotas on, each project in the specified
+filesystem or snapshot.
+This subcommand is identical to
+.Cm userspace ,
+except that the project identifier is a numeral, not a name.
+It therefore needs neither the
+.Fl i
+option for SID-to-POSIX-ID translation, nor
+.Fl n
+for numeric IDs, nor
+.Fl t
+for types.
+.El
+.
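+.Sh EXAMPLES
+The following illustrative command displays per-user space usage and quotas
+for a file system, printing numeric IDs and exact
+.Pq parsable
+values:
+.Dl # Nm zfs Cm userspace Fl np Ar pool/home
+.Pp
+The equivalent per-group report:
+.Dl # Nm zfs Cm groupspace Ar pool/home
+.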
+.Sh SEE ALSO
+.Xr zfsprops 7 ,
+.Xr zfs-set 8
diff --git a/share/man/man8/zfs-hold.8 b/share/man/man8/zfs-hold.8
@@ -0,0 +1,114 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd June 30, 2019
+.Dt ZFS-HOLD 8
+.Os
+.
+.Sh NAME
+.Nm zfs-hold
+.Nd hold ZFS snapshots to prevent their removal
+.Sh SYNOPSIS
+.Nm zfs
+.Cm hold
+.Op Fl r
+.Ar tag Ar snapshot Ns …
+.Nm zfs
+.Cm holds
+.Op Fl rHp
+.Ar snapshot Ns …
+.Nm zfs
+.Cm release
+.Op Fl r
+.Ar tag Ar snapshot Ns …
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm hold
+.Op Fl r
+.Ar tag Ar snapshot Ns …
+.Xc
+Adds a single reference, named with the
+.Ar tag
+argument, to the specified snapshots.
+Each snapshot has its own tag namespace, and tags must be unique within that
+space.
+.Pp
+If a hold exists on a snapshot, attempts to destroy that snapshot by using the
+.Nm zfs Cm destroy
+command return
+.Sy EBUSY .
+.Bl -tag -width "-r"
+.It Fl r
+Specifies that a hold with the given tag is applied recursively to the snapshots
+of all descendent file systems.
+.El
+.It Xo
+.Nm zfs
+.Cm holds
+.Op Fl rHp
+.Ar snapshot Ns …
+.Xc
+Lists all existing user references for the given snapshot or snapshots.
+.Bl -tag -width "-r"
+.It Fl r
+Lists the holds that are set on the named descendent snapshots, in addition to
+listing the holds on the named snapshot.
+.It Fl H
+Do not print headers; use tab-delimited output.
+.It Fl p
+Prints hold timestamps as Unix epoch timestamps.
+.El
+.It Xo
+.Nm zfs
+.Cm release
+.Op Fl r
+.Ar tag Ar snapshot Ns …
+.Xc
+Removes a single reference, named with the
+.Ar tag
+argument, from the specified snapshot or snapshots.
+The tag must already exist for each snapshot.
+If a hold exists on a snapshot, attempts to destroy that snapshot by using the
+.Nm zfs Cm destroy
+command return
+.Sy EBUSY .
+.Bl -tag -width "-r"
+.It Fl r
+Recursively releases a hold with the given tag on the snapshots of all
+descendent file systems.
+.El
+.El
+.
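+.Sh EXAMPLES
+As an illustrative sequence, the following commands place a hold named
+.Ar keep
+on a snapshot, list it, and release it again
+.Pq names are hypothetical :
+.Dl # Nm zfs Cm hold Ar keep pool/home/bob@snap
+.Dl # Nm zfs Cm holds Ar pool/home/bob@snap
+.Dl # Nm zfs Cm release Ar keep pool/home/bob@snap
+.Pp
+While the hold is in place,
+.Nm zfs Cm destroy Ar pool/home/bob@snap
+fails with
+.Sy EBUSY .
+.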
+.Sh SEE ALSO
+.Xr zfs-destroy 8
diff --git a/share/man/man8/zfs-inherit.8 b/share/man/man8/zfs-inherit.8
@@ -0,0 +1,376 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd April 20, 2024
+.Dt ZFS-SET 8
+.Os
+.
+.Sh NAME
+.Nm zfs-set
+.Nd set properties on ZFS datasets
+.Sh SYNOPSIS
+.Nm zfs
+.Cm set
+.Op Fl u
+.Ar property Ns = Ns Ar value Oo Ar property Ns = Ns Ar value Oc Ns …
+.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns …
+.Nm zfs
+.Cm get
+.Op Fl r Ns | Ns Fl d Ar depth
+.Op Fl Hp
+.Op Fl j Op Ar --json-int
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar source Ns Oo , Ns Ar source Oc Ns … Oc
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Cm all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns …
+.Oo Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns | Ns Ar bookmark Oc Ns …
+.Nm zfs
+.Cm inherit
+.Op Fl rS
+.Ar property Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns …
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm set
+.Op Fl u
+.Ar property Ns = Ns Ar value Oo Ar property Ns = Ns Ar value Oc Ns …
+.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns …
+.Xc
+Only some properties can be edited.
+See
+.Xr zfsprops 7
+for more information on what properties can be set and acceptable
+values.
+Numeric values can be specified as exact values, or in a human-readable form
+with a suffix of
+.Sy B , K , M , G , T , P , E , Z
+.Po for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes,
+or zettabytes, respectively
+.Pc .
+User properties can be set on snapshots.
+For more information, see the
+.Em User Properties
+section of
+.Xr zfsprops 7 .
+.Bl -tag -width "-u"
+.It Fl u
+Update the
+.Sy mountpoint , sharenfs , No or Sy sharesmb
+property, but do not mount or share the dataset.
+.El
+.It Xo
+.Nm zfs
+.Cm get
+.Op Fl r Ns | Ns Fl d Ar depth
+.Op Fl Hp
+.Op Fl j Op Ar --json-int
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar source Ns Oo , Ns Ar source Oc Ns … Oc
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Cm all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns …
+.Oo Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns | Ns Ar bookmark Oc Ns …
+.Xc
+Displays properties for the given datasets.
+If no datasets are specified, then the command displays properties for all
+datasets on the system.
+For each property, the following columns are displayed:
+.Bl -tag -compact -offset 4n -width "property"
+.It Sy name
+Dataset name
+.It Sy property
+Property name
+.It Sy value
+Property value
+.It Sy source
+Property source
+.Sy local , default , inherited , temporary , received , No or Sy - Pq none .
+.El
+.Pp
+All columns are displayed by default, though this can be controlled by using the
+.Fl o
+option.
+This command takes a comma-separated list of properties as described in the
+.Sx Native Properties
+and
+.Sx User Properties
+sections of
+.Xr zfsprops 7 .
+.Pp
+The value
+.Sy all
+can be used to display all properties that apply to the given dataset's type
+.Pq Sy filesystem , volume , snapshot , No or Sy bookmark .
+.Bl -tag -width "-s source"
+.It Fl j , -json Op Ar --json-int
+Display the output in JSON format.
+Specify
+.Sy --json-int
+to display numbers in integer format instead of strings for JSON output.
+.It Fl H
+Display output in a form more easily parsed by scripts.
+Any headers are omitted, and fields are explicitly separated by a single tab
+instead of an arbitrary amount of space.
+.It Fl d Ar depth
+Recursively display any children of the dataset, limiting the recursion to
+.Ar depth .
+A depth of
+.Sy 1
+will display only the dataset and its direct children.
+.It Fl o Ar field
+A comma-separated list of columns to display, defaults to
+.Sy name , Ns Sy property , Ns Sy value , Ns Sy source .
+.It Fl p
+Display numbers in parsable
+.Pq exact
+values.
+.It Fl r
+Recursively display properties for any children.
+.It Fl s Ar source
+A comma-separated list of sources to display.
+Properties coming from a source other than those in this list are ignored.
+Each source must be one of the following:
+.Sy local , default , inherited , temporary , received , No or Sy none .
+The default value is all sources.
+.It Fl t Ar type
+A comma-separated list of types to display, where
+.Ar type
+is one of
+.Sy filesystem , snapshot , volume , bookmark , No or Sy all .
+.Sy fs ,
+.Sy snap ,
+or
+.Sy vol
+can be used as aliases for
+.Sy filesystem ,
+.Sy snapshot ,
+or
+.Sy volume .
+.El
+.It Xo
+.Nm zfs
+.Cm inherit
+.Op Fl rS
+.Ar property Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns …
+.Xc
+Clears the specified property, causing it to be inherited from an ancestor,
+restored to default if no ancestor has the property set, or with the
+.Fl S
+option reverted to the received value if one exists.
+See
+.Xr zfsprops 7
+for a listing of default values, and details on which properties can be
+inherited.
+.Bl -tag -width "-r"
+.It Fl r
+Recursively inherit the given property for all children.
+.It Fl S
+Revert the property to the received value, if one exists;
+otherwise, for non-inheritable properties, to the default;
+otherwise, operate as if the
+.Fl S
+option was not specified.
+.El
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 1, 4, 6, 7, 11, 14, 16 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Creating a ZFS File System Hierarchy
+The following commands create a file system named
+.Ar pool/home
+and a file system named
+.Ar pool/home/bob .
+The mount point
+.Pa /export/home
+is set for the parent file system, and is automatically inherited by the child
+file system.
+.Dl # Nm zfs Cm create Ar pool/home
+.Dl # Nm zfs Cm set Sy mountpoint Ns = Ns Ar /export/home pool/home
+.Dl # Nm zfs Cm create Ar pool/home/bob
+.
+.Ss Example 2 : No Disabling and Enabling File System Compression
+The following command disables the
+.Sy compression
+property for all file systems under
+.Ar pool/home .
+The next command explicitly enables
+.Sy compression
+for
+.Ar pool/home/anne .
+.Dl # Nm zfs Cm set Sy compression Ns = Ns Sy off Ar pool/home
+.Dl # Nm zfs Cm set Sy compression Ns = Ns Sy on Ar pool/home/anne
+.
+.Ss Example 3 : No Setting a Quota on a ZFS File System
+The following command sets a quota of 50 Gbytes for
+.Ar pool/home/bob :
+.Dl # Nm zfs Cm set Sy quota Ns = Ns Ar 50G pool/home/bob
+.
+.Ss Example 4 : No Listing ZFS Properties
+The following command lists all properties for
+.Ar pool/home/bob :
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm get Sy all Ar pool/home/bob
+NAME PROPERTY VALUE SOURCE
+pool/home/bob type filesystem -
+pool/home/bob creation Tue Jul 21 15:53 2009 -
+pool/home/bob used 21K -
+pool/home/bob available 20.0G -
+pool/home/bob referenced 21K -
+pool/home/bob compressratio 1.00x -
+pool/home/bob mounted yes -
+pool/home/bob quota 20G local
+pool/home/bob reservation none default
+pool/home/bob recordsize 128K default
+pool/home/bob mountpoint /pool/home/bob default
+pool/home/bob sharenfs off default
+pool/home/bob checksum on default
+pool/home/bob compression on local
+pool/home/bob atime on default
+pool/home/bob devices on default
+pool/home/bob exec on default
+pool/home/bob setuid on default
+pool/home/bob readonly off default
+pool/home/bob zoned off default
+pool/home/bob snapdir hidden default
+pool/home/bob acltype off default
+pool/home/bob aclmode discard default
+pool/home/bob aclinherit restricted default
+pool/home/bob canmount on default
+pool/home/bob xattr on default
+pool/home/bob copies 1 default
+pool/home/bob version 4 -
+pool/home/bob utf8only off -
+pool/home/bob normalization none -
+pool/home/bob casesensitivity sensitive -
+pool/home/bob vscan off default
+pool/home/bob nbmand off default
+pool/home/bob sharesmb off default
+pool/home/bob refquota none default
+pool/home/bob refreservation none default
+pool/home/bob primarycache all default
+pool/home/bob secondarycache all default
+pool/home/bob usedbysnapshots 0 -
+pool/home/bob usedbydataset 21K -
+pool/home/bob usedbychildren 0 -
+pool/home/bob usedbyrefreservation 0 -
+.Ed
+.Pp
+The following command gets a single property value:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm get Fl H o Sy value compression Ar pool/home/bob
+on
+.Ed
+.Pp
+The following command gets a single property value recursively in JSON format:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm get Fl j Fl r Sy mountpoint Ar pool/home | Nm jq
+{
+ "output_version": {
+ "command": "zfs get",
+ "vers_major": 0,
+ "vers_minor": 1
+ },
+ "datasets": {
+ "pool/home": {
+ "name": "pool/home",
+ "type": "FILESYSTEM",
+ "pool": "pool",
+ "createtxg": "10",
+ "properties": {
+ "mountpoint": {
+ "value": "/pool/home",
+ "source": {
+ "type": "DEFAULT",
+ "data": "-"
+ }
+ }
+ }
+ },
+ "pool/home/bob": {
+ "name": "pool/home/bob",
+ "type": "FILESYSTEM",
+ "pool": "pool",
+ "createtxg": "1176",
+ "properties": {
+ "mountpoint": {
+ "value": "/pool/home/bob",
+ "source": {
+ "type": "DEFAULT",
+ "data": "-"
+ }
+ }
+ }
+ }
+ }
+}
+.Ed
+.Pp
+The following command lists all properties with local settings for
+.Ar pool/home/bob :
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm get Fl r s Sy local Fl o Sy name , Ns Sy property , Ns Sy value all Ar pool/home/bob
+NAME PROPERTY VALUE
+pool/home/bob quota 20G
+pool/home/bob compression on
+.Ed
+.
+.Ss Example 5 : No Inheriting ZFS Properties
+The following command causes
+.Ar pool/home/bob No and Ar pool/home/anne
+to inherit the
+.Sy checksum
+property from their parent.
+.Dl # Nm zfs Cm inherit Sy checksum Ar pool/home/bob pool/home/anne
+.
+.Ss Example 6 : No Setting User Properties
+The following example sets the user-defined
+.Ar com.example : Ns Ar department
+property for a dataset:
+.Dl # Nm zfs Cm set Ar com.example : Ns Ar department Ns = Ns Ar 12345 tank/accounting
+.
+.Ss Example 7 : No Setting sharenfs Property Options on a ZFS File System
+The following commands show how to set
+.Sy sharenfs
+property options to enable read-write
+access for a set of IP addresses and to enable root access for system
+.Qq neo
+on the
+.Ar tank/home
+file system:
+.Dl # Nm zfs Cm set Sy sharenfs Ns = Ns ' Ns Ar rw Ns =@123.123.0.0/16:[::1],root= Ns Ar neo Ns ' tank/home
+.Pp
+If you are using DNS for host name resolution,
+specify the fully-qualified hostname.
+.
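+.Ss Example 8 : No Updating a Property Without Remounting
+As an illustrative use of the
+.Fl u
+flag, the following command records a new
+.Sy mountpoint
+value for a dataset without mounting or sharing it:
+.Dl # Nm zfs Cm set Fl u Sy mountpoint Ns = Ns Ar /export/home pool/home
+.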
+.Sh SEE ALSO
+.Xr zfsprops 7 ,
+.Xr zfs-list 8
diff --git a/share/man/man8/zfs-jail.8 b/share/man/man8/zfs-jail.8
@@ -0,0 +1,124 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2011, Pawel Jakub Dawidek <pjd@FreeBSD.org>
+.\" Copyright (c) 2012, Glen Barber <gjb@FreeBSD.org>
+.\" Copyright (c) 2012, Bryan Drewery <bdrewery@FreeBSD.org>
+.\" Copyright (c) 2013, Steven Hartland <smh@FreeBSD.org>
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright (c) 2014, Xin LI <delphij@FreeBSD.org>
+.\" Copyright (c) 2014-2015, The FreeBSD Foundation, All Rights Reserved.
+.\" Copyright (c) 2016 Nexenta Systems, Inc. All Rights Reserved.
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd May 27, 2021
+.Dt ZFS-JAIL 8
+.Os
+.
+.Sh NAME
+.Nm zfs-jail
+.Nd attach or detach ZFS filesystem from FreeBSD jail
+.Sh SYNOPSIS
+.Nm zfs Cm jail
+.Ar jailid Ns | Ns Ar jailname
+.Ar filesystem
+.Nm zfs Cm unjail
+.Ar jailid Ns | Ns Ar jailname
+.Ar filesystem
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm jail
+.Ar jailid Ns | Ns Ar jailname
+.Ar filesystem
+.Xc
+Attach the specified
+.Ar filesystem
+to the jail identified by JID
+.Ar jailid
+or name
+.Ar jailname .
+From now on this file system tree can be managed from within a jail if the
+.Sy jailed
+property has been set.
+To use this functionality, the jail needs the
+.Sy allow.mount
+and
+.Sy allow.mount.zfs
+parameters set to
+.Sy 1
+and the
+.Sy enforce_statfs
+parameter set to a value lower than
+.Sy 2 .
+.Pp
+You cannot attach a jailed dataset's children to another jail.
+Nor can you attach the root file system
+of the jail or any dataset which needs to be mounted before the zfs rc script
+is run inside the jail, as it would be attached unmounted until it is
+mounted from the rc script inside the jail.
+.Pp
+To allow management of the dataset from within a jail, the
+.Sy jailed
+property has to be set and the jail needs access to the
+.Pa /dev/zfs
+device.
+The
+.Sy quota
+property cannot be changed from within a jail.
+.Pp
+After a dataset is attached to a jail and the
+.Sy jailed
+property is set, a jailed file system cannot be mounted outside the jail,
+since the jail administrator might have set the mount point to an unacceptable
+value.
+.Pp
+See
+.Xr jail 8
+for more information on managing jails.
+Jails are a
+.Fx
+feature and are not relevant on other platforms.
+.It Xo
+.Nm zfs
+.Cm unjail
+.Ar jailid Ns | Ns Ar jailname
+.Ar filesystem
+.Xc
+Detaches the specified
+.Ar filesystem
+from the jail identified by JID
+.Ar jailid
+or name
+.Ar jailname .
+.El
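+.
+.Sh EXAMPLES
+As an illustrative sketch, assuming a jail with JID
+.Ar 23
+already exists, a dataset can be marked as jailed and attached to it:
+.Dl # Nm zfs Cm set Sy jailed Ns = Ns Sy on Ar pool/jailed
+.Dl # Nm zfs Cm jail Ar 23 pool/jailed
+.Pp
+The same dataset is detached again with:
+.Dl # Nm zfs Cm unjail Ar 23 pool/jailed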
+.Sh SEE ALSO
+.Xr zfsprops 7 ,
+.Xr jail 8
diff --git a/share/man/man8/zfs-list.8 b/share/man/man8/zfs-list.8
@@ -0,0 +1,353 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd February 8, 2024
+.Dt ZFS-LIST 8
+.Os
+.
+.Sh NAME
+.Nm zfs-list
+.Nd list properties of ZFS datasets
+.Sh SYNOPSIS
+.Nm zfs
+.Cm list
+.Op Fl r Ns | Ns Fl d Ar depth
+.Op Fl Hp
+.Op Fl j Op Ar --json-int
+.Oo Fl o Ar property Ns Oo , Ns Ar property Oc Ns … Oc
+.Oo Fl s Ar property Oc Ns …
+.Oo Fl S Ar property Oc Ns …
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Oo Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Oc Ns …
+.
+.Sh DESCRIPTION
+If datasets are given on the command line, property information is listed for
+them; they can be specified by absolute or relative pathname.
+By default, all file systems and volumes are displayed.
+Snapshots are displayed if the
+.Sy listsnapshots
+pool property is
+.Sy on
+.Po the default is
+.Sy off
+.Pc ,
+or if the
+.Fl t Sy snapshot
+or
+.Fl t Sy all
+options are specified.
+The following fields are displayed:
+.Sy name , Sy used , Sy available , Sy referenced , Sy mountpoint .
+.Bl -tag -width "-H"
+.It Fl H
+Used for scripting mode.
+Do not print headers and separate fields by a single tab instead of arbitrary
+white space.
+.It Fl j , -json Op Ar --json-int
+Print the output in JSON format.
+Specify
+.Sy --json-int
+to print the numbers in integer format instead of strings in JSON output.
+.It Fl d Ar depth
+Recursively display any children of the dataset, limiting the recursion to
+.Ar depth .
+A
+.Ar depth
+of
+.Sy 1
+will display only the dataset and its direct children.
+.It Fl o Ar property
+A comma-separated list of properties to display.
+The property must be:
+.Bl -bullet -compact
+.It
+One of the properties described in the
+.Sx Native Properties
+section of
+.Xr zfsprops 7
+.It
+A user property
+.It
+The value
+.Sy name
+to display the dataset name
+.It
+The value
+.Sy space
+to display space usage properties on file systems and volumes.
+This is a shortcut for specifying
+.Fl o Ns \ \& Ns Sy name , Ns Sy avail , Ns Sy used , Ns Sy usedsnap , Ns
+.Sy usedds , Ns Sy usedrefreserv , Ns Sy usedchild
+.Fl t Sy filesystem , Ns Sy volume .
+.El
+.It Fl p
+Display numbers in parsable
+.Pq exact
+values.
+.It Fl r
+Recursively display any children of the dataset on the command line.
+.It Fl s Ar property
+A property for sorting the output by column in ascending order based on the
+value of the property.
+The property must be one of the properties described in the
+.Sx Properties
+section of
+.Xr zfsprops 7
+or the value
+.Sy name
+to sort by the dataset name.
+Multiple properties can be specified at one time using multiple
+.Fl s
+property options.
+Multiple
+.Fl s
+options are evaluated from left to right in decreasing order of importance.
+The following is a list of sorting criteria:
+.Bl -bullet -compact
+.It
+Numeric types sort in numeric order.
+.It
+String types sort in alphabetical order.
+.It
+Rows with values inappropriate for sorting are placed at the bottom of the
+list, regardless of the specified ordering.
+.El
+.Pp
+If no sorting options are specified, the existing behavior of
+.Nm zfs Cm list
+is preserved.
+.It Fl S Ar property
+Same as
+.Fl s ,
+but sorts by property in descending order.
+.It Fl t Ar type
+A comma-separated list of types to display, where
+.Ar type
+is one of
+.Sy filesystem ,
+.Sy snapshot ,
+.Sy volume ,
+.Sy bookmark ,
+or
+.Sy all .
+For example, specifying
+.Fl t Sy snapshot
+displays only snapshots.
+.Sy fs ,
+.Sy snap ,
+or
+.Sy vol
+can be used as aliases for
+.Sy filesystem ,
+.Sy snapshot ,
+or
+.Sy volume .
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 5 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Listing ZFS Datasets
+The following command lists all active file systems and volumes in the system.
+Snapshots are displayed if
+.Sy listsnaps Ns = Ns Sy on .
+The default is
+.Sy off .
+See
+.Xr zpoolprops 7
+for more information on pool properties.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm list
+NAME USED AVAIL REFER MOUNTPOINT
+pool 450K 457G 18K /pool
+pool/home 315K 457G 21K /export/home
+pool/home/anne 18K 457G 18K /export/home/anne
+pool/home/bob 276K 457G 276K /export/home/bob
+.Ed
+.Ss Example 2 : No Listing ZFS filesystems and snapshots in JSON format
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm list Fl j Fl t Ar filesystem,snapshot | Cm jq
+{
+ "output_version": {
+ "command": "zfs list",
+ "vers_major": 0,
+ "vers_minor": 1
+ },
+ "datasets": {
+ "pool": {
+ "name": "pool",
+ "type": "FILESYSTEM",
+ "pool": "pool",
+ "properties": {
+ "used": {
+ "value": "290K",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "available": {
+ "value": "30.5G",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "referenced": {
+ "value": "24K",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "mountpoint": {
+ "value": "/pool",
+ "source": {
+ "type": "DEFAULT",
+ "data": "-"
+ }
+ }
+ }
+ },
+ "pool/home": {
+ "name": "pool/home",
+ "type": "FILESYSTEM",
+ "pool": "pool",
+ "properties": {
+ "used": {
+ "value": "48K",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "available": {
+ "value": "30.5G",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "referenced": {
+ "value": "24K",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "mountpoint": {
+ "value": "/mnt/home",
+ "source": {
+ "type": "LOCAL",
+ "data": "-"
+ }
+ }
+ }
+ },
+ "pool/home/bob": {
+ "name": "pool/home/bob",
+ "type": "FILESYSTEM",
+ "pool": "pool",
+ "properties": {
+ "used": {
+ "value": "24K",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "available": {
+ "value": "30.5G",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "referenced": {
+ "value": "24K",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "mountpoint": {
+ "value": "/mnt/home/bob",
+ "source": {
+ "type": "INHERITED",
+ "data": "pool/home"
+ }
+ }
+ }
+ },
+ "pool/home/bob@v1": {
+ "name": "pool/home/bob@v1",
+ "type": "SNAPSHOT",
+ "pool": "pool",
+ "dataset": "pool/home/bob",
+ "snapshot_name": "v1",
+ "properties": {
+ "used": {
+ "value": "0B",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "available": {
+ "value": "-",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "referenced": {
+ "value": "24K",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "mountpoint": {
+ "value": "-",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ }
+ }
+ }
+ }
+}
+.Ed
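+.
+.Ss Example 3 : No Sorting Datasets by Space Used
+As an illustrative use of the sorting flags, the following command lists
+.Ar pool/home
+and its children in ascending order of the
+.Sy used
+property:
+.Dl # Nm zfs Cm list Fl r Fl s Sy used Ar pool/home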
+.
+.Sh SEE ALSO
+.Xr zfsprops 7 ,
+.Xr zfs-get 8
diff --git a/share/man/man8/zfs-load-key.8 b/share/man/man8/zfs-load-key.8
@@ -0,0 +1,304 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd January 13, 2020
+.Dt ZFS-LOAD-KEY 8
+.Os
+.
+.Sh NAME
+.Nm zfs-load-key
+.Nd load, unload, or change encryption key of ZFS dataset
+.Sh SYNOPSIS
+.Nm zfs
+.Cm load-key
+.Op Fl nr
+.Op Fl L Ar keylocation
+.Fl a Ns | Ns Ar filesystem
+.Nm zfs
+.Cm unload-key
+.Op Fl r
+.Fl a Ns | Ns Ar filesystem
+.Nm zfs
+.Cm change-key
+.Op Fl l
+.Op Fl o Ar keylocation Ns = Ns Ar value
+.Op Fl o Ar keyformat Ns = Ns Ar value
+.Op Fl o Ar pbkdf2iters Ns = Ns Ar value
+.Ar filesystem
+.Nm zfs
+.Cm change-key
+.Fl i
+.Op Fl l
+.Ar filesystem
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm load-key
+.Op Fl nr
+.Op Fl L Ar keylocation
+.Fl a Ns | Ns Ar filesystem
+.Xc
+Load the key for
+.Ar filesystem ,
+allowing it and all children that inherit the
+.Sy keylocation
+property to be accessed.
+The key will be expected in the format specified by the
+.Sy keyformat
+and location specified by the
+.Sy keylocation
+property.
+Note that if the
+.Sy keylocation
+is set to
+.Sy prompt ,
+the terminal will interactively wait for the key to be entered.
+Loading a key will not automatically mount the dataset.
+If that functionality is desired,
+.Nm zfs Cm mount Fl l
+will ask for the key and mount the dataset
+.Po
+see
+.Xr zfs-mount 8
+.Pc .
+Once the key is loaded the
+.Sy keystatus
+property will become
+.Sy available .
+.Bl -tag -width "-r"
+.It Fl r
+Recursively loads the keys for the specified filesystem and all descendent
+encryption roots.
+.It Fl a
+Loads the keys for all encryption roots in all imported pools.
+.It Fl n
+Do a dry-run
+.Pq Qq No-op
+.Cm load-key .
+This will cause
+.Nm zfs
+to simply check that the provided key is correct.
+This command may be run even if the key is already loaded.
+.It Fl L Ar keylocation
+Use
+.Ar keylocation
+instead of the
+.Sy keylocation
+property.
+This will not change the value of the property on the dataset.
+Note that if used with either
+.Fl r
+or
+.Fl a ,
+.Ar keylocation
+may only be given as
+.Sy prompt .
+.El
+.It Xo
+.Nm zfs
+.Cm unload-key
+.Op Fl r
+.Fl a Ns | Ns Ar filesystem
+.Xc
+Unloads a key from ZFS, removing the ability to access the dataset and all of
+its children that inherit the
+.Sy keylocation
+property.
+This requires that the dataset is not currently open or mounted.
+Once the key is unloaded the
+.Sy keystatus
+property will become
+.Sy unavailable .
+.Bl -tag -width "-r"
+.It Fl r
+Recursively unloads the keys for the specified filesystem and all descendent
+encryption roots.
+.It Fl a
+Unloads the keys for all encryption roots in all imported pools.
+.El
+.It Xo
+.Nm zfs
+.Cm change-key
+.Op Fl l
+.Op Fl o Ar keylocation Ns = Ns Ar value
+.Op Fl o Ar keyformat Ns = Ns Ar value
+.Op Fl o Ar pbkdf2iters Ns = Ns Ar value
+.Ar filesystem
+.Xc
+.It Xo
+.Nm zfs
+.Cm change-key
+.Fl i
+.Op Fl l
+.Ar filesystem
+.Xc
+Changes the user's key (e.g. a passphrase) used to access a dataset.
+This command requires that the existing key for the dataset is already loaded.
+This command may also be used to change the
+.Sy keylocation ,
+.Sy keyformat ,
+and
+.Sy pbkdf2iters
+properties as needed.
+If the dataset was not previously an encryption root it will become one.
+Alternatively, the
+.Fl i
+flag may be provided to cause an encryption root to inherit the parent's key
+instead.
+.Pp
+If the user's key is compromised,
+.Nm zfs Cm change-key
+does not necessarily protect existing or newly-written data from attack.
+Newly-written data will continue to be encrypted with the same master key as
+the existing data.
+The master key is compromised if an attacker obtains a
+user key and the corresponding wrapped master key.
+Currently,
+.Nm zfs Cm change-key
+does not overwrite the previous wrapped master key on disk, so it is
+accessible via forensic analysis for an indeterminate length of time.
+.Pp
+In the event of a master key compromise, ideally the drives should be securely
+erased to remove all the old data (which is readable using the compromised
+master key), a new pool created, and the data copied back.
+This can be approximated in place by creating new datasets, copying the data
+.Pq e.g. using Nm zfs Cm send | Nm zfs Cm recv ,
+and then clearing the free space with
+.Nm zpool Cm trim Fl -secure
+if supported by your hardware, otherwise
+.Nm zpool Cm initialize .
+.Bl -tag -width "-r"
+.It Fl l
+Ensures the key is loaded before attempting to change the key.
+This is effectively equivalent to running
+.Nm zfs Cm load-key Ar filesystem ; Nm zfs Cm change-key Ar filesystem
+.It Fl o Ar property Ns = Ns Ar value
+Allows the user to set encryption key properties
+.Pq Sy keyformat , keylocation , No and Sy pbkdf2iters
+while changing the key.
+This is the only way to alter
+.Sy keyformat
+and
+.Sy pbkdf2iters
+after the dataset has been created.
+.It Fl i
+Indicates that zfs should make
+.Ar filesystem
+inherit the key of its parent.
+Note that this command can only be run on an encryption root
+that has an encrypted parent.
+.El
+.El
+.Ss Encryption
+Enabling the
+.Sy encryption
+feature allows for the creation of encrypted filesystems and volumes.
+ZFS will encrypt file and volume data, file attributes, ACLs, permission bits,
+directory listings, FUID mappings, and
+.Sy userused Ns / Ns Sy groupused
+data.
+ZFS will not encrypt metadata related to the pool structure, including
+dataset and snapshot names, dataset hierarchy, properties, file size, file
+holes, and deduplication tables (though the deduplicated data itself is
+encrypted).
+.Pp
+Key rotation is managed by ZFS.
+Changing the user's key (e.g. a passphrase)
+does not require re-encrypting the entire dataset.
+Datasets can be scrubbed,
+resilvered, renamed, and deleted without the encryption keys being loaded (see
+the
+.Cm load-key
+subcommand for more info on key loading).
+.Pp
+Creating an encrypted dataset requires specifying the
+.Sy encryption No and Sy keyformat
+properties at creation time, along with an optional
+.Sy keylocation No and Sy pbkdf2iters .
+After entering an encryption key, the
+created dataset will become an encryption root.
+Any descendant datasets will
+inherit their encryption key from the encryption root by default, meaning that
+loading, unloading, or changing the key for the encryption root will implicitly
+do the same for all inheriting datasets.
+If this inheritance is not desired, simply supply a
+.Sy keyformat
+when creating the child dataset or use
+.Nm zfs Cm change-key
+to break an existing relationship, creating a new encryption root on the child.
+Note that the child's
+.Sy keyformat
+may match that of the parent while still creating a new encryption root, and
+that changing the
+.Sy encryption
+property alone does not create a new encryption root; this would simply use a
+different cipher suite with the same key as its encryption root.
+The one exception is that clones will always use their origin's encryption key.
+As a result of this exception, some encryption-related properties
+.Pq namely Sy keystatus , keyformat , keylocation , No and Sy pbkdf2iters
+do not inherit like other ZFS properties and instead use the value determined
+by their encryption root.
+Encryption root inheritance can be tracked via the read-only
+.Sy encryptionroot
+property.
+.Pp
+Encryption changes the behavior of a few ZFS
+operations.
+Encryption is applied after compression so compression ratios are preserved.
+Normally checksums in ZFS are 256 bits long, but for encrypted data
+the checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from
+the encryption suite, which provides additional protection against maliciously
+altered data.
+Deduplication is still possible with encryption enabled but for security,
+datasets will only deduplicate against themselves, their snapshots,
+and their clones.
+.Pp
+There are a few limitations on encrypted datasets.
+Encrypted data cannot be embedded via the
+.Sy embedded_data
+feature.
+Encrypted datasets may not have
+.Sy copies Ns = Ns Em 3
+since the implementation stores some encryption metadata where the third copy
+would normally be.
+Since compression is applied before encryption, datasets may
+be vulnerable to a CRIME-like attack if applications accessing the data allow
+for it.
+Deduplication with encryption will leak information about which blocks
+are equivalent in a dataset and will incur an extra CPU cost for each block
+written.
+.
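+.Sh EXAMPLES
+The following sketch creates an encryption root protected by a passphrase,
+then unloads and reloads its key
+.Pq the dataset name is illustrative :
+.Dl # Nm zfs Cm create Fl o Sy encryption Ns = Ns Sy on Fl o Sy keyformat Ns = Ns Sy passphrase Ar pool/secret
+.Dl # Nm zfs Cm unmount Ar pool/secret
+.Dl # Nm zfs Cm unload-key Ar pool/secret
+.Dl # Nm zfs Cm load-key Ar pool/secret
+.Pp
+Since
+.Sy pbkdf2iters
+can only be altered with
+.Nm zfs Cm change-key ,
+a higher iteration count could be set with:
+.Dl # Nm zfs Cm change-key Fl o Sy pbkdf2iters Ns = Ns Ar 1000000 pool/secret
+.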
+.Sh SEE ALSO
+.Xr zfsprops 7 ,
+.Xr zfs-create 8 ,
+.Xr zfs-set 8
diff --git a/share/man/man8/zfs-mount.8 b/share/man/man8/zfs-mount.8
@@ -0,0 +1,139 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd February 16, 2019
+.Dt ZFS-MOUNT 8
+.Os
+.
+.Sh NAME
+.Nm zfs-mount
+.Nd manage mount state of ZFS filesystems
+.Sh SYNOPSIS
+.Nm zfs
+.Cm mount
+.Op Fl j
+.Nm zfs
+.Cm mount
+.Op Fl Oflv
+.Op Fl o Ar options
+.Fl a Ns | Ns Fl R Ar filesystem Ns | Ns Ar filesystem
+.Nm zfs
+.Cm unmount
+.Op Fl fu
+.Fl a Ns | Ns Ar filesystem Ns | Ns Ar mountpoint
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm mount
+.Op Fl j
+.Xc
+Displays all ZFS file systems currently mounted.
+.Bl -tag -width "-j"
+.It Fl j , -json
+Displays all mounted file systems in JSON format.
+.El
+.It Xo
+.Nm zfs
+.Cm mount
+.Op Fl Oflv
+.Op Fl o Ar options
+.Fl a Ns | Ns Fl R Ar filesystem Ns | Ns Ar filesystem
+.Xc
+Mount a ZFS file system on the path described by its
+.Sy mountpoint
+property, if the path exists and is empty.
+If
+.Sy mountpoint
+is set to
+.Em legacy ,
+the filesystem should instead be mounted using
+.Xr mount 8 .
+.Bl -tag -width "-O"
+.It Fl O
+Perform an overlay mount.
+Allows mounting in a non-empty
+.Sy mountpoint .
+See
+.Xr mount 8
+for more information.
+.It Fl a
+Mount all available ZFS file systems.
+Invoked automatically as part of the boot process if configured.
+.It Fl R
+Mount the specified filesystems along with all their children.
+.It Ar filesystem
+Mount the specified filesystem.
+.It Fl o Ar options
+An optional, comma-separated list of mount options to use temporarily for the
+duration of the mount.
+See the
+.Em Temporary Mount Point Properties
+section of
+.Xr zfsprops 7
+for details.
+.It Fl l
+Load keys for encrypted filesystems as they are being mounted.
+This is equivalent to executing
+.Nm zfs Cm load-key
+on each encryption root before mounting it.
+Note that if a filesystem has
+.Sy keylocation Ns = Ns Sy prompt ,
+this will cause the terminal to interactively block after asking for the key.
+.It Fl v
+Report mount progress.
+.It Fl f
+Attempt to force mounting of all filesystems, even those that couldn't normally
+be mounted (e.g. redacted datasets).
+.El
+.It Xo
+.Nm zfs
+.Cm unmount
+.Op Fl fu
+.Fl a Ns | Ns Ar filesystem Ns | Ns Ar mountpoint
+.Xc
+Unmounts currently mounted ZFS file systems.
+.Bl -tag -width "-a"
+.It Fl a
+Unmount all available ZFS file systems.
+Invoked automatically as part of the shutdown process.
+.It Fl f
+Forcefully unmount the file system, even if it is currently in use.
+This option is not supported on Linux.
+.It Fl u
+Unload keys for any encryption roots unmounted by this command.
+.It Ar filesystem Ns | Ns Ar mountpoint
+Unmount the specified filesystem.
+The command can also be given a path to a ZFS file system mount point on the
+system.
+.El
+.El
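+.
+.Sh EXAMPLES
+The following illustrative commands mount all available ZFS file systems,
+mount one dataset read-only for the duration of the mount, and unmount a
+file system by its mount point
+.Pq names are hypothetical :
+.Dl # Nm zfs Cm mount Fl a
+.Dl # Nm zfs Cm mount Fl o Ar ro Ar pool/home
+.Dl # Nm zfs Cm unmount Ar /export/home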
diff --git a/share/man/man8/zfs-program.8 b/share/man/man8/zfs-program.8
@@ -0,0 +1,644 @@
+.\"
+.\" This file and its contents are supplied under the terms of the
+.\" Common Development and Distribution License ("CDDL"), version 1.0.
+.\" You may only use this file in accordance with the terms of version
+.\" 1.0 of the CDDL.
+.\"
+.\" A full copy of the text of the CDDL should have accompanied this
+.\" source. A copy of the CDDL is also available via the Internet at
+.\" http://www.illumos.org/license/CDDL.
+.\"
+.\" Copyright (c) 2016, 2019 by Delphix. All Rights Reserved.
+.\" Copyright (c) 2019, 2020 by Christian Schwarz. All Rights Reserved.
+.\" Copyright 2020 Joyent, Inc.
+.\"
+.Dd May 27, 2021
+.Dt ZFS-PROGRAM 8
+.Os
+.
+.Sh NAME
+.Nm zfs-program
+.Nd execute ZFS channel programs
+.Sh SYNOPSIS
+.Nm zfs
+.Cm program
+.Op Fl jn
+.Op Fl t Ar instruction-limit
+.Op Fl m Ar memory-limit
+.Ar pool
+.Ar script
+.Op Ar script arguments
+.
+.Sh DESCRIPTION
+The ZFS channel program interface allows ZFS administrative operations to be
+run programmatically as a Lua script.
+The entire script is executed atomically, with no other administrative
+operations taking effect concurrently.
+A library of ZFS calls is made available to channel program scripts.
+Channel programs may only be run with root privileges.
+.Pp
+A modified version of the Lua 5.2 interpreter is used to run channel program
+scripts.
+The Lua 5.2 manual can be found at
+.Lk http://www.lua.org/manual/5.2/
+.Pp
+The channel program given by
+.Ar script
+will be run on
+.Ar pool ,
+and any attempts to access or modify other pools will cause an error.
+.
+.Sh OPTIONS
+.Bl -tag -width "-t"
+.It Fl j , -json
+Display channel program output in JSON format.
+If this flag is specified and standard output is empty,
+the channel program encountered an error.
+The details of such an error will be printed to standard error in plain text.
+.It Fl n
+Executes a read-only channel program, which runs faster.
+The program cannot change on-disk state by calling functions from the
+zfs.sync submodule.
+The program can be used to gather information such as properties and to
+determine if changes would succeed (zfs.check.*).
+Without this flag, all pending changes must be synced to disk before a
+channel program can complete.
+.It Fl t Ar instruction-limit
+Limit the number of Lua instructions to execute.
+If a channel program executes more than the specified number of instructions,
+it will be stopped and an error will be returned.
+The default limit is 10 million instructions, and it can be set to a maximum of
+100 million instructions.
+.It Fl m Ar memory-limit
+Memory limit, in bytes.
+If a channel program attempts to allocate more memory than the given limit, it
+will be stopped and an error returned.
+The default memory limit is 10 MiB, and can be set to a maximum of 100 MiB.
+.El
+.Pp
+All remaining argument strings will be passed directly to the Lua script as
+described in the
+.Sx LUA INTERFACE
+section below.
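+.Pp
+For example, a channel program stored at the hypothetical path
+.Pa /tmp/snap.lua
+could be run against pool
+.Ar tank
+with a raised instruction limit and one script argument:
+.Dl # Nm zfs Cm program Fl t Ar 50000000 tank /tmp/snap.lua myarg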
+.
+.Sh LUA INTERFACE
+A channel program can be invoked either from the command line, or via a library
+call to
+.Fn lzc_channel_program .
+.
+.Ss Arguments
+Arguments passed to the channel program are converted to a Lua table.
+If invoked from the command line, extra arguments to the Lua script will be
+accessible as an array stored in the argument table with the key 'argv':
+.Bd -literal -compact -offset indent
+args = ...
+argv = args["argv"]
+-- argv == {1="arg1", 2="arg2", ...}
+.Ed
+.Pp
+If invoked from the libzfs interface, an arbitrary argument list can be
+passed to the channel program, which is accessible via the same
+.Qq Li ...
+syntax in Lua:
+.Bd -literal -compact -offset indent
+args = ...
+-- args == {"foo"="bar", "baz"={...}, ...}
+.Ed
+.Pp
+Note that because Lua arrays are 1-indexed, arrays passed to Lua from the
+libzfs interface will have their indices incremented by 1.
+That is, the element
+in
+.Va arr[0]
+in a C array passed to a channel program will be stored in
+.Va arr[1]
+when accessed from Lua.
+.
+.Ss Return Values
+Lua return statements take the form:
+.Dl return ret0, ret1, ret2, ...
+.Pp
+Return statements returning multiple values are permitted internally in a
+channel program script, but attempting to return more than one value from the
+top level of the channel program is not permitted and will throw an error.
+However, tables containing multiple values can still be returned.
+If invoked from the command line, a return statement:
+.Bd -literal -compact -offset indent
+a = {foo="bar", baz=2}
+return a
+.Ed
+.Pp
+Will be output formatted as:
+.Bd -literal -compact -offset indent
+Channel program fully executed with return value:
+ return:
+ baz: 2
+ foo: 'bar'
+.Ed
+.
+.Ss Fatal Errors
+If the channel program encounters a fatal error while running, a non-zero exit
+status will be returned.
+If more information about the error is available, a singleton list will be
+returned detailing the error:
+.Dl error: \&"error string, including Lua stack trace"
+.Pp
+If a fatal error is returned, the channel program may not have executed at all,
+may have partially executed, or may have fully executed but failed to pass a
+return value back to userland.
+.Pp
+If the channel program exhausts an instruction or memory limit, a fatal error
+will be generated and the program will be stopped, leaving the program partially
+executed.
+No attempt is made to reverse or undo any operations already performed.
+Note that because both the instruction count and amount of memory used by a
+channel program are deterministic when run against the same inputs and
+filesystem state, as long as a channel program has run successfully once, you
+can guarantee that it will finish successfully against a similarly sized system.
+.Pp
+If a channel program attempts to return too large a value, the program will
+fully execute but exit with a nonzero status code and no return value.
+.Pp
+.Em Note :
+ZFS API functions do not generate Fatal Errors when correctly invoked; they
+return an error code and the channel program continues executing.
+See the
+.Sx ZFS API
+section below for function-specific details on error return codes.
+.
+.Ss Lua to C Value Conversion
+When invoking a channel program via the libzfs interface, it is necessary to
+translate arguments and return values from Lua values to their C equivalents,
+and vice-versa.
+.Pp
+There is a correspondence between nvlist values in C and Lua tables.
+A Lua table which is returned from the channel program will be recursively
+converted to an nvlist, with table values converted to their natural
+equivalents:
+.TS
+cw3 l c l .
+ string -> string
+ number -> int64
+ boolean -> boolean_value
+ nil -> boolean (no value)
+ table -> nvlist
+.TE
+.Pp
+Likewise, table keys are replaced by string equivalents as follows:
+.TS
+cw3 l c l .
+ string -> no change
+ number -> signed decimal string ("%lld")
+ boolean -> "true" | "false"
+.TE
+.Pp
+Any collision of table key strings (for example, the string "true" and a
+true boolean value) will cause a fatal error.
+.Pp
+Lua numbers are represented internally as signed 64-bit integers.
+.
+.Sh LUA STANDARD LIBRARY
+The following Lua built-in base library functions are available:
+.TS
+cw3 l l l l .
+ assert rawlen collectgarbage rawget
+ error rawset getmetatable select
+ ipairs setmetatable next tonumber
+ pairs tostring rawequal type
+.TE
+.Pp
+All functions in the
+.Em coroutine ,
+.Em string ,
+and
+.Em table
+built-in submodules are also available.
+A complete list and documentation of these modules is available in the Lua
+manual.
+.Pp
+The following base library functions have been disabled and are
+not available for use in channel programs:
+.TS
+cw3 l l l l l l .
+ dofile loadfile load pcall print xpcall
+.TE
+.
+.Sh ZFS API
+.
+.Ss Function Arguments
+Each API function takes a fixed set of required positional arguments and
+optional keyword arguments.
+For example, the destroy function takes a single positional string argument
+(the name of the dataset to destroy) and an optional "defer" keyword boolean
+argument.
+When using parentheses to specify the arguments to a Lua function, only
+positional arguments can be used:
+.Dl Sy zfs.sync.destroy Ns Pq \&"rpool@snap"
+.Pp
+To use keyword arguments, functions must be called with a single argument that
+is a Lua table containing entries mapping integers to positional arguments and
+strings to keyword arguments:
+.Dl Sy zfs.sync.destroy Ns Pq {1="rpool@snap", defer=true}
+.Pp
+The Lua language allows curly braces to be used in place of parentheses as
+syntactic sugar for this calling convention:
+.Dl Sy zfs.sync.snapshot Ns {"rpool@snap", defer=true}
+.
+.Ss Function Return Values
+If an API function succeeds, it returns 0.
+If it fails, it returns an error code and the channel program continues
+executing.
+API functions do not generate Fatal Errors except in the case of an
+unrecoverable internal file system error.
+.Pp
+In addition to returning an error code, some functions also return extra
+details describing what caused the error.
+This extra description is given as a second return value, and will always be a
+Lua table, or nil if no error details were returned.
+Different keys will exist in the error details table depending on the function
+and error case.
+Any such function may be called expecting a single return value:
+.Dl errno = Sy zfs.sync.promote Ns Pq dataset
+.Pp
+Or, the error details can be retrieved:
+.Bd -literal -compact -offset indent
+.No errno, details = Sy zfs.sync.promote Ns Pq dataset
+if (errno == EEXIST) then
+ assert(details ~= nil)
+ list_of_conflicting_snapshots = details
+end
+.Ed
+.Pp
+The following global aliases for API function error return codes are defined
+for use in channel programs:
+.TS
+cw3 l l l l l l l .
+ EPERM ECHILD ENODEV ENOSPC ENOENT EAGAIN ENOTDIR
+ ESPIPE ESRCH ENOMEM EISDIR EROFS EINTR EACCES
+ EINVAL EMLINK EIO EFAULT ENFILE EPIPE ENXIO
+ ENOTBLK EMFILE EDOM E2BIG EBUSY ENOTTY ERANGE
+ ENOEXEC EEXIST ETXTBSY EDQUOT EBADF EXDEV EFBIG
+.TE
+.
+.Ss API Functions
+For detailed descriptions of the exact behavior of any ZFS administrative
+operations, see the main
+.Xr zfs 8
+manual page.
+.Bl -tag -width "xx"
+.It Fn zfs.debug msg
+Record a debug message in the zfs_dbgmsg log.
+A log of these messages can be printed via mdb's "::zfs_dbgmsg" command, or
+can be monitored live by running
+.Dl dtrace -n 'zfs-dbgmsg{trace(stringof(arg0))}'
+.Pp
+.Bl -tag -compact -width "property (string)"
+.It Ar msg Pq string
+Debug message to be printed.
+.El
+.It Fn zfs.exists dataset
+Returns true if the given dataset exists, or false if it doesn't.
+A fatal error will be thrown if the dataset is not in the target pool.
+That is, in a channel program running on rpool,
+.Sy zfs.exists Ns Pq \&"rpool/nonexistent_fs"
+returns false, but
+.Sy zfs.exists Ns Pq \&"somepool/fs_that_may_exist"
+will error.
+.Pp
+.Bl -tag -compact -width "property (string)"
+.It Ar dataset Pq string
+Dataset to check for existence.
+Must be in the target pool.
+.El
+.It Fn zfs.get_prop dataset property
+Returns two values.
+First, a string, number or table containing the property value for the given
+dataset.
+Second, a string containing the source of the property (i.e. the name of the
+dataset in which it was set or nil if it is readonly).
+Throws a Lua error if the dataset is invalid or the property doesn't exist.
+Note that Lua only supports int64 number types whereas ZFS number properties
+are uint64.
+This means very large values (like GUIDs) may wrap around and appear negative;
+see the sketch after the argument list below.
+.Pp
+.Bl -tag -compact -width "property (string)"
+.It Ar dataset Pq string
+Filesystem or snapshot path to retrieve properties from.
+.It Ar property Pq string
+Name of property to retrieve.
+All filesystem, snapshot and volume properties are supported except for
+.Sy mounted
+and
+.Sy iscsioptions .
+Also supports the
+.Sy written@ Ns Ar snap
+and
+.Sy written# Ns Ar bookmark
+properties and the
+.Ao Sy user Ns | Ns Sy group Ac Ns Ao Sy quota Ns | Ns Sy used Ac Ns Sy @ Ns Ar id
+properties, though the id must be in numeric form.
+.El
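+.Pp
+A minimal sketch of the wraparound noted above (dataset name hypothetical):
+.Bd -literal -compact -offset indent
+-- "guid" is a uint64 property; Lua sees it as a signed int64, so
+-- values above 2^63 - 1 appear negative.
+guid, source = zfs.get_prop("rpool/fs", "guid")
+zfs.debug("guid=" .. tostring(guid) .. " source=" .. tostring(source))
+.Ed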
+.El
+.Bl -tag -width "xx"
+.It Sy zfs.sync submodule
+The sync submodule contains functions that modify the on-disk state.
+They are executed in "syncing context".
+.Pp
+The available sync submodule functions are as follows:
+.Bl -tag -width "xx"
+.It Sy zfs.sync.destroy Ns Pq Ar dataset , Op Ar defer Ns = Ns Sy true Ns | Ns Sy false
+Destroy the given dataset.
+Returns 0 on successful destroy, or a nonzero error code if the dataset could
+not be destroyed (for example, if the dataset has any active children or
+clones).
+.Pp
+.Bl -tag -compact -width "newbookmark (string)"
+.It Ar dataset Pq string
+Filesystem or snapshot to be destroyed.
+.It Op Ar defer Pq boolean
+Valid only for destroying snapshots.
+If set to true, and the snapshot has holds or clones, allows the snapshot to be
+marked for deferred deletion rather than failing.
+.El
+.It Fn zfs.sync.inherit dataset property
+Clears the specified property in the given dataset, causing it to be inherited
+from an ancestor, or restored to the default if no ancestor property is set.
+The
+.Nm zfs Cm inherit Fl S
+option has not been implemented.
+Returns 0 on success, or a nonzero error code if the property could not be
+cleared.
+.Pp
+.Bl -tag -compact -width "newbookmark (string)"
+.It Ar dataset Pq string
+Filesystem or snapshot containing the property to clear.
+.It Ar property Pq string
+The property to clear.
+Allowed properties are the same as those for the
+.Nm zfs Cm inherit
+command.
+.El
+.It Fn zfs.sync.promote dataset
+Promote the given clone to a filesystem.
+Returns 0 on successful promotion, or a nonzero error code otherwise.
+If EEXIST is returned, the second return value will be an array of the clone's
+snapshots whose names collide with snapshots of the parent filesystem.
+.Pp
+.Bl -tag -compact -width "newbookmark (string)"
+.It Ar dataset Pq string
+Clone to be promoted.
+.El
+.It Fn zfs.sync.rollback filesystem
+Roll back to the previous snapshot for a dataset.
+Returns 0 on successful rollback, or a nonzero error code otherwise.
+Rollbacks can be performed on filesystems or zvols, but not on snapshots
+or mounted datasets.
+EBUSY is returned in the case where the filesystem is mounted.
+.Pp
+.Bl -tag -compact -width "newbookmark (string)"
+.It Ar filesystem Pq string
+Filesystem to roll back.
+.El
+.It Fn zfs.sync.set_prop dataset property value
+Sets the given property on a dataset.
+Currently only user properties are supported.
+Returns 0 if the property was set, or a nonzero error code otherwise.
+.Pp
+.Bl -tag -compact -width "newbookmark (string)"
+.It Ar dataset Pq string
+The dataset where the property will be set.
+.It Ar property Pq string
+The property to set.
+.It Ar value Pq string
+The value of the property to be set.
+.El
+.It Fn zfs.sync.snapshot dataset
+Create a snapshot of a filesystem.
+Returns 0 if the snapshot was successfully created,
+and a nonzero error code otherwise.
+.Pp
+Note: Taking a snapshot will fail on any pool older than legacy version 27.
+To enable taking snapshots from ZCP scripts, the pool must be upgraded.
+.Pp
+.Bl -tag -compact -width "newbookmark (string)"
+.It Ar dataset Pq string
+Name of snapshot to create.
+.El
+.It Fn zfs.sync.rename_snapshot dataset oldsnapname newsnapname
+Rename a snapshot of a filesystem or a volume.
+Returns 0 if the snapshot was successfully renamed,
+and a nonzero error code otherwise.
+.Pp
+.Bl -tag -compact -width "newbookmark (string)"
+.It Ar dataset Pq string
+Name of the snapshot's parent dataset.
+.It Ar oldsnapname Pq string
+Original name of the snapshot.
+.It Ar newsnapname Pq string
+New name of the snapshot.
+.El
+.It Fn zfs.sync.bookmark source newbookmark
+Create a bookmark of an existing source snapshot or bookmark.
+Returns 0 if the new bookmark was successfully created,
+and a nonzero error code otherwise.
+.Pp
+Note: Bookmarking requires the corresponding pool feature to be enabled.
+.Pp
+.Bl -tag -compact -width "newbookmark (string)"
+.It Ar source Pq string
+Full name of the existing snapshot or bookmark.
+.It Ar newbookmark Pq string
+Full name of the new bookmark.
+.El
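+.Pp
+A minimal sketch (names hypothetical):
+.Bd -literal -compact -offset indent
+zfs.sync.bookmark("rpool/fs@snap", "rpool/fs#mark")
+.Ed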
+.El
+.It Sy zfs.check submodule
+For each function in the
+.Sy zfs.sync
+submodule, there is a corresponding
+.Sy zfs.check
+function which performs a "dry run" of the same operation.
+Each takes the same arguments as its
+.Sy zfs.sync
+counterpart and returns 0 if the operation would succeed,
+or a non-zero error code if it would fail, along with any other error details.
+That is, each has the same behavior as the corresponding sync function except
+for actually executing the requested change.
+For example,
+.Fn zfs.check.destroy \&"fs"
+returns 0 if
+.Fn zfs.sync.destroy \&"fs"
+would successfully destroy the dataset.
+.Pp
+The available
+.Sy zfs.check
+functions are:
+.Bl -tag -compact -width "xx"
+.It Sy zfs.check.destroy Ns Pq Ar dataset , Op Ar defer Ns = Ns Sy true Ns | Ns Sy false
+.It Fn zfs.check.promote dataset
+.It Fn zfs.check.rollback filesystem
+.It Fn zfs.check.set_prop dataset property value
+.It Fn zfs.check.snapshot dataset
+.El
+.It Sy zfs.list submodule
+The zfs.list submodule provides functions for iterating over datasets and
+properties.
+Rather than returning tables, these functions act as Lua iterators, and are
+generally used as follows:
+.Bd -literal -compact -offset indent
+.No for child in Fn zfs.list.children \&"rpool" No do
+ ...
+end
+.Ed
+.Pp
+The available
+.Sy zfs.list
+functions are:
+.Bl -tag -width "xx"
+.It Fn zfs.list.clones snapshot
+Iterate through all clones of the given snapshot.
+.Pp
+.Bl -tag -compact -width "snapshot (string)"
+.It Ar snapshot Pq string
+Must be a valid snapshot path in the current pool.
+.El
+.It Fn zfs.list.snapshots dataset
+Iterate through all snapshots of the given dataset.
+Each snapshot is returned as a string containing the full dataset name,
+e.g. "pool/fs@snap".
+.Pp
+.Bl -tag -compact -width "snapshot (string)"
+.It Ar dataset Pq string
+Must be a valid filesystem or volume.
+.El
+.It Fn zfs.list.children dataset
+Iterate through all direct children of the given dataset.
+Each child is returned as a string containing the full dataset name,
+e.g. "pool/fs/child".
+.Pp
+.Bl -tag -compact -width "snapshot (string)"
+.It Ar dataset Pq string
+Must be a valid filesystem or volume.
+.El
+.It Fn zfs.list.bookmarks dataset
+Iterate through all bookmarks of the given dataset.
+Each bookmark is returned as a string containing the full dataset name,
+e.g. "pool/fs#bookmark".
+.Pp
+.Bl -tag -compact -width "snapshot (string)"
+.It Ar dataset Pq string
+Must be a valid filesystem or volume.
+.El
+.It Fn zfs.list.holds snapshot
+Iterate through all user holds on the given snapshot.
+Each hold is returned
+as a pair of the hold's tag and the timestamp (in seconds since the epoch) at
+which it was created.
+.Pp
+.Bl -tag -compact -width "snapshot (string)"
+.It Ar snapshot Pq string
+Must be a valid snapshot.
+.El
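+.Pp
+A minimal sketch (snapshot name hypothetical):
+.Bd -literal -compact -offset indent
+for tag, ts in zfs.list.holds("rpool/fs@snap") do
+    zfs.debug("hold " .. tag .. " created at " .. ts)
+end
+.Ed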
+.It Fn zfs.list.properties dataset
+An alias for zfs.list.user_properties (see relevant entry).
+.Pp
+.Bl -tag -compact -width "snapshot (string)"
+.It Ar dataset Pq string
+Must be a valid filesystem, snapshot, or volume.
+.El
+.It Fn zfs.list.user_properties dataset
+Iterate through all user properties for the given dataset.
+For each step of the iteration, output the property name, its value,
+and its source.
+Throws a Lua error if the dataset is invalid.
+.Pp
+.Bl -tag -compact -width "snapshot (string)"
+.It Ar dataset Pq string
+Must be a valid filesystem, snapshot, or volume.
+.El
+.It Fn zfs.list.system_properties dataset
+Returns an array of strings, the names of the valid system (non-user defined)
+properties for the given dataset.
+Throws a Lua error if the dataset is invalid.
+.Pp
+.Bl -tag -compact -width "snapshot (string)"
+.It Ar dataset Pq string
+Must be a valid filesystem, snapshot or volume.
+.El
+.El
+.El
+.
+.Sh EXAMPLES
+.
+.Ss Example 1
+The following channel program recursively destroys a filesystem and all its
+snapshots and children in a naive manner.
+Note that this does not involve any error handling or reporting.
+.Bd -literal -offset indent
+function destroy_recursive(root)
+ for child in zfs.list.children(root) do
+ destroy_recursive(child)
+ end
+ for snap in zfs.list.snapshots(root) do
+ zfs.sync.destroy(snap)
+ end
+ zfs.sync.destroy(root)
+end
+destroy_recursive("pool/somefs")
+.Ed
+.
+.Ss Example 2
+A more verbose and robust version of the same channel program, which
+properly detects and reports errors, and also takes the dataset to destroy
+as a command line argument, would be as follows:
+.Bd -literal -offset indent
+succeeded = {}
+failed = {}
+
+function destroy_recursive(root)
+ for child in zfs.list.children(root) do
+ destroy_recursive(child)
+ end
+ for snap in zfs.list.snapshots(root) do
+ err = zfs.sync.destroy(snap)
+ if (err ~= 0) then
+ failed[snap] = err
+ else
+ succeeded[snap] = err
+ end
+ end
+ err = zfs.sync.destroy(root)
+ if (err ~= 0) then
+ failed[root] = err
+ else
+ succeeded[root] = err
+ end
+end
+
+args = ...
+argv = args["argv"]
+
+destroy_recursive(argv[1])
+
+results = {}
+results["succeeded"] = succeeded
+results["failed"] = failed
+return results
+.Ed
+.
+.Ss Example 3
+The following function performs a forced promote operation by attempting to
+promote the given clone and destroying any conflicting snapshots.
+.Bd -literal -offset indent
+function force_promote(ds)
+ errno, details = zfs.check.promote(ds)
+ if (errno == EEXIST) then
+ assert(details ~= nil)
+ for i, snap in ipairs(details) do
+ zfs.sync.destroy(ds .. "@" .. snap)
+ end
+ elseif (errno ~= 0) then
+ return errno
+ end
+ return zfs.sync.promote(ds)
+end
+.Ed
diff --git a/share/man/man8/zfs-project.8 b/share/man/man8/zfs-project.8
@@ -0,0 +1,142 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd May 27, 2021
+.Dt ZFS-PROJECT 8
+.Os
+.
+.Sh NAME
+.Nm zfs-project
+.Nd manage projects in ZFS filesystem
+.Sh SYNOPSIS
+.Nm zfs
+.Cm project
+.Oo Fl d Ns | Ns Fl r Ns Oc
+.Ar file Ns | Ns Ar directory Ns …
+.Nm zfs
+.Cm project
+.Fl C
+.Oo Fl kr Ns Oc
+.Ar file Ns | Ns Ar directory Ns …
+.Nm zfs
+.Cm project
+.Fl c
+.Oo Fl 0 Ns Oc
+.Oo Fl d Ns | Ns Fl r Ns Oc
+.Op Fl p Ar id
+.Ar file Ns | Ns Ar directory Ns …
+.Nm zfs
+.Cm project
+.Op Fl p Ar id
+.Oo Fl rs Ns Oc
+.Ar file Ns | Ns Ar directory Ns …
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm project
+.Oo Fl d Ns | Ns Fl r Ns Oc
+.Ar file Ns | Ns Ar directory Ns …
+.Xc
+List project identifier (ID) and inherit flag of files and directories.
+.Bl -tag -width "-d"
+.It Fl d
+Show the directory project ID and inherit flag, not its children.
+.It Fl r
+List subdirectories recursively.
+.El
+.It Xo
+.Nm zfs
+.Cm project
+.Fl C
+.Oo Fl kr Ns Oc
+.Ar file Ns | Ns Ar directory Ns …
+.Xc
+Clear project inherit flag and/or ID on the files and directories.
+.Bl -tag -width "-k"
+.It Fl k
+Keep the project ID unchanged.
+If not specified, the project ID will be reset to zero.
+.It Fl r
+Clear subdirectories' flags recursively.
+.El
+.It Xo
+.Nm zfs
+.Cm project
+.Fl c
+.Oo Fl 0 Ns Oc
+.Oo Fl d Ns | Ns Fl r Ns Oc
+.Op Fl p Ar id
+.Ar file Ns | Ns Ar directory Ns …
+.Xc
+Check project ID and inherit flag on the files and directories:
+report entries without the project inherit flag, or with project IDs different
+from the
+target directory's project ID or the one specified with
+.Fl p .
+.Bl -tag -width "-p id"
+.It Fl 0
+Delimit filenames with a NUL byte instead of a newline,
+and do not output diagnoses.
+.It Fl d
+Check the directory project ID and inherit flag, not its children.
+.It Fl p Ar id
+Compare to
+.Ar id
+instead of the target files and directories' project IDs.
+.It Fl r
+Check subdirectories recursively.
+.El
+.It Xo
+.Nm zfs
+.Cm project
+.Fl p Ar id
+.Oo Fl rs Ns Oc
+.Ar file Ns | Ns Ar directory Ns …
+.Xc
+Set project ID and/or inherit flag on the files and directories.
+.Bl -tag -width "-p id"
+.It Fl p Ar id
+Set the project ID to the given value.
+.It Fl r
+Set on subdirectories recursively.
+.It Fl s
+Set project inherit flag on the given files and directories.
+This is usually used for setting up tree quotas with
+.Fl r .
+In that case, the directory's project ID
+will be set for all its descendants, unless specified explicitly with
+.Fl p .
+.El
+.El
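+.
+.Sh EXAMPLES
+.Ss Example 1 : No Setting Up a Tree Quota
+A minimal sketch using a hypothetical directory tree: mark the tree with
+project ID 42 so that a
+.Sy projectquota@42
+property on the containing dataset applies to everything beneath it,
+then verify the flags:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm project Fl p Ar 42 Fl rs Ar /tank/project
+.No # Nm zfs Cm project Fl cr Ar /tank/project
+.Ed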
+.
+.Sh SEE ALSO
+.Xr zfs-projectspace 8
diff --git a/share/man/man8/zfs-projectspace.8 b/share/man/man8/zfs-projectspace.8
@@ -0,0 +1,188 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd June 30, 2019
+.Dt ZFS-USERSPACE 8
+.Os
+.
+.Sh NAME
+.Nm zfs-userspace
+.Nd display space and quotas of ZFS dataset
+.Sh SYNOPSIS
+.Nm zfs
+.Cm userspace
+.Op Fl Hinp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
+.Nm zfs
+.Cm groupspace
+.Op Fl Hinp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
+.Nm zfs
+.Cm projectspace
+.Op Fl Hp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm userspace
+.Op Fl Hinp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
+.Xc
+Displays space consumed by, and quotas on, each user in the specified
+filesystem,
+snapshot, or path.
+If a path is given, the filesystem that contains that path will be used.
+This corresponds to the
+.Sy userused@ Ns Em user ,
+.Sy userobjused@ Ns Em user ,
+.Sy userquota@ Ns Em user ,
+and
+.Sy userobjquota@ Ns Em user
+properties.
+.Bl -tag -width "-S field"
+.It Fl H
+Do not print headers, use tab-delimited output.
+.It Fl S Ar field
+Sort by this field in reverse order.
+See
+.Fl s .
+.It Fl i
+Translate SID to POSIX ID.
+The POSIX ID may be ephemeral if no mapping exists.
+Normal POSIX interfaces
+.Pq like Xr stat 2 , Nm ls Fl l
+perform this translation, so the
+.Fl i
+option allows the output from
+.Nm zfs Cm userspace
+to be compared directly with those utilities.
+However,
+.Fl i
+may lead to confusion if some files were created by an SMB user before a
+SMB-to-POSIX name mapping was established.
+In such a case, some files will be owned by the SMB entity and some by the POSIX
+entity.
+However, the
+.Fl i
+option will report that the POSIX entity has the total usage and quota for both.
+.It Fl n
+Print numeric ID instead of user/group name.
+.It Fl o Ar field Ns Oo , Ns Ar field Oc Ns …
+Display only the specified fields from the following set:
+.Sy type ,
+.Sy name ,
+.Sy used ,
+.Sy quota .
+The default is to display all fields.
+.It Fl p
+Use exact
+.Pq parsable
+numeric output.
+.It Fl s Ar field
+Sort output by this field.
+The
+.Fl s
+and
+.Fl S
+flags may be specified multiple times to sort first by one field, then by
+another.
+The default is
+.Fl s Sy type Fl s Sy name .
+.It Fl t Ar type Ns Oo , Ns Ar type Oc Ns …
+Print only the specified types from the following set:
+.Sy all ,
+.Sy posixuser ,
+.Sy smbuser ,
+.Sy posixgroup ,
+.Sy smbgroup .
+The default is
+.Fl t Sy posixuser , Ns Sy smbuser .
+The default can be changed to include group types.
+.El
+.It Xo
+.Nm zfs
+.Cm groupspace
+.Op Fl Hinp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar snapshot
+.Xc
+Displays space consumed by, and quotas on, each group in the specified
+filesystem or snapshot.
+This subcommand is identical to
+.Cm userspace ,
+except that the default types to display are
+.Fl t Sy posixgroup , Ns Sy smbgroup .
+.It Xo
+.Nm zfs
+.Cm projectspace
+.Op Fl Hp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
+.Xc
+Displays space consumed by, and quotas on, each project in the specified
+filesystem or snapshot.
+This subcommand is identical to
+.Cm userspace ,
+except that the project identifier is a numeral, not a name.
+It therefore needs neither the
+.Fl i
+option to translate SIDs to POSIX IDs, nor
+.Fl n
+for numeric IDs, nor
+.Fl t
+for types.
+.El
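+.
+.Sh EXAMPLES
+.Ss Example 1 : No Displaying Per-user Space Usage
+A minimal sketch against a hypothetical filesystem, printing parsable
+numbers and sorting users by space consumed:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm userspace Fl p Fl S Sy used Fl o Sy name Ns , Ns Sy used Ns , Ns Sy quota Ar tank/home
+.Ed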
+.
+.Sh SEE ALSO
+.Xr zfsprops 7 ,
+.Xr zfs-set 8
diff --git a/share/man/man8/zfs-promote.8 b/share/man/man8/zfs-promote.8
@@ -0,0 +1,85 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd March 16, 2022
+.Dt ZFS-PROMOTE 8
+.Os
+.
+.Sh NAME
+.Nm zfs-promote
+.Nd promote clone dataset to no longer depend on origin snapshot
+.Sh SYNOPSIS
+.Nm zfs
+.Cm promote
+.Ar clone
+.
+.Sh DESCRIPTION
+The
+.Nm zfs Cm promote
+command makes it possible to destroy the dataset that the clone was created
+from.
+The clone parent-child dependency relationship is reversed, so that the origin
+dataset becomes a clone of the specified dataset.
+.Pp
+The snapshot that was cloned, and any snapshots previous to this snapshot, are
+now owned by the promoted clone.
+The space they use moves from the origin dataset to the promoted clone, so
+enough space must be available to accommodate these snapshots.
+No new space is consumed by this operation, but the space accounting is
+adjusted.
+The promoted clone must not have any conflicting snapshot names of its own.
+The
+.Nm zfs Cm rename
+subcommand can be used to rename any conflicting snapshots.
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 10 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Promoting a ZFS Clone
+The following commands illustrate how to test out changes to a file system, and
+then replace the original file system with the changed one, using clones, clone
+promotion, and renaming:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm create Ar pool/project/production
+ populate /pool/project/production with data
+.No # Nm zfs Cm snapshot Ar pool/project/production Ns @ Ns Ar today
+.No # Nm zfs Cm clone Ar pool/project/production@today pool/project/beta
+ make changes to /pool/project/beta and test them
+.No # Nm zfs Cm promote Ar pool/project/beta
+.No # Nm zfs Cm rename Ar pool/project/production pool/project/legacy
+.No # Nm zfs Cm rename Ar pool/project/beta pool/project/production
+ once the legacy version is no longer needed, it can be destroyed
+.No # Nm zfs Cm destroy Ar pool/project/legacy
+.Ed
+.
+.Sh SEE ALSO
+.Xr zfs-clone 8 ,
+.Xr zfs-rename 8
diff --git a/share/man/man8/zfs-receive.8 b/share/man/man8/zfs-receive.8
@@ -0,0 +1,465 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd March 12, 2023
+.Dt ZFS-RECEIVE 8
+.Os
+.
+.Sh NAME
+.Nm zfs-receive
+.Nd create snapshot from backup stream
+.Sh SYNOPSIS
+.Nm zfs
+.Cm receive
+.Op Fl FhMnsuv
+.Op Fl o Sy origin Ns = Ns Ar snapshot
+.Op Fl o Ar property Ns = Ns Ar value
+.Op Fl x Ar property
+.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot
+.Nm zfs
+.Cm receive
+.Op Fl FhMnsuv
+.Op Fl d Ns | Ns Fl e
+.Op Fl o Sy origin Ns = Ns Ar snapshot
+.Op Fl o Ar property Ns = Ns Ar value
+.Op Fl x Ar property
+.Ar filesystem
+.Nm zfs
+.Cm receive
+.Fl A
+.Ar filesystem Ns | Ns Ar volume
+.Nm zfs
+.Cm receive
+.Fl c
+.Op Fl vn
+.Ar filesystem Ns | Ns Ar snapshot
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm receive
+.Op Fl FhMnsuv
+.Op Fl o Sy origin Ns = Ns Ar snapshot
+.Op Fl o Ar property Ns = Ns Ar value
+.Op Fl x Ar property
+.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot
+.Xc
+.It Xo
+.Nm zfs
+.Cm receive
+.Op Fl FhMnsuv
+.Op Fl d Ns | Ns Fl e
+.Op Fl o Sy origin Ns = Ns Ar snapshot
+.Op Fl o Ar property Ns = Ns Ar value
+.Op Fl x Ar property
+.Ar filesystem
+.Xc
+Creates a snapshot whose contents are as specified in the stream provided on
+standard input.
+If a full stream is received, then a new file system is created as well.
+Streams are created using the
+.Nm zfs Cm send
+subcommand, which by default creates a full stream.
+.Nm zfs Cm recv
+can be used as an alias for
+.Nm zfs Cm receive .
+.Pp
+If an incremental stream is received, then the destination file system must
+already exist, and its most recent snapshot must match the incremental stream's
+source.
+For
+.Sy zvols ,
+the destination device link is destroyed and recreated, which means the
+.Sy zvol
+cannot be accessed during the
+.Cm receive
+operation.
+.Pp
+When a snapshot replication package stream that is generated by using the
+.Nm zfs Cm send Fl R
+command is received, any snapshots that do not exist on the sending location are
+destroyed by using the
+.Nm zfs Cm destroy Fl d
+command.
+.Pp
+The ability to send and receive deduplicated send streams has been removed.
+However, a deduplicated send stream created with older software can be converted
+to a regular (non-deduplicated) stream by using the
+.Nm zstream Cm redup
+command.
+.Pp
+If
+.Fl o Em property Ns = Ns Ar value
+or
+.Fl x Em property
+is specified, it applies to the effective value of the property throughout
+the entire subtree of replicated datasets.
+Effective property values will be set
+.Pq Fl o
+or inherited
+.Pq Fl x
+on the topmost dataset in the replicated subtree.
+In descendant datasets, if the
+property is set by the send stream, it will be overridden by forcing the
+property to be inherited from the topmost file system.
+Received properties are retained in spite of being overridden
+and may be restored with
+.Nm zfs Cm inherit Fl S .
+Specifying
+.Fl o Sy origin Ns = Ns Em snapshot
+is a special case because, even if
+.Sy origin
+is a read-only property and cannot be set, it's allowed to receive the send
+stream as a clone of the given snapshot.
+.Pp
+Raw encrypted send streams (created with
+.Nm zfs Cm send Fl w )
+may only be received as is, and cannot be re-encrypted, decrypted, or
+recompressed by the receive process.
+Unencrypted streams can be received as
+encrypted datasets, either through inheritance or by specifying encryption
+parameters with the
+.Fl o
+options.
+Note that the
+.Sy keylocation
+property cannot be overridden to
+.Sy prompt
+during a receive.
+This is because the receive process itself is already using
+the standard input for the send stream.
+Instead, the property can be overridden after the receive completes.
+.Pp
+The added security provided by raw sends adds some restrictions to the send
+and receive process.
+ZFS will not allow a mix of raw receives and non-raw receives.
+Specifically, any raw incremental receives that are attempted after
+a non-raw receive will fail.
+Non-raw receives do not have this restriction and,
+therefore, are always possible.
+Because of this, it is best practice to always
+use either raw sends for their security benefits or non-raw sends for their
+flexibility when working with encrypted datasets, but not a combination.
+.Pp
+The reason for this restriction stems from the inherent restrictions of the
+AEAD ciphers that ZFS uses to encrypt data.
+When using ZFS native encryption,
+each block of data is encrypted against a randomly generated number known as
+the "initialization vector" (IV), which is stored in the filesystem metadata.
+This number is required by the encryption algorithms whenever the data is to
+be decrypted.
+Together, all of the IVs provided for all of the blocks in a
+given snapshot are collectively called an "IV set".
+When ZFS performs a raw send, the IV set is transferred from the source
+to the destination in the send stream.
+When ZFS performs a non-raw send, the data is decrypted by the source
+system and re-encrypted by the destination system, creating a snapshot with
+effectively the same data, but a different IV set.
+In order for decryption to work after a raw send, ZFS must ensure that
+the IV set used on both the source and destination side match.
+When an incremental raw receive is performed on
+top of an existing snapshot, ZFS will check to confirm that the "from"
+snapshot on both the source and destination were using the same IV set,
+ensuring the new IV set is consistent.
+.Pp
+The name of the snapshot
+.Pq and file system, if a full stream is received
+that this subcommand creates depends on the argument type and the use of the
+.Fl d
+or
+.Fl e
+options.
+.Pp
+If the argument is a snapshot name, the specified
+.Ar snapshot
+is created.
+If the argument is a file system or volume name, a snapshot with the same name
+as the sent snapshot is created within the specified
+.Ar filesystem
+or
+.Ar volume .
+If neither of the
+.Fl d
+or
+.Fl e
+options are specified, the provided target snapshot name is used exactly as
+provided.
+.Pp
+The
+.Fl d
+and
+.Fl e
+options cause the file system name of the target snapshot to be determined by
+appending a portion of the sent snapshot's name to the specified target
+.Ar filesystem .
+If the
+.Fl d
+option is specified, all but the first element of the sent snapshot's file
+system path
+.Pq usually the pool name
+is used and any required intermediate file systems within the specified one are
+created.
+If the
+.Fl e
+option is specified, then only the last element of the sent snapshot's file
+system name
+.Pq i.e. the name of the source file system itself
+is used as the target file system name.
+.Bl -tag -width "-F"
+.It Fl F
+Force a rollback of the file system to the most recent snapshot before
+performing the receive operation.
+If receiving an incremental replication stream
+.Po for example, one generated by
+.Nm zfs Cm send Fl R Op Fl i Ns | Ns Fl I
+.Pc ,
+destroy snapshots and file systems that do not exist on the sending side.
+.It Fl d
+Discard the first element of the sent snapshot's file system name, using the
+remaining elements to determine the name of the target file system for the new
+snapshot as described in the paragraph above.
+.It Fl e
+Discard all but the last element of the sent snapshot's file system name, using
+that element to determine the name of the target file system for the new
+snapshot as described in the paragraph above.
+.It Fl h
+Skip the receive of holds.
+There is no effect if holds are not sent.
+.It Fl M
+Force an unmount of the file system while receiving a snapshot.
+This option is not supported on Linux.
+.It Fl n
+Do not actually receive the stream.
+This can be useful in conjunction with the
+.Fl v
+option to verify the name the receive operation would use.
+.It Fl o Sy origin Ns = Ns Ar snapshot
+Forces the stream to be received as a clone of the given snapshot.
+If the stream is a full send stream, this will create the filesystem
+described by the stream as a clone of the specified snapshot.
+The choice of snapshot does not affect the success or failure of the receive,
+as long as the snapshot does exist.
+If the stream is an incremental send stream, all the normal verification will be
+performed.
+.It Fl o Em property Ns = Ns Ar value
+Sets the specified property as if the command
+.Nm zfs Cm set Em property Ns = Ns Ar value
+was invoked immediately before the receive.
+When receiving a stream from
+.Nm zfs Cm send Fl R ,
+causes the property to be inherited by all descendant datasets, as through
+.Nm zfs Cm inherit Em property
+was run on any descendant datasets that have this property set on the
+sending system.
+.Pp
+If the send stream was sent with
+.Fl c
+then overriding the
+.Sy compression
+property will have no effect on received data but the
+.Sy compression
+property will be set.
+To have the data recompressed on receive, remove the
+.Fl c
+flag from the send stream.
+.Pp
+Any editable property can be set at receive time.
+Set-once properties bound
+to the received data, such as
+.Sy normalization
+and
+.Sy casesensitivity ,
+cannot be set at receive time even when the datasets are newly created by
+.Nm zfs Cm receive .
+Additionally both settable properties
+.Sy version
+and
+.Sy volsize
+cannot be set at receive time.
+.Pp
+The
+.Fl o
+option may be specified multiple times, for different properties.
+An error results if the same property is specified in multiple
+.Fl o
+or
+.Fl x
+options.
+.Pp
+The
+.Fl o
+option may also be used to override encryption properties upon initial receive.
+This allows unencrypted streams to be received as encrypted datasets.
+To cause the received dataset (or root dataset of a recursive stream) to be
+received as an encryption root, specify encryption properties in the same
+manner as is required for
+.Nm zfs Cm create .
+For instance:
+.Dl # Nm zfs Cm send Pa tank/test@snap1 | Nm zfs Cm recv Fl o Sy encryption Ns = Ns Sy on Fl o Sy keyformat Ns = Ns Sy passphrase Fl o Sy keylocation Ns = Ns Pa file:///path/to/keyfile
+.Pp
+Note that
+.Fl o Sy keylocation Ns = Ns Sy prompt
+may not be specified here, since the standard input
+is already being utilized for the send stream.
+Once the receive has completed, you can use
+.Nm zfs Cm set
+to change this setting after the fact.
+Similarly, you can receive a dataset as an encrypted child by specifying
+.Fl x Sy encryption
+to force the property to be inherited.
+Overriding encryption properties (except for
+.Sy keylocation )
+is not possible with raw send streams.
+.It Fl s
+If the receive is interrupted, save the partially received state, rather
+than deleting it.
+Interruption may be due to premature termination of the stream
+.Po e.g. due to network failure or failure of the remote system
+if the stream is being read over a network connection
+.Pc ,
+a checksum error in the stream, termination of the
+.Nm zfs Cm receive
+process, or unclean shutdown of the system.
+.Pp
+The receive can be resumed with a stream generated by
+.Nm zfs Cm send Fl t Ar token ,
+where the
+.Ar token
+is the value of the
+.Sy receive_resume_token
+property of the filesystem or volume which is received into.
+.Pp
+To use this flag, the storage pool must have the
+.Sy extensible_dataset
+feature enabled.
+See
+.Xr zpool-features 7
+for details on ZFS feature flags.
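+.Pp
+A minimal sketch of resuming an interrupted receive, with hypothetical
+dataset names and
+.Ar token
+standing in for the opaque resume token:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm get Fl Ho Sy value receive_resume_token Ar poolB/received/fs
+.No # Nm zfs Cm send Fl t Ar token | Nm zfs Cm receive Fl s Ar poolB/received/fs
+.Ed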
+.It Fl u
+The file system associated with the received stream is not mounted.
+.It Fl v
+Print verbose information about the stream and the time required to perform the
+receive operation.
+.It Fl x Em property
+Ensures that the effective value of the specified property after the
+receive is unaffected by the value of that property in the send stream (if any),
+as if the property had been excluded from the send stream.
+.Pp
+If the specified property is not present in the send stream, this option does
+nothing.
+.Pp
+If a received property needs to be overridden, the effective value will be
+set or inherited, depending on whether the property is inheritable or not.
+.Pp
+In the case of an incremental update,
+.Fl x
+leaves any existing local setting or explicit inheritance unchanged.
+.Pp
+All
+.Fl o
+restrictions (e.g. set-once) apply equally to
+.Fl x .
+.El
+.It Xo
+.Nm zfs
+.Cm receive
+.Fl A
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+Abort an interrupted
+.Nm zfs Cm receive Fl s ,
+deleting its saved partially received state.
+.It Xo
+.Nm zfs
+.Cm receive
+.Fl c
+.Op Fl vn
+.Ar filesystem Ns | Ns Ar snapshot
+.Xc
+Attempt to repair data corruption in the specified dataset
+by using the provided stream as the source of healthy data.
+This method of healing can only heal data blocks present in the stream.
+Metadata cannot be healed by corrective receive.
+Running a scrub is recommended post-healing to ensure all data corruption was
+repaired.
+.Pp
+It is important to consider why the corruption happened in the first place.
+If the hardware is slowly failing, periodically repairing the data
+will not save you from data loss later on when the hardware fails
+completely.
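+.Pp
+A minimal sketch, with hypothetical pool and snapshot names, healing a
+corrupted snapshot from a stream containing the same blocks and then
+scrubbing to verify:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm send Ar backuppool/fs@snap | Nm zfs Cm receive Fl c Ar pool/fs@snap
+.No # Nm zpool Cm scrub Ar pool
+.Ed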
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 12, 13 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Remotely Replicating ZFS Data
+The following commands send a full stream and then an incremental stream to a
+remote machine, restoring them into
+.Em poolB/received/fs@a
+and
+.Em poolB/received/fs@b ,
+respectively.
+.Em poolB
+must contain the file system
+.Em poolB/received ,
+and must not initially contain
+.Em poolB/received/fs .
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm send Ar pool/fs@a |
+.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/received/fs Ns @ Ns Ar a
+.No # Nm zfs Cm send Fl i Ar a pool/fs@b |
+.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/received/fs
+.Ed
+.
+.Ss Example 2 : No Using the Nm zfs Cm receive Fl d No Option
+The following command sends a full stream of
+.Ar poolA/fsA/fsB@snap
+to a remote machine, receiving it into
+.Ar poolB/received/fsA/fsB@snap .
+The
+.Ar fsA/fsB@snap
+portion of the received snapshot's name is determined from the name of the sent
+snapshot.
+.Ar poolB
+must contain the file system
+.Ar poolB/received .
+If
+.Ar poolB/received/fsA
+does not exist, it is created as an empty file system.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm send Ar poolA/fsA/fsB@snap |
+.No " " Nm ssh Ar host Nm zfs Cm receive Fl d Ar poolB/received
+.Ed
+.
+.Sh SEE ALSO
+.Xr zfs-send 8 ,
+.Xr zstream 8
diff --git a/share/man/man8/zfs-recv.8 b/share/man/man8/zfs-recv.8
@@ -0,0 +1,465 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd March 12, 2023
+.Dt ZFS-RECEIVE 8
+.Os
+.
+.Sh NAME
+.Nm zfs-receive
+.Nd create snapshot from backup stream
+.Sh SYNOPSIS
+.Nm zfs
+.Cm receive
+.Op Fl FhMnsuv
+.Op Fl o Sy origin Ns = Ns Ar snapshot
+.Op Fl o Ar property Ns = Ns Ar value
+.Op Fl x Ar property
+.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot
+.Nm zfs
+.Cm receive
+.Op Fl FhMnsuv
+.Op Fl d Ns | Ns Fl e
+.Op Fl o Sy origin Ns = Ns Ar snapshot
+.Op Fl o Ar property Ns = Ns Ar value
+.Op Fl x Ar property
+.Ar filesystem
+.Nm zfs
+.Cm receive
+.Fl A
+.Ar filesystem Ns | Ns Ar volume
+.Nm zfs
+.Cm receive
+.Fl c
+.Op Fl vn
+.Ar filesystem Ns | Ns Ar snapshot
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm receive
+.Op Fl FhMnsuv
+.Op Fl o Sy origin Ns = Ns Ar snapshot
+.Op Fl o Ar property Ns = Ns Ar value
+.Op Fl x Ar property
+.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot
+.Xc
+.It Xo
+.Nm zfs
+.Cm receive
+.Op Fl FhMnsuv
+.Op Fl d Ns | Ns Fl e
+.Op Fl o Sy origin Ns = Ns Ar snapshot
+.Op Fl o Ar property Ns = Ns Ar value
+.Op Fl x Ar property
+.Ar filesystem
+.Xc
+Creates a snapshot whose contents are as specified in the stream provided on
+standard input.
+If a full stream is received, then a new file system is created as well.
+Streams are created using the
+.Nm zfs Cm send
+subcommand, which by default creates a full stream.
+.Nm zfs Cm recv
+can be used as an alias for
+.Nm zfs Cm receive .
+.Pp
+If an incremental stream is received, then the destination file system must
+already exist, and its most recent snapshot must match the incremental stream's
+source.
+For
+.Sy zvols ,
+the destination device link is destroyed and recreated, which means the
+.Sy zvol
+cannot be accessed during the
+.Cm receive
+operation.
+.Pp
+When a snapshot replication package stream that is generated by using the
+.Nm zfs Cm send Fl R
+command is received, any snapshots that do not exist on the sending location are
+destroyed by using the
+.Nm zfs Cm destroy Fl d
+command.
+.Pp
+The ability to send and receive deduplicated send streams has been removed.
+However, a deduplicated send stream created with older software can be converted
+to a regular (non-deduplicated) stream by using the
+.Nm zstream Cm redup
+command.
+.Pp
+If
+.Fl o Em property Ns = Ns Ar value
+or
+.Fl x Em property
+is specified, it applies to the effective value of the property throughout
+the entire subtree of replicated datasets.
+Effective property values will be set
+.Pq Fl o
+or inherited
+.Pq Fl x
+on the topmost dataset in the replicated subtree.
+In descendant datasets, if the
+property is set by the send stream, it will be overridden by forcing the
+property to be inherited from the topmost file system.
+Received properties are retained in spite of being overridden
+and may be restored with
+.Nm zfs Cm inherit Fl S .
+Specifying
+.Fl o Sy origin Ns = Ns Em snapshot
+is a special case because, even if
+.Sy origin
+is a read-only property and cannot be set, it's allowed to receive the send
+stream as a clone of the given snapshot.
+.Pp
+Raw encrypted send streams (created with
+.Nm zfs Cm send Fl w )
+may only be received as is, and cannot be re-encrypted, decrypted, or
+recompressed by the receive process.
+Unencrypted streams can be received as
+encrypted datasets, either through inheritance or by specifying encryption
+parameters with the
+.Fl o
+options.
+Note that the
+.Sy keylocation
+property cannot be overridden to
+.Sy prompt
+during a receive.
+This is because the receive process itself is already using
+the standard input for the send stream.
+Instead, the property can be overridden after the receive completes.
+.Pp
+The added security provided by raw sends adds some restrictions to the send
+and receive process.
+ZFS will not allow a mix of raw receives and non-raw receives.
+Specifically, any raw incremental receives that are attempted after
+a non-raw receive will fail.
+Non-raw receives do not have this restriction and,
+therefore, are always possible.
+Because of this, it is best practice to always
+use either raw sends for their security benefits or non-raw sends for their
+flexibility when working with encrypted datasets, but not a combination.
+.Pp
+The reason for this restriction stems from the inherent restrictions of the
+AEAD ciphers that ZFS uses to encrypt data.
+When using ZFS native encryption,
+each block of data is encrypted against a randomly generated number known as
+the "initialization vector" (IV), which is stored in the filesystem metadata.
+This number is required by the encryption algorithms whenever the data is to
+be decrypted.
+Together, all of the IVs provided for all of the blocks in a
+given snapshot are collectively called an "IV set".
+When ZFS performs a raw send, the IV set is transferred from the source
+to the destination in the send stream.
+When ZFS performs a non-raw send, the data is decrypted by the source
+system and re-encrypted by the destination system, creating a snapshot with
+effectively the same data, but a different IV set.
+In order for decryption to work after a raw send, ZFS must ensure that
+the IV set used on both the source and destination side match.
+When an incremental raw receive is performed on
+top of an existing snapshot, ZFS will check to confirm that the "from"
+snapshot on both the source and destination were using the same IV set,
+ensuring the new IV set is consistent.
+.Pp
+The name of the snapshot
+.Pq and file system, if a full stream is received
+that this subcommand creates depends on the argument type and the use of the
+.Fl d
+or
+.Fl e
+options.
+.Pp
+If the argument is a snapshot name, the specified
+.Ar snapshot
+is created.
+If the argument is a file system or volume name, a snapshot with the same name
+as the sent snapshot is created within the specified
+.Ar filesystem
+or
+.Ar volume .
+If neither of the
+.Fl d
+or
+.Fl e
+options are specified, the provided target snapshot name is used exactly as
+provided.
+.Pp
+The
+.Fl d
+and
+.Fl e
+options cause the file system name of the target snapshot to be determined by
+appending a portion of the sent snapshot's name to the specified target
+.Ar filesystem .
+If the
+.Fl d
+option is specified, all but the first element of the sent snapshot's file
+system path
+.Pq usually the pool name
+is used and any required intermediate file systems within the specified one are
+created.
+If the
+.Fl e
+option is specified, then only the last element of the sent snapshot's file
+system name
+.Pq i.e. the name of the source file system itself
+is used as the target file system name.
+.Bl -tag -width "-F"
+.It Fl F
+Force a rollback of the file system to the most recent snapshot before
+performing the receive operation.
+If receiving an incremental replication stream
+.Po for example, one generated by
+.Nm zfs Cm send Fl R Op Fl i Ns | Ns Fl I
+.Pc ,
+destroy snapshots and file systems that do not exist on the sending side.
+.It Fl d
+Discard the first element of the sent snapshot's file system name, using the
+remaining elements to determine the name of the target file system for the new
+snapshot as described in the paragraph above.
+.It Fl e
+Discard all but the last element of the sent snapshot's file system name, using
+that element to determine the name of the target file system for the new
+snapshot as described in the paragraph above.
+.It Fl h
+Skip the receive of holds.
+There is no effect if holds are not sent.
+.It Fl M
+Force an unmount of the file system while receiving a snapshot.
+This option is not supported on Linux.
+.It Fl n
+Do not actually receive the stream.
+This can be useful in conjunction with the
+.Fl v
+option to verify the name the receive operation would use.
+.It Fl o Sy origin Ns = Ns Ar snapshot
+Forces the stream to be received as a clone of the given snapshot.
+If the stream is a full send stream, this will create the filesystem
+described by the stream as a clone of the specified snapshot.
+The choice of snapshot does not affect the success or failure of the receive,
+as long as the snapshot does exist.
+If the stream is an incremental send stream, all the normal verification will be
+performed.
+.It Fl o Em property Ns = Ns Ar value
+Sets the specified property as if the command
+.Nm zfs Cm set Em property Ns = Ns Ar value
+was invoked immediately before the receive.
+When receiving a stream from
+.Nm zfs Cm send Fl R ,
+causes the property to be inherited by all descendant datasets, as through
+.Nm zfs Cm inherit Em property
+was run on any descendant datasets that have this property set on the
+sending system.
+.Pp
+If the send stream was sent with
+.Fl c
+then overriding the
+.Sy compression
+property will have no effect on received data but the
+.Sy compression
+property will be set.
+To have the data recompressed on receive, remove the
+.Fl c
+flag from the send stream.
+.Pp
+Any editable property can be set at receive time.
+Set-once properties bound
+to the received data, such as
+.Sy normalization
+and
+.Sy casesensitivity ,
+cannot be set at receive time even when the datasets are newly created by
+.Nm zfs Cm receive .
+Additionally both settable properties
+.Sy version
+and
+.Sy volsize
+cannot be set at receive time.
+.Pp
+The
+.Fl o
+option may be specified multiple times, for different properties.
+An error results if the same property is specified in multiple
+.Fl o
+or
+.Fl x
+options.
+.Pp
+The
+.Fl o
+option may also be used to override encryption properties upon initial receive.
+This allows unencrypted streams to be received as encrypted datasets.
+To cause the received dataset (or root dataset of a recursive stream) to be
+received as an encryption root, specify encryption properties in the same
+manner as is required for
+.Nm zfs Cm create .
+For instance:
+.Dl # Nm zfs Cm send Pa tank/test@snap1 | Nm zfs Cm recv Fl o Sy encryption Ns = Ns Sy on Fl o Sy keyformat Ns = Ns Sy passphrase Fl o Sy keylocation Ns = Ns Pa file:///path/to/keyfile
+.Pp
+Note that
+.Fl o Sy keylocation Ns = Ns Sy prompt
+may not be specified here, since the standard input
+is already being utilized for the send stream.
+Once the receive has completed, you can use
+.Nm zfs Cm set
+to change this setting after the fact.
+Similarly, you can receive a dataset as an encrypted child by specifying
+.Fl x Sy encryption
+to force the property to be inherited.
+Overriding encryption properties (except for
+.Sy keylocation )
+is not possible with raw send streams.
+.It Fl s
+If the receive is interrupted, save the partially received state, rather
+than deleting it.
+Interruption may be due to premature termination of the stream
+.Po e.g. due to network failure or failure of the remote system
+if the stream is being read over a network connection
+.Pc ,
+a checksum error in the stream, termination of the
+.Nm zfs Cm receive
+process, or unclean shutdown of the system.
+.Pp
+The receive can be resumed with a stream generated by
+.Nm zfs Cm send Fl t Ar token ,
+where the
+.Ar token
+is the value of the
+.Sy receive_resume_token
+property of the filesystem or volume which is received into.
+.Pp
+To use this flag, the storage pool must have the
+.Sy extensible_dataset
+feature enabled.
+See
+.Xr zpool-features 7
+for details on ZFS feature flags.
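+.Pp
+A minimal sketch of resuming an interrupted receive, with hypothetical
+dataset names and
+.Ar token
+standing in for the opaque resume token:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm get Fl Ho Sy value receive_resume_token Ar poolB/received/fs
+.No # Nm zfs Cm send Fl t Ar token | Nm zfs Cm receive Fl s Ar poolB/received/fs
+.Ed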
+.It Fl u
+The file system associated with the received stream is not mounted.
+.It Fl v
+Print verbose information about the stream and the time required to perform the
+receive operation.
+.It Fl x Em property
+Ensures that the effective value of the specified property after the
+receive is unaffected by the value of that property in the send stream (if any),
+as if the property had been excluded from the send stream.
+.Pp
+If the specified property is not present in the send stream, this option does
+nothing.
+.Pp
+If a received property needs to be overridden, the effective value will be
+set or inherited, depending on whether the property is inheritable or not.
+.Pp
+In the case of an incremental update,
+.Fl x
+leaves any existing local setting or explicit inheritance unchanged.
+.Pp
+All
+.Fl o
+restrictions (e.g. set-once) apply equally to
+.Fl x .
+.El
+.It Xo
+.Nm zfs
+.Cm receive
+.Fl A
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+Abort an interrupted
+.Nm zfs Cm receive Fl s ,
+deleting its saved partially received state.
+.It Xo
+.Nm zfs
+.Cm receive
+.Fl c
+.Op Fl vn
+.Ar filesystem Ns | Ns Ar snapshot
+.Xc
+Attempt to repair data corruption in the specified dataset
+by using the provided stream as the source of healthy data.
+This method of healing can only heal data blocks present in the stream.
+Metadata cannot be healed by corrective receive.
+Running a scrub is recommended post-healing to ensure all data corruption was
+repaired.
+.Pp
+It is important to consider why the corruption happened in the first place.
+If the hardware is slowly failing, periodically repairing the data
+will not save you from data loss later on when the hardware fails
+completely.
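+.Pp
+A minimal sketch, with hypothetical pool and snapshot names, healing a
+corrupted snapshot from a stream containing the same blocks and then
+scrubbing to verify:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm send Ar backuppool/fs@snap | Nm zfs Cm receive Fl c Ar pool/fs@snap
+.No # Nm zpool Cm scrub Ar pool
+.Ed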
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 12, 13 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Remotely Replicating ZFS Data
+The following commands send a full stream and then an incremental stream to a
+remote machine, restoring them into
+.Em poolB/received/fs@a
+and
+.Em poolB/received/fs@b ,
+respectively.
+.Em poolB
+must contain the file system
+.Em poolB/received ,
+and must not initially contain
+.Em poolB/received/fs .
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm send Ar pool/fs@a |
+.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/received/fs Ns @ Ns Ar a
+.No # Nm zfs Cm send Fl i Ar a pool/fs@b |
+.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/received/fs
+.Ed
+.
+.Ss Example 2 : No Using the Nm zfs Cm receive Fl d No Option
+The following command sends a full stream of
+.Ar poolA/fsA/fsB@snap
+to a remote machine, receiving it into
+.Ar poolB/received/fsA/fsB@snap .
+The
+.Ar fsA/fsB@snap
+portion of the received snapshot's name is determined from the name of the sent
+snapshot.
+.Ar poolB
+must contain the file system
+.Ar poolB/received .
+If
+.Ar poolB/received/fsA
+does not exist, it is created as an empty file system.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm send Ar poolA/fsA/fsB@snap |
+.No " " Nm ssh Ar host Nm zfs Cm receive Fl d Ar poolB/received
+.Ed
+.
+.Sh SEE ALSO
+.Xr zfs-send 8 ,
+.Xr zstream 8
diff --git a/share/man/man8/zfs-redact.8 b/share/man/man8/zfs-redact.8
@@ -0,0 +1,738 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\" Copyright (c) 2024, Klara, Inc.
+.\"
+.Dd October 2, 2024
+.Dt ZFS-SEND 8
+.Os
+.
+.Sh NAME
+.Nm zfs-send
+.Nd generate backup stream of ZFS dataset
+.Sh SYNOPSIS
+.Nm zfs
+.Cm send
+.Op Fl DLPVbcehnpsvw
+.Op Fl R Op Fl X Ar dataset Ns Oo , Ns Ar dataset Oc Ns …
+.Op Oo Fl I Ns | Ns Fl i Oc Ar snapshot
+.Ar snapshot
+.Nm zfs
+.Cm send
+.Op Fl DLPVcensvw
+.Op Fl i Ar snapshot Ns | Ns Ar bookmark
+.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot
+.Nm zfs
+.Cm send
+.Fl -redact Ar redaction_bookmark
+.Op Fl DLPVcenpv
+.Op Fl i Ar snapshot Ns | Ns Ar bookmark
+.Ar snapshot
+.Nm zfs
+.Cm send
+.Op Fl PVenv
+.Fl t
+.Ar receive_resume_token
+.Nm zfs
+.Cm send
+.Op Fl PVnv
+.Fl S Ar filesystem
+.Nm zfs
+.Cm redact
+.Ar snapshot redaction_bookmark
+.Ar redaction_snapshot Ns …
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm send
+.Op Fl DLPVbcehnpsvw
+.Op Fl R Op Fl X Ar dataset Ns Oo , Ns Ar dataset Oc Ns …
+.Op Oo Fl I Ns | Ns Fl i Oc Ar snapshot
+.Ar snapshot
+.Xc
+Creates a stream representation of the second
+.Ar snapshot ,
+which is written to standard output.
+The output can be redirected to a file or to a different system
+.Po for example, using
+.Xr ssh 1
+.Pc .
+By default, a full stream is generated.
+.Bl -tag -width "-D"
+.It Fl D , -dedup
+Deduplicated send is no longer supported.
+This flag is accepted for backwards compatibility, but a regular,
+non-deduplicated stream will be generated.
+.It Fl I Ar snapshot
+Generate a stream package that sends all intermediary snapshots from the first
+snapshot to the second snapshot.
+For example,
+.Fl I Em @a Em fs@d
+is similar to
+.Fl i Em @a Em fs@b Ns \&; Fl i Em @b Em fs@c Ns \&; Fl i Em @c Em fs@d .
+The incremental source may be specified as with the
+.Fl i
+option.
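+.Pp
+For example, to send all snapshots between
+.Em @a
+and
+.Em fs@d
+as one package (receiving dataset hypothetical):
+.Dl # Nm zfs Cm send Fl I Ar @a fs@d | Nm zfs Cm receive Ar poolB/fs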
+.It Fl L , -large-block
+Generate a stream which may contain blocks larger than 128 KiB.
+This flag has no effect if the
+.Sy large_blocks
+pool feature is disabled, or if the
+.Sy recordsize
+property of this filesystem has never been set above 128 KiB.
+The receiving system must have the
+.Sy large_blocks
+pool feature enabled as well.
+This flag is required if the
+.Sy large_microzap
+pool feature is active.
+See
+.Xr zpool-features 7
+for details on ZFS feature flags and the
+.Sy large_blocks
+feature.
+.It Fl P , -parsable
+Print machine-parsable verbose information about the stream package generated.
+.It Fl R , -replicate
+Generate a replication stream package, which will replicate the specified
+file system, and all descendent file systems, up to the named snapshot.
+When received, all properties, snapshots, descendent file systems, and clones
+are preserved.
+.Pp
+If the
+.Fl i
+or
+.Fl I
+flags are used in conjunction with the
+.Fl R
+flag, an incremental replication stream is generated.
+The current values of properties, and current snapshot and file system names are
+set when the stream is received.
+If the
+.Fl F
+flag is specified when this stream is received, snapshots and file systems that
+do not exist on the sending side are destroyed.
+If the
+.Fl R
+flag is used to send encrypted datasets, then
+.Fl w
+must also be specified.
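+.Pp
+As a sketch, with hypothetical host and dataset names, a full replication
+followed by an incremental one might look like:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm send Fl R Ar pool/fs@snap1 |
+.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/fs
+.No # Nm zfs Cm send Fl R Fl i Ar snap1 pool/fs@snap2 |
+.No " " Nm ssh Ar host Nm zfs Cm receive Fl F Ar poolB/fs
+.Ed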
+.It Fl V , -proctitle
+Set the process title to a per-second report of how much data has been sent.
+.It Fl X , -exclude Ar dataset Ns Oo , Ns Ar dataset Oc Ns …
+With
+.Fl R ,
+.Fl X
+specifies a set of datasets (and, hence, their descendants)
+to be excluded from the send stream.
+The root dataset may not be excluded.
+.Fl X Ar a Fl X Ar b
+is equivalent to
+.Fl X Ar a , Ns Ar b .
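+.Pp
+For example, to replicate a hierarchy while leaving out one scratch child
+dataset (names hypothetical):
+.Dl # Nm zfs Cm send Fl R Fl X Ar pool/fs/tmp pool/fs@snap | Nm zfs Cm receive Ar poolB/fs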
+.It Fl e , -embed
+Generate a more compact stream by using
+.Sy WRITE_EMBEDDED
+records for blocks which are stored more compactly on disk by the
+.Sy embedded_data
+pool feature.
+This flag has no effect if the
+.Sy embedded_data
+feature is disabled.
+The receiving system must have the
+.Sy embedded_data
+feature enabled.
+If the
+.Sy lz4_compress
+feature is active on the sending system, then the receiving system must have
+that feature enabled as well.
+Datasets that are sent with this flag may not be
+received as an encrypted dataset, since encrypted datasets cannot use the
+.Sy embedded_data
+feature.
+See
+.Xr zpool-features 7
+for details on ZFS feature flags and the
+.Sy embedded_data
+feature.
+.It Fl b , -backup
+Sends only received property values whether or not they are overridden by local
+settings, but only if the dataset has ever been received.
+Use this option when you want
+.Nm zfs Cm receive
+to restore received properties backed up on the sent dataset and to avoid
+sending local settings that may have nothing to do with the source dataset,
+but only with how the data is backed up.
+.It Fl c , -compressed
+Generate a more compact stream by using compressed WRITE records for blocks
+which are compressed on disk and in memory
+.Po see the
+.Sy compression
+property for details
+.Pc .
+If the
+.Sy lz4_compress
+feature is active on the sending system, then the receiving system must have
+that feature enabled as well.
+If the
+.Sy large_blocks
+feature is enabled on the sending system but the
+.Fl L
+option is not supplied in conjunction with
+.Fl c ,
+then the data will be decompressed before sending so it can be split into
+smaller block sizes.
+Streams sent with
+.Fl c
+will not have their data recompressed on the receiver side using
+.Fl o Sy compress Ns = Ar value .
+The data will stay compressed as it was from the sender.
+The new compression property will be set for future data.
+Note that the receiver will still attempt to compress uncompressed data
+from the sender, unless you specify
+.Fl o Sy compress Ns = Em off .
+.It Fl w , -raw
+For encrypted datasets, send data exactly as it exists on disk.
+This allows backups to be taken even if encryption keys are not currently
+loaded.
+The backup may then be received on an untrusted machine since that machine will
+not have the encryption keys to read the protected data or alter it without
+being detected.
+Upon being received, the dataset will have the same encryption
+keys as it did on the send side, although the
+.Sy keylocation
+property will be defaulted to
+.Sy prompt
+if not otherwise provided.
+For unencrypted datasets, this flag will be equivalent to
+.Fl Lec .
+Note that if you do not use this flag for sending encrypted datasets, data will
+be sent unencrypted and may be re-encrypted with a different encryption key on
+the receiving system, which will disable the ability to do a raw send to that
+system for incrementals.
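+.Pp
+For instance, a raw backup of an encrypted dataset can be sent to an
+untrusted host without loading any keys (names hypothetical):
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm send Fl w Ar pool/enc@snap |
+.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/enc
+.Ed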
+.It Fl h , -holds
+Generate a stream package that includes any snapshot holds (created with the
+.Nm zfs Cm hold
+command), and indicates to
+.Nm zfs Cm receive
+that the holds should be applied to the dataset on the receiving system.
+.It Fl i Ar snapshot
+Generate an incremental stream from the first
+.Ar snapshot
+.Pq the incremental source
+to the second
+.Ar snapshot
+.Pq the incremental target .
+The incremental source can be specified as the last component of the snapshot
+name
+.Po the
+.Sy @
+character and following
+.Pc
+and it is assumed to be from the same file system as the incremental target.
+.Pp
+If the destination is a clone, the source may be the origin snapshot, which must
+be fully specified
+.Po for example,
+.Em pool/fs@origin ,
+not just
+.Em @origin
+.Pc .
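+.Pp
+For example, to send a clone incrementally from its fully specified origin
+(names hypothetical):
+.Dl # Nm zfs Cm send Fl i Ar pool/fs@origin pool/clone@snap | Nm zfs Cm receive Ar poolB/clone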
+.It Fl n , -dryrun
+Do a dry-run
+.Pq Qq No-op
+send.
+Do not generate any actual send data.
+This is useful in conjunction with the
+.Fl v
+or
+.Fl P
+flags to determine what data will be sent.
+In this case, the verbose output will be written to standard output
+.Po contrast with a non-dry-run, where the stream is written to standard output
+and the verbose output goes to standard error
+.Pc .
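+.Pp
+For example, to preview what an incremental send would transfer, without
+generating any data (names hypothetical):
+.Dl # Nm zfs Cm send Fl nv Fl i Ar snap1 pool/fs@snap2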
+.It Fl p , -props
+Include the dataset's properties in the stream.
+This flag is implicit when
+.Fl R
+is specified.
+The receiving system must also support this feature.
+Sends of encrypted datasets must use
+.Fl w
+when using this flag.
+.It Fl s , -skip-missing
+Allows sending a replication stream even when there are snapshots missing in the
+hierarchy.
+When a snapshot is missing, instead of throwing an error and aborting the send,
+a warning is printed to the standard error stream, and the dataset to which
+it belongs and its descendents are skipped.
+This flag can only be used in conjunction with
+.Fl R .
+.It Fl v , -verbose
+Print verbose information about the stream package generated.
+This information includes a per-second report of how much data has been sent.
+The same report can be requested by sending
+.Dv SIGINFO
+or
+.Dv SIGUSR1 ,
+regardless of
+.Fl v .
+.Pp
+The format of the stream is committed.
+You will be able to receive your streams on future versions of ZFS.
+.El
+.It Xo
+.Nm zfs
+.Cm send
+.Op Fl DLPVcenvw
+.Op Fl i Ar snapshot Ns | Ns Ar bookmark
+.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot
+.Xc
+Generate a send stream, which may be of a filesystem, and may be incremental
+from a bookmark.
+If the destination is a filesystem or volume, the pool must be read-only, or the
+filesystem must not be mounted.
+When the stream generated from a filesystem or volume is received, the default
+snapshot name will be
+.Qq --head-- .
+.Bl -tag -width "-D"
+.It Fl D , -dedup
+Deduplicated send is no longer supported.
+This flag is accepted for backwards compatibility, but a regular,
+non-deduplicated stream will be generated.
+.It Fl L , -large-block
+Generate a stream which may contain blocks larger than 128 KiB.
+This flag has no effect if the
+.Sy large_blocks
+pool feature is disabled, or if the
+.Sy recordsize
+property of this filesystem has never been set above 128 KiB.
+The receiving system must have the
+.Sy large_blocks
+pool feature enabled as well.
+See
+.Xr zpool-features 7
+for details on ZFS feature flags and the
+.Sy large_blocks
+feature.
+.It Fl P , -parsable
+Print machine-parsable verbose information about the stream package generated.
+.It Fl c , -compressed
+Generate a more compact stream by using compressed WRITE records for blocks
+which are compressed on disk and in memory
+.Po see the
+.Sy compression
+property for details
+.Pc .
+If the
+.Sy lz4_compress
+feature is active on the sending system, then the receiving system must have
+that feature enabled as well.
+If the
+.Sy large_blocks
+feature is enabled on the sending system but the
+.Fl L
+option is not supplied in conjunction with
+.Fl c ,
+then the data will be decompressed before sending so it can be split into
+smaller block sizes.
+.It Fl w , -raw
+For encrypted datasets, send data exactly as it exists on disk.
+This allows backups to be taken even if encryption keys are not currently
+loaded.
+The backup may then be received on an untrusted machine since that machine will
+not have the encryption keys to read the protected data or alter it without
+being detected.
+Upon being received, the dataset will have the same encryption
+keys as it did on the send side, although the
+.Sy keylocation
+property will be defaulted to
+.Sy prompt
+if not otherwise provided.
+For unencrypted datasets, this flag will be equivalent to
+.Fl Lec .
+Note that if you do not use this flag for sending encrypted datasets, data will
+be sent unencrypted and may be re-encrypted with a different encryption key on
+the receiving system, which will disable the ability to do a raw send to that
+system for incrementals.
+.It Fl e , -embed
+Generate a more compact stream by using
+.Sy WRITE_EMBEDDED
+records for blocks which are stored more compactly on disk by the
+.Sy embedded_data
+pool feature.
+This flag has no effect if the
+.Sy embedded_data
+feature is disabled.
+The receiving system must have the
+.Sy embedded_data
+feature enabled.
+If the
+.Sy lz4_compress
+feature is active on the sending system, then the receiving system must have
+that feature enabled as well.
+Datasets that are sent with this flag may not be received as an encrypted
+dataset,
+since encrypted datasets cannot use the
+.Sy embedded_data
+feature.
+See
+.Xr zpool-features 7
+for details on ZFS feature flags and the
+.Sy embedded_data
+feature.
+.It Fl i Ar snapshot Ns | Ns Ar bookmark
+Generate an incremental send stream.
+The incremental source must be an earlier snapshot in the destination's history.
+It will commonly be an earlier snapshot in the destination's file system, in
+which case it can be specified as the last component of the name
+.Po the
+.Sy #
+or
+.Sy @
+character and following
+.Pc .
+.Pp
+If the incremental target is a clone, the incremental source can be the origin
+snapshot, or an earlier snapshot in the origin's filesystem, or the origin's
+origin, etc.
+.It Fl n , -dryrun
+Do a dry-run
+.Pq Qq No-op
+send.
+Do not generate any actual send data.
+This is useful in conjunction with the
+.Fl v
+or
+.Fl P
+flags to determine what data will be sent.
+In this case, the verbose output will be written to standard output
+.Po contrast with a non-dry-run, where the stream is written to standard output
+and the verbose output goes to standard error
+.Pc .
+.It Fl v , -verbose
+Print verbose information about the stream package generated.
+This information includes a per-second report of how much data has been sent.
+The same report can be requested by sending
+.Dv SIGINFO
+or
+.Dv SIGUSR1 ,
+regardless of
+.Fl v .
+.El
+.It Xo
+.Nm zfs
+.Cm send
+.Fl -redact Ar redaction_bookmark
+.Op Fl DLPVcenpv
+.Op Fl i Ar snapshot Ns | Ns Ar bookmark
+.Ar snapshot
+.Xc
+Generate a redacted send stream.
+This send stream contains all blocks from the snapshot being sent that aren't
+included in the redaction list contained in the bookmark specified by the
+.Fl -redact
+(or
+.Fl d )
+flag.
+The resulting send stream is said to be redacted with respect to the snapshots
+the bookmark specified by the
+.Fl -redact No flag was created with .
+The bookmark must have been created by running
+.Nm zfs Cm redact
+on the snapshot being sent.
+.Pp
+This feature can be used to allow clones of a filesystem to be made available on
+a remote system, in the case where their parent need not (or must not) be
+usable.
+For example, if a filesystem contains sensitive data, and it has clones where
+that sensitive data has been secured or replaced with dummy data, redacted sends
+can be used to replicate the secured data without replicating the original
+sensitive data, while still sharing all possible blocks.
+A snapshot that has been redacted with respect to a set of snapshots will
+contain all blocks referenced by at least one snapshot in the set, but will
+omit every block that none of the snapshots in the set references.
+In other words, if all snapshots in the set have modified a given block in the
+parent, that block will not be sent; but if one or more snapshots have not
+modified a block in the parent, they will still reference the parent's block, so
+that block will be sent.
+Note that only user data will be redacted.
+.Pp
+When the redacted send stream is received, a redacted snapshot is
+generated.
+Due to the nature of redaction, a redacted dataset can only be used in the
+following ways:
+.Bl -enum -width "a."
+.It
+To receive, as a clone, an incremental send from the original snapshot to one
+of the snapshots it was redacted with respect to.
+In this case, the stream will produce a valid dataset when received because all
+blocks that were redacted in the parent are guaranteed to be present in the
+child's send stream.
+This use case will produce a normal snapshot, which can be used just like other
+snapshots.
+.
+.It
+To receive an incremental send from the original snapshot to something
+redacted with respect to a subset of the set of snapshots the initial snapshot
+was redacted with respect to.
+In this case, each block that was redacted in the original is still redacted;
+redacting with respect to additional snapshots causes less data to be redacted,
+because the snapshots define what is permitted, and everything else is
+redacted.
+This use case will produce a new redacted snapshot.
+.It
+To receive an incremental send from a redaction bookmark of the original
+snapshot that was created when redacting with respect to a subset of the set of
+snapshots the initial snapshot was redacted with respect to.
+A send stream from such a redaction bookmark will contain all of the blocks
+necessary to fill in any redacted data, should it be needed, because the sending
+system is aware of what blocks were originally redacted.
+This will either produce a normal snapshot or a redacted one, depending on
+whether the new send stream is redacted.
+.It
+To receive an incremental send from a redacted version of the initial
+snapshot that is redacted with respect to a subset of the set of snapshots the
+initial snapshot was redacted with respect to.
+A send stream from a compatible redacted dataset will contain all of the blocks
+necessary to fill in any redacted data.
+This will either produce a normal snapshot or a redacted one, depending on
+whether the new send stream is redacted.
+.It
+To receive a full send as a clone of the redacted snapshot.
+Since the stream is a full send, it definitionally contains all the data needed
+to create a new dataset.
+This use case will either produce a normal snapshot or a redacted one, depending
+on whether the full send stream was redacted.
+.El
+.Pp
+These restrictions are detected and enforced by
+.Nm zfs Cm receive ;
+a redacted send stream will contain the list of snapshots that the stream is
+redacted with respect to.
+These are stored with the redacted snapshot, and are used to detect and
+correctly handle the cases above.
+Note that for technical reasons,
+raw sends and redacted sends cannot be combined at this time.
+.It Xo
+.Nm zfs
+.Cm send
+.Op Fl PVenv
+.Fl t
+.Ar receive_resume_token
+.Xc
+Creates a send stream which resumes an interrupted receive.
+The
+.Ar receive_resume_token
+is the value of this property on the filesystem or volume that was being
+received into.
+See the documentation for
+.Nm zfs Cm receive Fl s
+for more details.
+.It Xo
+.Nm zfs
+.Cm send
+.Op Fl PVnv
+.Op Fl i Ar snapshot Ns | Ns Ar bookmark
+.Fl S
+.Ar filesystem
+.Xc
+Generate a send stream from a dataset that has been partially received.
+.Bl -tag -width "-L"
+.It Fl S , -saved
+This flag requires that the specified filesystem previously received a resumable
+send that did not finish and was interrupted.
+In such scenarios, this flag enables the user to send this partially
+received state.
+The last fully received snapshot, if one exists, is always used as the
+incremental source.
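+.Pp
+For example, the partially received state of a hypothetical
+.Ar poolB/received/fs
+could be forwarded to a third pool:
+.Dl # Nm zfs Cm send Fl S Ar poolB/received/fs | Nm zfs Cm receive Fl s Ar poolC/fs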
+.El
+.It Xo
+.Nm zfs
+.Cm redact
+.Ar snapshot redaction_bookmark
+.Ar redaction_snapshot Ns …
+.Xc
+Generate a new redaction bookmark.
+In addition to the typical bookmark information, a redaction bookmark contains
+the list of redacted blocks and the list of redaction snapshots specified.
+The redacted blocks are blocks in the snapshot which are not referenced by any
+of the redaction snapshots.
+These blocks are found by iterating over the metadata in each redaction snapshot
+to determine what has been changed since the target snapshot.
+Redaction is designed to support redacted zfs sends; see the entry for
+.Nm zfs Cm send
+for more information on the purpose of this operation.
+If a redact operation fails partway through (due to an error or a system
+failure), the redaction can be resumed by rerunning the same command.
+.El
+.Ss Redaction
+ZFS has support for a limited version of data subsetting, in the form of
+redaction.
+Using the
+.Nm zfs Cm redact
+command, a
+.Sy redaction bookmark
+can be created that stores a list of blocks containing sensitive information.
+When provided to
+.Nm zfs Cm send ,
+this causes a
+.Sy redacted send
+to occur.
+Redacted sends omit the blocks containing sensitive information,
+replacing them with REDACT records.
+When these send streams are received, a
+.Sy redacted dataset
+is created.
+A redacted dataset cannot be mounted by default, since it is incomplete.
+It can be used to receive other send streams.
+In this way datasets can be used for data backup and replication,
+with all the benefits that zfs send and receive have to offer,
+while protecting sensitive information from being
+stored on less-trusted machines or services.
+.Pp
+For the purposes of redaction, there are two steps to the process:
+a redact step and a send/receive step.
+First, a redaction bookmark is created.
+This is done by providing the
+.Nm zfs Cm redact
+command with a parent snapshot, a bookmark to be created, and a number of
+redaction snapshots.
+These redaction snapshots must be descendants of the parent snapshot,
+and they should modify data that is considered sensitive in some way.
+Any blocks of data modified by all of the redaction snapshots will
+be listed in the redaction bookmark, because they represent the truly sensitive
+information.
+When it comes to the send step, the send process will not send
+the blocks listed in the redaction bookmark, instead replacing them with
+REDACT records.
+When received on the target system, this will create a
+redacted dataset, missing the data that corresponds to the blocks in the
+redaction bookmark on the sending system.
+The incremental send streams from
+the original parent to the redaction snapshots can then also be received on
+the target system, and this will produce a complete snapshot that can be used
+normally.
+Incrementals between one snapshot on the parent filesystem and another
+can also be performed by sending from the redaction bookmark rather than from
+the snapshots themselves.
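+.Pp
+As a sketch, with hypothetical dataset, bookmark, and host names, the whole
+workflow might look like:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm clone Ar pool/fs@snap pool/fs-clean
+ remove or overwrite the sensitive data in /pool/fs-clean
+.No # Nm zfs Cm snapshot Ar pool/fs-clean@clean
+.No # Nm zfs Cm redact Ar pool/fs@snap book1 pool/fs-clean@clean
+.No # Nm zfs Cm send Fl -redact Ar book1 pool/fs@snap |
+.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/fs
+.No # Nm zfs Cm send Fl i Ar pool/fs@snap pool/fs-clean@clean |
+.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/fs-clean
+.Ed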
+.Pp
+In order to make the purpose of the feature clearer, an example is provided.
+Consider a zfs filesystem containing four files.
+These files represent information for an online shopping service.
+One file contains a list of usernames and passwords, another contains purchase
+histories,
+a third contains click tracking data, and a fourth contains user preferences.
+The owner of this data wants to make it available for their development teams to
+test against, and their market research teams to do analysis on.
+The development teams need information about user preferences and the click
+tracking data, while the market research teams need information about purchase
+histories and user preferences.
+Neither needs access to the usernames and passwords.
+However, because all of this data is stored in one ZFS filesystem,
+it must all be sent and received together.
+In addition, the owner of the data
+wants to take advantage of features like compression, checksumming, and
+snapshots, so they do want to continue to use ZFS to store and transmit their
+data.
+Redaction can help them do so.
+First, they would make two clones of a snapshot of the data on the source.
+In one clone, they create the setup they want their market research team to see;
+they delete the usernames and passwords file,
+and overwrite the click tracking data with dummy information.
+In another, they create the setup they want the development teams
+to see, by replacing the passwords with fake information and replacing the
+purchase histories with randomly generated ones.
+They would then create a redaction bookmark on the parent snapshot,
+using snapshots on the two clones as redaction snapshots.
+The parent can then be sent, redacted, to the target
+server where the research and development teams have access.
+Finally, incremental sends from the parent snapshot to each of the clones can be
+sent
+to and received on the target server; these snapshots are identical to the
+ones on the source, and are ready to be used, while the parent snapshot on the
+target contains none of the username and password data present on the source,
+because it was removed by the redacted send operation.
+.
+.Sh SIGNALS
+See
+.Fl v .
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 12, 13 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Remotely Replicating ZFS Data
+The following commands send a full stream and then an incremental stream to a
+remote machine, restoring them into
+.Em poolB/received/fs@a
+and
+.Em poolB/received/fs@b ,
+respectively.
+.Em poolB
+must contain the file system
+.Em poolB/received ,
+and must not initially contain
+.Em poolB/received/fs .
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm send Ar pool/fs@a |
+.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/received/fs Ns @ Ns Ar a
+.No # Nm zfs Cm send Fl i Ar a pool/fs@b |
+.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/received/fs
+.Ed
+.
+.Ss Example 2 : No Using the Nm zfs Cm receive Fl d No Option
+The following command sends a full stream of
+.Ar poolA/fsA/fsB@snap
+to a remote machine, receiving it into
+.Ar poolB/received/fsA/fsB@snap .
+The
+.Ar fsA/fsB@snap
+portion of the received snapshot's name is determined from the name of the sent
+snapshot.
+.Ar poolB
+must contain the file system
+.Ar poolB/received .
+If
+.Ar poolB/received/fsA
+does not exist, it is created as an empty file system.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm send Ar poolA/fsA/fsB@snap |
+.No " " Nm ssh Ar host Nm zfs Cm receive Fl d Ar poolB/received
+.Ed
+.
+.Sh SEE ALSO
+.Xr zfs-bookmark 8 ,
+.Xr zfs-receive 8 ,
+.Xr zfs-redact 8 ,
+.Xr zfs-snapshot 8
diff --git a/share/man/man8/zfs-release.8 b/share/man/man8/zfs-release.8
@@ -0,0 +1,114 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd June 30, 2019
+.Dt ZFS-HOLD 8
+.Os
+.
+.Sh NAME
+.Nm zfs-hold
+.Nd hold ZFS snapshots to prevent their removal
+.Sh SYNOPSIS
+.Nm zfs
+.Cm hold
+.Op Fl r
+.Ar tag Ar snapshot Ns …
+.Nm zfs
+.Cm holds
+.Op Fl rHp
+.Ar snapshot Ns …
+.Nm zfs
+.Cm release
+.Op Fl r
+.Ar tag Ar snapshot Ns …
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm hold
+.Op Fl r
+.Ar tag Ar snapshot Ns …
+.Xc
+Adds a single reference, named with the
+.Ar tag
+argument, to the specified snapshots.
+Each snapshot has its own tag namespace, and tags must be unique within that
+space.
+.Pp
+If a hold exists on a snapshot, attempts to destroy that snapshot by using the
+.Nm zfs Cm destroy
+command return
+.Sy EBUSY .
+.Bl -tag -width "-r"
+.It Fl r
+Specifies that a hold with the given tag is applied recursively to the snapshots
+of all descendent file systems.
+.El
+.It Xo
+.Nm zfs
+.Cm holds
+.Op Fl rHp
+.Ar snapshot Ns …
+.Xc
+Lists all existing user references for the given snapshot or snapshots.
+.Bl -tag -width "-r"
+.It Fl r
+Lists the holds that are set on the named descendent snapshots, in addition to
+listing the holds on the named snapshot.
+.It Fl H
+Do not print headers; use tab-delimited output.
+.It Fl p
+Prints hold timestamps as Unix epoch timestamps.
+.El
+.It Xo
+.Nm zfs
+.Cm release
+.Op Fl r
+.Ar tag Ar snapshot Ns …
+.Xc
+Removes a single reference, named with the
+.Ar tag
+argument, from the specified snapshot or snapshots.
+The tag must already exist for each snapshot.
+If a hold exists on a snapshot, attempts to destroy that snapshot by using the
+.Nm zfs Cm destroy
+command return
+.Sy EBUSY .
+.Bl -tag -width "-r"
+.It Fl r
+Recursively releases a hold with the given tag on the snapshots of all
+descendent file systems.
+.El
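+.Pp
+For example, a snapshot can be protected, its holds listed, and the hold
+released again (tag and snapshot names hypothetical):
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm hold Ar keep pool/home@snap
+.No # Nm zfs Cm holds Ar pool/home@snap
+.No # Nm zfs Cm release Ar keep pool/home@snap
+.Ed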
+.El
+.
+.Sh SEE ALSO
+.Xr zfs-destroy 8
diff --git a/share/man/man8/zfs-rename.8 b/share/man/man8/zfs-rename.8
@@ -0,0 +1,160 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd March 16, 2022
+.Dt ZFS-RENAME 8
+.Os
+.
+.Sh NAME
+.Nm zfs-rename
+.Nd rename ZFS dataset
+.Sh SYNOPSIS
+.Nm zfs
+.Cm rename
+.Op Fl f
+.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot
+.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot
+.Nm zfs
+.Cm rename
+.Fl p
+.Op Fl f
+.Ar filesystem Ns | Ns Ar volume
+.Ar filesystem Ns | Ns Ar volume
+.Nm zfs
+.Cm rename
+.Fl u
+.Op Fl f
+.Ar filesystem Ar filesystem
+.Nm zfs
+.Cm rename
+.Fl r
+.Ar snapshot Ar snapshot
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm rename
+.Op Fl f
+.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot
+.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot
+.Xc
+.It Xo
+.Nm zfs
+.Cm rename
+.Fl p
+.Op Fl f
+.Ar filesystem Ns | Ns Ar volume
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+.It Xo
+.Nm zfs
+.Cm rename
+.Fl u
+.Op Fl f
+.Ar filesystem
+.Ar filesystem
+.Xc
+Renames the given dataset.
+The new target can be located anywhere in the ZFS hierarchy, with the exception
+of snapshots.
+Snapshots can only be renamed within the parent file system or volume.
+When renaming a snapshot, the parent file system of the snapshot does not need
+to be specified as part of the second argument.
+Renamed file systems can inherit new mount points, in which case they are
+unmounted and remounted at the new mount point.
+.Bl -tag -width "-a"
+.It Fl f
+Force unmount any file systems that need to be unmounted in the process.
+This flag has no effect if used together with the
+.Fl u
+flag.
+.It Fl p
+Creates all the nonexistent parent datasets.
+Datasets created in this manner are automatically mounted according to the
+.Sy mountpoint
+property inherited from their parent.
+.It Fl u
+Do not remount file systems during rename.
+If a file system's
+.Sy mountpoint
+property is set to
+.Sy legacy
+or
+.Sy none ,
+the file system is not unmounted even if this option is not given.
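+.Pp
+For example, to rename a file system without remounting it
+(names hypothetical):
+.Dl # Nm zfs Cm rename Fl u Ar pool/users pool/home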
+.El
+.It Xo
+.Nm zfs
+.Cm rename
+.Fl r
+.Ar snapshot Ar snapshot
+.Xc
+Recursively rename the snapshots of all descendent datasets.
+Snapshots are the only type of dataset that can be renamed recursively.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 10, 15 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Promoting a ZFS Clone
+The following commands illustrate how to test out changes to a file system, and
+then replace the original file system with the changed one, using clones, clone
+promotion, and renaming:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm create Ar pool/project/production
+ populate /pool/project/production with data
+.No # Nm zfs Cm snapshot Ar pool/project/production Ns @ Ns Ar today
+.No # Nm zfs Cm clone Ar pool/project/production@today pool/project/beta
+ make changes to /pool/project/beta and test them
+.No # Nm zfs Cm promote Ar pool/project/beta
+.No # Nm zfs Cm rename Ar pool/project/production pool/project/legacy
+.No # Nm zfs Cm rename Ar pool/project/beta pool/project/production
+ once the legacy version is no longer needed, it can be destroyed
+.No # Nm zfs Cm destroy Ar pool/project/legacy
+.Ed
+.
+.Ss Example 2 : No Performing a Rolling Snapshot
+The following example shows how to maintain a history of snapshots with a
+consistent naming scheme.
+To keep a week's worth of snapshots, the user destroys the oldest snapshot,
+renames the remaining snapshots, and then creates a new snapshot, as follows:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm destroy Fl r Ar pool/users@7daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@6daysago No @ Ns Ar 7daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@5daysago No @ Ns Ar 6daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@4daysago No @ Ns Ar 5daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@3daysago No @ Ns Ar 4daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@2daysago No @ Ns Ar 3daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@yesterday No @ Ns Ar 2daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@today No @ Ns Ar yesterday
+.No # Nm zfs Cm snapshot Fl r Ar pool/users Ns @ Ns Ar today
+.Ed
diff --git a/share/man/man8/zfs-rollback.8 b/share/man/man8/zfs-rollback.8
@@ -0,0 +1,86 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd March 16, 2022
+.Dt ZFS-ROLLBACK 8
+.Os
+.
+.Sh NAME
+.Nm zfs-rollback
+.Nd roll ZFS dataset back to snapshot
+.Sh SYNOPSIS
+.Nm zfs
+.Cm rollback
+.Op Fl Rfr
+.Ar snapshot
+.
+.Sh DESCRIPTION
+When a dataset is rolled back, all data that has changed since the snapshot is
+discarded, and the dataset reverts to the state at the time of the snapshot.
+By default, the command refuses to roll back to a snapshot other than the most
+recent one.
+In order to do so, all intermediate snapshots and bookmarks must be destroyed by
+specifying the
+.Fl r
+option.
+.Pp
+The
+.Fl rR
+options do not recursively destroy the child snapshots of a recursive snapshot.
+Only direct snapshots of the specified filesystem are destroyed by either of
+these options.
+To completely roll back a recursive snapshot, you must roll back the individual
+child snapshots.
+.Bl -tag -width "-R"
+.It Fl R
+Destroy any more recent snapshots and bookmarks, as well as any clones of those
+snapshots.
+.It Fl f
+Used with the
+.Fl R
+option to force an unmount of any clone file systems that are to be destroyed.
+.It Fl r
+Destroy any snapshots and bookmarks more recent than the one specified.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 8 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 8 : No Rolling Back a ZFS File System
+The following command reverts the contents of
+.Ar pool/home/anne
+to the snapshot named
+.Ar yesterday ,
+deleting all intermediate snapshots:
+.Dl # Nm zfs Cm rollback Fl r Ar pool/home/anne Ns @ Ns Ar yesterday
+.
+.Sh SEE ALSO
+.Xr zfs-snapshot 8
diff --git a/share/man/man8/zfs-send.8 b/share/man/man8/zfs-send.8
@@ -0,0 +1,738 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\" Copyright (c) 2024, Klara, Inc.
+.\"
+.Dd October 2, 2024
+.Dt ZFS-SEND 8
+.Os
+.
+.Sh NAME
+.Nm zfs-send
+.Nd generate backup stream of ZFS dataset
+.Sh SYNOPSIS
+.Nm zfs
+.Cm send
+.Op Fl DLPVbcehnpsvw
+.Op Fl R Op Fl X Ar dataset Ns Oo , Ns Ar dataset Oc Ns …
+.Op Oo Fl I Ns | Ns Fl i Oc Ar snapshot
+.Ar snapshot
+.Nm zfs
+.Cm send
+.Op Fl DLPVcensvw
+.Op Fl i Ar snapshot Ns | Ns Ar bookmark
+.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot
+.Nm zfs
+.Cm send
+.Fl -redact Ar redaction_bookmark
+.Op Fl DLPVcenpv
+.Op Fl i Ar snapshot Ns | Ns Ar bookmark
+.Ar snapshot
+.Nm zfs
+.Cm send
+.Op Fl PVenv
+.Fl t
+.Ar receive_resume_token
+.Nm zfs
+.Cm send
+.Op Fl PVnv
+.Fl S Ar filesystem
+.Nm zfs
+.Cm redact
+.Ar snapshot redaction_bookmark
+.Ar redaction_snapshot Ns …
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm send
+.Op Fl DLPVbcehnpsvw
+.Op Fl R Op Fl X Ar dataset Ns Oo , Ns Ar dataset Oc Ns …
+.Op Oo Fl I Ns | Ns Fl i Oc Ar snapshot
+.Ar snapshot
+.Xc
+Creates a stream representation of the second
+.Ar snapshot ,
+which is written to standard output.
+The output can be redirected to a file or to a different system
+.Po for example, using
+.Xr ssh 1
+.Pc .
+By default, a full stream is generated.
+.Bl -tag -width "-D"
+.It Fl D , -dedup
+Deduplicated send is no longer supported.
+This flag is accepted for backwards compatibility, but a regular,
+non-deduplicated stream will be generated.
+.It Fl I Ar snapshot
+Generate a stream package that sends all intermediary snapshots from the first
+snapshot to the second snapshot.
+For example,
+.Fl I Em @a Em fs@d
+is similar to
+.Fl i Em @a Em fs@b Ns \&; Fl i Em @b Em fs@c Ns \&; Fl i Em @c Em fs@d .
+The incremental source may be specified as with the
+.Fl i
+option.
+.It Fl L , -large-block
+Generate a stream which may contain blocks larger than 128 KiB.
+This flag has no effect if the
+.Sy large_blocks
+pool feature is disabled, or if the
+.Sy recordsize
+property of this filesystem has never been set above 128 KiB.
+The receiving system must have the
+.Sy large_blocks
+pool feature enabled as well.
+This flag is required if the
+.Sy large_microzap
+pool feature is active.
+See
+.Xr zpool-features 7
+for details on ZFS feature flags and the
+.Sy large_blocks
+feature.
+.It Fl P , -parsable
+Print machine-parsable verbose information about the stream package generated.
+.It Fl R , -replicate
+Generate a replication stream package, which will replicate the specified
+file system, and all descendent file systems, up to the named snapshot.
+When received, all properties, snapshots, descendent file systems, and clones
+are preserved.
+.Pp
+If the
+.Fl i
+or
+.Fl I
+flags are used in conjunction with the
+.Fl R
+flag, an incremental replication stream is generated.
+The current values of properties, and current snapshot and file system names are
+set when the stream is received.
+If the
+.Fl F
+flag is specified when this stream is received, snapshots and file systems that
+do not exist on the sending side are destroyed.
+If the
+.Fl R
+flag is used to send encrypted datasets, then
+.Fl w
+must also be specified.
+.It Fl V , -proctitle
+Set the process title to a per-second report of how much data has been sent.
+.It Fl X , -exclude Ar dataset Ns Oo , Ns Ar dataset Oc Ns …
+With
+.Fl R ,
+.Fl X
+specifies a set of datasets (and, hence, their descendants)
+to be excluded from the send stream.
+The root dataset may not be excluded.
+.Fl X Ar a Fl X Ar b
+is equivalent to
+.Fl X Ar a , Ns Ar b .
+.It Fl e , -embed
+Generate a more compact stream by using
+.Sy WRITE_EMBEDDED
+records for blocks which are stored more compactly on disk by the
+.Sy embedded_data
+pool feature.
+This flag has no effect if the
+.Sy embedded_data
+feature is disabled.
+The receiving system must have the
+.Sy embedded_data
+feature enabled.
+If the
+.Sy lz4_compress
+feature is active on the sending system, then the receiving system must have
+that feature enabled as well.
+Datasets that are sent with this flag may not be
+received as an encrypted dataset, since encrypted datasets cannot use the
+.Sy embedded_data
+feature.
+See
+.Xr zpool-features 7
+for details on ZFS feature flags and the
+.Sy embedded_data
+feature.
+.It Fl b , -backup
+Sends only received property values whether or not they are overridden by local
+settings, but only if the dataset has ever been received.
+Use this option when you want
+.Nm zfs Cm receive
+to restore received properties backed up on the sent dataset and to avoid
+sending local settings that may have nothing to do with the source dataset,
+but only with how the data is backed up.
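+.Pp
+For instance, to back up a previously received dataset using only its
+received properties (names hypothetical):
+.Dl # Nm zfs Cm send Fl b Ar pool/received/fs@snap | Nm zfs Cm receive Ar poolB/fs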
+.It Fl c , -compressed
+Generate a more compact stream by using compressed WRITE records for blocks
+which are compressed on disk and in memory
+.Po see the
+.Sy compression
+property for details
+.Pc .
+If the
+.Sy lz4_compress
+feature is active on the sending system, then the receiving system must have
+that feature enabled as well.
+If the
+.Sy large_blocks
+feature is enabled on the sending system but the
+.Fl L
+option is not supplied in conjunction with
+.Fl c ,
+then the data will be decompressed before sending so it can be split into
+smaller block sizes.
+Streams sent with
+.Fl c
+will not have their data recompressed on the receiver side using
+.Fl o Sy compress Ns = Ar value .
+The data will stay compressed as it was from the sender.
+The new compression property will be set for future data.
+Note that the receiver will still attempt to compress uncompressed data
+from the sender, unless you specify
+.Fl o Sy compress Ns = Em off .
+.It Fl w , -raw
+For encrypted datasets, send data exactly as it exists on disk.
+This allows backups to be taken even if encryption keys are not currently
+loaded.
+The backup may then be received on an untrusted machine since that machine will
+not have the encryption keys to read the protected data or alter it without
+being detected.
+Upon being received, the dataset will have the same encryption
+keys as it did on the send side, although the
+.Sy keylocation
+property will be defaulted to
+.Sy prompt
+if not otherwise provided.
+For unencrypted datasets, this flag will be equivalent to
+.Fl Lec .
+Note that if you do not use this flag for sending encrypted datasets, data will
+be sent unencrypted and may be re-encrypted with a different encryption key on
+the receiving system, which will disable the ability to do a raw send to that
+system for incrementals.
+.It Fl h , -holds
+Generate a stream package that includes any snapshot holds (created with the
+.Nm zfs Cm hold
+command), and indicates to
+.Nm zfs Cm receive
+that the holds should be applied to the dataset on the receiving system.
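+.Pp
+For example, to carry existing snapshot holds over to a receiving system
+(names hypothetical):
+.Dl # Nm zfs Cm send Fl h Ar pool/fs@snap | Nm ssh Ar host Nm zfs Cm receive Ar poolB/fs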
+.It Fl i Ar snapshot
+Generate an incremental stream from the first
+.Ar snapshot
+.Pq the incremental source
+to the second
+.Ar snapshot
+.Pq the incremental target .
+The incremental source can be specified as the last component of the snapshot
+name
+.Po the
+.Sy @
+character and following
+.Pc
+and it is assumed to be from the same file system as the incremental target.
+.Pp
+If the destination is a clone, the source may be the origin snapshot, which must
+be fully specified
+.Po for example,
+.Em pool/fs@origin ,
+not just
+.Em @origin
+.Pc .
+.It Fl n , -dryrun
+Do a dry-run
+.Pq Qq No-op
+send.
+Do not generate any actual send data.
+This is useful in conjunction with the
+.Fl v
+or
+.Fl P
+flags to determine what data will be sent.
+In this case, the verbose output will be written to standard output
+.Po contrast with a non-dry-run, where the stream is written to standard output
+and the verbose output goes to standard error
+.Pc .
+.It Fl p , -props
+Include the dataset's properties in the stream.
+This flag is implicit when
+.Fl R
+is specified.
+The receiving system must also support this feature.
+Sends of encrypted datasets must use
+.Fl w
+when using this flag.
+.It Fl s , -skip-missing
+Allows sending a replication stream even when there are snapshots missing in the
+hierarchy.
+When a snapshot is missing, instead of throwing an error and aborting the send,
+a warning is printed to the standard error stream, and the dataset to which
+it belongs and its descendents are skipped.
+This flag can only be used in conjunction with
+.Fl R .
+.It Fl v , -verbose
+Print verbose information about the stream package generated.
+This information includes a per-second report of how much data has been sent.
+The same report can be requested by sending
+.Dv SIGINFO
+or
+.Dv SIGUSR1 ,
+regardless of
+.Fl v .
+.Pp
+The format of the stream is committed.
+You will be able to receive your streams on future versions of ZFS.
+.El
+.It Xo
+.Nm zfs
+.Cm send
+.Op Fl DLPVcenvw
+.Op Fl i Ar snapshot Ns | Ns Ar bookmark
+.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot
+.Xc
+Generate a send stream, which may be of a filesystem, and may be incremental
+from a bookmark.
+If the destination is a filesystem or volume, the pool must be read-only, or the
+filesystem must not be mounted.
+When the stream generated from a filesystem or volume is received, the default
+snapshot name will be
+.Qq --head-- .
+.Bl -tag -width "-D"
+.It Fl D , -dedup
+Deduplicated send is no longer supported.
+This flag is accepted for backwards compatibility, but a regular,
+non-deduplicated stream will be generated.
+.It Fl L , -large-block
+Generate a stream which may contain blocks larger than 128 KiB.
+This flag has no effect if the
+.Sy large_blocks
+pool feature is disabled, or if the
+.Sy recordsize
+property of this filesystem has never been set above 128 KiB.
+The receiving system must have the
+.Sy large_blocks
+pool feature enabled as well.
+See
+.Xr zpool-features 7
+for details on ZFS feature flags and the
+.Sy large_blocks
+feature.
+.It Fl P , -parsable
+Print machine-parsable verbose information about the stream package generated.
+.It Fl c , -compressed
+Generate a more compact stream by using compressed WRITE records for blocks
+which are compressed on disk and in memory
+.Po see the
+.Sy compression
+property for details
+.Pc .
+If the
+.Sy lz4_compress
+feature is active on the sending system, then the receiving system must have
+that feature enabled as well.
+If the
+.Sy large_blocks
+feature is enabled on the sending system but the
+.Fl L
+option is not supplied in conjunction with
+.Fl c ,
+then the data will be decompressed before sending so it can be split into
+smaller block sizes.
+.It Fl w , -raw
+For encrypted datasets, send data exactly as it exists on disk.
+This allows backups to be taken even if encryption keys are not currently
+loaded.
+The backup may then be received on an untrusted machine since that machine will
+not have the encryption keys to read the protected data or alter it without
+being detected.
+Upon being received, the dataset will have the same encryption
+keys as it did on the send side, although the
+.Sy keylocation
+property will be defaulted to
+.Sy prompt
+if not otherwise provided.
+For unencrypted datasets, this flag will be equivalent to
+.Fl Lec .
+Note that if you do not use this flag for sending encrypted datasets, data will
+be sent unencrypted and may be re-encrypted with a different encryption key on
+the receiving system, which will disable the ability to do a raw send to that
+system for incrementals.
+.It Fl e , -embed
+Generate a more compact stream by using
+.Sy WRITE_EMBEDDED
+records for blocks which are stored more compactly on disk by the
+.Sy embedded_data
+pool feature.
+This flag has no effect if the
+.Sy embedded_data
+feature is disabled.
+The receiving system must have the
+.Sy embedded_data
+feature enabled.
+If the
+.Sy lz4_compress
+feature is active on the sending system, then the receiving system must have
+that feature enabled as well.
+Datasets that are sent with this flag may not be received as an encrypted
+dataset,
+since encrypted datasets cannot use the
+.Sy embedded_data
+feature.
+See
+.Xr zpool-features 7
+for details on ZFS feature flags and the
+.Sy embedded_data
+feature.
+.It Fl i Ar snapshot Ns | Ns Ar bookmark
+Generate an incremental send stream.
+The incremental source must be an earlier snapshot in the destination's history.
+It will commonly be an earlier snapshot in the destination's file system, in
+which case it can be specified as the last component of the name
+.Po the
+.Sy #
+or
+.Sy @
+character and following
+.Pc .
+.Pp
+If the incremental target is a clone, the incremental source can be the origin
+snapshot, or an earlier snapshot in the origin's filesystem, or the origin's
+origin, etc.
+.It Fl n , -dryrun
+Do a dry-run
+.Pq Qq No-op
+send.
+Do not generate any actual send data.
+This is useful in conjunction with the
+.Fl v
+or
+.Fl P
+flags to determine what data will be sent.
+In this case, the verbose output will be written to standard output
+.Po contrast with a non-dry-run, where the stream is written to standard output
+and the verbose output goes to standard error
+.Pc .
+.It Fl v , -verbose
+Print verbose information about the stream package generated.
+This information includes a per-second report of how much data has been sent.
+The same report can be requested by sending
+.Dv SIGINFO
+or
+.Dv SIGUSR1 ,
+regardless of
+.Fl v .
+.El
+.It Xo
+.Nm zfs
+.Cm send
+.Fl -redact Ar redaction_bookmark
+.Op Fl DLPVcenpv
+.Op Fl i Ar snapshot Ns | Ns Ar bookmark
+.Ar snapshot
+.Xc
+Generate a redacted send stream.
+This send stream contains all blocks from the snapshot being sent that aren't
+included in the redaction list contained in the bookmark specified by the
+.Fl -redact
+(or
+.Fl d )
+flag.
+The resulting send stream is said to be redacted with respect to the snapshots
+the bookmark specified by the
+.Fl -redact No flag was created with .
+The bookmark must have been created by running
+.Nm zfs Cm redact
+on the snapshot being sent.
+.Pp
+This feature can be used to allow clones of a filesystem to be made available on
+a remote system, in the case where their parent need not (or must not) be
+usable.
+For example, if a filesystem contains sensitive data, and it has clones where
+that sensitive data has been secured or replaced with dummy data, redacted sends
+can be used to replicate the secured data without replicating the original
+sensitive data, while still sharing all possible blocks.
+A snapshot that has been redacted with respect to a set of snapshots will
+contain all blocks referenced by at least one snapshot in the set, but will
+omit every block that none of the snapshots in the set references.
+In other words, if all snapshots in the set have modified a given block in the
+parent, that block will not be sent; but if one or more snapshots have not
+modified a block in the parent, they will still reference the parent's block, so
+that block will be sent.
+Note that only user data will be redacted.
+.Pp
+When the redacted send stream is received, a redacted snapshot is
+generated.
+Due to the nature of redaction, a redacted dataset can only be used in the
+following ways:
+.Bl -enum -width "a."
+.It
+To receive, as a clone, an incremental send from the original snapshot to one
+of the snapshots it was redacted with respect to.
+In this case, the stream will produce a valid dataset when received because all
+blocks that were redacted in the parent are guaranteed to be present in the
+child's send stream.
+This use case will produce a normal snapshot, which can be used just like other
+snapshots.
+.
+.It
+To receive an incremental send from the original snapshot to something
+redacted with respect to a subset of the set of snapshots the initial snapshot
+was redacted with respect to.
+In this case, each block that was redacted in the original is still redacted;
+redacting with respect to additional snapshots causes less data to be redacted,
+because the snapshots define what is permitted, and everything else is
+redacted.
+This use case will produce a new redacted snapshot.
+.It
+To receive an incremental send from a redaction bookmark of the original
+snapshot that was created when redacting with respect to a subset of the set of
+snapshots the initial snapshot was redacted with respect to.
+A send stream from such a redaction bookmark will contain all of the blocks
+necessary to fill in any redacted data, should it be needed, because the sending
+system is aware of what blocks were originally redacted.
+This will either produce a normal snapshot or a redacted one, depending on
+whether the new send stream is redacted.
+.It
+To receive an incremental send from a redacted version of the initial
+snapshot that is redacted with respect to a subset of the set of snapshots the
+initial snapshot was created with respect to.
+A send stream from a compatible redacted dataset will contain all of the blocks
+necessary to fill in any redacted data.
+This will either produce a normal snapshot or a redacted one, depending on
+whether the new send stream is redacted.
+.It
+To receive a full send as a clone of the redacted snapshot.
+Since the stream is a full send, it definitionally contains all the data needed
+to create a new dataset.
+This use case will either produce a normal snapshot or a redacted one, depending
+on whether the full send stream was redacted.
+.El
+.Pp
+These restrictions are detected and enforced by
+.Nm zfs Cm receive ;
+a redacted send stream will contain the list of snapshots that the stream is
+redacted with respect to.
+These are stored with the redacted snapshot, and are used to detect and
+correctly handle the cases above.
+Note that for technical reasons,
+raw sends and redacted sends cannot be combined at this time.
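+.Pp
+For example, assuming an illustrative bookmark named
+.Ar book
+created by
+.Nm zfs Cm redact
+on
+.Ar pool/fs@snap ,
+a redacted stream can be generated and sent with:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm send Fl -redact Ar book pool/fs@snap |
+.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/fs
+.Ed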
+.It Xo
+.Nm zfs
+.Cm send
+.Op Fl PVenv
+.Fl t
+.Ar receive_resume_token
+.Xc
+Creates a send stream which resumes an interrupted receive.
+The
+.Ar receive_resume_token
+is the value of this property on the filesystem or volume that was being
+received into.
+See the documentation for
+.Nm zfs Cm receive Fl s
+for more details.
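+.Pp
+For example, assuming an interrupted receive into an illustrative dataset
+.Ar poolB/fs ,
+the token can be read back and used to resume the transfer:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm get Fl H o Sy value receive_resume_token Ar poolB/fs
+.No # Nm zfs Cm send Fl t Ar token | Nm zfs Cm receive Fl s Ar poolB/fs
+.Ed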
+.It Xo
+.Nm zfs
+.Cm send
+.Op Fl PVnv
+.Op Fl i Ar snapshot Ns | Ns Ar bookmark
+.Fl S
+.Ar filesystem
+.Xc
+Generate a send stream from a dataset that has been partially received.
+.Bl -tag -width "-L"
+.It Fl S , -saved
+This flag requires that the specified filesystem previously received a resumable
+send that did not finish and was interrupted.
+In such scenarios this flag
+enables the user to send this partially received state.
+Using this flag will always use the last fully received snapshot
+as the incremental source if it exists.
+.El
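+.Pp
+For example, the partially received state of an illustrative dataset can be
+forwarded to another pool:
+.Dl # Nm zfs Cm send Fl S Ar pool/fs | Nm zfs Cm receive Fl s Ar backup/fs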
+.It Xo
+.Nm zfs
+.Cm redact
+.Ar snapshot redaction_bookmark
+.Ar redaction_snapshot Ns …
+.Xc
+Generate a new redaction bookmark.
+In addition to the typical bookmark information, a redaction bookmark contains
+the list of redacted blocks and the list of redaction snapshots specified.
+The redacted blocks are blocks in the snapshot which are not referenced by any
+of the redaction snapshots.
+These blocks are found by iterating over the metadata in each redaction snapshot
+to determine what has been changed since the target snapshot.
+Redaction is designed to support redacted zfs sends; see the entry for
+.Nm zfs Cm send
+for more information on the purpose of this operation.
+If a redact operation fails partway through (due to an error or a system
+failure), the redaction can be resumed by rerunning the same command.
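+.Pp
+For example, to create an illustrative redaction bookmark
+.Ar book
+on
+.Ar pool/fs@snap
+with respect to two redaction snapshots taken on clones:
+.Dl # Nm zfs Cm redact Ar pool/fs@snap book pool/clone1@snap1 pool/clone2@snap1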
+.El
+.Ss Redaction
+ZFS has support for a limited version of data subsetting, in the form of
+redaction.
+Using the
+.Nm zfs Cm redact
+command, a
+.Sy redaction bookmark
+can be created that stores a list of blocks containing sensitive information.
+When provided to
+.Nm zfs Cm send ,
+this causes a
+.Sy redacted send
+to occur.
+Redacted sends omit the blocks containing sensitive information,
+replacing them with REDACT records.
+When these send streams are received, a
+.Sy redacted dataset
+is created.
+A redacted dataset cannot be mounted by default, since it is incomplete.
+It can be used to receive other send streams.
+In this way datasets can be used for data backup and replication,
+with all the benefits that zfs send and receive have to offer,
+while protecting sensitive information from being
+stored on less-trusted machines or services.
+.Pp
+For the purposes of redaction, the process has two steps:
+a redact step, and a send/receive step.
+First, a redaction bookmark is created.
+This is done by providing the
+.Nm zfs Cm redact
+command with a parent snapshot, a bookmark to be created, and a number of
+redaction snapshots.
+These redaction snapshots must be descendants of the parent snapshot,
+and they should modify data that is considered sensitive in some way.
+Any blocks of data modified by all of the redaction snapshots will
+be listed in the redaction bookmark, because they represent the truly
+sensitive information.
+When it comes to the send step, the send process will not send
+the blocks listed in the redaction bookmark, instead replacing them with
+REDACT records.
+When received on the target system, this will create a
+redacted dataset, missing the data that corresponds to the blocks in the
+redaction bookmark on the sending system.
+The incremental send streams from
+the original parent to the redaction snapshots can then also be received on
+the target system, and this will produce a complete snapshot that can be used
+normally.
+Incremental sends from one snapshot on the parent filesystem to another
+can also be done by sending from the redaction bookmark, rather than from the
+snapshots themselves.
+.Pp
+To make the purpose of the feature clearer, an example is provided.
+Consider a zfs filesystem containing four files.
+These files represent information for an online shopping service.
+One file contains a list of usernames and passwords, another contains purchase
+histories,
+a third contains click tracking data, and a fourth contains user preferences.
+The owner of this data wants to make it available for their development teams to
+test against, and their market research teams to do analysis on.
+The development teams need information about user preferences and the click
+tracking data, while the market research teams need information about purchase
+histories and user preferences.
+Neither needs access to the usernames and passwords.
+However, because all of this data is stored in one ZFS filesystem,
+it must all be sent and received together.
+In addition, the owner of the data
+wants to take advantage of features like compression, checksumming, and
+snapshots, so they do want to continue to use ZFS to store and transmit their
+data.
+Redaction can help them do so.
+First, they would make two clones of a snapshot of the data on the source.
+In one clone, they create the setup they want their market research team to see;
+they delete the usernames and passwords file,
+and overwrite the click tracking data with dummy information.
+In another, they create the setup they want the development teams
+to see, by replacing the passwords with fake information and replacing the
+purchase histories with randomly generated ones.
+They would then create a redaction bookmark on the parent snapshot,
+using snapshots on the two clones as redaction snapshots.
+The parent can then be sent, redacted, to the target
+server where the research and development teams have access.
+Finally, incremental sends from the parent snapshot to each of the clones can be
+sent
+to and received on the target server; these snapshots are identical to the
+ones on the source, and are ready to be used, while the parent snapshot on the
+target contains none of the username and password data present on the source,
+because it was removed by the redacted send operation.
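+.Pp
+The workflow sketched above, with illustrative pool, dataset, and snapshot
+names:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm snapshot Ar tank/shop@base
+.No # Nm zfs Cm clone Ar tank/shop@base tank/research
+.No # Nm zfs Cm clone Ar tank/shop@base tank/dev
+   sanitize the sensitive files in each clone
+.No # Nm zfs Cm snapshot Ar tank/research@clean
+.No # Nm zfs Cm snapshot Ar tank/dev@clean
+.No # Nm zfs Cm redact Ar tank/shop@base book tank/research@clean tank/dev@clean
+.No # Nm zfs Cm send Fl -redact Ar book tank/shop@base |
+.No " " Nm ssh Ar host Nm zfs Cm receive Ar pool/shop
+.No # Nm zfs Cm send Fl i Ar tank/shop@base tank/research@clean |
+.No " " Nm ssh Ar host Nm zfs Cm receive Ar pool/research
+.Ed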
+.
+.Sh SIGNALS
+See
+.Fl v .
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 12, 13 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Remotely Replicating ZFS Data
+The following commands send a full stream and then an incremental stream to a
+remote machine, restoring them into
+.Em poolB/received/fs@a
+and
+.Em poolB/received/fs@b ,
+respectively.
+.Em poolB
+must contain the file system
+.Em poolB/received ,
+and must not initially contain
+.Em poolB/received/fs .
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm send Ar pool/fs@a |
+.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/received/fs Ns @ Ns Ar a
+.No # Nm zfs Cm send Fl i Ar a pool/fs@b |
+.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/received/fs
+.Ed
+.
+.Ss Example 2 : No Using the Nm zfs Cm receive Fl d No Option
+The following command sends a full stream of
+.Ar poolA/fsA/fsB@snap
+to a remote machine, receiving it into
+.Ar poolB/received/fsA/fsB@snap .
+The
+.Ar fsA/fsB@snap
+portion of the received snapshot's name is determined from the name of the sent
+snapshot.
+.Ar poolB
+must contain the file system
+.Ar poolB/received .
+If
+.Ar poolB/received/fsA
+does not exist, it is created as an empty file system.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm send Ar poolA/fsA/fsB@snap |
+.No " " Nm ssh Ar host Nm zfs Cm receive Fl d Ar poolB/received
+.Ed
+.
+.Sh SEE ALSO
+.Xr zfs-bookmark 8 ,
+.Xr zfs-receive 8 ,
+.Xr zfs-redact 8 ,
+.Xr zfs-snapshot 8
diff --git a/share/man/man8/zfs-set.8 b/share/man/man8/zfs-set.8
@@ -0,0 +1,376 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd April 20, 2024
+.Dt ZFS-SET 8
+.Os
+.
+.Sh NAME
+.Nm zfs-set
+.Nd set properties on ZFS datasets
+.Sh SYNOPSIS
+.Nm zfs
+.Cm set
+.Op Fl u
+.Ar property Ns = Ns Ar value Oo Ar property Ns = Ns Ar value Oc Ns …
+.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns …
+.Nm zfs
+.Cm get
+.Op Fl r Ns | Ns Fl d Ar depth
+.Op Fl Hp
+.Op Fl j Op Ar --json-int
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar source Ns Oo , Ns Ar source Oc Ns … Oc
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Cm all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns …
+.Oo Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns | Ns Ar bookmark Oc Ns …
+.Nm zfs
+.Cm inherit
+.Op Fl rS
+.Ar property Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns …
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm set
+.Op Fl u
+.Ar property Ns = Ns Ar value Oo Ar property Ns = Ns Ar value Oc Ns …
+.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns …
+.Xc
+Only some properties can be edited.
+See
+.Xr zfsprops 7
+for more information on what properties can be set and acceptable
+values.
+Numeric values can be specified as exact values, or in a human-readable form
+with a suffix of
+.Sy B , K , M , G , T , P , E , Z
+.Po for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes,
+or zettabytes, respectively
+.Pc .
+User properties can be set on snapshots.
+For more information, see the
+.Em User Properties
+section of
+.Xr zfsprops 7 .
+.Bl -tag -width "-u"
+.It Fl u
+Update the mountpoint, sharenfs, or sharesmb property, but do not mount or
+share the dataset.
+.El
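+.Pp
+For example, the following updates the mount point of an illustrative dataset
+without remounting it:
+.Dl # Nm zfs Cm set Fl u Sy mountpoint Ns = Ns Ar /newhome pool/home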
+.It Xo
+.Nm zfs
+.Cm get
+.Op Fl r Ns | Ns Fl d Ar depth
+.Op Fl Hp
+.Op Fl j Op Ar --json-int
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar source Ns Oo , Ns Ar source Oc Ns … Oc
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Cm all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns …
+.Oo Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns | Ns Ar bookmark Oc Ns …
+.Xc
+Displays properties for the given datasets.
+If no datasets are specified, then the command displays properties for all
+datasets on the system.
+For each property, the following columns are displayed:
+.Bl -tag -compact -offset 4n -width "property"
+.It Sy name
+Dataset name
+.It Sy property
+Property name
+.It Sy value
+Property value
+.It Sy source
+Property source
+.Sy local , default , inherited , temporary , received , No or Sy - Pq none .
+.El
+.Pp
+All columns are displayed by default, though this can be controlled by using the
+.Fl o
+option.
+This command takes a comma-separated list of properties as described in the
+.Sx Native Properties
+and
+.Sx User Properties
+sections of
+.Xr zfsprops 7 .
+.Pp
+The value
+.Sy all
+can be used to display all properties that apply to the given dataset's type
+.Pq Sy filesystem , volume , snapshot , No or Sy bookmark .
+.Bl -tag -width "-s source"
+.It Fl j , -json Op Ar --json-int
+Display the output in JSON format.
+Specify
+.Sy --json-int
+to display numbers in integer format instead of strings for JSON output.
+.It Fl H
+Display output in a form more easily parsed by scripts.
+Any headers are omitted, and fields are explicitly separated by a single tab
+instead of an arbitrary amount of space.
+.It Fl d Ar depth
+Recursively display any children of the dataset, limiting the recursion to
+.Ar depth .
+A depth of
+.Sy 1
+will display only the dataset and its direct children.
+.It Fl o Ar field
+A comma-separated list of columns to display, defaults to
+.Sy name , Ns Sy property , Ns Sy value , Ns Sy source .
+.It Fl p
+Display numbers in parsable
+.Pq exact
+values.
+.It Fl r
+Recursively display properties for any children.
+.It Fl s Ar source
+A comma-separated list of sources to display.
+Those properties coming from a source other than those in this list are ignored.
+Each source must be one of the following:
+.Sy local , default , inherited , temporary , received , No or Sy none .
+The default value is all sources.
+.It Fl t Ar type
+A comma-separated list of types to display, where
+.Ar type
+is one of
+.Sy filesystem , snapshot , volume , bookmark , No or Sy all .
+.Sy fs ,
+.Sy snap ,
+or
+.Sy vol
+can be used as aliases for
+.Sy filesystem ,
+.Sy snapshot ,
+or
+.Sy volume .
+.El
+.It Xo
+.Nm zfs
+.Cm inherit
+.Op Fl rS
+.Ar property Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns …
+.Xc
+Clears the specified property, causing it to be inherited from an ancestor,
+restored to default if no ancestor has the property set, or with the
+.Fl S
+option reverted to the received value if one exists.
+See
+.Xr zfsprops 7
+for a listing of default values, and details on which properties can be
+inherited.
+.Bl -tag -width "-r"
+.It Fl r
+Recursively inherit the given property for all children.
+.It Fl S
+Revert the property to the received value, if one exists;
+otherwise, for non-inheritable properties, to the default;
+otherwise, operate as if the
+.Fl S
+option was not specified.
+.El
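+.Pp
+For example, the following reverts an illustrative dataset's
+.Sy compression
+setting to its received value, if one exists:
+.Dl # Nm zfs Cm inherit Fl S Sy compression Ar pool/home/bob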
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 1, 4, 6, 7, 11, 14, 16 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Creating a ZFS File System Hierarchy
+The following commands create a file system named
+.Ar pool/home
+and a file system named
+.Ar pool/home/bob .
+The mount point
+.Pa /export/home
+is set for the parent file system, and is automatically inherited by the child
+file system.
+.Dl # Nm zfs Cm create Ar pool/home
+.Dl # Nm zfs Cm set Sy mountpoint Ns = Ns Ar /export/home pool/home
+.Dl # Nm zfs Cm create Ar pool/home/bob
+.
+.Ss Example 2 : No Disabling and Enabling File System Compression
+The following command disables the
+.Sy compression
+property for all file systems under
+.Ar pool/home .
+The next command explicitly enables
+.Sy compression
+for
+.Ar pool/home/anne .
+.Dl # Nm zfs Cm set Sy compression Ns = Ns Sy off Ar pool/home
+.Dl # Nm zfs Cm set Sy compression Ns = Ns Sy on Ar pool/home/anne
+.
+.Ss Example 3 : No Setting a Quota on a ZFS File System
+The following command sets a quota of 50 Gbytes for
+.Ar pool/home/bob :
+.Dl # Nm zfs Cm set Sy quota Ns = Ns Ar 50G pool/home/bob
+.
+.Ss Example 4 : No Listing ZFS Properties
+The following command lists all properties for
+.Ar pool/home/bob :
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm get Sy all Ar pool/home/bob
+NAME PROPERTY VALUE SOURCE
+pool/home/bob type filesystem -
+pool/home/bob creation Tue Jul 21 15:53 2009 -
+pool/home/bob used 21K -
+pool/home/bob available 20.0G -
+pool/home/bob referenced 21K -
+pool/home/bob compressratio 1.00x -
+pool/home/bob mounted yes -
+pool/home/bob quota 20G local
+pool/home/bob reservation none default
+pool/home/bob recordsize 128K default
+pool/home/bob mountpoint /pool/home/bob default
+pool/home/bob sharenfs off default
+pool/home/bob checksum on default
+pool/home/bob compression on local
+pool/home/bob atime on default
+pool/home/bob devices on default
+pool/home/bob exec on default
+pool/home/bob setuid on default
+pool/home/bob readonly off default
+pool/home/bob zoned off default
+pool/home/bob snapdir hidden default
+pool/home/bob acltype off default
+pool/home/bob aclmode discard default
+pool/home/bob aclinherit restricted default
+pool/home/bob canmount on default
+pool/home/bob xattr on default
+pool/home/bob copies 1 default
+pool/home/bob version 4 -
+pool/home/bob utf8only off -
+pool/home/bob normalization none -
+pool/home/bob casesensitivity sensitive -
+pool/home/bob vscan off default
+pool/home/bob nbmand off default
+pool/home/bob sharesmb off default
+pool/home/bob refquota none default
+pool/home/bob refreservation none default
+pool/home/bob primarycache all default
+pool/home/bob secondarycache all default
+pool/home/bob usedbysnapshots 0 -
+pool/home/bob usedbydataset 21K -
+pool/home/bob usedbychildren 0 -
+pool/home/bob usedbyrefreservation 0 -
+.Ed
+.Pp
+The following command gets a single property value:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm get Fl H o Sy value compression Ar pool/home/bob
+on
+.Ed
+.Pp
+The following command gets a single property value recursively in JSON format:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm get Fl j Fl r Sy mountpoint Ar pool/home | Nm jq
+{
+ "output_version": {
+ "command": "zfs get",
+ "vers_major": 0,
+ "vers_minor": 1
+ },
+ "datasets": {
+ "pool/home": {
+ "name": "pool/home",
+ "type": "FILESYSTEM",
+ "pool": "pool",
+ "createtxg": "10",
+ "properties": {
+ "mountpoint": {
+ "value": "/pool/home",
+ "source": {
+ "type": "DEFAULT",
+ "data": "-"
+ }
+ }
+ }
+ },
+ "pool/home/bob": {
+ "name": "pool/home/bob",
+ "type": "FILESYSTEM",
+ "pool": "pool",
+ "createtxg": "1176",
+ "properties": {
+ "mountpoint": {
+ "value": "/pool/home/bob",
+ "source": {
+ "type": "DEFAULT",
+ "data": "-"
+ }
+ }
+ }
+ }
+ }
+}
+.Ed
+.Pp
+The following command lists all properties with local settings for
+.Ar pool/home/bob :
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm get Fl r s Sy local Fl o Sy name , Ns Sy property , Ns Sy value all Ar pool/home/bob
+NAME PROPERTY VALUE
+pool/home/bob quota 20G
+pool/home/bob compression on
+.Ed
+.
+.Ss Example 5 : No Inheriting ZFS Properties
+The following command causes
+.Ar pool/home/bob No and Ar pool/home/anne
+to inherit the
+.Sy checksum
+property from their parent.
+.Dl # Nm zfs Cm inherit Sy checksum Ar pool/home/bob pool/home/anne
+.
+.Ss Example 6 : No Setting User Properties
+The following example sets the user-defined
+.Ar com.example : Ns Ar department
+property for a dataset:
+.Dl # Nm zfs Cm set Ar com.example : Ns Ar department Ns = Ns Ar 12345 tank/accounting
+.
+.Ss Example 7 : No Setting sharenfs Property Options on a ZFS File System
+The following commands show how to set
+.Sy sharenfs
+property options to enable read-write
+access for a set of IP addresses and to enable root access for system
+.Qq neo
+on the
+.Ar tank/home
+file system:
+.Dl # Nm zfs Cm set Sy sharenfs Ns = Ns ' Ns Ar rw Ns =@123.123.0.0/16:[::1],root= Ns Ar neo Ns ' tank/home
+.Pp
+If you are using DNS for host name resolution,
+specify the fully-qualified hostname.
+.
+.Sh SEE ALSO
+.Xr zfsprops 7 ,
+.Xr zfs-list 8
diff --git a/share/man/man8/zfs-share.8 b/share/man/man8/zfs-share.8
@@ -0,0 +1,100 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd May 17, 2021
+.Dt ZFS-SHARE 8
+.Os
+.
+.Sh NAME
+.Nm zfs-share
+.Nd share and unshare ZFS filesystems
+.Sh SYNOPSIS
+.Nm zfs
+.Cm share
+.Op Fl l
+.Fl a Ns | Ns Ar filesystem
+.Nm zfs
+.Cm unshare
+.Fl a Ns | Ns Ar filesystem Ns | Ns Ar mountpoint
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm share
+.Op Fl l
+.Fl a Ns | Ns Ar filesystem
+.Xc
+Shares available ZFS file systems.
+.Bl -tag -width "-a"
+.It Fl l
+Load keys for encrypted filesystems as they are being mounted.
+This is equivalent to executing
+.Nm zfs Cm load-key
+on each encryption root before mounting it.
+Note that if a filesystem has
+.Sy keylocation Ns = Ns Sy prompt ,
+this will cause the terminal to interactively block after asking for the key.
+.It Fl a
+Share all available ZFS file systems.
+Invoked automatically as part of the boot process.
+.It Ar filesystem
+Share the specified filesystem according to the
+.Sy sharenfs
+and
+.Sy sharesmb
+properties.
+File systems are shared when the
+.Sy sharenfs
+or
+.Sy sharesmb
+property is set.
+.El
+.It Xo
+.Nm zfs
+.Cm unshare
+.Fl a Ns | Ns Ar filesystem Ns | Ns Ar mountpoint
+.Xc
+Unshares currently shared ZFS file systems.
+.Bl -tag -width "-a"
+.It Fl a
+Unshare all available ZFS file systems.
+Invoked automatically as part of the shutdown process.
+.It Ar filesystem Ns | Ns Ar mountpoint
+Unshare the specified filesystem.
+The command can also be given a path to a ZFS file system shared on the system.
+.El
+.El
+.
+.Sh SEE ALSO
+.Xr exports 5 ,
+.Xr smb.conf 5 ,
+.Xr zfsprops 7
diff --git a/share/man/man8/zfs-snapshot.8 b/share/man/man8/zfs-snapshot.8
@@ -0,0 +1,142 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd March 16, 2022
+.Dt ZFS-SNAPSHOT 8
+.Os
+.
+.Sh NAME
+.Nm zfs-snapshot
+.Nd create snapshots of ZFS datasets
+.Sh SYNOPSIS
+.Nm zfs
+.Cm snapshot
+.Op Fl r
+.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
+.Ar dataset Ns @ Ns Ar snapname Ns …
+.
+.Sh DESCRIPTION
+Creates a snapshot of a dataset or multiple snapshots of different
+datasets.
+.Pp
+Snapshots are created atomically.
+That is, a snapshot is a consistent image of a dataset at a specific
+point in time; it includes all modifications to the dataset made by
+system calls that have successfully completed before that point in time.
+Recursive snapshots created through the
+.Fl r
+option are all created at the same time.
+.Pp
+.Nm zfs Cm snap
+can be used as an alias for
+.Nm zfs Cm snapshot .
+.Pp
+See the
+.Sx Snapshots
+section of
+.Xr zfsconcepts 7
+for details.
+.Bl -tag -width "-o"
+.It Fl o Ar property Ns = Ns Ar value
+Set the specified property; see
+.Nm zfs Cm create
+for details.
+.It Fl r
+Recursively create snapshots of all descendent datasets.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 2, 3, 10, 15 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Creating a ZFS Snapshot
+The following command creates a snapshot named
+.Ar yesterday .
+This snapshot is mounted on demand in the
+.Pa .zfs/snapshot
+directory at the root of the
+.Ar pool/home/bob
+file system.
+.Dl # Nm zfs Cm snapshot Ar pool/home/bob Ns @ Ns Ar yesterday
+.
+.Ss Example 2 : No Creating and Destroying Multiple Snapshots
+The following command creates snapshots named
+.Ar yesterday No of Ar pool/home
+and all of its descendent file systems.
+Each snapshot is mounted on demand in the
+.Pa .zfs/snapshot
+directory at the root of its file system.
+The second command destroys the newly created snapshots.
+.Dl # Nm zfs Cm snapshot Fl r Ar pool/home Ns @ Ns Ar yesterday
+.Dl # Nm zfs Cm destroy Fl r Ar pool/home Ns @ Ns Ar yesterday
+.
+.Ss Example 3 : No Promoting a ZFS Clone
+The following commands illustrate how to test out changes to a file system, and
+then replace the original file system with the changed one, using clones, clone
+promotion, and renaming:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm create Ar pool/project/production
+ populate /pool/project/production with data
+.No # Nm zfs Cm snapshot Ar pool/project/production Ns @ Ns Ar today
+.No # Nm zfs Cm clone Ar pool/project/production@today pool/project/beta
+ make changes to /pool/project/beta and test them
+.No # Nm zfs Cm promote Ar pool/project/beta
+.No # Nm zfs Cm rename Ar pool/project/production pool/project/legacy
+.No # Nm zfs Cm rename Ar pool/project/beta pool/project/production
+ once the legacy version is no longer needed, it can be destroyed
+.No # Nm zfs Cm destroy Ar pool/project/legacy
+.Ed
+.
+.Ss Example 4 : No Performing a Rolling Snapshot
+The following example shows how to maintain a history of snapshots with a
+consistent naming scheme.
+To keep a week's worth of snapshots, the user destroys the oldest snapshot,
+renames the remaining snapshots, and then creates a new snapshot, as follows:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm destroy Fl r Ar pool/users@7daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@6daysago No @ Ns Ar 7daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@5daysago No @ Ns Ar 6daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@4daysago No @ Ns Ar 5daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@3daysago No @ Ns Ar 4daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@2daysago No @ Ns Ar 3daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@yesterday No @ Ns Ar 2daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@today No @ Ns Ar yesterday
+.No # Nm zfs Cm snapshot Fl r Ar pool/users Ns @ Ns Ar today
+.Ed
+.
+.Sh SEE ALSO
+.Xr zfs-bookmark 8 ,
+.Xr zfs-clone 8 ,
+.Xr zfs-destroy 8 ,
+.Xr zfs-diff 8 ,
+.Xr zfs-hold 8 ,
+.Xr zfs-rename 8 ,
+.Xr zfs-rollback 8 ,
+.Xr zfs-send 8
diff --git a/share/man/man8/zfs-unallow.8 b/share/man/man8/zfs-unallow.8
@@ -0,0 +1,492 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd March 16, 2022
+.Dt ZFS-ALLOW 8
+.Os
+.
+.Sh NAME
+.Nm zfs-allow
+.Nd delegate ZFS administration permissions to unprivileged users
+.Sh SYNOPSIS
+.Nm zfs
+.Cm allow
+.Op Fl dglu
+.Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns …
+.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns …
+.Ar filesystem Ns | Ns Ar volume
+.Nm zfs
+.Cm allow
+.Op Fl dl
+.Fl e Ns | Ns Sy everyone
+.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns …
+.Ar filesystem Ns | Ns Ar volume
+.Nm zfs
+.Cm allow
+.Fl c
+.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns …
+.Ar filesystem Ns | Ns Ar volume
+.Nm zfs
+.Cm allow
+.Fl s No @ Ns Ar setname
+.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns …
+.Ar filesystem Ns | Ns Ar volume
+.Nm zfs
+.Cm unallow
+.Op Fl dglru
+.Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns …
+.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar volume
+.Nm zfs
+.Cm unallow
+.Op Fl dlr
+.Fl e Ns | Ns Sy everyone
+.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar volume
+.Nm zfs
+.Cm unallow
+.Op Fl r
+.Fl c
+.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar volume
+.Nm zfs
+.Cm unallow
+.Op Fl r
+.Fl s No @ Ns Ar setname
+.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar volume
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm allow
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+Displays permissions that have been delegated on the specified filesystem or
+volume.
+See the other forms of
+.Nm zfs Cm allow
+for more information.
+.Pp
+Delegations are supported under Linux with the exception of
+.Sy mount ,
+.Sy unmount ,
+.Sy mountpoint ,
+.Sy canmount ,
+.Sy rename ,
+and
+.Sy share .
+These permissions cannot be delegated because the Linux
+.Xr mount 8
+command restricts modifications of the global namespace to the root user.
+.It Xo
+.Nm zfs
+.Cm allow
+.Op Fl dglu
+.Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns …
+.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns …
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+.It Xo
+.Nm zfs
+.Cm allow
+.Op Fl dl
+.Fl e Ns | Ns Sy everyone
+.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns …
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+Delegates ZFS administration permission for the file systems to non-privileged
+users.
+.Bl -tag -width "-d"
+.It Fl d
+Allow only for the descendent file systems.
+.It Fl e Ns | Ns Sy everyone
+Specifies that the permissions be delegated to everyone.
+.It Fl g Ar group Ns Oo , Ns Ar group Oc Ns …
+Explicitly specify that permissions are delegated to the group.
+.It Fl l
+Allow
+.Qq locally
+only for the specified file system.
+.It Fl u Ar user Ns Oo , Ns Ar user Oc Ns …
+Explicitly specify that permissions are delegated to the user.
+.It Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns …
+Specifies to whom the permissions are delegated.
+Multiple entities can be specified as a comma-separated list.
+If neither of the
+.Fl gu
+options are specified, then the argument is interpreted preferentially as the
+keyword
+.Sy everyone ,
+then as a user name, and lastly as a group name.
+To specify a user or group named
+.Qq everyone ,
+use the
+.Fl g
+or
+.Fl u
+options.
+To specify a group with the same name as a user, use the
+.Fl g
+option.
+.It Xo
+.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns …
+.Xc
+The permissions to delegate.
+Multiple permissions may be specified as a comma-separated list.
+Permission names are the same as ZFS subcommand and property names.
+See the property list below.
+Property set names, which begin with
+.Sy @ ,
+may be specified.
+See the
+.Fl s
+form below for details.
+.El
+.Pp
+If neither of the
+.Fl dl
+options are specified, or both are, then the permissions are allowed for the
+file system or volume, and all of its descendents.
+.Pp
+Permissions are generally the ability to use a ZFS subcommand or change a ZFS
+property.
+The following permissions are available:
+.TS
+l l l .
+NAME TYPE NOTES
+_ _ _
+allow subcommand Must also have the permission that is being allowed
+bookmark subcommand
+clone subcommand Must also have the \fBcreate\fR ability and \fBmount\fR ability in the origin file system
+create subcommand Must also have the \fBmount\fR ability. Must also have the \fBrefreservation\fR ability to create a non-sparse volume.
+destroy subcommand Must also have the \fBmount\fR ability
+diff subcommand Allows lookup of paths within a dataset given an object number, and the ability to create snapshots necessary to \fBzfs diff\fR.
+hold subcommand Allows adding a user hold to a snapshot
+load-key subcommand Allows loading and unloading of encryption key (see \fBzfs load-key\fR and \fBzfs unload-key\fR).
+change-key subcommand Allows changing an encryption key via \fBzfs change-key\fR.
+mount subcommand Allows mounting/umounting ZFS datasets
+promote subcommand Must also have the \fBmount\fR and \fBpromote\fR ability in the origin file system
+receive subcommand Must also have the \fBmount\fR and \fBcreate\fR ability
+release subcommand Allows releasing a user hold which might destroy the snapshot
+rename subcommand Must also have the \fBmount\fR and \fBcreate\fR ability in the new parent
+rollback subcommand Must also have the \fBmount\fR ability
+send subcommand
+share subcommand Allows sharing file systems over NFS or SMB protocols
+snapshot subcommand Must also have the \fBmount\fR ability
+
+groupquota other Allows accessing any \fBgroupquota@\fI…\fR property
+groupobjquota other Allows accessing any \fBgroupobjquota@\fI…\fR property
+groupused other Allows reading any \fBgroupused@\fI…\fR property
+groupobjused other Allows reading any \fBgroupobjused@\fI…\fR property
+userprop other Allows changing any user property
+userquota other Allows accessing any \fBuserquota@\fI…\fR property
+userobjquota other Allows accessing any \fBuserobjquota@\fI…\fR property
+userused other Allows reading any \fBuserused@\fI…\fR property
+userobjused other Allows reading any \fBuserobjused@\fI…\fR property
+projectobjquota other Allows accessing any \fBprojectobjquota@\fI…\fR property
+projectquota other Allows accessing any \fBprojectquota@\fI…\fR property
+projectobjused other Allows reading any \fBprojectobjused@\fI…\fR property
+projectused other Allows reading any \fBprojectused@\fI…\fR property
+
+aclinherit property
+aclmode property
+acltype property
+atime property
+canmount property
+casesensitivity property
+checksum property
+compression property
+context property
+copies property
+dedup property
+defcontext property
+devices property
+dnodesize property
+encryption property
+exec property
+filesystem_limit property
+fscontext property
+keyformat property
+keylocation property
+logbias property
+mlslabel property
+mountpoint property
+nbmand property
+normalization property
+overlay property
+pbkdf2iters property
+primarycache property
+quota property
+readonly property
+recordsize property
+redundant_metadata property
+refquota property
+refreservation property
+relatime property
+reservation property
+rootcontext property
+secondarycache property
+setuid property
+sharenfs property
+sharesmb property
+snapdev property
+snapdir property
+snapshot_limit property
+special_small_blocks property
+sync property
+utf8only property
+version property
+volblocksize property
+volmode property
+volsize property
+vscan property
+xattr property
+zoned property
+.TE
+.It Xo
+.Nm zfs
+.Cm allow
+.Fl c
+.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns …
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+Sets
+.Qq create time
+permissions.
+These permissions are granted
+.Pq locally
+to the creator of any newly-created descendent file system.
+.It Xo
+.Nm zfs
+.Cm allow
+.Fl s No @ Ns Ar setname
+.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns …
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+Defines or adds permissions to a permission set.
+The set can be used by other
+.Nm zfs Cm allow
+commands for the specified file system and its descendents.
+Sets are evaluated dynamically, so changes to a set are immediately reflected.
+Permission sets follow the same naming restrictions as ZFS file systems, but the
+name must begin with
+.Sy @ ,
+and can be no more than 64 characters long.
+.It Xo
+.Nm zfs
+.Cm unallow
+.Op Fl dglru
+.Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns …
+.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+.It Xo
+.Nm zfs
+.Cm unallow
+.Op Fl dlr
+.Fl e Ns | Ns Sy everyone
+.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+.It Xo
+.Nm zfs
+.Cm unallow
+.Op Fl r
+.Fl c
+.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+Removes permissions that were granted with the
+.Nm zfs Cm allow
+command.
+No permissions are explicitly denied, so other permissions granted are still in
+effect; for example, a permission granted by an ancestor remains in effect.
+If no permissions are specified, then all permissions for the specified
+.Ar user ,
+.Ar group ,
+or
+.Sy everyone
+are removed.
+Specifying
+.Sy everyone
+.Po or using the
+.Fl e
+option
+.Pc
+only removes the permissions that were granted to everyone, not all permissions
+for every user and group.
+See the
+.Nm zfs Cm allow
+command for a description of the
+.Fl ldugec
+options.
+.Bl -tag -width "-r"
+.It Fl r
+Recursively remove the permissions from this file system and all descendents.
+.El
+.It Xo
+.Nm zfs
+.Cm unallow
+.Op Fl r
+.Fl s No @ Ns Ar setname
+.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+Removes permissions from a permission set.
+If no permissions are specified, then all permissions are removed, thus removing
+the set entirely.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 17, 18, 19, 20 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Delegating ZFS Administration Permissions on a ZFS Dataset
+The following example shows how to set permissions so that user
+.Ar cindys
+can create, destroy, mount, and take snapshots on
+.Ar tank/cindys .
+The permissions on
+.Ar tank/cindys
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm allow Sy cindys create , Ns Sy destroy , Ns Sy mount , Ns Sy snapshot Ar tank/cindys
+.No # Nm zfs Cm allow Ar tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+ user cindys create,destroy,mount,snapshot
+.Ed
+.Pp
+Because the
+.Ar tank/cindys
+mount point permission is set to 755 by default, user
+.Ar cindys
+will be unable to mount file systems under
+.Ar tank/cindys .
+Add an ACE similar to the following syntax to provide mount point access:
+.Dl # Cm chmod No A+user : Ns Ar cindys Ns :add_subdirectory:allow Ar /tank/cindys
+.
+.Ss Example 2 : No Delegating Create Time Permissions on a ZFS Dataset
+The following example shows how to grant anyone in the group
+.Ar staff
+permission to create file systems in
+.Ar tank/users .
+This syntax also allows staff members to destroy their own file systems, but not
+destroy anyone else's file system.
+The permissions on
+.Ar tank/users
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm allow Ar staff Sy create , Ns Sy mount Ar tank/users
+.No # Nm zfs Cm allow Fl c Sy destroy Ar tank/users
+.No # Nm zfs Cm allow Ar tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+ destroy
+Local+Descendent permissions:
+ group staff create,mount
+.Ed
+.
+.Ss Example 3 : No Defining and Granting a Permission Set on a ZFS Dataset
+The following example shows how to define and grant a permission set on the
+.Ar tank/users
+file system.
+The permissions on
+.Ar tank/users
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm allow Fl s No @ Ns Ar pset Sy create , Ns Sy destroy , Ns Sy snapshot , Ns Sy mount Ar tank/users
+.No # Nm zfs Cm allow staff No @ Ns Ar pset tank/users
+.No # Nm zfs Cm allow Ar tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+ @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+ group staff @pset
+.Ed
+.
+.Ss Example 4 : No Delegating Property Permissions on a ZFS Dataset
+The following example shows how to grant the ability to set quotas and reservations
+on the
+.Ar users/home
+file system.
+The permissions on
+.Ar users/home
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm allow Ar cindys Sy quota , Ns Sy reservation Ar users/home
+.No # Nm zfs Cm allow Ar users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+ user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME PROPERTY VALUE SOURCE
+users/home/marks quota 10G local
+.Ed
+.
+.Ss Example 5 : No Removing ZFS Delegated Permissions on a ZFS Dataset
+The following example shows how to remove the snapshot permission from the
+.Ar staff
+group on the
+.Sy tank/users
+file system.
+The permissions on
+.Sy tank/users
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm unallow Ar staff Sy snapshot Ar tank/users
+.No # Nm zfs Cm allow Ar tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+ @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+ group staff @pset
+.Ed
diff --git a/share/man/man8/zfs-unjail.8 b/share/man/man8/zfs-unjail.8
@@ -0,0 +1,124 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2011, Pawel Jakub Dawidek <pjd@FreeBSD.org>
+.\" Copyright (c) 2012, Glen Barber <gjb@FreeBSD.org>
+.\" Copyright (c) 2012, Bryan Drewery <bdrewery@FreeBSD.org>
+.\" Copyright (c) 2013, Steven Hartland <smh@FreeBSD.org>
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright (c) 2014, Xin LI <delphij@FreeBSD.org>
+.\" Copyright (c) 2014-2015, The FreeBSD Foundation, All Rights Reserved.
+.\" Copyright (c) 2016 Nexenta Systems, Inc. All Rights Reserved.
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd May 27, 2021
+.Dt ZFS-JAIL 8
+.Os
+.
+.Sh NAME
+.Nm zfs-jail
+.Nd attach or detach ZFS filesystem from FreeBSD jail
+.Sh SYNOPSIS
+.Nm zfs Cm jail
+.Ar jailid Ns | Ns Ar jailname
+.Ar filesystem
+.Nm zfs Cm unjail
+.Ar jailid Ns | Ns Ar jailname
+.Ar filesystem
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm jail
+.Ar jailid Ns | Ns Ar jailname
+.Ar filesystem
+.Xc
+Attach the specified
+.Ar filesystem
+to the jail identified by JID
+.Ar jailid
+or name
+.Ar jailname .
+From now on this file system tree can be managed from within a jail if the
+.Sy jailed
+property has been set.
+To use this functionality, the jail needs the
+.Sy allow.mount
+and
+.Sy allow.mount.zfs
+parameters set to
+.Sy 1
+and the
+.Sy enforce_statfs
+parameter set to a value lower than
+.Sy 2 .
+.Pp
+You cannot attach a jailed dataset's children to another jail.
+You also cannot attach the root file system
+of the jail, or any dataset which needs to be mounted before the zfs rc script
+is run inside the jail, as such a dataset would remain unmounted until it is
+mounted by the rc script inside the jail.
+.Pp
+To allow management of the dataset from within a jail, the
+.Sy jailed
+property has to be set and the jail needs access to the
+.Pa /dev/zfs
+device.
+The
+.Sy quota
+property cannot be changed from within a jail.
+.Pp
+After a dataset is attached to a jail and the
+.Sy jailed
+property is set, a jailed file system cannot be mounted outside the jail,
+since the jail administrator might have set the mount point to an unacceptable
+value.
+.Pp
+See
+.Xr jail 8
+for more information on managing jails.
+Jails are a
+.Fx
+feature and are not relevant on other platforms.
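+.Pp
+For example, assuming an illustrative jail named
+.Ar testjail
+and a dataset
+.Ar tank/jails/data :
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm set Sy jailed Ns = Ns Sy on Ar tank/jails/data
+.No # Nm zfs Cm jail Ar testjail tank/jails/data
+.Ed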
+.It Xo
+.Nm zfs
+.Cm unjail
+.Ar jailid Ns | Ns Ar jailname
+.Ar filesystem
+.Xc
+Detaches the specified
+.Ar filesystem
+from the jail identified by JID
+.Ar jailid
+or name
+.Ar jailname .
+.El
+.Sh SEE ALSO
+.Xr zfsprops 7 ,
+.Xr jail 8
diff --git a/share/man/man8/zfs-unload-key.8 b/share/man/man8/zfs-unload-key.8
@@ -0,0 +1,304 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd January 13, 2020
+.Dt ZFS-LOAD-KEY 8
+.Os
+.
+.Sh NAME
+.Nm zfs-load-key
+.Nd load, unload, or change encryption key of ZFS dataset
+.Sh SYNOPSIS
+.Nm zfs
+.Cm load-key
+.Op Fl nr
+.Op Fl L Ar keylocation
+.Fl a Ns | Ns Ar filesystem
+.Nm zfs
+.Cm unload-key
+.Op Fl r
+.Fl a Ns | Ns Ar filesystem
+.Nm zfs
+.Cm change-key
+.Op Fl l
+.Op Fl o Ar keylocation Ns = Ns Ar value
+.Op Fl o Ar keyformat Ns = Ns Ar value
+.Op Fl o Ar pbkdf2iters Ns = Ns Ar value
+.Ar filesystem
+.Nm zfs
+.Cm change-key
+.Fl i
+.Op Fl l
+.Ar filesystem
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm load-key
+.Op Fl nr
+.Op Fl L Ar keylocation
+.Fl a Ns | Ns Ar filesystem
+.Xc
+Load the key for
+.Ar filesystem ,
+allowing it and all children that inherit the
+.Sy keylocation
+property to be accessed.
+The key will be expected in the format specified by the
+.Sy keyformat
+and location specified by the
+.Sy keylocation
+property.
+Note that if the
+.Sy keylocation
+is set to
+.Sy prompt
+the terminal will interactively wait for the key to be entered.
+Loading a key will not automatically mount the dataset.
+If that functionality is desired,
+.Nm zfs Cm mount Fl l
+will ask for the key and mount the dataset
+.Po
+see
+.Xr zfs-mount 8
+.Pc .
+Once the key is loaded the
+.Sy keystatus
+property will become
+.Sy available .
+.Bl -tag -width "-r"
+.It Fl r
+Recursively loads the keys for the specified filesystem and all descendent
+encryption roots.
+.It Fl a
+Loads the keys for all encryption roots in all imported pools.
+.It Fl n
+Do a dry-run
+.Pq Qq No-op
+.Cm load-key .
+This will cause
+.Nm zfs
+to simply check that the provided key is correct.
+This command may be run even if the key is already loaded.
+.It Fl L Ar keylocation
+Use
+.Ar keylocation
+instead of the
+.Sy keylocation
+property.
+This will not change the value of the property on the dataset.
+Note that if used with either
+.Fl r
+or
+.Fl a ,
+.Ar keylocation
+may only be given as
+.Sy prompt .
+.El
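+.Pp
+For example, the key for an illustrative encryption root can be verified with a
+dry-run and then loaded before mounting:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm load-key Fl n Ar tank/secure
+.No # Nm zfs Cm load-key Ar tank/secure
+.No # Nm zfs Cm mount Ar tank/secure
+.Ed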
+.It Xo
+.Nm zfs
+.Cm unload-key
+.Op Fl r
+.Fl a Ns | Ns Ar filesystem
+.Xc
+Unloads a key from ZFS, removing the ability to access the dataset and all of
+its children that inherit the
+.Sy keylocation
+property.
+This requires that the dataset is not currently open or mounted.
+Once the key is unloaded the
+.Sy keystatus
+property will become
+.Sy unavailable .
+.Bl -tag -width "-r"
+.It Fl r
+Recursively unloads the keys for the specified filesystem and all descendent
+encryption roots.
+.It Fl a
+Unloads the keys for all encryption roots in all imported pools.
+.El
+.It Xo
+.Nm zfs
+.Cm change-key
+.Op Fl l
+.Op Fl o Ar keylocation Ns = Ns Ar value
+.Op Fl o Ar keyformat Ns = Ns Ar value
+.Op Fl o Ar pbkdf2iters Ns = Ns Ar value
+.Ar filesystem
+.Xc
+.It Xo
+.Nm zfs
+.Cm change-key
+.Fl i
+.Op Fl l
+.Ar filesystem
+.Xc
+Changes the user's key (e.g. a passphrase) used to access a dataset.
+This command requires that the existing key for the dataset is already loaded.
+This command may also be used to change the
+.Sy keylocation ,
+.Sy keyformat ,
+and
+.Sy pbkdf2iters
+properties as needed.
+If the dataset was not previously an encryption root it will become one.
+Alternatively, the
+.Fl i
+flag may be provided to cause an encryption root to inherit the parent's key
+instead.
+.Pp
+If the user's key is compromised,
+.Nm zfs Cm change-key
+does not necessarily protect existing or newly-written data from attack.
+Newly-written data will continue to be encrypted with the same master key as
+the existing data.
+The master key is compromised if an attacker obtains a
+user key and the corresponding wrapped master key.
+Currently,
+.Nm zfs Cm change-key
+does not overwrite the previous wrapped master key on disk, so it is
+accessible via forensic analysis for an indeterminate length of time.
+.Pp
+In the event of a master key compromise, ideally the drives should be securely
+erased to remove all the old data (which is readable using the compromised
+master key), a new pool created, and the data copied back.
+This can be approximated in place by creating new datasets, copying the data
+.Pq e.g. using Nm zfs Cm send | Nm zfs Cm recv ,
+and then clearing the free space with
+.Nm zpool Cm trim Fl -secure
+if supported by your hardware, otherwise
+.Nm zpool Cm initialize .
+.Bl -tag -width "-r"
+.It Fl l
+Ensures the key is loaded before attempting to change the key.
+This is effectively equivalent to running
+.Nm zfs Cm load-key Ar filesystem ; Nm zfs Cm change-key Ar filesystem
+.It Fl o Ar property Ns = Ns Ar value
+Allows the user to set encryption key properties
+.Pq Sy keyformat , keylocation , No and Sy pbkdf2iters
+while changing the key.
+This is the only way to alter
+.Sy keyformat
+and
+.Sy pbkdf2iters
+after the dataset has been created.
+.It Fl i
+Indicates that zfs should make
+.Ar filesystem
+inherit the key of its parent.
+Note that this command can only be run on an encryption root
+that has an encrypted parent.
+.El
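+.Pp
+For example, the following changes the passphrase of an illustrative
+encryption root, then causes a child that was itself an encryption root to
+inherit the parent's key again:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm change-key Fl l Ar tank/secure
+.No # Nm zfs Cm change-key Fl i Fl l Ar tank/secure/child
+.Ed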
+.El
+.Ss Encryption
+Enabling the
+.Sy encryption
+feature allows for the creation of encrypted filesystems and volumes.
+ZFS will encrypt file and volume data, file attributes, ACLs, permission bits,
+directory listings, FUID mappings, and
+.Sy userused Ns / Ns Sy groupused
+data.
+ZFS will not encrypt metadata related to the pool structure, including
+dataset and snapshot names, dataset hierarchy, properties, file size, file
+holes, and deduplication tables (though the deduplicated data itself is
+encrypted).
+.Pp
+Key rotation is managed by ZFS.
+Changing the user's key (e.g. a passphrase)
+does not require re-encrypting the entire dataset.
+Datasets can be scrubbed,
+resilvered, renamed, and deleted without the encryption keys being loaded (see
+the
+.Cm load-key
+subcommand for more info on key loading).
+.Pp
+Creating an encrypted dataset requires specifying the
+.Sy encryption No and Sy keyformat
+properties at creation time, along with an optional
+.Sy keylocation No and Sy pbkdf2iters .
+After entering an encryption key, the
+created dataset will become an encryption root.
+Any descendant datasets will
+inherit their encryption key from the encryption root by default, meaning that
+loading, unloading, or changing the key for the encryption root will implicitly
+do the same for all inheriting datasets.
+If this inheritance is not desired, simply supply a
+.Sy keyformat
+when creating the child dataset or use
+.Nm zfs Cm change-key
+to break an existing relationship, creating a new encryption root on the child.
+Note that the child's
+.Sy keyformat
+may match that of the parent while still creating a new encryption root, and
+that changing the
+.Sy encryption
+property alone does not create a new encryption root; this would simply use a
+different cipher suite with the same key as its encryption root.
+The one exception is that clones will always use their origin's encryption key.
+As a result of this exception, some encryption-related properties
+.Pq namely Sy keystatus , keyformat , keylocation , No and Sy pbkdf2iters
+do not inherit like other ZFS properties and instead use the value determined
+by their encryption root.
+Encryption root inheritance can be tracked via the read-only
+.Sy encryptionroot
+property.
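+.Pp
+For example, the following sketch
+.Pq dataset names are illustrative
+creates an encryption root and a child that keeps its own key rather than
+inheriting:
+.Dl # Nm zfs Cm create Fl o Sy encryption Ns = Ns Sy on Fl o Sy keyformat Ns = Ns Sy passphrase Ar pool/secret
+.Dl # Nm zfs Cm create Fl o Sy keyformat Ns = Ns Sy passphrase Ar pool/secret/child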
+.Pp
+Encryption changes the behavior of a few ZFS
+operations.
+Encryption is applied after compression so compression ratios are preserved.
+Normally checksums in ZFS are 256 bits long, but for encrypted data
+the checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from
+the encryption suite, which provides additional protection against maliciously
+altered data.
+Deduplication is still possible with encryption enabled but for security,
+datasets will only deduplicate against themselves, their snapshots,
+and their clones.
+.Pp
+There are a few limitations on encrypted datasets.
+Encrypted data cannot be embedded via the
+.Sy embedded_data
+feature.
+Encrypted datasets may not have
+.Sy copies Ns = Ns Em 3
+since the implementation stores some encryption metadata where the third copy
+would normally be.
+Since compression is applied before encryption, datasets may
+be vulnerable to a CRIME-like attack if applications accessing the data allow
+for it.
+Deduplication with encryption will leak information about which blocks
+are equivalent in a dataset and will incur an extra CPU cost for each block
+written.
+.
+.Sh SEE ALSO
+.Xr zfsprops 7 ,
+.Xr zfs-create 8 ,
+.Xr zfs-set 8
diff --git a/share/man/man8/zfs-unmount.8 b/share/man/man8/zfs-unmount.8
@@ -0,0 +1,139 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd February 16, 2019
+.Dt ZFS-MOUNT 8
+.Os
+.
+.Sh NAME
+.Nm zfs-mount
+.Nd manage mount state of ZFS filesystems
+.Sh SYNOPSIS
+.Nm zfs
+.Cm mount
+.Op Fl j
+.Nm zfs
+.Cm mount
+.Op Fl Oflv
+.Op Fl o Ar options
+.Fl a Ns | Ns Fl R Ar filesystem Ns | Ns Ar filesystem
+.Nm zfs
+.Cm unmount
+.Op Fl fu
+.Fl a Ns | Ns Ar filesystem Ns | Ns Ar mountpoint
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm mount
+.Op Fl j
+.Xc
+Displays all ZFS file systems currently mounted.
+.Bl -tag -width "-j"
+.It Fl j , -json
+Displays all mounted file systems in JSON format.
+.El
+.It Xo
+.Nm zfs
+.Cm mount
+.Op Fl Oflv
+.Op Fl o Ar options
+.Fl a Ns | Ns Fl R Ar filesystem Ns | Ns Ar filesystem
+.Xc
+Mount a ZFS filesystem on the path described by its
+.Sy mountpoint
+property, if the path exists and is empty.
+If
+.Sy mountpoint
+is set to
+.Em legacy ,
+the filesystem should instead be mounted using
+.Xr mount 8 .
+.Bl -tag -width "-O"
+.It Fl O
+Perform an overlay mount.
+Allows mounting in non-empty
+.Sy mountpoint .
+See
+.Xr mount 8
+for more information.
+.It Fl a
+Mount all available ZFS file systems.
+Invoked automatically as part of the boot process if configured.
+.It Fl R
+Mount the specified filesystems along with all their children.
+.It Ar filesystem
+Mount the specified filesystem.
+.It Fl o Ar options
+An optional, comma-separated list of mount options to use temporarily for the
+duration of the mount.
+See the
+.Em Temporary Mount Point Properties
+section of
+.Xr zfsprops 7
+for details.
+.It Fl l
+Load keys for encrypted filesystems as they are being mounted.
+This is equivalent to executing
+.Nm zfs Cm load-key
+on each encryption root before mounting it.
+Note that if a filesystem has
+.Sy keylocation Ns = Ns Sy prompt ,
+the command will block on the terminal, interactively prompting for the key.
+.It Fl v
+Report mount progress.
+.It Fl f
+Attempt to force mounting of all filesystems, even those that couldn't normally
+be mounted (e.g. redacted datasets).
+.El
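+.Pp
+For example, to mount every available ZFS file system, loading encryption
+keys and prompting on the terminal where required:
+.Dl # Nm zfs Cm mount Fl la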
+.It Xo
+.Nm zfs
+.Cm unmount
+.Op Fl fu
+.Fl a Ns | Ns Ar filesystem Ns | Ns Ar mountpoint
+.Xc
+Unmounts currently mounted ZFS file systems.
+.Bl -tag -width "-a"
+.It Fl a
+Unmount all available ZFS file systems.
+Invoked automatically as part of the shutdown process.
+.It Fl f
+Forcefully unmount the file system, even if it is currently in use.
+This option is not supported on Linux.
+.It Fl u
+Unload keys for any encryption roots unmounted by this command.
+.It Ar filesystem Ns | Ns Ar mountpoint
+Unmount the specified filesystem.
+The command can also be given a path to a ZFS file system mount point on the
+system.
+.El
+.El
diff --git a/share/man/man8/zfs-upgrade.8 b/share/man/man8/zfs-upgrade.8
@@ -0,0 +1,103 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd June 30, 2019
+.Dt ZFS-UPGRADE 8
+.Os
+.
+.Sh NAME
+.Nm zfs-upgrade
+.Nd manage on-disk version of ZFS filesystems
+.Sh SYNOPSIS
+.Nm zfs
+.Cm upgrade
+.Nm zfs
+.Cm upgrade
+.Fl v
+.Nm zfs
+.Cm upgrade
+.Op Fl r
+.Op Fl V Ar version
+.Fl a Ns | Ns Ar filesystem
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm upgrade
+.Xc
+Displays a list of file systems that are not the most recent version.
+.It Xo
+.Nm zfs
+.Cm upgrade
+.Fl v
+.Xc
+Displays a list of currently supported file system versions.
+.It Xo
+.Nm zfs
+.Cm upgrade
+.Op Fl r
+.Op Fl V Ar version
+.Fl a Ns | Ns Ar filesystem
+.Xc
+Upgrades file systems to a new on-disk version.
+Once this is done, the file systems will no longer be accessible on systems
+running older versions of ZFS.
+.Nm zfs Cm send
+streams generated from new snapshots of these file systems cannot be accessed on
+systems running older versions of ZFS.
+.Pp
+In general, the file system version is independent of the pool version.
+See
+.Xr zpool-features 7
+for information on features of ZFS storage pools.
+.Pp
+In some cases, the file system version and the pool version are interrelated and
+the pool version must be upgraded before the file system version can be
+upgraded.
+.Bl -tag -width "filesystem"
+.It Fl V Ar version
+Upgrade to
+.Ar version .
+If not specified, upgrade to the most recent version.
+This
+option can only be used to increase the version number, and only up to the most
+recent version supported by this version of ZFS.
+.It Fl a
+Upgrade all file systems on all imported pools.
+.It Ar filesystem
+Upgrade the specified file system.
+.It Fl r
+Upgrade the specified file system and all descendent file systems.
+.El
+.El
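+.Pp
+For example, to list upgradable file systems and then upgrade one hierarchy
+.Pq the dataset name is illustrative :
+.Dl # Nm zfs Cm upgrade
+.Dl # Nm zfs Cm upgrade Fl r Ar tank/home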
+.Sh SEE ALSO
+.Xr zpool-upgrade 8
diff --git a/share/man/man8/zfs-userspace.8 b/share/man/man8/zfs-userspace.8
@@ -0,0 +1,188 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd June 30, 2019
+.Dt ZFS-USERSPACE 8
+.Os
+.
+.Sh NAME
+.Nm zfs-userspace
+.Nd display space and quotas of ZFS dataset
+.Sh SYNOPSIS
+.Nm zfs
+.Cm userspace
+.Op Fl Hinp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
+.Nm zfs
+.Cm groupspace
+.Op Fl Hinp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
+.Nm zfs
+.Cm projectspace
+.Op Fl Hp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm userspace
+.Op Fl Hinp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
+.Xc
+Displays space consumed by, and quotas on, each user in the specified
+filesystem,
+snapshot, or path.
+If a path is given, the filesystem that contains that path will be used.
+This corresponds to the
+.Sy userused@ Ns Em user ,
+.Sy userobjused@ Ns Em user ,
+.Sy userquota@ Ns Em user ,
+and
+.Sy userobjquota@ Ns Em user
+properties.
+.Bl -tag -width "-S field"
+.It Fl H
+Do not print headers; use tab-delimited output.
+.It Fl S Ar field
+Sort by this field in reverse order.
+See
+.Fl s .
+.It Fl i
+Translate SID to POSIX ID.
+The POSIX ID may be ephemeral if no mapping exists.
+Normal POSIX interfaces
+.Pq like Xr stat 2 , Nm ls Fl l
+perform this translation, so the
+.Fl i
+option allows the output from
+.Nm zfs Cm userspace
+to be compared directly with those utilities.
+However,
+.Fl i
+may lead to confusion if some files were created by an SMB user before a
+SMB-to-POSIX name mapping was established.
+In such a case, some files will be owned by the SMB entity and some by the POSIX
+entity.
+However, the
+.Fl i
+option will report that the POSIX entity has the total usage and quota for both.
+.It Fl n
+Print numeric ID instead of user/group name.
+.It Fl o Ar field Ns Oo , Ns Ar field Oc Ns …
+Display only the specified fields from the following set:
+.Sy type ,
+.Sy name ,
+.Sy used ,
+.Sy quota .
+The default is to display all fields.
+.It Fl p
+Use exact
+.Pq parsable
+numeric output.
+.It Fl s Ar field
+Sort output by this field.
+The
+.Fl s
+and
+.Fl S
+flags may be specified multiple times to sort first by one field, then by
+another.
+The default is
+.Fl s Sy type Fl s Sy name .
+.It Fl t Ar type Ns Oo , Ns Ar type Oc Ns …
+Print only the specified types from the following set:
+.Sy all ,
+.Sy posixuser ,
+.Sy smbuser ,
+.Sy posixgroup ,
+.Sy smbgroup .
+The default is
+.Fl t Sy posixuser , Ns Sy smbuser .
+The default can be changed to include group types.
+.El
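+.Pp
+For example, to show only names and usage, largest consumers first
+.Pq the dataset name is illustrative :
+.Dl # Nm zfs Cm userspace Fl o Sy name , Ns Sy used Fl S Sy used Ar tank/home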
+.It Xo
+.Nm zfs
+.Cm groupspace
+.Op Fl Hinp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar snapshot
+.Xc
+Displays space consumed by, and quotas on, each group in the specified
+filesystem or snapshot.
+This subcommand is identical to
+.Cm userspace ,
+except that the default types to display are
+.Fl t Sy posixgroup , Ns Sy smbgroup .
+.It Xo
+.Nm zfs
+.Cm projectspace
+.Op Fl Hp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
+.Xc
+Displays space consumed by, and quotas on, each project in the specified
+filesystem or snapshot.
+This subcommand is identical to
+.Cm userspace ,
+except that the project identifier is a numeral, not a name.
+Hence, it supports neither the
+.Fl i
+option for SID-to-POSIX-ID translation, nor the
+.Fl n
+option for numeric IDs, nor the
+.Fl t
+option for type selection.
+.El
+.
+.Sh SEE ALSO
+.Xr zfsprops 7 ,
+.Xr zfs-set 8
diff --git a/share/man/man8/zfs-wait.8 b/share/man/man8/zfs-wait.8
@@ -0,0 +1,65 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd May 31, 2021
+.Dt ZFS-WAIT 8
+.Os
+.
+.Sh NAME
+.Nm zfs-wait
+.Nd wait for activity in ZFS filesystem to stop
+.Sh SYNOPSIS
+.Nm zfs
+.Cm wait
+.Op Fl t Ar activity Ns Oo , Ns Ar activity Ns Oc Ns …
+.Ar filesystem
+.
+.Sh DESCRIPTION
+Waits until all background activity of the given types has ceased in the given
+filesystem.
+The activity could cease because it has completed or because the filesystem has
+been destroyed or unmounted.
+If no activities are specified, the command waits until background activity of
+every type listed below has ceased.
+If there is no activity of the given types in progress, the command returns
+immediately.
+.Pp
+These are the possible values for
+.Ar activity ,
+along with what each one waits for:
+.Bl -tag -compact -offset Ds -width "deleteq"
+.It Sy deleteq
+The filesystem's internal delete queue to empty
+.El
+.Pp
+Note that the internal delete queue does not finish draining until
+all large files have had time to be fully destroyed and all open file
+handles to unlinked files are closed.
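+.Pp
+For example, to wait until the delete queue of
+.Ar tank/home
+.Pq an illustrative dataset name
+has drained:
+.Dl # Nm zfs Cm wait Fl t Sy deleteq Ar tank/home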
+.
+.Sh SEE ALSO
+.Xr lsof 8
diff --git a/share/man/man8/zfs.8 b/share/man/man8/zfs.8
@@ -0,0 +1,838 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2011, Pawel Jakub Dawidek <pjd@FreeBSD.org>
+.\" Copyright (c) 2012, Glen Barber <gjb@FreeBSD.org>
+.\" Copyright (c) 2012, Bryan Drewery <bdrewery@FreeBSD.org>
+.\" Copyright (c) 2013, Steven Hartland <smh@FreeBSD.org>
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright (c) 2014, Xin LI <delphij@FreeBSD.org>
+.\" Copyright (c) 2014-2015, The FreeBSD Foundation, All Rights Reserved.
+.\" Copyright (c) 2016 Nexenta Systems, Inc. All Rights Reserved.
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd May 12, 2022
+.Dt ZFS 8
+.Os
+.
+.Sh NAME
+.Nm zfs
+.Nd configure ZFS datasets
+.Sh SYNOPSIS
+.Nm
+.Fl ?V
+.Nm
+.Cm version
+.Op Fl j
+.Nm
+.Cm subcommand
+.Op Ar arguments
+.
+.Sh DESCRIPTION
+The
+.Nm
+command configures ZFS datasets within a ZFS storage pool, as described in
+.Xr zpool 8 .
+A dataset is identified by a unique path within the ZFS namespace:
+.Pp
+.D1 Ar pool Ns Oo Sy / Ns Ar component Oc Ns Sy / Ns Ar component
+.Pp
+for example:
+.Pp
+.Dl rpool/var/log
+.Pp
+The maximum length of a dataset name is
+.Sy ZFS_MAX_DATASET_NAME_LEN No - 1
+ASCII characters (currently 255) satisfying
+.Sy [A-Za-z_.:/ -] .
+Additionally snapshots are allowed to contain a single
+.Sy @
+character, while bookmarks are allowed to contain a single
+.Sy #
+character.
+.Sy /
+is used as separator between components.
+The maximum amount of nesting allowed in a path is
+.Sy zfs_max_dataset_nesting
+levels deep.
+ZFS tunables
+.Pq Sy zfs_*
+are explained in
+.Xr zfs 4 .
+.Pp
+A dataset can be one of the following:
+.Bl -tag -offset Ds -width "file system"
+.It Sy file system
+Can be mounted within the standard system namespace and behaves like other file
+systems.
+While ZFS file systems are designed to be POSIX-compliant, known issues exist
+that prevent compliance in some cases.
+Applications that depend on standards conformance might fail due to non-standard
+behavior when checking file system free space.
+.It Sy volume
+A logical volume exported as a raw or block device.
+This type of dataset should only be used when a block device is required.
+File systems are typically used in most environments.
+.It Sy snapshot
+A read-only version of a file system or volume at a given point in time.
+It is specified as
+.Ar filesystem Ns @ Ns Ar name
+or
+.Ar volume Ns @ Ns Ar name .
+.It Sy bookmark
+Much like a
+.Sy snapshot ,
+but without the hold on on-disk data.
+It can be used as the source of a send (but not for a receive).
+It is specified as
+.Ar filesystem Ns # Ns Ar name
+or
+.Ar volume Ns # Ns Ar name .
+.El
+.Pp
+See
+.Xr zfsconcepts 7
+for details.
+.
+.Ss Properties
+Properties are divided into two types: native properties and user-defined
+.Pq or Qq user
+properties.
+Native properties either export internal statistics or control ZFS behavior.
+In addition, native properties are either editable or read-only.
+User properties have no effect on ZFS behavior, but you can use them to annotate
+datasets in a way that is meaningful in your environment.
+For more information about properties, see
+.Xr zfsprops 7 .
+.
+.Ss Encryption
+Enabling the
+.Sy encryption
+feature allows for the creation of encrypted filesystems and volumes.
+ZFS will encrypt file and zvol data, file attributes, ACLs, permission bits,
+directory listings, FUID mappings, and
+.Sy userused Ns / Ns Sy groupused Ns / Ns Sy projectused
+data.
+For an overview of encryption, see
+.Xr zfs-load-key 8 .
+.
+.Sh SUBCOMMANDS
+All subcommands that modify state are logged persistently to the pool in their
+original form.
+.Bl -tag -width ""
+.It Nm Fl ?
+Displays a help message.
+.It Xo
+.Nm
+.Fl V , -version
+.Xc
+.It Xo
+.Nm
+.Cm version
+.Op Fl j
+.Xc
+Displays the software version of the
+.Nm
+userland utility and the zfs kernel module.
+Use the
+.Fl j
+option to output in JSON format.
+.El
+.
+.Ss Dataset Management
+.Bl -tag -width ""
+.It Xr zfs-list 8
+Lists the property information for the given datasets in tabular form.
+.It Xr zfs-create 8
+Creates a new ZFS file system or volume.
+.It Xr zfs-destroy 8
+Destroys the given dataset(s), snapshot(s), or bookmark.
+.It Xr zfs-rename 8
+Renames the given dataset (filesystem or snapshot).
+.It Xr zfs-upgrade 8
+Manage upgrading the on-disk version of filesystems.
+.El
+.
+.Ss Snapshots
+.Bl -tag -width ""
+.It Xr zfs-snapshot 8
+Creates snapshots with the given names.
+.It Xr zfs-rollback 8
+Roll back the given dataset to a previous snapshot.
+.It Xr zfs-hold 8 Ns / Ns Xr zfs-release 8
+Add or remove a hold reference to the specified snapshot or snapshots.
+If a hold exists on a snapshot, attempts to destroy that snapshot by using the
+.Nm zfs Cm destroy
+command return
+.Sy EBUSY .
+.It Xr zfs-diff 8
+Display the difference between a snapshot of a given filesystem and another
+snapshot of that filesystem from a later time or the current contents of the
+filesystem.
+.El
+.
+.Ss Clones
+.Bl -tag -width ""
+.It Xr zfs-clone 8
+Creates a clone of the given snapshot.
+.It Xr zfs-promote 8
+Promotes a clone file system to no longer be dependent on its
+.Qq origin
+snapshot.
+.El
+.
+.Ss Send & Receive
+.Bl -tag -width ""
+.It Xr zfs-send 8
+Generate a send stream, which may be of a filesystem, and may be incremental
+from a bookmark.
+.It Xr zfs-receive 8
+Creates a snapshot whose contents are as specified in the stream provided on
+standard input.
+If a full stream is received, then a new file system is created as well.
+Streams are created using the
+.Xr zfs-send 8
+subcommand, which by default creates a full stream.
+.It Xr zfs-bookmark 8
+Creates a new bookmark of the given snapshot or bookmark.
+Bookmarks mark the point in time when the snapshot was created, and can be used
+as the incremental source for a
+.Nm zfs Cm send
+command.
+.It Xr zfs-redact 8
+Generate a new redaction bookmark.
+This feature can be used to allow clones of a filesystem to be made available on
+a remote system, in the case where their parent need not (or needs to not) be
+usable.
+.El
+.
+.Ss Properties
+.Bl -tag -width ""
+.It Xr zfs-get 8
+Displays properties for the given datasets.
+.It Xr zfs-set 8
+Sets the property or list of properties to the given value(s) for each dataset.
+.It Xr zfs-inherit 8
+Clears the specified property, causing it to be inherited from an ancestor,
+restored to default if no ancestor has the property set, or with the
+.Fl S
+option reverted to the received value if one exists.
+.El
+.
+.Ss Quotas
+.Bl -tag -width ""
+.It Xr zfs-userspace 8 Ns / Ns Xr zfs-groupspace 8 Ns / Ns Xr zfs-projectspace 8
+Displays space consumed by, and quotas on, each user, group, or project
+in the specified filesystem or snapshot.
+.It Xr zfs-project 8
+List, set, or clear project ID and/or inherit flag on the files or directories.
+.El
+.
+.Ss Mountpoints
+.Bl -tag -width ""
+.It Xr zfs-mount 8
+Displays all ZFS file systems currently mounted, or mounts a ZFS filesystem
+on the path described by its
+.Sy mountpoint
+property.
+.It Xr zfs-unmount 8
+Unmounts currently mounted ZFS file systems.
+.El
+.
+.Ss Shares
+.Bl -tag -width ""
+.It Xr zfs-share 8
+Shares available ZFS file systems.
+.It Xr zfs-unshare 8
+Unshares currently shared ZFS file systems.
+.El
+.
+.Ss Delegated Administration
+.Bl -tag -width ""
+.It Xr zfs-allow 8
+Delegate permissions on the specified filesystem or volume.
+.It Xr zfs-unallow 8
+Remove delegated permissions on the specified filesystem or volume.
+.El
+.
+.Ss Encryption
+.Bl -tag -width ""
+.It Xr zfs-change-key 8
+Add or change an encryption key on the specified dataset.
+.It Xr zfs-load-key 8
+Load the key for the specified encrypted dataset, enabling access.
+.It Xr zfs-unload-key 8
+Unload a key for the specified dataset,
+removing the ability to access the dataset.
+.El
+.
+.Ss Channel Programs
+.Bl -tag -width ""
+.It Xr zfs-program 8
+Execute ZFS administrative operations
+programmatically via a Lua script-language channel program.
+.El
+.
+.Ss Jails
+.Bl -tag -width ""
+.It Xr zfs-jail 8
+Attaches a filesystem to a jail.
+.It Xr zfs-unjail 8
+Detaches a filesystem from a jail.
+.El
+.
+.Ss Waiting
+.Bl -tag -width ""
+.It Xr zfs-wait 8
+Wait for background activity in a filesystem to complete.
+.El
+.
+.Sh EXIT STATUS
+The
+.Nm
+utility exits
+.Sy 0
+on success,
+.Sy 1
+if an error occurs, and
+.Sy 2
+if invalid command line options were specified.
+.
+.Sh EXAMPLES
+.\" Examples 1, 4, 6, 7, 11, 14, 16 are shared with zfs-set.8.
+.\" Examples 1, 10 are shared with zfs-create.8.
+.\" Examples 2, 3, 10, 15 are also shared with zfs-snapshot.8.
+.\" Examples 3, 10, 15 are shared with zfs-destroy.8.
+.\" Examples 5 are shared with zfs-list.8.
+.\" Examples 8 are shared with zfs-rollback.8.
+.\" Examples 9, 10 are shared with zfs-clone.8.
+.\" Examples 10 are also shared with zfs-promote.8.
+.\" Examples 10, 15 also are shared with zfs-rename.8.
+.\" Examples 12, 13 are shared with zfs-send.8.
+.\" Examples 12, 13 are also shared with zfs-receive.8.
+.\" Examples 17, 18, 19, 20, 21 are shared with zfs-allow.8.
+.\" Examples 22 are shared with zfs-diff.8.
+.\" Examples 23 are shared with zfs-bookmark.8.
+.\" Make sure to update them omnidirectionally
+.Ss Example 1 : No Creating a ZFS File System Hierarchy
+The following commands create a file system named
+.Ar pool/home
+and a file system named
+.Ar pool/home/bob .
+The mount point
+.Pa /export/home
+is set for the parent file system, and is automatically inherited by the child
+file system.
+.Dl # Nm zfs Cm create Ar pool/home
+.Dl # Nm zfs Cm set Sy mountpoint Ns = Ns Ar /export/home pool/home
+.Dl # Nm zfs Cm create Ar pool/home/bob
+.
+.Ss Example 2 : No Creating a ZFS Snapshot
+The following command creates a snapshot named
+.Ar yesterday .
+This snapshot is mounted on demand in the
+.Pa .zfs/snapshot
+directory at the root of the
+.Ar pool/home/bob
+file system.
+.Dl # Nm zfs Cm snapshot Ar pool/home/bob Ns @ Ns Ar yesterday
+.
+.Ss Example 3 : No Creating and Destroying Multiple Snapshots
+The following command creates snapshots named
+.Ar yesterday No of Ar pool/home
+and all of its descendent file systems.
+Each snapshot is mounted on demand in the
+.Pa .zfs/snapshot
+directory at the root of its file system.
+The second command destroys the newly created snapshots.
+.Dl # Nm zfs Cm snapshot Fl r Ar pool/home Ns @ Ns Ar yesterday
+.Dl # Nm zfs Cm destroy Fl r Ar pool/home Ns @ Ns Ar yesterday
+.
+.Ss Example 4 : No Disabling and Enabling File System Compression
+The following command disables the
+.Sy compression
+property for all file systems under
+.Ar pool/home .
+The next command explicitly enables
+.Sy compression
+for
+.Ar pool/home/anne .
+.Dl # Nm zfs Cm set Sy compression Ns = Ns Sy off Ar pool/home
+.Dl # Nm zfs Cm set Sy compression Ns = Ns Sy on Ar pool/home/anne
+.
+.Ss Example 5 : No Listing ZFS Datasets
+The following command lists all active file systems and volumes in the system.
+Snapshots are displayed if
+.Sy listsnaps Ns = Ns Sy on .
+The default is
+.Sy off .
+See
+.Xr zpoolprops 7
+for more information on pool properties.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm list
+NAME USED AVAIL REFER MOUNTPOINT
+pool 450K 457G 18K /pool
+pool/home 315K 457G 21K /export/home
+pool/home/anne 18K 457G 18K /export/home/anne
+pool/home/bob 276K 457G 276K /export/home/bob
+.Ed
+.
+.Ss Example 6 : No Setting a Quota on a ZFS File System
+The following command sets a quota of 50 Gbytes for
+.Ar pool/home/bob :
+.Dl # Nm zfs Cm set Sy quota Ns = Ns Ar 50G pool/home/bob
+.
+.Ss Example 7 : No Listing ZFS Properties
+The following command lists all properties for
+.Ar pool/home/bob :
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm get Sy all Ar pool/home/bob
+NAME PROPERTY VALUE SOURCE
+pool/home/bob type filesystem -
+pool/home/bob creation Tue Jul 21 15:53 2009 -
+pool/home/bob used 21K -
+pool/home/bob available 20.0G -
+pool/home/bob referenced 21K -
+pool/home/bob compressratio 1.00x -
+pool/home/bob mounted yes -
+pool/home/bob quota 20G local
+pool/home/bob reservation none default
+pool/home/bob recordsize 128K default
+pool/home/bob mountpoint /pool/home/bob default
+pool/home/bob sharenfs off default
+pool/home/bob checksum on default
+pool/home/bob compression on local
+pool/home/bob atime on default
+pool/home/bob devices on default
+pool/home/bob exec on default
+pool/home/bob setuid on default
+pool/home/bob readonly off default
+pool/home/bob zoned off default
+pool/home/bob snapdir hidden default
+pool/home/bob acltype off default
+pool/home/bob aclmode discard default
+pool/home/bob aclinherit restricted default
+pool/home/bob canmount on default
+pool/home/bob xattr on default
+pool/home/bob copies 1 default
+pool/home/bob version 4 -
+pool/home/bob utf8only off -
+pool/home/bob normalization none -
+pool/home/bob casesensitivity sensitive -
+pool/home/bob vscan off default
+pool/home/bob nbmand off default
+pool/home/bob sharesmb off default
+pool/home/bob refquota none default
+pool/home/bob refreservation none default
+pool/home/bob primarycache all default
+pool/home/bob secondarycache all default
+pool/home/bob usedbysnapshots 0 -
+pool/home/bob usedbydataset 21K -
+pool/home/bob usedbychildren 0 -
+pool/home/bob usedbyrefreservation 0 -
+.Ed
+.Pp
+The following command gets a single property value:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm get Fl H o Sy value compression Ar pool/home/bob
+on
+.Ed
+.Pp
+The following command lists all properties with local settings for
+.Ar pool/home/bob :
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm get Fl r s Sy local Fl o Sy name , Ns Sy property , Ns Sy value all Ar pool/home/bob
+NAME PROPERTY VALUE
+pool/home/bob quota 20G
+pool/home/bob compression on
+.Ed
+.
+.Ss Example 8 : No Rolling Back a ZFS File System
+The following command reverts the contents of
+.Ar pool/home/anne
+to the snapshot named
+.Ar yesterday ,
+deleting all intermediate snapshots:
+.Dl # Nm zfs Cm rollback Fl r Ar pool/home/anne Ns @ Ns Ar yesterday
+.
+.Ss Example 9 : No Creating a ZFS Clone
+The following command creates a writable file system whose initial contents are
+the same as
+.Ar pool/home/bob@yesterday .
+.Dl # Nm zfs Cm clone Ar pool/home/bob@yesterday pool/clone
+.
+.Ss Example 10 : No Promoting a ZFS Clone
+The following commands illustrate how to test out changes to a file system, and
+then replace the original file system with the changed one, using clones, clone
+promotion, and renaming:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm create Ar pool/project/production
+ populate /pool/project/production with data
+.No # Nm zfs Cm snapshot Ar pool/project/production Ns @ Ns Ar today
+.No # Nm zfs Cm clone Ar pool/project/production@today pool/project/beta
+ make changes to /pool/project/beta and test them
+.No # Nm zfs Cm promote Ar pool/project/beta
+.No # Nm zfs Cm rename Ar pool/project/production pool/project/legacy
+.No # Nm zfs Cm rename Ar pool/project/beta pool/project/production
+ once the legacy version is no longer needed, it can be destroyed
+.No # Nm zfs Cm destroy Ar pool/project/legacy
+.Ed
+.
+.Ss Example 11 : No Inheriting ZFS Properties
+The following command causes
+.Ar pool/home/bob No and Ar pool/home/anne
+to inherit the
+.Sy checksum
+property from their parent.
+.Dl # Nm zfs Cm inherit Sy checksum Ar pool/home/bob pool/home/anne
+.
+.Ss Example 12 : No Remotely Replicating ZFS Data
+The following commands send a full stream and then an incremental stream to a
+remote machine, restoring them into
+.Em poolB/received/fs@a
+and
+.Em poolB/received/fs@b ,
+respectively.
+.Em poolB
+must contain the file system
+.Em poolB/received ,
+and must not initially contain
+.Em poolB/received/fs .
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm send Ar pool/fs@a |
+.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/received/fs Ns @ Ns Ar a
+.No # Nm zfs Cm send Fl i Ar a pool/fs@b |
+.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/received/fs
+.Ed
+.
+.Ss Example 13 : No Using the Nm zfs Cm receive Fl d No Option
+The following command sends a full stream of
+.Ar poolA/fsA/fsB@snap
+to a remote machine, receiving it into
+.Ar poolB/received/fsA/fsB@snap .
+The
+.Ar fsA/fsB@snap
+portion of the received snapshot's name is determined from the name of the sent
+snapshot.
+.Ar poolB
+must contain the file system
+.Ar poolB/received .
+If
+.Ar poolB/received/fsA
+does not exist, it is created as an empty file system.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm send Ar poolA/fsA/fsB@snap |
+.No " " Nm ssh Ar host Nm zfs Cm receive Fl d Ar poolB/received
+.Ed
+.
+.Ss Example 14 : No Setting User Properties
+The following example sets the user-defined
+.Ar com.example : Ns Ar department
+property for a dataset:
+.Dl # Nm zfs Cm set Ar com.example : Ns Ar department Ns = Ns Ar 12345 tank/accounting
+.
+.Ss Example 15 : No Performing a Rolling Snapshot
+The following example shows how to maintain a history of snapshots with a
+consistent naming scheme.
+To keep a week's worth of snapshots, the user destroys the oldest snapshot,
+renames the remaining snapshots, and then creates a new snapshot, as follows:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm destroy Fl r Ar pool/users@7daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@6daysago No @ Ns Ar 7daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@5daysago No @ Ns Ar 6daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@4daysago No @ Ns Ar 5daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@3daysago No @ Ns Ar 4daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@2daysago No @ Ns Ar 3daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@yesterday No @ Ns Ar 2daysago
+.No # Nm zfs Cm rename Fl r Ar pool/users@today No @ Ns Ar yesterday
+.No # Nm zfs Cm snapshot Fl r Ar pool/users Ns @ Ns Ar today
+.Ed
+.
+.Ss Example 16 : No Setting sharenfs Property Options on a ZFS File System
+The following commands show how to set
+.Sy sharenfs
+property options to enable read-write
+access for a set of IP addresses and to enable root access for system
+.Qq neo
+on the
+.Ar tank/home
+file system:
+.Dl # Nm zfs Cm set Sy sharenfs Ns = Ns ' Ns Ar rw Ns =@123.123.0.0/16:[::1],root= Ns Ar neo Ns ' tank/home
+.Pp
+If you are using DNS for host name resolution,
+specify the fully-qualified hostname.
+.
+.Ss Example 17 : No Delegating ZFS Administration Permissions on a ZFS Dataset
+The following example shows how to set permissions so that user
+.Ar cindys
+can create, destroy, mount, and take snapshots on
+.Ar tank/cindys .
+The permissions on
+.Ar tank/cindys
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm allow Sy cindys create , Ns Sy destroy , Ns Sy mount , Ns Sy snapshot Ar tank/cindys
+.No # Nm zfs Cm allow Ar tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+ user cindys create,destroy,mount,snapshot
+.Ed
+.Pp
+Because the
+.Ar tank/cindys
+mount point permission is set to 755 by default, user
+.Ar cindys
+will be unable to mount file systems under
+.Ar tank/cindys .
+Add an ACE similar to the following syntax to provide mount point access:
+.Dl # Cm chmod No A+user : Ns Ar cindys Ns :add_subdirectory:allow Ar /tank/cindys
+.
+.Ss Example 18 : No Delegating Create Time Permissions on a ZFS Dataset
+The following example shows how to grant anyone in the group
+.Ar staff
+permission to create file systems in
+.Ar tank/users .
+This syntax also allows staff members to destroy their own file systems, but not
+destroy anyone else's file system.
+The permissions on
+.Ar tank/users
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm allow Ar staff Sy create , Ns Sy mount Ar tank/users
+.No # Nm zfs Cm allow Fl c Sy destroy Ar tank/users
+.No # Nm zfs Cm allow Ar tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+ destroy
+Local+Descendent permissions:
+ group staff create,mount
+.Ed
+.
+.Ss Example 19 : No Defining and Granting a Permission Set on a ZFS Dataset
+The following example shows how to define and grant a permission set on the
+.Ar tank/users
+file system.
+The permissions on
+.Ar tank/users
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm allow Fl s No @ Ns Ar pset Sy create , Ns Sy destroy , Ns Sy snapshot , Ns Sy mount Ar tank/users
+.No # Nm zfs Cm allow staff No @ Ns Ar pset tank/users
+.No # Nm zfs Cm allow Ar tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+ @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+ group staff @pset
+.Ed
+.
+.Ss Example 20 : No Delegating Property Permissions on a ZFS Dataset
+The following example shows how to grant the ability to set quotas and
+reservations
+on the
+.Ar users/home
+file system.
+The permissions on
+.Ar users/home
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm allow Ar cindys Sy quota , Ns Sy reservation Ar users/home
+.No # Nm zfs Cm allow Ar users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+ user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME PROPERTY VALUE SOURCE
+users/home/marks quota 10G local
+.Ed
+.
+.Ss Example 21 : No Removing ZFS Delegated Permissions on a ZFS Dataset
+The following example shows how to remove the snapshot permission from the
+.Ar staff
+group on the
+.Sy tank/users
+file system.
+The permissions on
+.Sy tank/users
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm unallow Ar staff Sy snapshot Ar tank/users
+.No # Nm zfs Cm allow Ar tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+ @pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+ group staff @pset
+.Ed
+.
+.Ss Example 22 : No Showing the differences between a snapshot and a ZFS Dataset
+The following example shows how to see what has changed between a prior
+snapshot of a ZFS dataset and its current state.
+The
+.Fl F
+option is used to indicate type information for the files affected.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm diff Fl F Ar tank/test@before tank/test
+M / /tank/test/
+M F /tank/test/linked (+1)
+R F /tank/test/oldname -> /tank/test/newname
+- F /tank/test/deleted
++ F /tank/test/created
+M F /tank/test/modified
+.Ed
+.
+.Ss Example 23 : No Creating a bookmark
+The following example creates a bookmark to a snapshot.
+This bookmark can then be used instead of a snapshot in send streams.
+.Dl # Nm zfs Cm bookmark Ar rpool Ns @ Ns Ar snapshot rpool Ns # Ns Ar bookmark
+.
+.Ss Example 24 : No Setting Sy sharesmb No Property Options on a ZFS File System
+The following example shows how to share an SMB filesystem through ZFS.
+Note that a user and their password must be given.
+.Dl # Nm smbmount Ar //127.0.0.1/share_tmp /mnt/tmp Fl o No user=workgroup/turbo,password=obrut,uid=1000
+.Pp
+Minimal
+.Pa /etc/samba/smb.conf
+configuration is required, as follows.
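+.Pp
+A sketch of such a configuration, enabling Samba usershares
+.Pq paths and limits are illustrative and site-specific :
+.Bd -literal -compact -offset Ds
+[global]
+    usershare path = /var/lib/samba/usershares
+    usershare max shares = 100
+    usershare allow guests = yes
+    usershare owner only = no
+.Ed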
+.Pp
+Samba will need to bind to the loopback interface for the ZFS utilities to
+communicate with Samba.
+This is the default behavior for most Linux distributions.
+.Pp
+Samba must be able to authenticate a user.
+This can be done in a number of ways
+.Pq Xr passwd 5 , LDAP , Xr smbpasswd 5 , &c.\& .
+How to do this is outside the scope of this document – refer to
+.Xr smb.conf 5
+for more information.
+.Pp
+See the
+.Sx USERSHARES
+section for all configuration options,
+in case you need to modify any options of the share afterwards.
+Do note that any changes done with the
+.Xr net 8
+command will be undone if the share is ever unshared (e.g. via a reboot).
+.
+.Sh ENVIRONMENT VARIABLES
+.Bl -tag -width "ZFS_MODULE_TIMEOUT"
+.It Sy ZFS_COLOR
+Use ANSI color in
+.Nm zfs Cm diff
+and
+.Nm zfs Cm list
+output.
+.It Sy ZFS_MOUNT_HELPER
+Cause
+.Nm zfs Cm mount
+to use
+.Xr mount 8
+to mount ZFS datasets.
+This option is provided for backwards compatibility with older ZFS versions.
+.
+.It Sy ZFS_SET_PIPE_MAX
+Tells
+.Nm zfs
+to set the maximum pipe size for sends/receives.
+Disabled by default on Linux
+due to an unfixed deadlock in Linux's pipe size handling code.
+.
+.\" Shared with zpool.8
+.It Sy ZFS_MODULE_TIMEOUT
+Time, in seconds, to wait for
+.Pa /dev/zfs
+to appear.
+Defaults to
+.Sy 10 ,
+max
+.Sy 600 Pq 10 minutes .
+If
+.Pf < Sy 0 ,
+wait forever; if
+.Sy 0 ,
+don't wait.
+.El
+.
+.Sh INTERFACE STABILITY
+.Sy Committed .
+.
+.Sh SEE ALSO
+.Xr attr 1 ,
+.Xr gzip 1 ,
+.Xr ssh 1 ,
+.Xr chmod 2 ,
+.Xr fsync 2 ,
+.Xr stat 2 ,
+.Xr write 2 ,
+.Xr acl 5 ,
+.Xr attributes 5 ,
+.Xr exports 5 ,
+.Xr zfsconcepts 7 ,
+.Xr zfsprops 7 ,
+.Xr exportfs 8 ,
+.Xr mount 8 ,
+.Xr net 8 ,
+.Xr selinux 8 ,
+.Xr zfs-allow 8 ,
+.Xr zfs-bookmark 8 ,
+.Xr zfs-change-key 8 ,
+.Xr zfs-clone 8 ,
+.Xr zfs-create 8 ,
+.Xr zfs-destroy 8 ,
+.Xr zfs-diff 8 ,
+.Xr zfs-get 8 ,
+.Xr zfs-groupspace 8 ,
+.Xr zfs-hold 8 ,
+.Xr zfs-inherit 8 ,
+.Xr zfs-jail 8 ,
+.Xr zfs-list 8 ,
+.Xr zfs-load-key 8 ,
+.Xr zfs-mount 8 ,
+.Xr zfs-program 8 ,
+.Xr zfs-project 8 ,
+.Xr zfs-projectspace 8 ,
+.Xr zfs-promote 8 ,
+.Xr zfs-receive 8 ,
+.Xr zfs-redact 8 ,
+.Xr zfs-release 8 ,
+.Xr zfs-rename 8 ,
+.Xr zfs-rollback 8 ,
+.Xr zfs-send 8 ,
+.Xr zfs-set 8 ,
+.Xr zfs-share 8 ,
+.Xr zfs-snapshot 8 ,
+.Xr zfs-unallow 8 ,
+.Xr zfs-unjail 8 ,
+.Xr zfs-unload-key 8 ,
+.Xr zfs-unmount 8 ,
+.Xr zfs-unshare 8 ,
+.Xr zfs-upgrade 8 ,
+.Xr zfs-userspace 8 ,
+.Xr zfs-wait 8 ,
+.Xr zpool 8
diff --git a/share/man/man8/zpool-add.8 b/share/man/man8/zpool-add.8
@@ -0,0 +1,138 @@
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\" Copyright (c) 2024 by Delphix. All Rights Reserved.
+.\"
+.Dd March 8, 2024
+.Dt ZPOOL-ADD 8
+.Os
+.
+.Sh NAME
+.Nm zpool-add
+.Nd add vdevs to ZFS storage pool
+.Sh SYNOPSIS
+.Nm zpool
+.Cm add
+.Op Fl fgLnP
+.Op Fl -allow-in-use -allow-replication-mismatch -allow-ashift-mismatch
+.Oo Fl o Ar property Ns = Ns Ar value Oc
+.Ar pool vdev Ns …
+.
+.Sh DESCRIPTION
+Adds the specified virtual devices to the given pool.
+The
+.Ar vdev
+specification is described in the
+.Em Virtual Devices
+section of
+.Xr zpoolconcepts 7 .
+The behavior of the
+.Fl f
+option, and the device checks performed are described in the
+.Nm zpool Cm create
+subcommand.
+.Bl -tag -width Ds
+.It Fl f
+Forces use of
+.Ar vdev Ns s ,
+even if they appear in use, have conflicting ashift values, or specify
+a conflicting replication level.
+Not all devices can be overridden in this manner.
+.It Fl g
+Display
+.Ar vdev
+GUIDs instead of the normal device names.
+These GUIDs can be used in place of
+device names for the zpool detach/offline/remove/replace commands.
+.It Fl L
+Display real paths for
+.Ar vdev Ns s
+resolving all symbolic links.
+This can be used to look up the current block
+device name regardless of the
+.Pa /dev/disk
+path used to open it.
+.It Fl n
+Displays the configuration that would be used without actually adding the
+.Ar vdev Ns s .
+The actual device addition can still fail due to insufficient privileges or
+device sharing.
+.It Fl P
+Display real paths for
+.Ar vdev Ns s
+instead of only the last component of the path.
+This can be used in conjunction with the
+.Fl L
+flag.
+.It Fl o Ar property Ns = Ns Ar value
+Sets the given pool properties.
+See the
+.Xr zpoolprops 7
+manual page for a list of valid properties that can be set.
+The only property supported at the moment is
+.Sy ashift .
+.It Fl -allow-ashift-mismatch
+Disable the ashift validation which allows mismatched ashift values in the
+pool.
+Adding top-level
+.Ar vdev Ns s
+with different sector sizes will prohibit future device removal operations; see
+.Xr zpool-remove 8 .
+.It Fl -allow-in-use
+Allow vdevs to be added even if they might be in use in another pool.
+.It Fl -allow-replication-mismatch
+Allow vdevs with conflicting replication levels to be added to the pool.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 5, 13 from zpool.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Adding a Mirror to a ZFS Storage Pool
+The following command adds two mirrored disks to the pool
+.Ar tank ,
+assuming the pool is already made up of two-way mirrors.
+The additional space is immediately available to any datasets within the pool.
+.Dl # Nm zpool Cm add Ar tank Sy mirror Pa sda sdb
+.
+.Ss Example 2 : No Adding Cache Devices to a ZFS Pool
+The following command adds two disks for use as cache devices to a ZFS storage
+pool:
+.Dl # Nm zpool Cm add Ar pool Sy cache Pa sdc sdd
+.Pp
+Once added, the cache devices gradually fill with content from main memory.
+Depending on the size of your cache devices, it could take over an hour for
+them to fill.
+Capacity and reads can be monitored using the
+.Cm iostat
+subcommand as follows:
+.Dl # Nm zpool Cm iostat Fl v Ar pool 5
+.
+.Sh SEE ALSO
+.Xr zpool-attach 8 ,
+.Xr zpool-import 8 ,
+.Xr zpool-initialize 8 ,
+.Xr zpool-online 8 ,
+.Xr zpool-remove 8
diff --git a/share/man/man8/zpool-attach.8 b/share/man/man8/zpool-attach.8
@@ -0,0 +1,141 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd June 28, 2023
+.Dt ZPOOL-ATTACH 8
+.Os
+.
+.Sh NAME
+.Nm zpool-attach
+.Nd attach new device to existing ZFS vdev
+.Sh SYNOPSIS
+.Nm zpool
+.Cm attach
+.Op Fl fsw
+.Oo Fl o Ar property Ns = Ns Ar value Oc
+.Ar pool device new_device
+.
+.Sh DESCRIPTION
+Attaches
+.Ar new_device
+to the existing
+.Ar device .
+The behavior differs depending on whether the existing
+.Ar device
+is a RAID-Z device, or a mirror/plain device.
+.Pp
+If the existing device is a mirror or plain device
+.Pq e.g. specified as Qo Li sda Qc or Qq Li mirror-7 ,
+the new device will be mirrored with the existing device, a resilver will be
+initiated, and the new device will contribute to additional redundancy once the
+resilver completes.
+If
+.Ar device
+is not currently part of a mirrored configuration,
+.Ar device
+automatically transforms into a two-way mirror of
+.Ar device
+and
+.Ar new_device .
+If
+.Ar device
+is part of a two-way mirror, attaching
+.Ar new_device
+creates a three-way mirror, and so on.
+In either case,
+.Ar new_device
+begins to resilver immediately and any running scrub is cancelled.
+.Pp
+If the existing device is a RAID-Z device
+.Pq e.g. specified as Qq Ar raidz2-0 ,
+the new device will become part of that RAID-Z group.
+A "raidz expansion" will be initiated, and once the expansion completes,
+the new device will contribute additional space to the RAID-Z group.
+The expansion entails reading all allocated space from existing disks in the
+RAID-Z group, and rewriting it to the new disks in the RAID-Z group (including
+the newly added
+.Ar device ) .
+Its progress can be monitored with
+.Nm zpool Cm status .
+.Pp
+Data redundancy is maintained during and after the expansion.
+If a disk fails while the expansion is in progress, the expansion pauses until
+the health of the RAID-Z vdev is restored (e.g. by replacing the failed disk
+and waiting for reconstruction to complete).
+Expansion does not change the number of failures that can be tolerated
+without data loss (e.g. a RAID-Z2 is still a RAID-Z2 even after expansion).
+A RAID-Z vdev can be expanded multiple times.
+.Pp
+After the expansion completes, old blocks retain their old data-to-parity
+ratio
+.Pq e.g. 5-wide RAID-Z2 has 3 data and 2 parity
+but distributed among the larger set of disks.
+New blocks will be written with the new data-to-parity ratio (e.g. a 5-wide
+RAID-Z2 which has been expanded once to 6-wide, has 4 data and 2 parity).
+However, the vdev's assumed parity ratio does not change, so slightly less
+space than is expected may be reported for newly-written blocks, according to
+.Nm zfs Cm list ,
+.Nm df ,
+.Nm ls Fl s ,
+and similar tools.
+.Pp
+A pool-wide scrub is initiated at the end of the expansion in order to verify
+the checksums of all blocks which have been copied during the expansion.
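+.Pp
+For example, to expand the RAID-Z group
+.Ar raidz2-0
+of pool
+.Ar tank
+with a new disk and wait for the expansion to finish
+.Pq pool and device names are illustrative :
+.Dl # Nm zpool Cm attach Fl w Ar tank raidz2-0 sde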
+.Bl -tag -width Ds
+.It Fl f
+Forces use of
+.Ar new_device ,
+even if it appears to be in use.
+Not all devices can be overridden in this manner.
+.It Fl o Ar property Ns = Ns Ar value
+Sets the given pool properties.
+See the
+.Xr zpoolprops 7
+manual page for a list of valid properties that can be set.
+The only property supported at the moment is
+.Sy ashift .
+.It Fl s
+When attaching to a mirror or plain device, the
+.Ar new_device
+is reconstructed sequentially to restore redundancy as quickly as possible.
+Checksums are not verified during sequential reconstruction so a scrub is
+started when the resilver completes.
+.It Fl w
+Waits until
+.Ar new_device
+has finished resilvering or expanding before returning.
+.El
+.
+.Sh SEE ALSO
+.Xr zpool-add 8 ,
+.Xr zpool-detach 8 ,
+.Xr zpool-import 8 ,
+.Xr zpool-initialize 8 ,
+.Xr zpool-online 8 ,
+.Xr zpool-replace 8 ,
+.Xr zpool-resilver 8
diff --git a/share/man/man8/zpool-checkpoint.8 b/share/man/man8/zpool-checkpoint.8
@@ -0,0 +1,72 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd May 27, 2021
+.Dt ZPOOL-CHECKPOINT 8
+.Os
+.
+.Sh NAME
+.Nm zpool-checkpoint
+.Nd check-point current ZFS storage pool state
+.Sh SYNOPSIS
+.Nm zpool
+.Cm checkpoint
+.Op Fl d Op Fl w
+.Ar pool
+.
+.Sh DESCRIPTION
+Checkpoints the current state of
+.Ar pool ,
+which can later be restored by
+.Nm zpool Cm import --rewind-to-checkpoint .
+The existence of a checkpoint in a pool prohibits the following
+.Nm zpool
+subcommands:
+.Cm remove , attach , detach , split , No and Cm reguid .
+In addition, it may break reservation boundaries if the pool lacks free
+space.
+The
+.Nm zpool Cm status
+command indicates the existence of a checkpoint or the progress of discarding a
+checkpoint from a pool.
+.Nm zpool Cm list
+can be used to check how much space the checkpoint takes from the pool.
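+.Pp
+For example, to checkpoint pool
+.Ar tank
+before a risky change and later rewind to that state
+.Pq the pool name is illustrative :
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm checkpoint Ar tank
+ make the risky change, then decide to roll back
+.No # Nm zpool Cm export Ar tank
+.No # Nm zpool Cm import Fl -rewind-to-checkpoint Ar tank
+.Ed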
+.
+.Sh OPTIONS
+.Bl -tag -width Ds
+.It Fl d , -discard
+Discards an existing checkpoint from
+.Ar pool .
+.It Fl w , -wait
+Waits until the checkpoint has finished being discarded before returning.
+.El
+.
+.Sh SEE ALSO
+.Xr zfs-snapshot 8 ,
+.Xr zpool-import 8 ,
+.Xr zpool-status 8
diff --git a/share/man/man8/zpool-clear.8 b/share/man/man8/zpool-clear.8
@@ -0,0 +1,71 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd May 27, 2021
+.Dt ZPOOL-CLEAR 8
+.Os
+.
+.Sh NAME
+.Nm zpool-clear
+.Nd clear device errors in ZFS storage pool
+.Sh SYNOPSIS
+.Nm zpool
+.Cm clear
+.Op Fl -power
+.Ar pool
+.Oo Ar device Oc Ns …
+.
+.Sh DESCRIPTION
+Clears device errors in a pool.
+If no arguments are specified, all device errors within the pool are cleared.
+If one or more devices are specified, only those errors associated with the
+specified device or devices are cleared.
+.Pp
+If the pool was suspended it will be brought back online provided the
+devices can be accessed.
+Pools with
+.Sy multihost
+enabled which have been suspended cannot be resumed when there is evidence
+that the pool was imported by another host.
+The same checks performed during an import will be applied before the clear
+proceeds.
+.Bl -tag -width Ds
+.It Fl -power
+Power on the device's slot in the storage enclosure and wait for the device
+to show up before attempting to clear errors.
+This is done on all the devices specified.
+Alternatively, you can set the
+.Sy ZPOOL_AUTO_POWER_ON_SLOT
+environment variable to always enable this behavior.
+Note: This flag currently works on Linux only.
+.El
+.
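+.Sh EXAMPLES
+As an illustrative sketch (pool and device names are assumptions), the
+following clears all errors in a pool and then only those of a single
+device:
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm clear Ar tank
+.No # Nm zpool Cm clear Ar tank Pa sda
+.Ed
+.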
+.Sh SEE ALSO
+.Xr zdb 8 ,
+.Xr zpool-reopen 8 ,
+.Xr zpool-status 8
diff --git a/share/man/man8/zpool-create.8 b/share/man/man8/zpool-create.8
@@ -0,0 +1,244 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\" Copyright (c) 2021, Colm Buckley <colm@tuatha.org>
+.\"
+.Dd March 16, 2022
+.Dt ZPOOL-CREATE 8
+.Os
+.
+.Sh NAME
+.Nm zpool-create
+.Nd create ZFS storage pool
+.Sh SYNOPSIS
+.Nm zpool
+.Cm create
+.Op Fl dfn
+.Op Fl m Ar mountpoint
+.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
+.Oo Fl o Sy feature@ Ns Ar feature Ns = Ns Ar value Oc
+.Op Fl o Ar compatibility Ns = Ns Sy off Ns | Ns Sy legacy Ns | Ns Ar file Ns Oo , Ns Ar file Oc Ns …
+.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns …
+.Op Fl R Ar root
+.Op Fl t Ar tname
+.Ar pool
+.Ar vdev Ns …
+.
+.Sh DESCRIPTION
+Creates a new storage pool containing the virtual devices specified on the
+command line.
+The pool name must begin with a letter, and can only contain
+alphanumeric characters as well as the underscore
+.Pq Qq Sy _ ,
+dash
+.Pq Qq Sy \&- ,
+colon
+.Pq Qq Sy \&: ,
+space
+.Pq Qq Sy \&\ ,
+and period
+.Pq Qq Sy \&. .
+The pool names
+.Sy mirror ,
+.Sy raidz ,
+.Sy draid ,
+.Sy spare
+and
+.Sy log
+are reserved, as are names beginning with
+.Sy mirror ,
+.Sy raidz ,
+.Sy draid ,
+and
+.Sy spare .
+The
+.Ar vdev
+specification is described in the
+.Sx Virtual Devices
+section of
+.Xr zpoolconcepts 7 .
+.Pp
+The command attempts to verify that each device specified is accessible and not
+currently in use by another subsystem.
+However, this check is not robust enough
+to detect simultaneous attempts to use a new device in different pools, even if
+.Sy multihost Ns = Sy enabled .
+The administrator must ensure that simultaneous invocations of any combination
+of
+.Nm zpool Cm replace ,
+.Nm zpool Cm create ,
+.Nm zpool Cm add ,
+or
+.Nm zpool Cm labelclear
+do not refer to the same device.
+Using the same device in two pools will result in pool corruption.
+.Pp
+There are some uses, such as being currently mounted, or specified as the
+dedicated dump device, that prevent a device from ever being used by ZFS.
+Other uses, such as having a preexisting UFS file system, can be overridden with
+.Fl f .
+.Pp
+The command also checks that the replication strategy for the pool is
+consistent.
+An attempt to combine redundant and non-redundant storage in a single pool,
+or to mix disks and files, results in an error unless
+.Fl f
+is specified.
+The use of differently-sized devices within a single raidz or mirror group is
+also flagged as an error unless
+.Fl f
+is specified.
+.Pp
+Unless the
+.Fl R
+option is specified, the default mount point is
+.Pa / Ns Ar pool .
+The mount point must not exist or must be empty, or else the root dataset
+cannot be mounted.
+This can be overridden with the
+.Fl m
+option.
+.Pp
+By default all supported features are enabled on the new pool.
+The
+.Fl d
+option and the
+.Fl o Ar compatibility
+property
+.Pq e.g. Fl o Sy compatibility Ns = Ns Ar 2020
+can be used to restrict the features that are enabled, so that the
+pool can be imported on other releases of ZFS.
+.Bl -tag -width "-t tname"
+.It Fl d
+Do not enable any features on the new pool.
+Individual features can be enabled by setting their corresponding properties to
+.Sy enabled
+with
+.Fl o .
+See
+.Xr zpool-features 7
+for details about feature properties.
+.It Fl f
+Forces use of
+.Ar vdev Ns s ,
+even if they appear in use or specify a conflicting replication level.
+Not all devices can be overridden in this manner.
+.It Fl m Ar mountpoint
+Sets the mount point for the root dataset.
+The default mount point is
+.Pa /pool
+or
+.Pa altroot/pool
+if
+.Sy altroot
+is specified.
+The mount point must be an absolute path,
+.Sy legacy ,
+or
+.Sy none .
+For more information on dataset mount points, see
+.Xr zfsprops 7 .
+.It Fl n
+Displays the configuration that would be used without actually creating the
+pool.
+The actual pool creation can still fail due to insufficient privileges or
+device sharing.
+.It Fl o Ar property Ns = Ns Ar value
+Sets the given pool properties.
+See
+.Xr zpoolprops 7
+for a list of valid properties that can be set.
+.It Fl o Ar compatibility Ns = Ns Sy off Ns | Ns Sy legacy Ns | Ns Ar file Ns Oo , Ns Ar file Oc Ns …
+Specifies compatibility feature sets.
+See
+.Xr zpool-features 7
+for more information about compatibility feature sets.
+.It Fl o Sy feature@ Ns Ar feature Ns = Ns Ar value
+Sets the given pool feature.
+See the
+.Xr zpool-features 7
+section for a list of valid features that can be set.
+The value can be either
+.Sy disabled
+or
+.Sy enabled .
+.It Fl O Ar file-system-property Ns = Ns Ar value
+Sets the given file system properties in the root file system of the pool.
+See
+.Xr zfsprops 7
+for a list of valid properties that can be set.
+.It Fl R Ar root
+Equivalent to
+.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
+.It Fl t Ar tname
+Sets the in-core pool name to
+.Ar tname
+while the on-disk name will be the name specified as
+.Ar pool .
+This will set the default of the
+.Sy cachefile
+property to
+.Sy none .
+This is intended
+to handle name space collisions when creating pools for other systems,
+such as virtual machines or physical machines whose pools live on network
+block devices.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 1, 2, 3, 4, 11, 12 from zpool.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Creating a RAID-Z Storage Pool
+The following command creates a pool with a single raidz root vdev that
+consists of six disks:
+.Dl # Nm zpool Cm create Ar tank Sy raidz Pa sda sdb sdc sdd sde sdf
+.
+.Ss Example 2 : No Creating a Mirrored Storage Pool
+The following command creates a pool with two mirrors, where each mirror
+contains two disks:
+.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy mirror Pa sdc sdd
+.
+.Ss Example 3 : No Creating a ZFS Storage Pool by Using Partitions
+The following command creates a non-redundant pool using two disk partitions:
+.Dl # Nm zpool Cm create Ar tank Pa sda1 sdb2
+.
+.Ss Example 4 : No Creating a ZFS Storage Pool by Using Files
+The following command creates a non-redundant pool using files.
+While not recommended, a pool based on files can be useful for experimental
+purposes.
+.Dl # Nm zpool Cm create Ar tank Pa /path/to/file/a /path/to/file/b
+.
+.Ss Example 5 : No Managing Hot Spares
+The following command creates a new pool with an available hot spare:
+.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy spare Pa sdc
+.
+.Ss Example 6 : No Creating a ZFS Pool with Mirrored Separate Intent Logs
+The following command creates a ZFS storage pool consisting of two, two-way
+mirrors and mirrored log devices:
+.Dl # Nm zpool Cm create Ar pool Sy mirror Pa sda sdb Sy mirror Pa sdc sdd Sy log mirror Pa sde sdf
+.
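+.Ss Example 7 : No Creating a Pool with Pool and File System Properties
+This additional sketch is not among the examples shared with
+.Xr zpool 8 ;
+the names and values are illustrative.
+It creates a mirrored pool with a fixed
+.Sy ashift ,
+a restricted compatibility feature set, and compression enabled on the
+root dataset:
+.Dl # Nm zpool Cm create Fl o Sy ashift Ns = Ns Ar 12 Fl o Sy compatibility Ns = Ns Sy legacy Fl O Sy compression Ns = Ns Sy on Ar tank Sy mirror Pa sda sdb
+.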
+.Sh SEE ALSO
+.Xr zpool-destroy 8 ,
+.Xr zpool-export 8 ,
+.Xr zpool-import 8
diff --git a/share/man/man8/zpool-destroy.8 b/share/man/man8/zpool-destroy.8
@@ -0,0 +1,57 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd March 16, 2022
+.Dt ZPOOL-DESTROY 8
+.Os
+.
+.Sh NAME
+.Nm zpool-destroy
+.Nd destroy ZFS storage pool
+.Sh SYNOPSIS
+.Nm zpool
+.Cm destroy
+.Op Fl f
+.Ar pool
+.
+.Sh DESCRIPTION
+Destroys the given pool, freeing up any devices for other use.
+This command tries to unmount any active datasets before destroying the pool.
+.Bl -tag -width Ds
+.It Fl f
+Forcefully unmount all active datasets.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 7 from zpool.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Destroying a ZFS Storage Pool
+The following command destroys the pool
+.Ar tank
+and any datasets contained within:
+.Dl # Nm zpool Cm destroy Fl f Ar tank
diff --git a/share/man/man8/zpool-detach.8 b/share/man/man8/zpool-detach.8
@@ -0,0 +1,58 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd August 9, 2019
+.Dt ZPOOL-DETACH 8
+.Os
+.
+.Sh NAME
+.Nm zpool-detach
+.Nd detach device from ZFS mirror
+.Sh SYNOPSIS
+.Nm zpool
+.Cm detach
+.Ar pool device
+.
+.Sh DESCRIPTION
+Detaches
+.Ar device
+from a mirror.
+The operation is refused if there are no other valid replicas of the data.
+If
+.Ar device
+may be re-added to the pool later on then consider the
+.Nm zpool Cm offline
+command instead.
+.
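+.Sh EXAMPLES
+A minimal illustrative invocation (pool and device names are
+assumptions) detaching one side of a mirror:
+.Dl # Nm zpool Cm detach Ar tank Pa sdb
+.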
+.Sh SEE ALSO
+.Xr zpool-attach 8 ,
+.Xr zpool-labelclear 8 ,
+.Xr zpool-offline 8 ,
+.Xr zpool-remove 8 ,
+.Xr zpool-replace 8 ,
+.Xr zpool-split 8
diff --git a/share/man/man8/zpool-events.8 b/share/man/man8/zpool-events.8
@@ -0,0 +1,482 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\" Copyright (c) 2024, Klara Inc.
+.\"
+.Dd February 28, 2024
+.Dt ZPOOL-EVENTS 8
+.Os
+.
+.Sh NAME
+.Nm zpool-events
+.Nd list recent events generated by kernel
+.Sh SYNOPSIS
+.Nm zpool
+.Cm events
+.Op Fl vHf
+.Op Ar pool
+.Nm zpool
+.Cm events
+.Fl c
+.
+.Sh DESCRIPTION
+Lists all recent events generated by the ZFS kernel modules.
+These events are consumed by the
+.Xr zed 8
+and used to automate administrative tasks such as replacing a failed device
+with a hot spare.
+For more information about the subclasses and event payloads
+that can be generated see
+.Sx EVENTS
+and the following sections.
+.
+.Sh OPTIONS
+.Bl -tag -compact -width Ds
+.It Fl c
+Clear all previous events.
+.It Fl f
+Follow mode.
+.It Fl H
+Scripted mode.
+Do not display headers, and separate fields by a
+single tab instead of arbitrary space.
+.It Fl v
+Print the entire payload for each event.
+.El
+.
+.Sh EVENTS
+These are the different event subclasses.
+The full event name would be
+.Sy ereport.fs.zfs.\& Ns Em SUBCLASS ,
+but only the last part is listed here.
+.Pp
+.Bl -tag -compact -width "vdev.bad_guid_sum"
+.It Sy checksum
+Issued when a checksum error has been detected.
+.It Sy io
+Issued when there is an I/O error in a vdev in the pool.
+.It Sy data
+Issued when there have been data errors in the pool.
+.It Sy deadman
+Issued when an I/O request is determined to be "hung"; this can be caused
+by lost completion events due to flaky hardware or drivers.
+See
+.Sy zfs_deadman_failmode
+in
+.Xr zfs 4
+for additional information regarding "hung" I/O detection and configuration.
+.It Sy delay
+Issued when a completed I/O request exceeds the maximum allowed time
+specified by the
+.Sy zio_slow_io_ms
+module parameter.
+This can be an indicator of problems with the underlying storage device.
+The number of delay events is ratelimited by the
+.Sy zfs_slow_io_events_per_second
+module parameter.
+.It Sy dio_verify_rd
+Issued when there was a checksum verify error after a Direct I/O read has been
+issued.
+.It Sy dio_verify_wr
+Issued when there was a checksum verify error after a Direct I/O write has been
+issued.
+This event can only take place if the module parameter
+.Sy zfs_vdev_direct_write_verify
+is not set to zero.
+See
+.Xr zfs 4
+for more details on the
+.Sy zfs_vdev_direct_write_verify
+module parameter.
+.It Sy config
+Issued every time a vdev change has been made to the pool.
+.It Sy zpool
+Issued when a pool cannot be imported.
+.It Sy zpool.destroy
+Issued when a pool is destroyed.
+.It Sy zpool.export
+Issued when a pool is exported.
+.It Sy zpool.import
+Issued when a pool is imported.
+.It Sy zpool.reguid
+Issued when a REGUID (a new unique identifier for the pool) has been
+generated.
+.It Sy vdev.unknown
+Issued when the vdev is unknown.
+Such as trying to clear device errors on a vdev that has failed or been
+removed from the system or pool and is no longer available.
+.It Sy vdev.open_failed
+Issued when a vdev could not be opened (because it didn't exist for example).
+.It Sy vdev.corrupt_data
+Issued when corrupt data has been detected on a vdev.
+.It Sy vdev.no_replicas
+Issued when there are no more replicas to sustain the pool.
+This would lead to the pool being
+.Em DEGRADED .
+.It Sy vdev.bad_guid_sum
+Issued when a missing device in the pool has been detected.
+.It Sy vdev.too_small
+Issued when the system (kernel) has removed a device, and ZFS
+notices that the device is no longer there.
+This is usually followed by a
+.Sy probe_failure
+event.
+.It Sy vdev.bad_label
+Issued when the label is OK but invalid.
+.It Sy vdev.bad_ashift
+Issued when the ashift alignment requirement has increased.
+.It Sy vdev.remove
+Issued when a vdev is detached from a mirror (or a spare detached from a
+vdev where it has been used to replace a failed drive - only works if
+the original drive has been re-added).
+.It Sy vdev.clear
+Issued when clearing device errors in a pool.
+Such as running
+.Nm zpool Cm clear
+on a device in the pool.
+.It Sy vdev.check
+Issued when a check to see if a given vdev could be opened is started.
+.It Sy vdev.spare
+Issued when a spare has kicked in to replace a failed device.
+.It Sy vdev.autoexpand
+Issued when a vdev can be automatically expanded.
+.It Sy io_failure
+Issued when there is an I/O failure in a vdev in the pool.
+.It Sy probe_failure
+Issued when a probe fails on a vdev.
+This would occur if a vdev
+has been removed from the system outside of ZFS (such as when the kernel
+has removed the device).
+.It Sy log_replay
+Issued when the intent log cannot be replayed.
+This can occur in the case of a missing or damaged log device.
+.It Sy resilver.start
+Issued when a resilver is started.
+.It Sy resilver.finish
+Issued when the running resilver has finished.
+.It Sy scrub.start
+Issued when a scrub is started on a pool.
+.It Sy scrub.finish
+Issued when a pool has finished scrubbing.
+.It Sy scrub.abort
+Issued when a scrub is aborted on a pool.
+.It Sy scrub.resume
+Issued when a scrub is resumed on a pool.
+.It Sy scrub.paused
+Issued when a scrub is paused on a pool.
+.It Sy bootfs.vdev.attach
+.El
+.
+.Sh PAYLOADS
+This is the payload (data, information) that accompanies an
+event.
+.Pp
+For
+.Xr zed 8 ,
+these are set to uppercase and prefixed with
+.Sy ZEVENT_ .
+.Pp
+.Bl -tag -compact -width "vdev_cksum_errors"
+.It Sy pool
+Pool name.
+.It Sy pool_failmode
+Failmode -
+.Sy wait ,
+.Sy continue ,
+or
+.Sy panic .
+See the
+.Sy failmode
+property in
+.Xr zpoolprops 7
+for more information.
+.It Sy pool_guid
+The GUID of the pool.
+.It Sy pool_context
+The load state for the pool (0=none, 1=open, 2=import, 3=tryimport,
+4=recover, 5=error).
+.It Sy vdev_guid
+The GUID of the vdev in question (the vdev failing or operated upon with
+.Nm zpool Cm clear ,
+etc.).
+.It Sy vdev_type
+Type of vdev -
+.Sy disk ,
+.Sy file ,
+.Sy mirror ,
+etc.
+See the
+.Sy Virtual Devices
+section of
+.Xr zpoolconcepts 7
+for more information on possible values.
+.It Sy vdev_path
+Full path of the vdev, including any
+.Em -partX .
+.It Sy vdev_devid
+ID of vdev (if any).
+.It Sy vdev_fru
+Physical FRU location.
+.It Sy vdev_state
+State of vdev (0=uninitialized, 1=closed, 2=offline, 3=removed, 4=failed to
+open, 5=faulted, 6=degraded, 7=healthy).
+.It Sy vdev_ashift
+The ashift value of the vdev.
+.It Sy vdev_complete_ts
+The time the last I/O request completed for the specified vdev.
+.It Sy vdev_delta_ts
+The time since the last I/O request completed for the specified vdev.
+.It Sy vdev_spare_paths
+List of spares, including full path and any
+.Em -partX .
+.It Sy vdev_spare_guids
+GUID(s) of spares.
+.It Sy vdev_read_errors
+The number of read errors detected on the vdev.
+.It Sy vdev_write_errors
+The number of write errors detected on the vdev.
+.It Sy vdev_cksum_errors
+The number of checksum errors detected on the vdev.
+.It Sy parent_guid
+GUID of the vdev parent.
+.It Sy parent_type
+Type of parent.
+See
+.Sy vdev_type .
+.It Sy parent_path
+Path of the vdev parent (if any).
+.It Sy parent_devid
+ID of the vdev parent (if any).
+.It Sy zio_objset
+The object set number for a given I/O request.
+.It Sy zio_object
+The object number for a given I/O request.
+.It Sy zio_level
+The indirect level for the block.
+Level 0 is the lowest level and includes data blocks.
+Values > 0 indicate metadata blocks at the appropriate level.
+.It Sy zio_blkid
+The block ID for a given I/O request.
+.It Sy zio_err
+The error number for a failure when handling a given I/O request,
+compatible with
+.Xr errno 3
+with the value of
+.Sy EBADE
+used to indicate a ZFS checksum error.
+.It Sy zio_offset
+The offset in bytes of where to write the I/O request for the specified vdev.
+.It Sy zio_size
+The size in bytes of the I/O request.
+.It Sy zio_flags
+The current flags describing how the I/O request should be handled.
+See the
+.Sy I/O FLAGS
+section for the full list of I/O flags.
+.It Sy zio_stage
+The current stage of the I/O in the pipeline.
+See the
+.Sy I/O STAGES
+section for a full list of all the I/O stages.
+.It Sy zio_pipeline
+The valid pipeline stages for the I/O.
+See the
+.Sy I/O STAGES
+section for a full list of all the I/O stages.
+.It Sy zio_delay
+The time elapsed (in nanoseconds) waiting for the block layer to complete the
+I/O request.
+Unlike
+.Sy zio_delta ,
+this does not include any vdev queuing time and is
+therefore solely a measure of the block layer performance.
+.It Sy zio_timestamp
+The time when a given I/O request was submitted.
+.It Sy zio_delta
+The time required to service a given I/O request.
+.It Sy prev_state
+The previous state of the vdev.
+.It Sy cksum_algorithm
+Checksum algorithm used.
+See
+.Xr zfsprops 7
+for more information on the available checksum algorithms.
+.It Sy cksum_byteswap
+Whether or not the data is byteswapped.
+.It Sy bad_ranges
+.No [\& Ns Ar start , end )
+pairs of corruption offsets.
+Offsets are always aligned on a 64-bit boundary,
+and can include some gaps of non-corruption.
+(See
+.Sy bad_ranges_min_gap )
+.It Sy bad_ranges_min_gap
+In order to bound the size of the
+.Sy bad_ranges
+array, gaps of non-corruption
+less than or equal to
+.Sy bad_ranges_min_gap
+bytes have been merged with
+adjacent corruption.
+Always at least 8 bytes, since corruption is detected on a 64-bit word basis.
+.It Sy bad_range_sets
+This array has one element per range in
+.Sy bad_ranges .
+Each element contains
+the count of bits in that range which were clear in the good data and set
+in the bad data.
+.It Sy bad_range_clears
+This array has one element per range in
+.Sy bad_ranges .
+Each element contains
+the count of bits for that range which were set in the good data and clear in
+the bad data.
+.It Sy bad_set_bits
+If this field exists, it is an array of
+.Pq Ar bad data No & ~( Ns Ar good data ) ;
+that is, the bits set in the bad data which are cleared in the good data.
+Each element corresponds to a byte whose offset is in a range in
+.Sy bad_ranges ,
+and the array is ordered by offset.
+Thus, the first element is the first byte in the first
+.Sy bad_ranges
+range, and the last element is the last byte in the last
+.Sy bad_ranges
+range.
+.It Sy bad_cleared_bits
+Like
+.Sy bad_set_bits ,
+but contains
+.Pq Ar good data No & ~( Ns Ar bad data ) ;
+that is, the bits set in the good data which are cleared in the bad data.
+.El
+.
+.Sh I/O STAGES
+The ZFS I/O pipeline is composed of various stages which are defined below.
+The individual stages are used to construct these basic I/O
+operations: Read, Write, Free, Claim, Flush, and Trim.
+These stages may be
+set on an event to describe the life cycle of a given I/O request.
+.Pp
+.TS
+tab(:);
+l l l .
+Stage:Bit Mask:Operations
+_:_:_
+ZIO_STAGE_OPEN:0x00000001:RWFCXT
+
+ZIO_STAGE_READ_BP_INIT:0x00000002:R-----
+ZIO_STAGE_WRITE_BP_INIT:0x00000004:-W----
+ZIO_STAGE_FREE_BP_INIT:0x00000008:--F---
+ZIO_STAGE_ISSUE_ASYNC:0x00000010:-WF--T
+ZIO_STAGE_WRITE_COMPRESS:0x00000020:-W----
+
+ZIO_STAGE_ENCRYPT:0x00000040:-W----
+ZIO_STAGE_CHECKSUM_GENERATE:0x00000080:-W----
+
+ZIO_STAGE_NOP_WRITE:0x00000100:-W----
+
+ZIO_STAGE_BRT_FREE:0x00000200:--F---
+
+ZIO_STAGE_DDT_READ_START:0x00000400:R-----
+ZIO_STAGE_DDT_READ_DONE:0x00000800:R-----
+ZIO_STAGE_DDT_WRITE:0x00001000:-W----
+ZIO_STAGE_DDT_FREE:0x00002000:--F---
+
+ZIO_STAGE_GANG_ASSEMBLE:0x00004000:RWFC--
+ZIO_STAGE_GANG_ISSUE:0x00008000:RWFC--
+
+ZIO_STAGE_DVA_THROTTLE:0x00010000:-W----
+ZIO_STAGE_DVA_ALLOCATE:0x00020000:-W----
+ZIO_STAGE_DVA_FREE:0x00040000:--F---
+ZIO_STAGE_DVA_CLAIM:0x00080000:---C--
+
+ZIO_STAGE_READY:0x00100000:RWFCXT
+
+ZIO_STAGE_VDEV_IO_START:0x00200000:RW--XT
+ZIO_STAGE_VDEV_IO_DONE:0x00400000:RW--XT
+ZIO_STAGE_VDEV_IO_ASSESS:0x00800000:RW--XT
+
+ZIO_STAGE_CHECKSUM_VERIFY:0x01000000:R-----
+ZIO_STAGE_DIO_CHECKSUM_VERIFY:0x02000000:-W----
+
+ZIO_STAGE_DONE:0x04000000:RWFCXT
+.TE
+.
+.Sh I/O FLAGS
+Every I/O request in the pipeline contains a set of flags which describe its
+function and are used to govern its behavior.
+These flags will be set in an event as a
+.Sy zio_flags
+payload entry.
+.Pp
+.TS
+tab(:);
+l l .
+Flag:Bit Mask
+_:_
+ZIO_FLAG_DONT_AGGREGATE:0x00000001
+ZIO_FLAG_IO_REPAIR:0x00000002
+ZIO_FLAG_SELF_HEAL:0x00000004
+ZIO_FLAG_RESILVER:0x00000008
+ZIO_FLAG_SCRUB:0x00000010
+ZIO_FLAG_SCAN_THREAD:0x00000020
+ZIO_FLAG_PHYSICAL:0x00000040
+
+ZIO_FLAG_CANFAIL:0x00000080
+ZIO_FLAG_SPECULATIVE:0x00000100
+ZIO_FLAG_CONFIG_WRITER:0x00000200
+ZIO_FLAG_DONT_RETRY:0x00000400
+ZIO_FLAG_NODATA:0x00001000
+ZIO_FLAG_INDUCE_DAMAGE:0x00002000
+
+ZIO_FLAG_IO_ALLOCATING:0x00004000
+ZIO_FLAG_IO_RETRY:0x00008000
+ZIO_FLAG_PROBE:0x00010000
+ZIO_FLAG_TRYHARD:0x00020000
+ZIO_FLAG_OPTIONAL:0x00040000
+
+ZIO_FLAG_DONT_QUEUE:0x00080000
+ZIO_FLAG_DONT_PROPAGATE:0x00100000
+ZIO_FLAG_IO_BYPASS:0x00200000
+ZIO_FLAG_IO_REWRITE:0x00400000
+ZIO_FLAG_RAW_COMPRESS:0x00800000
+ZIO_FLAG_RAW_ENCRYPT:0x01000000
+
+ZIO_FLAG_GANG_CHILD:0x02000000
+ZIO_FLAG_DDT_CHILD:0x04000000
+ZIO_FLAG_GODFATHER:0x08000000
+ZIO_FLAG_NOPWRITE:0x10000000
+ZIO_FLAG_REEXECUTED:0x20000000
+ZIO_FLAG_DELEGATED:0x40000000
+ZIO_FLAG_FASTWRITE:0x80000000
+.TE
+.
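+.Sh EXAMPLES
+As an illustrative sketch, the following prints the full payload of new
+events as they are generated, and afterwards clears the event buffer:
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm events Fl vf
+.No # Nm zpool Cm events Fl c
+.Ed
+.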
+.Sh SEE ALSO
+.Xr zfs 4 ,
+.Xr zed 8 ,
+.Xr zpool-wait 8
diff --git a/share/man/man8/zpool-export.8 b/share/man/man8/zpool-export.8
@@ -0,0 +1,82 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd March 16, 2022
+.Dt ZPOOL-EXPORT 8
+.Os
+.
+.Sh NAME
+.Nm zpool-export
+.Nd export ZFS storage pools
+.Sh SYNOPSIS
+.Nm zpool
+.Cm export
+.Op Fl f
+.Fl a Ns | Ns Ar pool Ns …
+.
+.Sh DESCRIPTION
+Exports the given pools from the system.
+All devices are marked as exported, but are still considered in use by other
+subsystems.
+The devices can be moved between systems
+.Pq even those of different endianness
+and imported as long as a sufficient number of devices are present.
+.Pp
+Before exporting the pool, all datasets within the pool are unmounted.
+A pool cannot be exported if it has a shared spare that is currently being
+used.
+.Pp
+For pools to be portable, you must give the
+.Nm zpool
+command whole disks, not just partitions, so that ZFS can label the disks with
+portable EFI labels.
+Otherwise, disk drivers on platforms of different endianness will not recognize
+the disks.
+.Bl -tag -width Ds
+.It Fl a
+Exports all pools imported on the system.
+.It Fl f
+Forcefully unmount all datasets, and allow export of pools with active shared
+spares.
+.Pp
+This command will forcefully export the pool even if it has a shared spare that
+is currently being used.
+This may lead to potential data corruption.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 8 from zpool.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Exporting a ZFS Storage Pool
+The following command exports the devices in pool
+.Ar tank
+so that they can be relocated or later imported:
+.Dl # Nm zpool Cm export Ar tank
+.
+.Sh SEE ALSO
+.Xr zpool-import 8
diff --git a/share/man/man8/zpool-get.8 b/share/man/man8/zpool-get.8
@@ -0,0 +1,204 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd August 9, 2019
+.Dt ZPOOL-GET 8
+.Os
+.
+.Sh NAME
+.Nm zpool-get
+.Nd retrieve properties of ZFS storage pools
+.Sh SYNOPSIS
+.Nm zpool
+.Cm get
+.Op Fl Hp
+.Op Fl j Op Ar --json-int, --json-pool-key-guid
+.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns …
+.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns …
+.Oo Ar pool Oc Ns …
+.
+.Nm zpool
+.Cm get
+.Op Fl Hp
+.Op Fl j Op Ar --json-int
+.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns …
+.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns …
+.Ar pool
+.Oo Sy all-vdevs Ns | Ns
+.Ar vdev Oc Ns …
+.
+.Nm zpool
+.Cm set
+.Ar property Ns = Ns Ar value
+.Ar pool
+.
+.Nm zpool
+.Cm set
+.Ar property Ns = Ns Ar value
+.Ar pool
+.Ar vdev
+.
+.Sh DESCRIPTION
+.Bl -tag -width Ds
+.It Xo
+.Nm zpool
+.Cm get
+.Op Fl Hp
+.Op Fl j Op Ar --json-int, --json-pool-key-guid
+.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns …
+.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns …
+.Oo Ar pool Oc Ns …
+.Xc
+Retrieves the given list of properties
+.Po
+or all properties if
+.Sy all
+is used
+.Pc
+for the specified storage pool(s).
+These properties are displayed with the following fields:
+.Bl -tag -compact -offset Ds -width "property"
+.It Sy name
+Name of storage pool.
+.It Sy property
+Property name.
+.It Sy value
+Property value.
+.It Sy source
+Property source, either
+.Sy default No or Sy local .
+.El
+.Pp
+See the
+.Xr zpoolprops 7
+manual page for more information on the available pool properties.
+.Bl -tag -compact -offset Ds -width "-o field"
+.It Fl j , -json Op Ar --json-int, --json-pool-key-guid
+Display the list of properties in JSON format.
+Specify
+.Sy --json-int
+to display the numbers in integer format instead of strings in JSON output.
+Specify
+.Sy --json-pool-key-guid
+to use the pool GUID as the key for pool objects instead of the pool name.
+.It Fl H
+Scripted mode.
+Do not display headers, and separate fields by a single tab instead of arbitrary
+space.
+.It Fl o Ar field
+A comma-separated list of columns to display, defaults to
+.Sy name , Ns Sy property , Ns Sy value , Ns Sy source .
+.It Fl p
+Display numbers in parsable (exact) values.
+.El
+.It Xo
+.Nm zpool
+.Cm get
+.Op Fl j Op Ar --json-int
+.Op Fl Hp
+.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns …
+.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns …
+.Ar pool
+.Oo Sy all-vdevs Ns | Ns
+.Ar vdev Oc Ns …
+.Xc
+Retrieves the given list of properties
+.Po
+or all properties if
+.Sy all
+is used
+.Pc
+for the specified vdevs
+.Po
+or all vdevs if
+.Sy all-vdevs
+is used
+.Pc
+in the specified pool.
+These properties are displayed with the following fields:
+.Bl -tag -compact -offset Ds -width "property"
+.It Sy name
+Name of vdev.
+.It Sy property
+Property name.
+.It Sy value
+Property value.
+.It Sy source
+Property source, either
+.Sy default No or Sy local .
+.El
+.Pp
+See the
+.Xr vdevprops 7
+manual page for more information on the available pool properties.
+.Bl -tag -compact -offset Ds -width "-o field"
+.It Fl j , -json Op Ar --json-int
+Display the list of properties in JSON format.
+Specify
+.Sy --json-int
+to display the numbers in integer format instead of strings in JSON output.
+.It Fl H
+Scripted mode.
+Do not display headers, and separate fields by a single tab instead of arbitrary
+space.
+.It Fl o Ar field
+A comma-separated list of columns to display, defaults to
+.Sy name , Ns Sy property , Ns Sy value , Ns Sy source .
+.It Fl p
+Display numbers in parsable (exact) values.
+.El
+.It Xo
+.Nm zpool
+.Cm set
+.Ar property Ns = Ns Ar value
+.Ar pool
+.Xc
+Sets the given property on the specified pool.
+See the
+.Xr zpoolprops 7
+manual page for more information on what properties can be set and acceptable
+values.
+.It Xo
+.Nm zpool
+.Cm set
+.Ar property Ns = Ns Ar value
+.Ar pool
+.Ar vdev
+.Xc
+Sets the given property on the specified vdev in the specified pool.
+See the
+.Xr vdevprops 7
+manual page for more information on what properties can be set and acceptable
+values.
+.El
+.
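+.Sh EXAMPLES
+A short sketch; the pool, vdev, and property names are assumptions.
+It retrieves selected pool properties, sets a pool property, and
+retrieves all properties of one vdev:
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm get Sy capacity Ns , Ns Sy health Ar tank
+.No # Nm zpool Cm set Sy autoexpand Ns = Ns Sy on Ar tank
+.No # Nm zpool Cm get Sy all Ar tank Pa sda
+.Ed
+.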
+.Sh SEE ALSO
+.Xr vdevprops 7 ,
+.Xr zpool-features 7 ,
+.Xr zpoolprops 7 ,
+.Xr zpool-list 8
diff --git a/share/man/man8/zpool-history.8 b/share/man/man8/zpool-history.8
@@ -0,0 +1,58 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd August 9, 2019
+.Dt ZPOOL-HISTORY 8
+.Os
+.
+.Sh NAME
+.Nm zpool-history
+.Nd inspect command history of ZFS storage pools
+.Sh SYNOPSIS
+.Nm zpool
+.Cm history
+.Op Fl il
+.Oo Ar pool Oc Ns …
+.
+.Sh DESCRIPTION
+Displays the command history of the specified pool(s) or all pools if no pool is
+specified.
+.Bl -tag -width Ds
+.It Fl i
+Displays internally logged ZFS events in addition to user initiated events.
+.It Fl l
+Displays log records in long format, which in addition to standard format
+includes the user name, the hostname, and the zone in which the operation was
+performed.
+.El
+.
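+.Sh EXAMPLES
+An illustrative invocation (the pool name is an assumption) displaying
+long-format records, including internally logged events:
+.Dl # Nm zpool Cm history Fl il Ar tank
+.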
+.Sh SEE ALSO
+.Xr zpool-checkpoint 8 ,
+.Xr zpool-events 8 ,
+.Xr zpool-status 8 ,
+.Xr zpool-wait 8
diff --git a/share/man/man8/zpool-import.8 b/share/man/man8/zpool-import.8
@@ -0,0 +1,435 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd March 16, 2022
+.Dt ZPOOL-IMPORT 8
+.Os
+.
+.Sh NAME
+.Nm zpool-import
+.Nd import ZFS storage pools or list available pools
+.Sh SYNOPSIS
+.Nm zpool
+.Cm import
+.Op Fl D
+.Oo Fl d Ar dir Ns | Ns Ar device Oc Ns …
+.Nm zpool
+.Cm import
+.Fl a
+.Op Fl DflmN
+.Op Fl F Op Fl nTX
+.Op Fl -rewind-to-checkpoint
+.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
+.Op Fl o Ar mntopts
+.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
+.Op Fl R Ar root
+.Nm zpool
+.Cm import
+.Op Fl Dflmt
+.Op Fl F Op Fl nTX
+.Op Fl -rewind-to-checkpoint
+.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
+.Op Fl o Ar mntopts
+.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
+.Op Fl R Ar root
+.Op Fl s
+.Ar pool Ns | Ns Ar id
+.Op Ar newpool
+.
+.Sh DESCRIPTION
+.Bl -tag -width Ds
+.It Xo
+.Nm zpool
+.Cm import
+.Op Fl D
+.Oo Fl d Ar dir Ns | Ns Ar device Oc Ns …
+.Xc
+Lists pools available to import.
+If the
+.Fl d
+or
+.Fl c
+options are not specified, this command searches for devices using libblkid
+on Linux and geom on
+.Fx .
+The
+.Fl d
+option can be specified multiple times, and all directories are searched.
+If the device appears to be part of an exported pool, this command displays a
+summary of the pool with the name of the pool, a numeric identifier, and the
+vdev layout and current health of each device or file.
+Destroyed pools, pools that were previously destroyed with the
+.Nm zpool Cm destroy
+command, are not listed unless the
+.Fl D
+option is specified.
+.Pp
+The numeric identifier is unique, and can be used instead of the pool name when
+multiple exported pools of the same name are available.
+.Bl -tag -width Ds
+.It Fl c Ar cachefile
+Reads configuration from the given
+.Ar cachefile
+that was created with the
+.Sy cachefile
+pool property.
+This
+.Ar cachefile
+is used instead of searching for devices.
+.It Fl d Ar dir Ns | Ns Ar device
+Uses
+.Ar device
+or searches for devices or files in
+.Ar dir .
+The
+.Fl d
+option can be specified multiple times.
+.It Fl D
+Lists destroyed pools only.
+.El
+.It Xo
+.Nm zpool
+.Cm import
+.Fl a
+.Op Fl DflmN
+.Op Fl F Op Fl nTX
+.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
+.Op Fl o Ar mntopts
+.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
+.Op Fl R Ar root
+.Op Fl s
+.Xc
+Imports all pools found in the search directories.
+Identical to the previous command, except that all pools with a sufficient
+number of devices available are imported.
+Destroyed pools, pools that were previously destroyed with the
+.Nm zpool Cm destroy
+command, will not be imported unless the
+.Fl D
+option is specified.
+.Bl -tag -width Ds
+.It Fl a
+Searches for and imports all pools found.
+.It Fl c Ar cachefile
+Reads configuration from the given
+.Ar cachefile
+that was created with the
+.Sy cachefile
+pool property.
+This
+.Ar cachefile
+is used instead of searching for devices.
+.It Fl d Ar dir Ns | Ns Ar device
+Uses
+.Ar device
+or searches for devices or files in
+.Ar dir .
+The
+.Fl d
+option can be specified multiple times.
+This option is incompatible with the
+.Fl c
+option.
+.It Fl D
+Imports destroyed pools only.
+The
+.Fl f
+option is also required.
+.It Fl f
+Forces import, even if the pool appears to be potentially active.
+.It Fl F
+Recovery mode for a non-importable pool.
+Attempt to return the pool to an importable state by discarding the last few
+transactions.
+Not all damaged pools can be recovered by using this option.
+If successful, the data from the discarded transactions is irretrievably lost.
+This option is ignored if the pool is importable or already imported.
+.It Fl l
+Indicates that this command will request encryption keys for all encrypted
+datasets it attempts to mount as it is bringing the pool online.
+Note that if any datasets have a
+.Sy keylocation
+of
+.Sy prompt
+this command will block waiting for the keys to be entered.
+Without this flag
+encrypted datasets will be left unavailable until the keys are loaded.
+.It Fl m
+Allows a pool to import when there is a missing log device.
+Recent transactions can be lost because the log device will be discarded.
+.It Fl n
+Used with the
+.Fl F
+recovery option.
+Determines whether a non-importable pool can be made importable again, but does
+not actually perform the pool recovery.
+For more details about pool recovery mode, see the
+.Fl F
+option, above.
+.It Fl N
+Import the pool without mounting any file systems.
+.It Fl o Ar mntopts
+Comma-separated list of mount options to use when mounting datasets within the
+pool.
+See
+.Xr zfs 8
+for a description of dataset properties and mount options.
+.It Fl o Ar property Ns = Ns Ar value
+Sets the specified property on the imported pool.
+See the
+.Xr zpoolprops 7
+manual page for more information on the available pool properties.
+.It Fl R Ar root
+Sets the
+.Sy cachefile
+property to
+.Sy none
+and the
+.Sy altroot
+property to
+.Ar root .
+.It Fl -rewind-to-checkpoint
+Rewinds pool to the checkpointed state.
+Once the pool is imported with this flag, there is no way to undo the rewind.
+All changes and data that were written after the checkpoint are lost!
+The only exception is when the
+.Sy readonly
+mounting option is enabled.
+In this case, the checkpointed state of the pool is opened and an
+administrator can see what the pool would look like if they were
+to fully rewind.
+.It Fl s
+Scan using the default search path; the libblkid cache will not be
+consulted.
+A custom search path may be specified by setting the
+.Sy ZPOOL_IMPORT_PATH
+environment variable.
+.It Fl X
+Used with the
+.Fl F
+recovery option.
+Determines whether extreme measures to find a valid txg should take place.
+This allows the pool to
+be rolled back to a txg which is no longer guaranteed to be consistent.
+Pools imported at an inconsistent txg may contain uncorrectable checksum errors.
+For more details about pool recovery mode, see the
+.Fl F
+option, above.
+WARNING: This option can be extremely hazardous to the
+health of your pool and should only be used as a last resort.
+.It Fl T
+Specify the txg to use for rollback.
+Implies
+.Fl FX .
+For more details
+about pool recovery mode, see the
+.Fl X
+option, above.
+WARNING: This option can be extremely hazardous to the
+health of your pool and should only be used as a last resort.
+.El
+.It Xo
+.Nm zpool
+.Cm import
+.Op Fl Dflmt
+.Op Fl F Op Fl nTX
+.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
+.Op Fl o Ar mntopts
+.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
+.Op Fl R Ar root
+.Op Fl s
+.Ar pool Ns | Ns Ar id
+.Op Ar newpool
+.Xc
+Imports a specific pool.
+A pool can be identified by its name or the numeric identifier.
+If
+.Ar newpool
+is specified, the pool is imported using the name
+.Ar newpool .
+Otherwise, it is imported with the same name as its exported name.
+.Pp
+If a device is removed from a system without running
+.Nm zpool Cm export
+first, the device appears as potentially active.
+It cannot be determined if this was a failed export, or whether the device is
+really in use from another host.
+To import a pool in this state, the
+.Fl f
+option is required.
+.Bl -tag -width Ds
+.It Fl c Ar cachefile
+Reads configuration from the given
+.Ar cachefile
+that was created with the
+.Sy cachefile
+pool property.
+This
+.Ar cachefile
+is used instead of searching for devices.
+.It Fl d Ar dir Ns | Ns Ar device
+Uses
+.Ar device
+or searches for devices or files in
+.Ar dir .
+The
+.Fl d
+option can be specified multiple times.
+This option is incompatible with the
+.Fl c
+option.
+.It Fl D
+Imports a destroyed pool.
+The
+.Fl f
+option is also required.
+.It Fl f
+Forces import, even if the pool appears to be potentially active.
+.It Fl F
+Recovery mode for a non-importable pool.
+Attempt to return the pool to an importable state by discarding the last few
+transactions.
+Not all damaged pools can be recovered by using this option.
+If successful, the data from the discarded transactions is irretrievably lost.
+This option is ignored if the pool is importable or already imported.
+.It Fl l
+Indicates that this command will request encryption keys for all encrypted
+datasets it attempts to mount as it is bringing the pool online.
+Note that if any datasets have a
+.Sy keylocation
+of
+.Sy prompt
+this command will block waiting for the keys to be entered.
+Without this flag
+encrypted datasets will be left unavailable until the keys are loaded.
+.It Fl m
+Allows a pool to import when there is a missing log device.
+Recent transactions can be lost because the log device will be discarded.
+.It Fl n
+Used with the
+.Fl F
+recovery option.
+Determines whether a non-importable pool can be made importable again, but does
+not actually perform the pool recovery.
+For more details about pool recovery mode, see the
+.Fl F
+option, above.
+.It Fl o Ar mntopts
+Comma-separated list of mount options to use when mounting datasets within the
+pool.
+See
+.Xr zfs 8
+for a description of dataset properties and mount options.
+.It Fl o Ar property Ns = Ns Ar value
+Sets the specified property on the imported pool.
+See the
+.Xr zpoolprops 7
+manual page for more information on the available pool properties.
+.It Fl R Ar root
+Sets the
+.Sy cachefile
+property to
+.Sy none
+and the
+.Sy altroot
+property to
+.Ar root .
+.It Fl s
+Scan using the default search path; the libblkid cache will not be
+consulted.
+A custom search path may be specified by setting the
+.Sy ZPOOL_IMPORT_PATH
+environment variable.
+.It Fl X
+Used with the
+.Fl F
+recovery option.
+Determines whether extreme measures to find a valid txg should take place.
+This allows the pool to
+be rolled back to a txg which is no longer guaranteed to be consistent.
+Pools imported at an inconsistent txg may contain uncorrectable
+checksum errors.
+For more details about pool recovery mode, see the
+.Fl F
+option, above.
+WARNING: This option can be extremely hazardous to the
+health of your pool and should only be used as a last resort.
+.It Fl T
+Specify the txg to use for rollback.
+Implies
+.Fl FX .
+For more details
+about pool recovery mode, see the
+.Fl X
+option, above.
+.Em WARNING :
+This option can be extremely hazardous to the
+health of your pool and should only be used as a last resort.
+.It Fl t
+Used with
+.Ar newpool .
+Specifies that
+.Ar newpool
+is temporary.
+Temporary pool names last until export.
+This ensures that the original pool name will be used
+in all label updates and therefore is retained upon export.
+Will also set
+.Fl o Sy cachefile Ns = Ns Sy none
+when not explicitly specified.
+.El
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 9 from zpool.8
+.\" Make sure to update them bidirectionally
+.Ss Example 9 : No Importing a ZFS Storage Pool
+The following command displays available pools, and then imports the pool
+.Ar tank
+for use on the system.
+The results from this command are similar to the following:
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm import
+ pool: tank
+ id: 15451357997522795478
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+ tank ONLINE
+ mirror ONLINE
+ sda ONLINE
+ sdb ONLINE
+
+.No # Nm zpool Cm import Ar tank
+.Ed
+.
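+.Ss Example 10 : No Importing a Pool Read-Only Under an Alternate Root
+This additional sketch is not among the examples shared with
+.Xr zpool 8 ;
+the names are illustrative.
+It imports the pool
+.Ar tank
+read-only with its file systems mounted under
+.Pa /mnt :
+.Dl # Nm zpool Cm import Fl o Sy readonly Ns = Ns Sy on Fl R Pa /mnt Ar tank
+.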
+.Sh SEE ALSO
+.Xr zpool-export 8 ,
+.Xr zpool-list 8 ,
+.Xr zpool-status 8
diff --git a/share/man/man8/zpool-initialize.8 b/share/man/man8/zpool-initialize.8
@@ -0,0 +1,81 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd May 27, 2021
+.Dt ZPOOL-INITIALIZE 8
+.Os
+.
+.Sh NAME
+.Nm zpool-initialize
+.Nd write to unallocated regions of ZFS storage pool
+.Sh SYNOPSIS
+.Nm zpool
+.Cm initialize
+.Op Fl c Ns | Ns Fl s | Ns Fl u
+.Op Fl w
+.Ar pool
+.Oo Ar device Oc Ns …
+.
+.Sh DESCRIPTION
+Begins initializing by writing to all unallocated regions on the specified
+devices, or all eligible devices in the pool if no individual devices are
+specified.
+Only leaf data or log devices may be initialized.
+.Bl -tag -width Ds
+.It Fl c , -cancel
+Cancel initializing on the specified devices, or all eligible devices if none
+are specified.
+If one or more target devices are invalid or are not currently being
+initialized, the command will fail and no cancellation will occur on any device.
+.It Fl s , -suspend
+Suspend initializing on the specified devices, or all eligible devices if none
+are specified.
+If one or more target devices are invalid or are not currently being
+initialized, the command will fail and no suspension will occur on any device.
+Initializing can then be resumed by running
+.Nm zpool Cm initialize
+with no flags on the relevant target devices.
+.It Fl u , -uninit
+Clears the initialization state on the specified devices, or all eligible
+devices if none are specified.
+If the devices are being actively initialized, the command will fail.
+After being cleared,
+.Nm zpool Cm initialize
+with no flags can be used to re-initialize all unallocated regions on
+the relevant target devices.
+.It Fl w , -wait
+Wait until the devices have finished initializing before returning.
+.El
+.
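+.Sh EXAMPLES
+An illustrative sequence (pool and device names are assumptions) that
+starts initialization of a single device, suspends it, and then resumes
+it:
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm initialize Ar tank Pa sda
+.No # Nm zpool Cm initialize Fl s Ar tank Pa sda
+.No # Nm zpool Cm initialize Ar tank Pa sda
+.Ed
+.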
+.Sh SEE ALSO
+.Xr zpool-add 8 ,
+.Xr zpool-attach 8 ,
+.Xr zpool-create 8 ,
+.Xr zpool-online 8 ,
+.Xr zpool-replace 8 ,
+.Xr zpool-trim 8
diff --git a/share/man/man8/zpool-iostat.8 b/share/man/man8/zpool-iostat.8
@@ -0,0 +1,306 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd March 16, 2022
+.Dt ZPOOL-IOSTAT 8
+.Os
+.
+.Sh NAME
+.Nm zpool-iostat
+.Nd display logical I/O statistics for ZFS storage pools
+.Sh SYNOPSIS
+.Nm zpool
+.Cm iostat
+.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
+.Op Fl T Sy u Ns | Ns Sy d
+.Op Fl ghHLnpPvy
+.Oo Ar pool Ns … Ns | Ns Oo Ar pool vdev Ns … Oc Ns | Ns Ar vdev Ns … Oc
+.Op Ar interval Op Ar count
+.
+.Sh DESCRIPTION
+Displays logical I/O statistics for the given pools/vdevs.
+Physical I/O statistics may be observed via
+.Xr iostat 1 .
+If writes are located nearby, they may be merged into a single
+larger operation.
+Additional I/O may be generated depending on the level of vdev redundancy.
+To filter output, you may pass in a list of pools, a pool and list of vdevs
+in that pool, or a list of any vdevs from any pool.
+If no items are specified, statistics for every pool in the system are shown.
+When given an
+.Ar interval ,
+the statistics are printed every
+.Ar interval
+seconds until killed.
+If the
+.Fl n
+flag is specified, the headers are displayed only once; otherwise they are
+displayed periodically.
+If
+.Ar count
+is specified, the command exits after
+.Ar count
+reports are printed.
+The first report printed is always the statistics since boot regardless of
+whether
+.Ar interval
+and
+.Ar count
+are passed.
+However, this behavior can be suppressed with the
+.Fl y
+flag.
+Also note that the units of
+.Sy K ,
+.Sy M ,
+.Sy G Ns …
+that are printed in the report are in base 1024.
+To get the raw values, use the
+.Fl p
+flag.
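+.Pp
+For example, to print ten reports at five-second intervals for a hypothetical
+pool
+.Ar tank ,
+suppressing the since-boot statistics:
+.Dl # Nm zpool Cm iostat Fl y Ar tank 5 10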
+.Bl -tag -width Ds
+.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns …
+Run a script (or scripts) on each vdev and include the output as a new column
+in the
+.Nm zpool Cm iostat
+output.
+Users can run any script found in their
+.Pa ~/.zpool.d
+directory or from the system
+.Pa /etc/zfs/zpool.d
+directory.
+Script names containing the slash
+.Pq Sy /
+character are not allowed.
+The default search path can be overridden by setting the
+.Sy ZPOOL_SCRIPTS_PATH
+environment variable.
+A privileged user can only run
+.Fl c
+if they have the
+.Sy ZPOOL_SCRIPTS_AS_ROOT
+environment variable set.
+If a script requires the use of a privileged command, like
+.Xr smartctl 8 ,
+then it's recommended you allow the user access to it in
+.Pa /etc/sudoers
+or add the user to the
+.Pa /etc/sudoers.d/zfs
+file.
+.Pp
+If
+.Fl c
+is passed without a script name, it prints a list of all scripts.
+.Fl c
+also sets verbose mode
+.No \&( Ns Fl v Ns No \&) .
+.Pp
+Script output should be in the form of "name=value".
+The column name is set to "name" and the value is set to "value".
+Multiple lines can be used to output multiple columns.
+The first line of output not in the
+"name=value" format is displayed without a column title,
+and no more output after that is displayed.
+This can be useful for printing error messages.
+Blank or NULL values are printed as a '-' to make output AWKable.
+.Pp
+The following environment variables are set before running each script:
+.Bl -tag -compact -width "VDEV_ENC_SYSFS_PATH"
+.It Sy VDEV_PATH
+Full path to the vdev
+.It Sy VDEV_UPATH
+Underlying path to the vdev
+.Pq Pa /dev/sd* .
+For use with device mapper, multipath, or partitioned vdevs.
+.It Sy VDEV_ENC_SYSFS_PATH
+The sysfs path to the enclosure for the vdev (if any).
+.El
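+.Pp
+As a minimal sketch, a hypothetical script installed as
+.Pa ~/.zpool.d/upath
+could emit one such column using only the environment variables above:
+.Bd -literal -compact -offset Ds
+#!/bin/sh
+# Print a single "name=value" pair; zpool iostat -c upath
+# shows it as a column named "upath".
+echo "upath=$VDEV_UPATH"
+.Ed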
+.It Fl T Sy u Ns | Ns Sy d
+Display a time stamp.
+Specify
+.Sy u
+for a printed representation of the internal representation of time.
+See
+.Xr time 1 .
+Specify
+.Sy d
+for standard date format.
+See
+.Xr date 1 .
+.It Fl g
+Display vdev GUIDs instead of the normal device names.
+These GUIDs can be used in place of device names for the zpool
+detach/offline/remove/replace commands.
+.It Fl H
+Scripted mode.
+Do not display headers, and separate fields by a
+single tab instead of arbitrary space.
+.It Fl L
+Display real paths for vdevs resolving all symbolic links.
+This can be used to look up the current block device name regardless of the
+.Pa /dev/disk/
+path used to open it.
+.It Fl n
+Print headers only once, instead of before every report.
+.It Fl p
+Display numbers in parsable (exact) values.
+Time values are in nanoseconds.
+.It Fl P
+Display full paths for vdevs instead of only the last component of the path.
+This can be used in conjunction with the
+.Fl L
+flag.
+.It Fl r
+Print request size histograms for the leaf vdev's I/O.
+This includes histograms of individual I/O (ind) and aggregate I/O (agg).
+These stats can be useful for observing how well I/O aggregation is working.
+Note that TRIM I/O may exceed 16M, but will be counted as 16M.
+.It Fl v
+Verbose statistics.
+Reports usage statistics for individual vdevs within the pool, in addition to
+the pool-wide statistics.
+.It Fl y
+Normally the first line of output reports the statistics since boot:
+suppress it.
+.It Fl w
+Display latency histograms:
+.Bl -tag -compact -width "asyncq_read/write"
+.It Sy total_wait
+Total I/O time (queuing + disk I/O time).
+.It Sy disk_wait
+Disk I/O time (time reading/writing the disk).
+.It Sy syncq_wait
+Amount of time I/O spent in synchronous priority queues.
+Does not include disk time.
+.It Sy asyncq_wait
+Amount of time I/O spent in asynchronous priority queues.
+Does not include disk time.
+.It Sy scrub
+Amount of time I/O spent in scrub queue.
+Does not include disk time.
+.It Sy rebuild
+Amount of time I/O spent in rebuild queue.
+Does not include disk time.
+.El
+.It Fl l
+Include average latency statistics:
+.Bl -tag -compact -width "asyncq_read/write"
+.It Sy total_wait
+Average total I/O time (queuing + disk I/O time).
+.It Sy disk_wait
+Average disk I/O time (time reading/writing the disk).
+.It Sy syncq_wait
+Average amount of time I/O spent in synchronous priority queues.
+Does not include disk time.
+.It Sy asyncq_wait
+Average amount of time I/O spent in asynchronous priority queues.
+Does not include disk time.
+.It Sy scrub
+Average queuing time in scrub queue.
+Does not include disk time.
+.It Sy trim
+Average queuing time in trim queue.
+Does not include disk time.
+.It Sy rebuild
+Average queuing time in rebuild queue.
+Does not include disk time.
+.El
+.It Fl q
+Include active queue statistics.
+Each priority queue has both pending
+.Sy ( pend )
+and active
+.Sy ( activ )
+I/O requests.
+Pending requests are waiting to be issued to the disk,
+and active requests have been issued to disk and are waiting for completion.
+These stats are broken out by priority queue:
+.Bl -tag -compact -width "asyncq_read/write"
+.It Sy syncq_read/write
+Current number of entries in synchronous priority
+queues.
+.It Sy asyncq_read/write
+Current number of entries in asynchronous priority queues.
+.It Sy scrubq_read
+Current number of entries in scrub queue.
+.It Sy trimq_write
+Current number of entries in trim queue.
+.It Sy rebuildq_write
+Current number of entries in rebuild queue.
+.El
+.Pp
+All queue statistics are instantaneous measurements of the number of
+entries in the queues.
+If you specify an interval,
+the measurements will be sampled from the end of the interval.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 13, 16 from zpool.8
+.\" Make sure to update them bidirectionally
+.Ss Example 13 : No Adding Cache Devices to a ZFS Pool
+The following command adds two disks for use as cache devices to a ZFS storage
+pool:
+.Dl # Nm zpool Cm add Ar pool Sy cache Pa sdc sdd
+.Pp
+Once added, the cache devices gradually fill with content from main memory.
+Depending on the size of your cache devices, it could take over an hour for
+them to fill.
+Capacity and reads can be monitored using the
+.Cm iostat
+subcommand as follows:
+.Dl # Nm zpool Cm iostat Fl v Ar pool 5
+.
+.Ss Example 16 : No Adding output columns
+Additional columns can be added to the
+.Nm zpool Cm status No and Nm zpool Cm iostat No output with Fl c .
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm status Fl c Pa vendor , Ns Pa model , Ns Pa size
+ NAME STATE READ WRITE CKSUM vendor model size
+ tank ONLINE 0 0 0
+ mirror-0 ONLINE 0 0 0
+ U1 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
+ U10 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
+ U11 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
+ U12 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
+ U13 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
+ U14 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
+
+.No # Nm zpool Cm iostat Fl vc Pa size
+ capacity operations bandwidth
+pool alloc free read write read write size
+---------- ----- ----- ----- ----- ----- ----- ----
+rpool 14.6G 54.9G 4 55 250K 2.69M
+ sda1 14.6G 54.9G 4 55 250K 2.69M 70G
+---------- ----- ----- ----- ----- ----- ----- ----
+.Ed
+.
+.Sh SEE ALSO
+.Xr iostat 1 ,
+.Xr smartctl 8 ,
+.Xr zpool-list 8 ,
+.Xr zpool-status 8
diff --git a/share/man/man8/zpool-labelclear.8 b/share/man/man8/zpool-labelclear.8
@@ -0,0 +1,61 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd May 31, 2021
+.Dt ZPOOL-LABELCLEAR 8
+.Os
+.
+.Sh NAME
+.Nm zpool-labelclear
+.Nd remove ZFS label information from device
+.Sh SYNOPSIS
+.Nm zpool
+.Cm labelclear
+.Op Fl f
+.Ar device
+.
+.Sh DESCRIPTION
+Removes ZFS label information from the specified
+.Ar device .
+If the
+.Ar device
+is a cache device, it also removes the L2ARC header
+(persistent L2ARC).
+The
+.Ar device
+must not be part of an active pool configuration.
+.Bl -tag -width Ds
+.It Fl f
+Treat exported or foreign devices as inactive.
+.El
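+.Pp
+For example, to clear the label from a hypothetical partition
+.Pa /dev/sdb1
+that belonged to an exported pool:
+.Dl # Nm zpool Cm labelclear Fl f Pa /dev/sdb1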
+.
+.Sh SEE ALSO
+.Xr zpool-destroy 8 ,
+.Xr zpool-detach 8 ,
+.Xr zpool-remove 8 ,
+.Xr zpool-replace 8
diff --git a/share/man/man8/zpool-list.8 b/share/man/man8/zpool-list.8
@@ -0,0 +1,253 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd March 16, 2022
+.Dt ZPOOL-LIST 8
+.Os
+.
+.Sh NAME
+.Nm zpool-list
+.Nd list information about ZFS storage pools
+.Sh SYNOPSIS
+.Nm zpool
+.Cm list
+.Op Fl HgLpPv
+.Op Fl j Op Ar --json-int, --json-pool-key-guid
+.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns …
+.Op Fl T Sy u Ns | Ns Sy d
+.Oo Ar pool Oc Ns …
+.Op Ar interval Op Ar count
+.
+.Sh DESCRIPTION
+Lists the given pools along with a health status and space usage.
+If no
+.Ar pool Ns s
+are specified, all pools in the system are listed.
+When given an
+.Ar interval ,
+the information is printed every
+.Ar interval
+seconds until killed.
+If
+.Ar count
+is specified, the command exits after
+.Ar count
+reports are printed.
+.Bl -tag -width Ds
+.It Fl j , -json Op Ar --json-int, --json-pool-key-guid
+Display the list of pools in JSON format.
+Specify
+.Sy --json-int
+to display the numbers in integer format instead of strings.
+Specify
+.Sy --json-pool-key-guid
+to set pool GUID as key for pool objects instead of pool names.
+.It Fl g
+Display vdev GUIDs instead of the normal device names.
+These GUIDs can be used in place of device names for the zpool
+detach/offline/remove/replace commands.
+.It Fl H
+Scripted mode.
+Do not display headers, and separate fields by a single tab instead of arbitrary
+space.
+.It Fl o Ar property
+Comma-separated list of properties to display.
+See the
+.Xr zpoolprops 7
+manual page for a list of valid properties.
+The default list is
+.Sy name , size , allocated , free , checkpoint , expandsize , fragmentation ,
+.Sy capacity , dedupratio , health , altroot .
+.It Fl L
+Display real paths for vdevs resolving all symbolic links.
+This can be used to look up the current block device name regardless of the
+.Pa /dev/disk
+path used to open it.
+.It Fl p
+Display numbers in parsable
+.Pq exact
+values.
+.It Fl P
+Display full paths for vdevs instead of only the last component of
+the path.
+This can be used in conjunction with the
+.Fl L
+flag.
+.It Fl T Sy u Ns | Ns Sy d
+Display a time stamp.
+Specify
+.Sy u
+for a printed representation of the internal representation of time.
+See
+.Xr time 1 .
+Specify
+.Sy d
+for standard date format.
+See
+.Xr date 1 .
+.It Fl v
+Verbose statistics.
+Reports usage statistics for individual vdevs within the pool, in addition to
+the pool-wide statistics.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 6, 15 from zpool.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Listing Available ZFS Storage Pools
+The following command lists all available pools on the system.
+In this case, the pool
+.Ar zion
+is faulted due to a missing device.
+The results from this command are similar to the following:
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm list
+NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
+rpool 19.9G 8.43G 11.4G - 33% 42% 1.00x ONLINE -
+tank 61.5G 20.0G 41.5G - 48% 32% 1.00x ONLINE -
+zion - - - - - - - FAULTED -
+.Ed
+.
+.Ss Example 2 : No Displaying expanded space on a device
+The following command displays the detailed information for the pool
+.Ar data .
+This pool is comprised of a single raidz vdev where one of its devices
+increased its capacity by 10 GiB.
+In this example, the pool will not be able to utilize this extra capacity until
+all the devices under the raidz vdev have been expanded.
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm list Fl v Ar data
+NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
+data 23.9G 14.6G 9.30G - 48% 61% 1.00x ONLINE -
+ raidz1 23.9G 14.6G 9.30G - 48%
+ sda - - - - -
+ sdb - - - 10G -
+ sdc - - - - -
+.Ed
+.
+.Ss Example 3 : No Listing Available ZFS Storage Pools in JSON Format
+The following command lists all available pools on the system in JSON
+format.
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm list Fl j | Nm jq
+{
+ "output_version": {
+ "command": "zpool list",
+ "vers_major": 0,
+ "vers_minor": 1
+ },
+ "pools": {
+ "tank": {
+ "name": "tank",
+ "type": "POOL",
+ "state": "ONLINE",
+ "guid": "15220353080205405147",
+ "txg": "2671",
+ "spa_version": "5000",
+ "zpl_version": "5",
+ "properties": {
+ "size": {
+ "value": "111G",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "allocated": {
+ "value": "30.8G",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "free": {
+ "value": "80.2G",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "checkpoint": {
+ "value": "-",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "expandsize": {
+ "value": "-",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "fragmentation": {
+ "value": "0%",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "capacity": {
+ "value": "27%",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "dedupratio": {
+ "value": "1.00x",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "health": {
+ "value": "ONLINE",
+ "source": {
+ "type": "NONE",
+ "data": "-"
+ }
+ },
+ "altroot": {
+ "value": "-",
+ "source": {
+ "type": "DEFAULT",
+ "data": "-"
+ }
+ }
+ }
+ }
+ }
+}
+.Ed
+.
+.Sh SEE ALSO
+.Xr zpool-import 8 ,
+.Xr zpool-status 8
diff --git a/share/man/man8/zpool-offline.8 b/share/man/man8/zpool-offline.8
@@ -0,0 +1,106 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd August 9, 2019
+.Dt ZPOOL-OFFLINE 8
+.Os
+.
+.Sh NAME
+.Nm zpool-offline
+.Nd take physical devices offline in ZFS storage pool
+.Sh SYNOPSIS
+.Nm zpool
+.Cm offline
+.Op Fl Sy -power Ns | Ns Op Fl Sy ft
+.Ar pool
+.Ar device Ns …
+.Nm zpool
+.Cm online
+.Op Fl Sy -power
+.Op Fl Sy e
+.Ar pool
+.Ar device Ns …
+.
+.Sh DESCRIPTION
+.Bl -tag -width Ds
+.It Xo
+.Nm zpool
+.Cm offline
+.Op Fl Sy -power Ns | Ns Op Fl Sy ft
+.Ar pool
+.Ar device Ns …
+.Xc
+Takes the specified physical device offline.
+While the
+.Ar device
+is offline, no attempt is made to read or write to the device.
+This command is not applicable to spares.
+.Bl -tag -width Ds
+.It Fl -power
+Power off the device's slot in the storage enclosure.
+This flag currently works on Linux only.
+.It Fl f
+Force fault.
+Instead of offlining the disk, put it into a faulted state.
+The fault will persist across imports unless the
+.Fl t
+flag was specified.
+.It Fl t
+Temporary.
+Upon reboot, the specified physical device reverts to its previous state.
+.El
+.It Xo
+.Nm zpool
+.Cm online
+.Op Fl -power
+.Op Fl e
+.Ar pool
+.Ar device Ns …
+.Xc
+Brings the specified physical device online.
+This command is not applicable to spares.
+.Bl -tag -width Ds
+.It Fl -power
+Power on the device's slot in the storage enclosure and wait for the device
+to show up before attempting to online it.
+Alternatively, you can set the
+.Sy ZPOOL_AUTO_POWER_ON_SLOT
+environment variable to always enable this behavior.
+This flag currently works on Linux only.
+.It Fl e
+Expand the device to use all available space.
+If the device is part of a mirror or raidz then all devices must be expanded
+before the new space will become available to the pool.
+.El
+.El
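+.Pp
+For example, to temporarily take a device of a hypothetical pool
+.Ar tank
+offline, then bring it back online and expand it to use all available space:
+.Dl # Nm zpool Cm offline Fl t Ar tank Pa sda
+.Dl # Nm zpool Cm online Fl e Ar tank Pa sda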
+.
+.Sh SEE ALSO
+.Xr zpool-detach 8 ,
+.Xr zpool-remove 8 ,
+.Xr zpool-reopen 8 ,
+.Xr zpool-resilver 8
diff --git a/share/man/man8/zpool-online.8 b/share/man/man8/zpool-online.8
@@ -0,0 +1,106 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd August 9, 2019
+.Dt ZPOOL-OFFLINE 8
+.Os
+.
+.Sh NAME
+.Nm zpool-offline
+.Nd take physical devices offline in ZFS storage pool
+.Sh SYNOPSIS
+.Nm zpool
+.Cm offline
+.Op Fl Sy -power Ns | Ns Op Fl Sy ft
+.Ar pool
+.Ar device Ns …
+.Nm zpool
+.Cm online
+.Op Fl Sy -power
+.Op Fl Sy e
+.Ar pool
+.Ar device Ns …
+.
+.Sh DESCRIPTION
+.Bl -tag -width Ds
+.It Xo
+.Nm zpool
+.Cm offline
+.Op Fl Sy -power Ns | Ns Op Fl Sy ft
+.Ar pool
+.Ar device Ns …
+.Xc
+Takes the specified physical device offline.
+While the
+.Ar device
+is offline, no attempt is made to read or write to the device.
+This command is not applicable to spares.
+.Bl -tag -width Ds
+.It Fl -power
+Power off the device's slot in the storage enclosure.
+This flag currently works on Linux only.
+.It Fl f
+Force fault.
+Instead of offlining the disk, put it into a faulted state.
+The fault will persist across imports unless the
+.Fl t
+flag was specified.
+.It Fl t
+Temporary.
+Upon reboot, the specified physical device reverts to its previous state.
+.El
+.It Xo
+.Nm zpool
+.Cm online
+.Op Fl -power
+.Op Fl e
+.Ar pool
+.Ar device Ns …
+.Xc
+Brings the specified physical device online.
+This command is not applicable to spares.
+.Bl -tag -width Ds
+.It Fl -power
+Power on the device's slot in the storage enclosure and wait for the device
+to show up before attempting to online it.
+Alternatively, you can set the
+.Sy ZPOOL_AUTO_POWER_ON_SLOT
+environment variable to always enable this behavior.
+This flag currently works on Linux only.
+.It Fl e
+Expand the device to use all available space.
+If the device is part of a mirror or raidz then all devices must be expanded
+before the new space will become available to the pool.
+.El
+.El
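+.Pp
+For example, to temporarily take a device of a hypothetical pool
+.Ar tank
+offline, then bring it back online and expand it to use all available space:
+.Dl # Nm zpool Cm offline Fl t Ar tank Pa sda
+.Dl # Nm zpool Cm online Fl e Ar tank Pa sda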
+.
+.Sh SEE ALSO
+.Xr zpool-detach 8 ,
+.Xr zpool-remove 8 ,
+.Xr zpool-reopen 8 ,
+.Xr zpool-resilver 8
diff --git a/share/man/man8/zpool-reguid.8 b/share/man/man8/zpool-reguid.8
@@ -0,0 +1,60 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\" Copyright (c) 2024, Klara Inc.
+.\" Copyright (c) 2024, Mateusz Piotrowski
+.\"
+.Dd June 21, 2023
+.Dt ZPOOL-REGUID 8
+.Os
+.
+.Sh NAME
+.Nm zpool-reguid
+.Nd generate new unique identifier for ZFS storage pool
+.Sh SYNOPSIS
+.Nm zpool
+.Cm reguid
+.Op Fl g Ar guid
+.Ar pool
+.
+.Sh DESCRIPTION
+Generates a new unique identifier for the pool.
+You must ensure that all devices in this pool are online and healthy before
+performing this action.
+.
+.Bl -tag -width Ds
+.It Fl g Ar guid
+Set the pool GUID to the provided value.
+The GUID can be any 64-bit value accepted by
+.Xr strtoull 3
+in base 10.
+.Nm
+will return an error if the provided GUID is already in use.
+.El
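+.Pp
+For example, to assign a new random GUID to a hypothetical pool
+.Ar tank ,
+or to set a specific one:
+.Dl # Nm zpool Cm reguid Ar tank
+.Dl # Nm zpool Cm reguid Fl g Ar 12345 Ar tank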
+.Sh SEE ALSO
+.Xr zpool-export 8 ,
+.Xr zpool-import 8
diff --git a/share/man/man8/zpool-remove.8 b/share/man/man8/zpool-remove.8
@@ -0,0 +1,189 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd March 16, 2022
+.Dt ZPOOL-REMOVE 8
+.Os
+.
+.Sh NAME
+.Nm zpool-remove
+.Nd remove devices from ZFS storage pool
+.
+.Sh SYNOPSIS
+.Nm zpool
+.Cm remove
+.Op Fl npw
+.Ar pool Ar device Ns …
+.Nm zpool
+.Cm remove
+.Fl s
+.Ar pool
+.
+.Sh DESCRIPTION
+.Bl -tag -width Ds
+.It Xo
+.Nm zpool
+.Cm remove
+.Op Fl npw
+.Ar pool Ar device Ns …
+.Xc
+Removes the specified device from the pool.
+This command supports removing hot spare, cache, log, and both mirrored and
+non-redundant primary top-level vdevs, including dedup and special vdevs.
+.Pp
+Top-level vdevs can only be removed if the primary pool storage does not contain
+a top-level raidz vdev, all top-level vdevs have the same sector size, and the
+keys for all encrypted datasets are loaded.
+.Pp
+Removing a top-level vdev reduces the total amount of space in the storage pool.
+The specified device will be evacuated by copying all allocated space from it to
+the other devices in the pool.
+In this case, the
+.Nm zpool Cm remove
+command initiates the removal and returns, while the evacuation continues in
+the background.
+The removal progress can be monitored with
+.Nm zpool Cm status .
+If an I/O error is encountered during the removal process, the removal is
+cancelled.
+The
+.Sy device_removal
+feature flag must be enabled to remove a top-level vdev, see
+.Xr zpool-features 7 .
+.Pp
+A mirrored top-level device (log or data) can be removed by specifying the
+top-level mirror itself.
+Non-log devices and data devices that are part of a mirrored configuration can
+be removed using the
+.Nm zpool Cm detach
+command.
+.Bl -tag -width Ds
+.It Fl n
+Do not actually perform the removal
+.Pq Qq No-op .
+Instead, print the estimated amount of memory that will be used by the
+mapping table after the removal completes.
+This is nonzero only for top-level vdevs.
+.It Fl p
+Used in conjunction with the
+.Fl n
+flag, displays numbers as parsable (exact) values.
+.It Fl w
+Waits until the removal has completed before returning.
+.El
+.It Xo
+.Nm zpool
+.Cm remove
+.Fl s
+.Ar pool
+.Xc
+Stops and cancels an in-progress removal of a top-level vdev.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 15 from zpool.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Removing a Mirrored top-level (Log or Data) Device
+The following commands remove the mirrored log device
+.Sy mirror-2
+and mirrored top-level data device
+.Sy mirror-1 .
+.Pp
+Given this configuration:
+.Bd -literal -compact -offset Ds
+ pool: tank
+ state: ONLINE
+ scrub: none requested
+config:
+
+ NAME STATE READ WRITE CKSUM
+ tank ONLINE 0 0 0
+ mirror-0 ONLINE 0 0 0
+ sda ONLINE 0 0 0
+ sdb ONLINE 0 0 0
+ mirror-1 ONLINE 0 0 0
+ sdc ONLINE 0 0 0
+ sdd ONLINE 0 0 0
+ logs
+ mirror-2 ONLINE 0 0 0
+ sde ONLINE 0 0 0
+ sdf ONLINE 0 0 0
+.Ed
+.Pp
+The command to remove the mirrored log
+.Ar mirror-2 No is :
+.Dl # Nm zpool Cm remove Ar tank mirror-2
+.Pp
+At this point, the log device no longer exists
+(both sides of the mirror have been removed):
+.Bd -literal -compact -offset Ds
+ pool: tank
+ state: ONLINE
+ scan: none requested
+config:
+
+ NAME STATE READ WRITE CKSUM
+ tank ONLINE 0 0 0
+ mirror-0 ONLINE 0 0 0
+ sda ONLINE 0 0 0
+ sdb ONLINE 0 0 0
+ mirror-1 ONLINE 0 0 0
+ sdc ONLINE 0 0 0
+ sdd ONLINE 0 0 0
+.Ed
+.Pp
+The command to remove the mirrored data
+.Ar mirror-1 No is :
+.Dl # Nm zpool Cm remove Ar tank mirror-1
+.Pp
+After
+.Ar mirror-1 No has been evacuated, the pool remains redundant, but
+the total amount of space is reduced:
+.Bd -literal -compact -offset Ds
+ pool: tank
+ state: ONLINE
+ scan: none requested
+config:
+
+ NAME STATE READ WRITE CKSUM
+ tank ONLINE 0 0 0
+ mirror-0 ONLINE 0 0 0
+ sda ONLINE 0 0 0
+ sdb ONLINE 0 0 0
+.Ed
+.
+.Sh SEE ALSO
+.Xr zpool-add 8 ,
+.Xr zpool-detach 8 ,
+.Xr zpool-labelclear 8 ,
+.Xr zpool-offline 8 ,
+.Xr zpool-replace 8 ,
+.Xr zpool-split 8
diff --git a/share/man/man8/zpool-reopen.8 b/share/man/man8/zpool-reopen.8
@@ -0,0 +1,52 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd June 2, 2021
+.Dt ZPOOL-REOPEN 8
+.Os
+.
+.Sh NAME
+.Nm zpool-reopen
+.Nd reopen vdevs associated with ZFS storage pools
+.Sh SYNOPSIS
+.Nm zpool
+.Cm reopen
+.Op Fl n
+.Oo Ar pool Oc Ns …
+.
+.Sh DESCRIPTION
+Reopen all vdevs associated with the specified pools,
+or all pools if none specified.
+.
+.Sh OPTIONS
+.Bl -tag -width "-n"
+.It Fl n
+Do not restart an in-progress scrub operation.
+This is not recommended and can
+result in partially resilvered devices unless a second scrub is performed.
+.El
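+.Pp
+For example, to reopen all vdevs of a hypothetical pool
+.Ar tank
+without restarting an in-progress scrub:
+.Dl # Nm zpool Cm reopen Fl n Ar tank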
diff --git a/share/man/man8/zpool-replace.8 b/share/man/man8/zpool-replace.8
@@ -0,0 +1,99 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd May 29, 2021
+.Dt ZPOOL-REPLACE 8
+.Os
+.
+.Sh NAME
+.Nm zpool-replace
+.Nd replace one device with another in ZFS storage pool
+.Sh SYNOPSIS
+.Nm zpool
+.Cm replace
+.Op Fl fsw
+.Oo Fl o Ar property Ns = Ns Ar value Oc
+.Ar pool Ar device Op Ar new-device
+.
+.Sh DESCRIPTION
+Replaces
+.Ar device
+with
+.Ar new-device .
+This is equivalent to attaching
+.Ar new-device ,
+waiting for it to resilver, and then detaching
+.Ar device .
+Any in-progress scrub will be cancelled.
+.Pp
+The size of
+.Ar new-device
+must be greater than or equal to the minimum size of all the devices in a mirror
+or raidz configuration.
+.Pp
+.Ar new-device
+is required if the pool is not redundant.
+If
+.Ar new-device
+is not specified, it defaults to
+.Ar device .
+This form of replacement is useful after an existing disk has failed and has
+been physically replaced.
+In this case, the new disk may have the same
+.Pa /dev
+path as the old device, even though it is actually a different disk.
+ZFS recognizes this.
+.Bl -tag -width Ds
+.It Fl f
+Forces use of
+.Ar new-device ,
+even if it appears to be in use.
+Not all devices can be overridden in this manner.
+.It Fl o Ar property Ns = Ns Ar value
+Sets the given pool properties.
+See the
+.Xr zpoolprops 7
+manual page for a list of valid properties that can be set.
+The only property supported at the moment is
+.Sy ashift .
+.It Fl s
+The
+.Ar new-device
+is reconstructed sequentially to restore redundancy as quickly as possible.
+Checksums are not verified during sequential reconstruction so a scrub is
+started when the resilver completes.
+Sequential reconstruction is not supported for raidz configurations.
+.It Fl w
+Waits until the replacement has completed before returning.
+.El
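+.Pp
+For example, to replace a failed device
+.Pa sda
+with
+.Pa sdb
+in a hypothetical pool
+.Ar tank
+and wait for the resilver to complete:
+.Dl # Nm zpool Cm replace Fl w Ar tank Pa sda sdb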
+.
+.Sh SEE ALSO
+.Xr zpool-detach 8 ,
+.Xr zpool-initialize 8 ,
+.Xr zpool-online 8 ,
+.Xr zpool-resilver 8
diff --git a/share/man/man8/zpool-resilver.8 b/share/man/man8/zpool-resilver.8
@@ -0,0 +1,57 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd May 27, 2021
+.Dt ZPOOL-RESILVER 8
+.Os
+.
+.Sh NAME
+.Nm zpool-resilver
+.Nd resilver devices in ZFS storage pools
+.Sh SYNOPSIS
+.Nm zpool
+.Cm resilver
+.Ar pool Ns …
+.
+.Sh DESCRIPTION
+Starts a resilver of the specified pools.
+If an existing resilver is already running it will be restarted from the
+beginning.
+Any drives that were scheduled for a deferred
+resilver will be added to the new one.
+This requires the
+.Sy resilver_defer
+pool feature.
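+.Pp
+For example, to restart resilvering on a hypothetical pool
+.Ar tank :
+.Dl # Nm zpool Cm resilver Ar tank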
+.
+.Sh SEE ALSO
+.Xr zpool-iostat 8 ,
+.Xr zpool-online 8 ,
+.Xr zpool-reopen 8 ,
+.Xr zpool-replace 8 ,
+.Xr zpool-scrub 8 ,
+.Xr zpool-status 8
diff --git a/share/man/man8/zpool-scrub.8 b/share/man/man8/zpool-scrub.8
@@ -0,0 +1,162 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018, 2021 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd November 18, 2024
+.Dt ZPOOL-SCRUB 8
+.Os
+.
+.Sh NAME
+.Nm zpool-scrub
+.Nd begin or resume scrub of ZFS storage pools
+.Sh SYNOPSIS
+.Nm zpool
+.Cm scrub
+.Op Fl e Ns | Ns Fl p Ns | Ns Fl s Ns | Ns Fl C
+.Op Fl w
+.Ar pool Ns …
+.
+.Sh DESCRIPTION
+Begins a scrub or resumes a paused scrub.
+The scrub examines all data in the specified pools to verify that it checksums
+correctly.
+For replicated
+.Pq mirror, raidz, or draid
+devices, ZFS automatically repairs any damage discovered during the scrub.
+The
+.Nm zpool Cm status
+command reports the progress of the scrub and summarizes the results of the
+scrub upon completion.
+.Pp
+Scrubbing and resilvering are very similar operations.
+The difference is that resilvering only examines data that ZFS knows to be out
+of date
+.Po
+for example, when attaching a new device to a mirror or replacing an existing
+device
+.Pc ,
+whereas scrubbing examines all data to discover silent errors due to hardware
+faults or disk failure.
+.Pp
+When scrubbing a pool with encrypted filesystems the keys do not need to be
+loaded.
+However, if the keys are not loaded and an unrepairable checksum error is
+detected the file name cannot be included in the
+.Nm zpool Cm status Fl v
+verbose error report.
+.Pp
+Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
+one at a time.
+.Pp
+A scrub is split into two parts: metadata scanning and block scrubbing.
+The metadata scanning sorts blocks into large sequential ranges which can then
+be read much more efficiently from disk when issuing the scrub I/O.
+.Pp
+If a scrub is paused, the
+.Nm zpool Cm scrub
+resumes it.
+If a resilver is in progress, ZFS does not allow a scrub to be started until the
+resilver completes.
+.Pp
+Note that, due to changes in pool data on a live system, it is possible for
+scrubs to progress slightly beyond 100% completion.
+During this period, no completion time estimate will be provided.
+.
+.Sh OPTIONS
+.Bl -tag -width "-s"
+.It Fl s
+Stop scrubbing.
+.It Fl p
+Pause scrubbing.
+Scrub pause state and progress are periodically synced to disk.
+If the system is restarted or the pool is exported during a paused scrub, the
+scrub remains paused until it is resumed, even after import.
+Once resumed, the scrub picks up from the place where it was last
+checkpointed to disk.
+To resume a paused scrub, issue
+.Nm zpool Cm scrub
+or
+.Nm zpool Cm scrub
+.Fl e
+again.
+.It Fl w
+Wait until scrub has completed before returning.
+.It Fl e
+Only scrub files with known data errors as reported by
+.Nm zpool Cm status Fl v .
+The pool must have been scrubbed at least once with the
+.Sy head_errlog
+feature enabled to use this option.
+Error scrubbing cannot be run simultaneously with regular scrubbing or
+resilvering, nor can it be run when a regular scrub is paused.
+.It Fl C
+Continue scrub from last saved txg (see zpool
+.Sy last_scrubbed_txg
+property).
+.El
+.Sh EXAMPLES
+.Ss Example 1
+Status of pool with ongoing scrub:
+.Pp
+.Bd -literal -compact
+.No # Nm zpool Cm status
+ ...
+ scan: scrub in progress since Sun Jul 25 16:07:49 2021
+ 403M / 405M scanned at 100M/s, 68.4M / 405M issued at 10.0M/s
+ 0B repaired, 16.91% done, 00:00:04 to go
+ ...
+.Ed
+.Pp
+Here, metadata referencing 403M of file data has been
+scanned at 100M/s, and 68.4M of that file data has been
+issued for sequential scrubbing at 10.0M/s.
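+.Ss Example 2
+Pause a scrub of a hypothetical pool
+.Ar tank ,
+then resume it later:
+.Dl # Nm zpool Cm scrub Fl p Ar tank
+.Dl # Nm zpool Cm scrub Ar tank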
+.Sh PERIODIC SCRUB
+On machines using systemd, scrub timers can be enabled on a per-pool basis.
+.Nm weekly
+and
+.Nm monthly
+timer units are provided.
+.Bl -tag -width Ds
+.It Xo
+.Xc
+.Nm systemctl
+.Cm enable
+.Cm zfs-scrub-\fIweekly\fB@\fIrpool\fB.timer
+.Cm --now
+.It Xo
+.Xc
+.Nm systemctl
+.Cm enable
+.Cm zfs-scrub-\fImonthly\fB@\fIotherpool\fB.timer
+.Cm --now
+.El
+.
+.Sh SEE ALSO
+.Xr systemd.timer 5 ,
+.Xr zpool-iostat 8 ,
+.Xr zpool-resilver 8 ,
+.Xr zpool-status 8
diff --git a/share/man/man8/zpool-set.8 b/share/man/man8/zpool-set.8
@@ -0,0 +1,204 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd August 9, 2019
+.Dt ZPOOL-GET 8
+.Os
+.
+.Sh NAME
+.Nm zpool-get
+.Nd retrieve properties of ZFS storage pools
+.Sh SYNOPSIS
+.Nm zpool
+.Cm get
+.Op Fl Hp
+.Op Fl j Op Ar --json-int, --json-pool-key-guid
+.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns …
+.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns …
+.Oo Ar pool Oc Ns …
+.
+.Nm zpool
+.Cm get
+.Op Fl Hp
+.Op Fl j Op Ar --json-int
+.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns …
+.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns …
+.Ar pool
+.Oo Sy all-vdevs Ns | Ns
+.Ar vdev Oc Ns …
+.
+.Nm zpool
+.Cm set
+.Ar property Ns = Ns Ar value
+.Ar pool
+.
+.Nm zpool
+.Cm set
+.Ar property Ns = Ns Ar value
+.Ar pool
+.Ar vdev
+.
+.Sh DESCRIPTION
+.Bl -tag -width Ds
+.It Xo
+.Nm zpool
+.Cm get
+.Op Fl Hp
+.Op Fl j Op Ar --json-int, --json-pool-key-guid
+.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns …
+.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns …
+.Oo Ar pool Oc Ns …
+.Xc
+Retrieves the given list of properties
+.Po
+or all properties if
+.Sy all
+is used
+.Pc
+for the specified storage pool(s).
+These properties are displayed with the following fields:
+.Bl -tag -compact -offset Ds -width "property"
+.It Sy name
+Name of storage pool.
+.It Sy property
+Property name.
+.It Sy value
+Property value.
+.It Sy source
+Property source, either
+.Sy default No or Sy local .
+.El
+.Pp
+See the
+.Xr zpoolprops 7
+manual page for more information on the available pool properties.
+.Bl -tag -compact -offset Ds -width "-o field"
+.It Fl j , -json Op Ar --json-int, --json-pool-key-guid
+Display the list of properties in JSON format.
+Specify
+.Sy --json-int
+to display the numbers in integer format instead of strings in JSON output.
+Specify
+.Sy --json-pool-key-guid
+to set pool GUID as key for pool objects instead of pool name.
+.It Fl H
+Scripted mode.
+Do not display headers, and separate fields by a single tab instead of arbitrary
+space.
+.It Fl o Ar field
+A comma-separated list of columns to display, defaults to
+.Sy name , Ns Sy property , Ns Sy value , Ns Sy source .
+.It Fl p
+Display numbers in parsable (exact) values.
+.El
+.It Xo
+.Nm zpool
+.Cm get
+.Op Fl j Op Ar --json-int
+.Op Fl Hp
+.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns …
+.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns …
+.Ar pool
+.Oo Sy all-vdevs Ns | Ns
+.Ar vdev Oc Ns …
+.Xc
+Retrieves the given list of properties
+.Po
+or all properties if
+.Sy all
+is used
+.Pc
+for the specified vdevs
+.Po
+or all vdevs if
+.Sy all-vdevs
+is used
+.Pc
+in the specified pool.
+These properties are displayed with the following fields:
+.Bl -tag -compact -offset Ds -width "property"
+.It Sy name
+Name of vdev.
+.It Sy property
+Property name.
+.It Sy value
+Property value.
+.It Sy source
+Property source, either
+.Sy default No or Sy local .
+.El
+.Pp
+See the
+.Xr vdevprops 7
+manual page for more information on the available pool properties.
+.Bl -tag -compact -offset Ds -width "-o field"
+.It Fl j , -json Op Ar --json-int
+Display the list of properties in JSON format.
+Specify
+.Sy --json-int
+to display the numbers in integer format instead of strings in JSON output.
+.It Fl H
+Scripted mode.
+Do not display headers, and separate fields by a single tab instead of arbitrary
+space.
+.It Fl o Ar field
+A comma-separated list of columns to display, defaults to
+.Sy name , Ns Sy property , Ns Sy value , Ns Sy source .
+.It Fl p
+Display numbers in parsable (exact) values.
+.El
+.It Xo
+.Nm zpool
+.Cm set
+.Ar property Ns = Ns Ar value
+.Ar pool
+.Xc
+Sets the given property on the specified pool.
+See the
+.Xr zpoolprops 7
+manual page for more information on what properties can be set and acceptable
+values.
+.It Xo
+.Nm zpool
+.Cm set
+.Ar property Ns = Ns Ar value
+.Ar pool
+.Ar vdev
+.Xc
+Sets the given property on the specified vdev in the specified pool.
+See the
+.Xr vdevprops 7
+manual page for more information on what properties can be set and acceptable
+values.
+.El
+.
+.Sh SEE ALSO
+.Xr vdevprops 7 ,
+.Xr zpool-features 7 ,
+.Xr zpoolprops 7 ,
+.Xr zpool-list 8
diff --git a/share/man/man8/zpool-split.8 b/share/man/man8/zpool-split.8
@@ -0,0 +1,117 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd June 2, 2021
+.Dt ZPOOL-SPLIT 8
+.Os
+.
+.Sh NAME
+.Nm zpool-split
+.Nd split devices off ZFS storage pool, creating new pool
+.Sh SYNOPSIS
+.Nm zpool
+.Cm split
+.Op Fl gLlnP
+.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
+.Op Fl R Ar root
+.Ar pool newpool
+.Oo Ar device Oc Ns …
+.
+.Sh DESCRIPTION
+Splits devices off
+.Ar pool
+creating
+.Ar newpool .
+All vdevs in
+.Ar pool
+must be mirrors and the pool must not be in the process of resilvering.
+At the time of the split,
+.Ar newpool
+will be a replica of
+.Ar pool .
+By default, the
+last device in each mirror is split from
+.Ar pool
+to create
+.Ar newpool .
+.Pp
+The optional device specification causes the specified device(s) to be
+included in the new
+.Ar pool
+and, should any devices remain unspecified,
+the last device in each mirror is used, as it would be by default.
+.Bl -tag -width Ds
+.It Fl g
+Display vdev GUIDs instead of the normal device names.
+These GUIDs can be used in place of device names for the zpool
+detach/offline/remove/replace commands.
+.It Fl L
+Display real paths for vdevs resolving all symbolic links.
+This can be used to look up the current block device name regardless of the
+.Pa /dev/disk/
+path used to open it.
+.It Fl l
+Indicates that this command will request encryption keys for all encrypted
+datasets it attempts to mount as it is bringing the new pool online.
+Note that if any datasets have
+.Sy keylocation Ns = Ns Sy prompt ,
+this command will block waiting for the keys to be entered.
+Without this flag, encrypted datasets will be left unavailable until the keys
+are loaded.
+.It Fl n
+Do a dry-run
+.Pq Qq No-op
+split: do not actually perform it.
+Print out the expected configuration of
+.Ar newpool .
+.It Fl P
+Display full paths for vdevs instead of only the last component of
+the path.
+This can be used in conjunction with the
+.Fl L
+flag.
+.It Fl o Ar property Ns = Ns Ar value
+Sets the specified property for
+.Ar newpool .
+See the
+.Xr zpoolprops 7
+manual page for more information on the available pool properties.
+.It Fl R Ar root
+Set
+.Sy altroot
+for
+.Ar newpool
+to
+.Ar root
+and automatically import it.
+.El
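+.Pp
+For example, to preview the split of a hypothetical mirrored pool
+.Ar tank
+into a new pool
+.Ar tank2 ,
+and then perform it:
+.Dl # Nm zpool Cm split Fl n Ar tank tank2
+.Dl # Nm zpool Cm split Ar tank tank2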
+.
+.Sh SEE ALSO
+.Xr zpool-import 8 ,
+.Xr zpool-list 8 ,
+.Xr zpool-remove 8
diff --git a/share/man/man8/zpool-status.8 b/share/man/man8/zpool-status.8
@@ -0,0 +1,365 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd February 14, 2024
+.Dt ZPOOL-STATUS 8
+.Os
+.
+.Sh NAME
+.Nm zpool-status
+.Nd show detailed health status for ZFS storage pools
+.Sh SYNOPSIS
+.Nm zpool
+.Cm status
+.Op Fl dDegiLpPstvx
+.Op Fl T Sy u Ns | Ns Sy d
+.Op Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns …
+.Oo Ar pool Oc Ns …
+.Op Ar interval Op Ar count
+.Op Fl j Op Ar --json-int, --json-flat-vdevs, --json-pool-key-guid
+.
+.Sh DESCRIPTION
+Displays the detailed health status for the given pools.
+If no
+.Ar pool
+is specified, then the status of each pool in the system is displayed.
+For more information on pool and device health, see the
+.Sx Device Failure and Recovery
+section of
+.Xr zpoolconcepts 7 .
+.Pp
+If a scrub or resilver is in progress, this command reports the percentage done
+and the estimated time to completion.
+Both of these are only approximate, because the amount of data in the pool and
+the other workloads on the system can change.
+.Bl -tag -width Ds
+.It Fl -power
+Display vdev enclosure slot power status (on or off).
+.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns …
+Run a script (or scripts) on each vdev and include the output as a new column
+in the
+.Nm zpool Cm status
+output.
+See the
+.Fl c
+option of
+.Nm zpool Cm iostat
+for complete details.
+.It Fl j , -json Op Ar --json-int, --json-flat-vdevs, --json-pool-key-guid
+Display the status for ZFS pools in JSON format.
+Specify
+.Sy --json-int
+to display numbers in integer format instead of strings.
+Specify
+.Sy --json-flat-vdevs
+to display vdevs in flat hierarchy instead of nested vdev objects.
+Specify
+.Sy --json-pool-key-guid
+to set pool GUID as key for pool objects instead of pool names.
+.It Fl d
+Display the number of Direct I/O read/write checksum verify errors that have
+occurred on a top-level vdev.
+See
+.Sx zfs_vdev_direct_write_verify
+in
+.Xr zfs 4
+for details about the conditions that can cause Direct I/O write checksum
+verify failures to occur.
+Direct I/O read checksum verify errors can also occur if the contents of the
+buffer are being manipulated after the I/O has been issued and is in flight.
+In the case of Direct I/O read checksum verify errors, the I/O will be reissued
+through the ARC.
+.It Fl D
+Display a histogram of deduplication statistics, showing the allocated
+.Pq physically present on disk
+and referenced
+.Pq logically referenced in the pool
+block counts and sizes by reference count.
+If repeated
+.Pq Fl DD ,
+also shows statistics on how much of the DDT is resident in the ARC.
+.It Fl e
+Only show unhealthy vdevs (not-ONLINE or with errors).
+.It Fl g
+Display vdev GUIDs instead of the normal device names.
+These GUIDs can be used in place of device names for the zpool
+detach/offline/remove/replace commands.
+.It Fl i
+Display vdev initialization status.
+.It Fl L
+Display real paths for vdevs resolving all symbolic links.
+This can be used to look up the current block device name regardless of the
+.Pa /dev/disk/
+path used to open it.
+.It Fl p
+Display numbers in parsable (exact) values.
+.It Fl P
+Display full paths for vdevs instead of only the last component of
+the path.
+This can be used in conjunction with the
+.Fl L
+flag.
+.It Fl s
+Display the number of leaf vdev slow I/O operations.
+This is the number of I/O operations that didn't complete in
+.Sy zio_slow_io_ms
+milliseconds
+.Pq Sy 30000 No by default .
+This does not necessarily mean the I/O operations failed to complete, only
+that they took an unreasonably long time.
+This may indicate a problem with the underlying storage.
+.It Fl t
+Display vdev TRIM status.
+.It Fl T Sy u Ns | Ns Sy d
+Display a time stamp.
+Specify
+.Sy u
+for a printed representation of the internal representation of time.
+See
+.Xr time 1 .
+Specify
+.Sy d
+for standard date format.
+See
+.Xr date 1 .
+.It Fl v
+Displays verbose data error information, printing out a complete list of all
+data errors since the last complete pool scrub.
+If the head_errlog feature is enabled and files containing errors have been
+removed then the respective filenames will not be reported in subsequent runs
+of this command.
+.It Fl x
+Only display status for pools that are exhibiting errors or are otherwise
+unavailable.
+Warnings about pools not using the latest on-disk format will not be included.
+.El
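+.Pp
+For example, to emit JSON with numeric values and a flat vdev list for a
+hypothetical pool named
+.Ar tank :
+.Dl # Nm zpool Cm status Fl j Fl -json-int Fl -json-flat-vdevs Ar tank | Nm jq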
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 16 from zpool.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Adding output columns
+Additional columns can be added to the
+.Nm zpool Cm status No and Nm zpool Cm iostat No output with Fl c .
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm status Fl c Pa vendor , Ns Pa model , Ns Pa size
+ NAME STATE READ WRITE CKSUM vendor model size
+ tank ONLINE 0 0 0
+ mirror-0 ONLINE 0 0 0
+ U1 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
+ U10 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
+ U11 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
+ U12 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
+ U13 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
+ U14 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
+
+.No # Nm zpool Cm iostat Fl vc Pa size
+ capacity operations bandwidth
+pool alloc free read write read write size
+---------- ----- ----- ----- ----- ----- ----- ----
+rpool 14.6G 54.9G 4 55 250K 2.69M
+ sda1 14.6G 54.9G 4 55 250K 2.69M 70G
+---------- ----- ----- ----- ----- ----- ----- ----
+.Ed
+.
+.Ss Example 2 : No Display the status output in JSON format
+.Nm zpool Cm status No can output in JSON format if
+.Fl j
+is specified.
+.Fl c
+can be used to run a script on each VDEV.
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm status Fl j Fl c Pa vendor , Ns Pa model , Ns Pa size | Nm jq
+{
+ "output_version": {
+ "command": "zpool status",
+ "vers_major": 0,
+ "vers_minor": 1
+ },
+ "pools": {
+ "tank": {
+ "name": "tank",
+ "state": "ONLINE",
+ "guid": "3920273586464696295",
+ "txg": "16597",
+ "spa_version": "5000",
+ "zpl_version": "5",
+ "status": "OK",
+ "vdevs": {
+ "tank": {
+ "name": "tank",
+ "alloc_space": "62.6G",
+ "total_space": "15.0T",
+ "def_space": "11.3T",
+ "read_errors": "0",
+ "write_errors": "0",
+ "checksum_errors": "0",
+ "vdevs": {
+ "raidz1-0": {
+ "name": "raidz1-0",
+ "vdev_type": "raidz",
+ "guid": "763132626387621737",
+ "state": "HEALTHY",
+ "alloc_space": "62.5G",
+ "total_space": "10.9T",
+ "def_space": "7.26T",
+ "rep_dev_size": "10.9T",
+ "read_errors": "0",
+ "write_errors": "0",
+ "checksum_errors": "0",
+ "vdevs": {
+ "ca1eb824-c371-491d-ac13-37637e35c683": {
+ "name": "ca1eb824-c371-491d-ac13-37637e35c683",
+ "vdev_type": "disk",
+ "guid": "12841765308123764671",
+ "path": "/dev/disk/by-partuuid/ca1eb824-c371-491d-ac13-37637e35c683",
+ "state": "HEALTHY",
+ "rep_dev_size": "3.64T",
+ "phys_space": "3.64T",
+ "read_errors": "0",
+ "write_errors": "0",
+ "checksum_errors": "0",
+ "vendor": "ATA",
+ "model": "WDC WD40EFZX-68AWUN0",
+ "size": "3.6T"
+ },
+ "97cd98fb-8fb8-4ac4-bc84-bd8950a7ace7": {
+ "name": "97cd98fb-8fb8-4ac4-bc84-bd8950a7ace7",
+ "vdev_type": "disk",
+ "guid": "1527839927278881561",
+ "path": "/dev/disk/by-partuuid/97cd98fb-8fb8-4ac4-bc84-bd8950a7ace7",
+ "state": "HEALTHY",
+ "rep_dev_size": "3.64T",
+ "phys_space": "3.64T",
+ "read_errors": "0",
+ "write_errors": "0",
+ "checksum_errors": "0",
+ "vendor": "ATA",
+ "model": "WDC WD40EFZX-68AWUN0",
+ "size": "3.6T"
+ },
+ "e9ddba5f-f948-4734-a472-cb8aa5f0ff65": {
+ "name": "e9ddba5f-f948-4734-a472-cb8aa5f0ff65",
+ "vdev_type": "disk",
+ "guid": "6982750226085199860",
+ "path": "/dev/disk/by-partuuid/e9ddba5f-f948-4734-a472-cb8aa5f0ff65",
+ "state": "HEALTHY",
+ "rep_dev_size": "3.64T",
+ "phys_space": "3.64T",
+ "read_errors": "0",
+ "write_errors": "0",
+ "checksum_errors": "0",
+ "vendor": "ATA",
+ "model": "WDC WD40EFZX-68AWUN0",
+ "size": "3.6T"
+ }
+ }
+ }
+ }
+ }
+ },
+ "dedup": {
+ "mirror-2": {
+ "name": "mirror-2",
+ "vdev_type": "mirror",
+ "guid": "2227766268377771003",
+ "state": "HEALTHY",
+ "alloc_space": "89.1M",
+ "total_space": "3.62T",
+ "def_space": "3.62T",
+ "rep_dev_size": "3.62T",
+ "read_errors": "0",
+ "write_errors": "0",
+ "checksum_errors": "0",
+ "vdevs": {
+ "db017360-d8e9-4163-961b-144ca75293a3": {
+ "name": "db017360-d8e9-4163-961b-144ca75293a3",
+ "vdev_type": "disk",
+ "guid": "17880913061695450307",
+ "path": "/dev/disk/by-partuuid/db017360-d8e9-4163-961b-144ca75293a3",
+ "state": "HEALTHY",
+ "rep_dev_size": "3.63T",
+ "phys_space": "3.64T",
+ "read_errors": "0",
+ "write_errors": "0",
+ "checksum_errors": "0",
+ "vendor": "ATA",
+ "model": "WDC WD40EFZX-68AWUN0",
+ "size": "3.6T"
+ },
+ "952c3baf-b08a-4a8c-b7fa-33a07af5fe6f": {
+ "name": "952c3baf-b08a-4a8c-b7fa-33a07af5fe6f",
+ "vdev_type": "disk",
+ "guid": "10276374011610020557",
+ "path": "/dev/disk/by-partuuid/952c3baf-b08a-4a8c-b7fa-33a07af5fe6f",
+ "state": "HEALTHY",
+ "rep_dev_size": "3.63T",
+ "phys_space": "3.64T",
+ "read_errors": "0",
+ "write_errors": "0",
+ "checksum_errors": "0",
+ "vendor": "ATA",
+ "model": "WDC WD40EFZX-68AWUN0",
+ "size": "3.6T"
+ }
+ }
+ }
+ },
+ "special": {
+ "25d418f8-92bd-4327-b59f-7ef5d5f50d81": {
+ "name": "25d418f8-92bd-4327-b59f-7ef5d5f50d81",
+ "vdev_type": "disk",
+ "guid": "3935742873387713123",
+ "path": "/dev/disk/by-partuuid/25d418f8-92bd-4327-b59f-7ef5d5f50d81",
+ "state": "HEALTHY",
+ "alloc_space": "37.4M",
+ "total_space": "444G",
+ "def_space": "444G",
+ "rep_dev_size": "444G",
+ "phys_space": "447G",
+ "read_errors": "0",
+ "write_errors": "0",
+ "checksum_errors": "0",
+ "vendor": "ATA",
+ "model": "Micron_5300_MTFDDAK480TDS",
+ "size": "447.1G"
+ }
+ },
+ "error_count": "0"
+ }
+ }
+}
+.Ed
+.
+.Sh SEE ALSO
+.Xr zpool-events 8 ,
+.Xr zpool-history 8 ,
+.Xr zpool-iostat 8 ,
+.Xr zpool-list 8 ,
+.Xr zpool-resilver 8 ,
+.Xr zpool-scrub 8 ,
+.Xr zpool-wait 8
diff --git a/share/man/man8/zpool-sync.8 b/share/man/man8/zpool-sync.8
@@ -0,0 +1,53 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd August 9, 2019
+.Dt ZPOOL-SYNC 8
+.Os
+.
+.Sh NAME
+.Nm zpool-sync
+.Nd flush data to primary storage of ZFS storage pools
+.Sh SYNOPSIS
+.Nm zpool
+.Cm sync
+.Oo Ar pool Oc Ns …
+.
+.Sh DESCRIPTION
+This command forces all in-core dirty data to be written to the primary
+pool storage and not the ZIL.
+It will also update administrative information including quota reporting.
+Without arguments,
+.Nm zpool Cm sync
+will sync all pools on the system.
+Otherwise, it will sync only the specified pools.
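+.Pp
+For example, to flush dirty data only for a hypothetical pool named
+.Ar tank :
+.Dl # Nm zpool Cm sync Ar tank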
+.
+.Sh SEE ALSO
+.Xr zpoolconcepts 7 ,
+.Xr zpool-export 8 ,
+.Xr zpool-iostat 8
diff --git a/share/man/man8/zpool-trim.8 b/share/man/man8/zpool-trim.8
@@ -0,0 +1,112 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd May 27, 2021
+.Dt ZPOOL-TRIM 8
+.Os
+.
+.Sh NAME
+.Nm zpool-trim
+.Nd initiate TRIM of free space in ZFS storage pool
+.Sh SYNOPSIS
+.Nm zpool
+.Cm trim
+.Op Fl dw
+.Op Fl r Ar rate
+.Op Fl c Ns | Ns Fl s
+.Ar pool
+.Oo Ar device Ns Oc Ns …
+.
+.Sh DESCRIPTION
+Initiates an immediate on-demand TRIM operation for all of the free space in
+a pool.
+This operation informs the underlying storage devices of all blocks
+in the pool which are no longer allocated and allows thinly provisioned
+devices to reclaim the space.
+.Pp
+A manual on-demand TRIM operation can be initiated irrespective of the
+.Sy autotrim
+pool property setting.
+See the documentation for the
+.Sy autotrim
+property in
+.Xr zpoolprops 7
+for the types of vdev devices which can be trimmed.
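+.Pp
+For example, to start a TRIM of a hypothetical pool named
+.Ar tank
+and wait for it to complete:
+.Dl # Nm zpool Cm trim Fl w Ar tank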
+.Bl -tag -width Ds
+.It Fl d , -secure
+Causes a secure TRIM to be initiated.
+When performing a secure TRIM, the
+device guarantees that data stored on the trimmed blocks has been erased.
+This requires support from the device and is not supported by all SSDs.
+.It Fl r , -rate Ar rate
+Controls the rate at which the TRIM operation progresses.
+Without this option, TRIM is executed as quickly as possible.
+The rate, expressed in bytes per second, is applied on a per-vdev basis and
+may be set differently for each leaf vdev; see the example following this
+list.
+.It Fl c , -cancel
+Cancel trimming on the specified devices, or all eligible devices if none
+are specified.
+If one or more target devices are invalid or are not currently being
+trimmed, the command will fail and no cancellation will occur on any device.
+.It Fl s , -suspend
+Suspend trimming on the specified devices, or all eligible devices if none
+are specified.
+If one or more target devices are invalid or are not currently being
+trimmed, the command will fail and no suspension will occur on any device.
+Trimming can then be resumed by running
+.Nm zpool Cm trim
+with no flags on the relevant target devices.
+.It Fl w , -wait
+Wait until the devices are done being trimmed before returning.
+.El
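+.Pp
+As another example, to begin a secure TRIM of a single device at a capped
+rate
+.Pq pool and device names are placeholders :
+.Dl # Nm zpool Cm trim Fl d Fl r Ar 100M Ar tank Pa sdb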
+.Sh PERIODIC TRIM
+On machines using systemd, trim timers can be enabled on a per-pool basis.
+.Nm weekly
+and
+.Nm monthly
+timer units are provided.
+.Bl -tag -width Ds
+.It Xo
+.Xc
+.Nm systemctl
+.Cm enable
+.Cm zfs-trim-\fIweekly\fB@\fIrpool\fB.timer
+.Cm --now
+.It Xo
+.Xc
+.Nm systemctl
+.Cm enable
+.Cm zfs-trim-\fImonthly\fB@\fIotherpool\fB.timer
+.Cm --now
+.El
+.
+.Sh SEE ALSO
+.Xr systemd.timer 5 ,
+.Xr zpoolprops 7 ,
+.Xr zpool-initialize 8 ,
+.Xr zpool-wait 8
diff --git a/share/man/man8/zpool-upgrade.8 b/share/man/man8/zpool-upgrade.8
@@ -0,0 +1,121 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\" Copyright (c) 2021, Colm Buckley <colm@tuatha.org>
+.\"
+.Dd March 16, 2022
+.Dt ZPOOL-UPGRADE 8
+.Os
+.
+.Sh NAME
+.Nm zpool-upgrade
+.Nd manage version and feature flags of ZFS storage pools
+.Sh SYNOPSIS
+.Nm zpool
+.Cm upgrade
+.Nm zpool
+.Cm upgrade
+.Fl v
+.Nm zpool
+.Cm upgrade
+.Op Fl V Ar version
+.Fl a Ns | Ns Ar pool Ns …
+.
+.Sh DESCRIPTION
+.Bl -tag -width Ds
+.It Xo
+.Nm zpool
+.Cm upgrade
+.Xc
+Displays pools which do not have all supported features enabled and pools
+formatted using a legacy ZFS version number.
+These pools can continue to be used, but some features may not be available.
+Use
+.Nm zpool Cm upgrade Fl a
+to enable all features on all pools (subject to the
+.Fl o Sy compatibility
+property).
+.It Xo
+.Nm zpool
+.Cm upgrade
+.Fl v
+.Xc
+Displays legacy ZFS versions supported by this version of ZFS.
+See
+.Xr zpool-features 7
+for a description of the feature flags supported by this version of ZFS.
+.It Xo
+.Nm zpool
+.Cm upgrade
+.Op Fl V Ar version
+.Fl a Ns | Ns Ar pool Ns …
+.Xc
+Enables all supported features on the given pool.
+.Pp
+If the pool has specified compatibility feature sets using the
+.Fl o Sy compatibility
+property, only the features present in all requested compatibility sets will be
+enabled.
+If this property is set to
+.Ar legacy
+then no upgrade will take place.
+.Pp
+Once this is done, the pool will no longer be accessible on systems that do not
+support feature flags.
+See
+.Xr zpool-features 7
+for details on compatibility with systems that support feature flags, but do not
+support all features enabled on the pool.
+.Bl -tag -width Ds
+.It Fl a
+Enables all supported features (from specified compatibility sets, if any) on
+all pools.
+.It Fl V Ar version
+Upgrade to the specified legacy version.
+If specified, no features will be enabled on the pool.
+This option can only be used to increase the version number up to the last
+supported legacy version number.
+.El
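+.Pp
+For example, to upgrade a hypothetical legacy pool named
+.Ar tank
+to legacy on-disk version 28 without enabling any feature flags:
+.Dl # Nm zpool Cm upgrade Fl V Ar 28 Ar tank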
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 10 from zpool.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Upgrading All ZFS Storage Pools to the Current Version
+The following command upgrades all ZFS Storage pools to the current version of
+the software:
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm upgrade Fl a
+This system is currently running ZFS version 2.
+.Ed
+.
+.Sh SEE ALSO
+.Xr zpool-features 7 ,
+.Xr zpoolconcepts 7 ,
+.Xr zpoolprops 7 ,
+.Xr zpool-history 8
diff --git a/share/man/man8/zpool-wait.8 b/share/man/man8/zpool-wait.8
@@ -0,0 +1,118 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2021 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd May 27, 2021
+.Dt ZPOOL-WAIT 8
+.Os
+.
+.Sh NAME
+.Nm zpool-wait
+.Nd wait for activity to stop in a ZFS storage pool
+.Sh SYNOPSIS
+.Nm zpool
+.Cm wait
+.Op Fl Hp
+.Op Fl T Sy u Ns | Ns Sy d
+.Op Fl t Ar activity Ns Oo , Ns Ar activity Ns Oc Ns …
+.Ar pool
+.Op Ar interval
+.
+.Sh DESCRIPTION
+Waits until all background activity of the given types has ceased in the given
+pool.
+The activity could cease because it has completed, or because it has been
+paused or canceled by a user, or because the pool has been exported or
+destroyed.
+If no activities are specified, the command waits until background activity of
+every type listed below has ceased.
+If there is no activity of the given types in progress, the command returns
+immediately.
+.Pp
+These are the possible values for
+.Ar activity ,
+along with what each one waits for:
+.Bl -tag -compact -offset Ds -width "raidz_expand"
+.It Sy discard
+Checkpoint to be discarded
+.It Sy free
+.Sy freeing
+property to become
+.Sy 0
+.It Sy initialize
+All initializations to cease
+.It Sy replace
+All device replacements to cease
+.It Sy remove
+Device removal to cease
+.It Sy resilver
+Resilver to cease
+.It Sy scrub
+Scrub to cease
+.It Sy trim
+Manual trim to cease
+.It Sy raidz_expand
+Attaching to a RAID-Z vdev to complete
+.El
+.Pp
+If an
+.Ar interval
+is provided, the amount of work remaining, in bytes, for each activity is
+printed every
+.Ar interval
+seconds.
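+.Pp
+For example, to wait for a scrub of a hypothetical pool named
+.Ar tank
+to finish, printing the remaining work every 10 seconds:
+.Dl # Nm zpool Cm wait Fl t Ar scrub Ar tank 10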
+.Bl -tag -width Ds
+.It Fl H
+Scripted mode.
+Do not display headers, and separate fields by a single tab instead of arbitrary
+space.
+.It Fl p
+Display numbers in parsable (exact) values.
+.It Fl T Sy u Ns | Ns Sy d
+Display a time stamp.
+Specify
+.Sy u
+for a printed representation of the internal representation of time.
+See
+.Xr time 1 .
+Specify
+.Sy d
+for standard date format.
+See
+.Xr date 1 .
+.El
+.
+.Sh SEE ALSO
+.Xr zpool-checkpoint 8 ,
+.Xr zpool-initialize 8 ,
+.Xr zpool-remove 8 ,
+.Xr zpool-replace 8 ,
+.Xr zpool-resilver 8 ,
+.Xr zpool-scrub 8 ,
+.Xr zpool-status 8 ,
+.Xr zpool-trim 8
diff --git a/share/man/man8/zpool.8 b/share/man/man8/zpool.8
@@ -0,0 +1,656 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd February 14, 2024
+.Dt ZPOOL 8
+.Os
+.
+.Sh NAME
+.Nm zpool
+.Nd configure ZFS storage pools
+.Sh SYNOPSIS
+.Nm
+.Fl ?V
+.Nm
+.Cm version
+.Op Fl j
+.Nm
+.Cm subcommand
+.Op Ar arguments
+.
+.Sh DESCRIPTION
+The
+.Nm
+command configures ZFS storage pools.
+A storage pool is a collection of devices that provides physical storage and
+data replication for ZFS datasets.
+All datasets within a storage pool share the same space.
+See
+.Xr zfs 8
+for information on managing datasets.
+.Pp
+For an overview of creating and managing ZFS storage pools see the
+.Xr zpoolconcepts 7
+manual page.
+.
+.Sh SUBCOMMANDS
+All subcommands that modify state are logged persistently to the pool in their
+original form.
+.Pp
+The
+.Nm
+command provides subcommands to create and destroy storage pools, add capacity
+to storage pools, and provide information about the storage pools.
+The following subcommands are supported:
+.Bl -tag -width Ds
+.It Xo
+.Nm
+.Fl ?\&
+.Xc
+Displays a help message.
+.It Xo
+.Nm
+.Fl V , -version
+.Xc
+.It Xo
+.Nm
+.Cm version
+.Op Fl j
+.Xc
+Displays the software version of the
+.Nm
+userland utility and the ZFS kernel module.
+Use the
+.Fl j
+option to output in JSON format; a brief example follows this list.
+.El
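+.Pp
+For example, to print the version information in JSON format:
+.Dl # Nm zpool Cm version Fl j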
+.
+.Ss Creation
+.Bl -tag -width Ds
+.It Xr zpool-create 8
+Creates a new storage pool containing the virtual devices specified on the
+command line.
+.It Xr zpool-initialize 8
+Begins initializing by writing to all unallocated regions on the specified
+devices, or all eligible devices in the pool if no individual devices are
+specified.
+.El
+.
+.Ss Destruction
+.Bl -tag -width Ds
+.It Xr zpool-destroy 8
+Destroys the given pool, freeing up any devices for other use.
+.It Xr zpool-labelclear 8
+Removes ZFS label information from the specified
+.Ar device .
+.El
+.
+.Ss Virtual Devices
+.Bl -tag -width Ds
+.It Xo
+.Xr zpool-attach 8 Ns / Ns Xr zpool-detach 8
+.Xc
+Converts a non-redundant disk into a mirror, or increases
+the redundancy level of an existing mirror
+.Cm ( attach Ns ), or performs the inverse operation (
+.Cm detach Ns ).
+.It Xo
+.Xr zpool-add 8 Ns / Ns Xr zpool-remove 8
+.Xc
+Adds the specified virtual devices to the given pool,
+or removes the specified device from the pool.
+.It Xr zpool-replace 8
+Replaces an existing device (which may be faulted) with a new one.
+.It Xr zpool-split 8
+Creates a new pool by splitting all mirrors in an existing pool (which decreases
+its redundancy).
+.El
+.
+.Ss Properties
+Available pool properties are listed in the
+.Xr zpoolprops 7
+manual page.
+.Bl -tag -width Ds
+.It Xr zpool-list 8
+Lists the given pools along with a health status and space usage.
+.It Xo
+.Xr zpool-get 8 Ns / Ns Xr zpool-set 8
+.Xc
+Retrieves the given list of properties
+.Po
+or all properties if
+.Sy all
+is used
+.Pc
+for the specified storage pool(s).
+.El
+.
+.Ss Monitoring
+.Bl -tag -width Ds
+.It Xr zpool-status 8
+Displays the detailed health status for the given pools.
+.It Xr zpool-iostat 8
+Displays logical I/O statistics for the given pools/vdevs.
+Physical I/O operations may be observed via
+.Xr iostat 1 .
+.It Xr zpool-events 8
+Lists all recent events generated by the ZFS kernel modules.
+These events are consumed by the
+.Xr zed 8
+and used to automate administrative tasks such as replacing a failed device
+with a hot spare.
+That manual page also describes the subclasses and event payloads
+that can be generated.
+.It Xr zpool-history 8
+Displays the command history of the specified pool(s) or all pools if no pool is
+specified.
+.El
+.
+.Ss Maintenance
+.Bl -tag -width Ds
+.It Xr zpool-prefetch 8
+Prefetches specific types of pool data.
+.It Xr zpool-scrub 8
+Begins a scrub or resumes a paused scrub.
+.It Xr zpool-checkpoint 8
+Checkpoints the current state of
+.Ar pool ,
+which can be later restored by
+.Nm zpool Cm import Fl -rewind-to-checkpoint .
+.It Xr zpool-trim 8
+Initiates an immediate on-demand TRIM operation for all of the free space in a
+pool.
+This operation informs the underlying storage devices of all blocks
+in the pool which are no longer allocated and allows thinly provisioned
+devices to reclaim the space.
+.It Xr zpool-sync 8
+This command forces all in-core dirty data to be written to the primary
+pool storage and not the ZIL.
+It will also update administrative information including quota reporting.
+Without arguments,
+.Nm zpool Cm sync
+will sync all pools on the system.
+Otherwise, it will sync only the specified pool(s).
+.It Xr zpool-upgrade 8
+Manage the on-disk format version of storage pools.
+.It Xr zpool-wait 8
+Waits until all background activity of the given types has ceased in the given
+pool.
+.El
+.
+.Ss Fault Resolution
+.Bl -tag -width Ds
+.It Xo
+.Xr zpool-offline 8 Ns / Ns Xr zpool-online 8
+.Xc
+Takes the specified physical device offline or brings it online.
+.It Xr zpool-resilver 8
+Starts a resilver.
+If an existing resilver is already running it will be restarted from the
+beginning.
+.It Xr zpool-reopen 8
+Reopen all the vdevs associated with the pool.
+.It Xr zpool-clear 8
+Clears device errors in a pool.
+.El
+.
+.Ss Import & Export
+.Bl -tag -width Ds
+.It Xr zpool-import 8
+Make disks containing ZFS storage pools available for use on the system.
+.It Xr zpool-export 8
+Exports the given pools from the system.
+.It Xr zpool-reguid 8
+Generates a new unique identifier for the pool.
+.El
+.
+.Sh EXIT STATUS
+The following exit values are returned:
+.Bl -tag -compact -offset 4n -width "a"
+.It Sy 0
+Successful completion.
+.It Sy 1
+An error occurred.
+.It Sy 2
+Invalid command line options were specified.
+.El
+.
+.Sh EXAMPLES
+.\" Examples 1, 2, 3, 4, 12, 13 are shared with zpool-create.8.
+.\" Examples 6, 14 are shared with zpool-add.8.
+.\" Examples 7, 16 are shared with zpool-list.8.
+.\" Examples 8 are shared with zpool-destroy.8.
+.\" Examples 9 are shared with zpool-export.8.
+.\" Examples 10 are shared with zpool-import.8.
+.\" Examples 11 are shared with zpool-upgrade.8.
+.\" Examples 15 are shared with zpool-remove.8.
+.\" Examples 17 are shared with zpool-status.8.
+.\" Examples 14, 17 are also shared with zpool-iostat.8.
+.\" Make sure to update them omnidirectionally
+.Ss Example 1 : No Creating a RAID-Z Storage Pool
+The following command creates a pool with a single raidz root vdev that
+consists of six disks:
+.Dl # Nm zpool Cm create Ar tank Sy raidz Pa sda sdb sdc sdd sde sdf
+.
+.Ss Example 2 : No Creating a Mirrored Storage Pool
+The following command creates a pool with two mirrors, where each mirror
+contains two disks:
+.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy mirror Pa sdc sdd
+.
+.Ss Example 3 : No Creating a ZFS Storage Pool by Using Partitions
+The following command creates a non-redundant pool using two disk partitions:
+.Dl # Nm zpool Cm create Ar tank Pa sda1 sdb2
+.
+.Ss Example 4 : No Creating a ZFS Storage Pool by Using Files
+The following command creates a non-redundant pool using files.
+While not recommended, a pool based on files can be useful for experimental
+purposes.
+.Dl # Nm zpool Cm create Ar tank Pa /path/to/file/a /path/to/file/b
+.
+.Ss Example 5 : No Making a non-mirrored ZFS Storage Pool mirrored
+The following command converts an existing single device
+.Ar sda
+into a mirror by attaching a second device to it,
+.Ar sdb .
+.Dl # Nm zpool Cm attach Ar tank Pa sda sdb
+.
+.Ss Example 6 : No Adding a Mirror to a ZFS Storage Pool
+The following command adds two mirrored disks to the pool
+.Ar tank ,
+assuming the pool is already made up of two-way mirrors.
+The additional space is immediately available to any datasets within the pool.
+.Dl # Nm zpool Cm add Ar tank Sy mirror Pa sda sdb
+.
+.Ss Example 7 : No Listing Available ZFS Storage Pools
+The following command lists all available pools on the system.
+In this case, the pool
+.Ar zion
+is faulted due to a missing device.
+The results from this command are similar to the following:
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm list
+NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
+rpool 19.9G 8.43G 11.4G - 33% 42% 1.00x ONLINE -
+tank 61.5G 20.0G 41.5G - 48% 32% 1.00x ONLINE -
+zion - - - - - - - FAULTED -
+.Ed
+.
+.Ss Example 8 : No Destroying a ZFS Storage Pool
+The following command destroys the pool
+.Ar tank
+and any datasets contained within:
+.Dl # Nm zpool Cm destroy Fl f Ar tank
+.
+.Ss Example 9 : No Exporting a ZFS Storage Pool
+The following command exports the devices in pool
+.Ar tank
+so that they can be relocated or later imported:
+.Dl # Nm zpool Cm export Ar tank
+.
+.Ss Example 10 : No Importing a ZFS Storage Pool
+The following command displays available pools, and then imports the pool
+.Ar tank
+for use on the system.
+The results from this command are similar to the following:
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm import
+ pool: tank
+ id: 15451357997522795478
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+ tank ONLINE
+ mirror ONLINE
+ sda ONLINE
+ sdb ONLINE
+
+.No # Nm zpool Cm import Ar tank
+.Ed
+.
+.Ss Example 11 : No Upgrading All ZFS Storage Pools to the Current Version
+The following command upgrades all ZFS Storage pools to the current version of
+the software:
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm upgrade Fl a
+This system is currently running ZFS version 2.
+.Ed
+.
+.Ss Example 12 : No Managing Hot Spares
+The following command creates a new pool with an available hot spare:
+.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy spare Pa sdc
+.Pp
+If one of the disks were to fail, the pool would be reduced to the degraded
+state.
+The failed device can be replaced using the following command:
+.Dl # Nm zpool Cm replace Ar tank Pa sda sdd
+.Pp
+Once the data has been resilvered, the spare is automatically removed and is
+made available for use should another device fail.
+The hot spare can be permanently removed from the pool using the following
+command:
+.Dl # Nm zpool Cm remove Ar tank Pa sdc
+.
+.Ss Example 13 : No Creating a ZFS Pool with Mirrored Separate Intent Logs
+The following command creates a ZFS storage pool consisting of two, two-way
+mirrors and mirrored log devices:
+.Dl # Nm zpool Cm create Ar pool Sy mirror Pa sda sdb Sy mirror Pa sdc sdd Sy log mirror Pa sde sdf
+.
+.Ss Example 14 : No Adding Cache Devices to a ZFS Pool
+The following command adds two disks for use as cache devices to a ZFS storage
+pool:
+.Dl # Nm zpool Cm add Ar pool Sy cache Pa sdc sdd
+.Pp
+Once added, the cache devices gradually fill with content from main memory.
+Depending on the size of your cache devices, it could take over an hour for
+them to fill.
+Capacity and reads can be monitored using the
+.Cm iostat
+subcommand as follows:
+.Dl # Nm zpool Cm iostat Fl v Ar pool 5
+.
+.Ss Example 15 : No Removing a Mirrored top-level (Log or Data) Device
+The following commands remove the mirrored log device
+.Sy mirror-2
+and mirrored top-level data device
+.Sy mirror-1 .
+.Pp
+Given this configuration:
+.Bd -literal -compact -offset Ds
+ pool: tank
+ state: ONLINE
+ scrub: none requested
+config:
+
+ NAME STATE READ WRITE CKSUM
+ tank ONLINE 0 0 0
+ mirror-0 ONLINE 0 0 0
+ sda ONLINE 0 0 0
+ sdb ONLINE 0 0 0
+ mirror-1 ONLINE 0 0 0
+ sdc ONLINE 0 0 0
+ sdd ONLINE 0 0 0
+ logs
+ mirror-2 ONLINE 0 0 0
+ sde ONLINE 0 0 0
+ sdf ONLINE 0 0 0
+.Ed
+.Pp
+The command to remove the mirrored log
+.Ar mirror-2 No is :
+.Dl # Nm zpool Cm remove Ar tank mirror-2
+.Pp
+At this point, the log device no longer exists
+(both sides of the mirror have been removed):
+.Bd -literal -compact -offset Ds
+ pool: tank
+ state: ONLINE
+ scan: none requested
+config:
+
+ NAME STATE READ WRITE CKSUM
+ tank ONLINE 0 0 0
+ mirror-0 ONLINE 0 0 0
+ sda ONLINE 0 0 0
+ sdb ONLINE 0 0 0
+ mirror-1 ONLINE 0 0 0
+ sdc ONLINE 0 0 0
+ sdd ONLINE 0 0 0
+.Ed
+.Pp
+The command to remove the mirrored data
+.Ar mirror-1 No is :
+.Dl # Nm zpool Cm remove Ar tank mirror-1
+.Pp
+After
+.Ar mirror-1 No has been evacuated, the pool remains redundant, but
+the total amount of space is reduced:
+.Bd -literal -compact -offset Ds
+ pool: tank
+ state: ONLINE
+ scan: none requested
+config:
+
+ NAME STATE READ WRITE CKSUM
+ tank ONLINE 0 0 0
+ mirror-0 ONLINE 0 0 0
+ sda ONLINE 0 0 0
+ sdb ONLINE 0 0 0
+.Ed
+.
+.Ss Example 16 : No Displaying expanded space on a device
+The following command displays the detailed information for the pool
+.Ar data .
+This pool is comprised of a single raidz vdev where one of its devices
+increased its capacity by 10 GiB.
+In this example, the pool will not be able to utilize this extra capacity until
+all the devices under the raidz vdev have been expanded.
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm list Fl v Ar data
+NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
+data 23.9G 14.6G 9.30G - 48% 61% 1.00x ONLINE -
+ raidz1 23.9G 14.6G 9.30G - 48%
+ sda - - - - -
+ sdb - - - 10G -
+ sdc - - - - -
+.Ed
+.
+.Ss Example 17 : No Adding output columns
+Additional columns can be added to the
+.Nm zpool Cm status No and Nm zpool Cm iostat No output with Fl c .
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm status Fl c Pa vendor , Ns Pa model , Ns Pa size
+ NAME STATE READ WRITE CKSUM vendor model size
+ tank ONLINE 0 0 0
+ mirror-0 ONLINE 0 0 0
+ U1 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
+ U10 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
+ U11 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
+ U12 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
+ U13 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
+ U14 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
+
+.No # Nm zpool Cm iostat Fl vc Pa size
+ capacity operations bandwidth
+pool alloc free read write read write size
+---------- ----- ----- ----- ----- ----- ----- ----
+rpool 14.6G 54.9G 4 55 250K 2.69M
+ sda1 14.6G 54.9G 4 55 250K 2.69M 70G
+---------- ----- ----- ----- ----- ----- ----- ----
+.Ed
+.
+.Sh ENVIRONMENT VARIABLES
+.Bl -tag -compact -width "ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE"
+.It Sy ZFS_ABORT
+Cause
+.Nm
+to dump core on exit for the purposes of running
+.Sy ::findleaks .
+.It Sy ZFS_COLOR
+Use ANSI color in
+.Nm zpool Cm status
+and
+.Nm zpool Cm iostat
+output.
+.It Sy ZPOOL_AUTO_POWER_ON_SLOT
+Automatically attempt to turn on a drive's enclosure slot power when
+running the
+.Nm zpool Cm online
+or
+.Nm zpool Cm clear
+commands.
+This has the same effect as passing the
+.Fl -power
+option to those commands.
+.It Sy ZPOOL_POWER_ON_SLOT_TIMEOUT_MS
+The maximum time in milliseconds to wait for a slot power sysfs value
+to return the correct value after writing it.
+For example, after writing "on" to the sysfs enclosure slot power_control
+file, it can take some time for the enclosure to power on the slot and for a
+read of power_control to return "on".
+Defaults to 30 seconds (30000ms) if not set.
+.It Sy ZPOOL_IMPORT_PATH
+The search path for devices or files to use with the pool.
+This is a colon-separated list of directories in which
+.Nm
+looks for device nodes and files.
+Similar to the
+.Fl d
+option in
+.Nm zpool Cm import .
+An example is shown after this list.
+.It Sy ZPOOL_IMPORT_UDEV_TIMEOUT_MS
+The maximum time in milliseconds that
+.Nm zpool Cm import
+will wait for an expected device to be available.
+.It Sy ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE
+If set, suppress warning about non-native vdev ashift in
+.Nm zpool Cm status .
+The value is not used, only the presence or absence of the variable matters.
+.It Sy ZPOOL_VDEV_NAME_GUID
+Cause
+.Nm
+subcommands to output vdev guids by default.
+This behavior is identical to the
+.Nm zpool Cm status Fl g
+command line option.
+.It Sy ZPOOL_VDEV_NAME_FOLLOW_LINKS
+Cause
+.Nm
+subcommands to follow links for vdev names by default.
+This behavior is identical to the
+.Nm zpool Cm status Fl L
+command line option.
+.It Sy ZPOOL_VDEV_NAME_PATH
+Cause
+.Nm
+subcommands to output full vdev path names by default.
+This behavior is identical to the
+.Nm zpool Cm status Fl P
+command line option.
+.It Sy ZFS_VDEV_DEVID_OPT_OUT
+Older OpenZFS implementations had issues when attempting to display pool
+config vdev names if a
+.Sy devid
+NVP value is present in the pool's config.
+.Pp
+For example, a pool that originated on illumos platform would have a
+.Sy devid
+value in the config and
+.Nm zpool Cm status
+would fail when listing the config.
+This would also be true for future Linux-based pools.
+.Pp
+A pool can be stripped of any
+.Sy devid
+values on import or prevented from adding
+them on
+.Nm zpool Cm create
+or
+.Nm zpool Cm add
+by setting
+.Sy ZFS_VDEV_DEVID_OPT_OUT .
+.It Sy ZPOOL_SCRIPTS_AS_ROOT
+Allow a privileged user to run
+.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
+Normally, only unprivileged users are allowed to run
+.Fl c .
+.It Sy ZPOOL_SCRIPTS_PATH
+The search path for scripts when running
+.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
+This is a colon-separated list of directories and overrides the default
+.Pa ~/.zpool.d
+and
+.Pa /etc/zfs/zpool.d
+search paths.
+.It Sy ZPOOL_SCRIPTS_ENABLED
+Allow a user to run
+.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
+If
+.Sy ZPOOL_SCRIPTS_ENABLED
+is not set, it is assumed that the user is allowed to run
+.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
+.\" Shared with zfs.8
+.It Sy ZFS_MODULE_TIMEOUT
+Time, in seconds, to wait for
+.Pa /dev/zfs
+to appear.
+Defaults to
+.Sy 10 ,
+max
+.Sy 600 Pq 10 minutes .
+If
+.Pf < Sy 0 ,
+wait forever; if
+.Sy 0 ,
+don't wait.
+.El
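+.Pp
+For example, to make
+.Nm zpool Cm import
+search only stable by-id device names
+.Pq the pool name is a placeholder :
+.Dl # Sy ZPOOL_IMPORT_PATH Ns = Ns Pa /dev/disk/by-id Nm zpool Cm import Ar tank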
+.
+.Sh INTERFACE STABILITY
+.Sy Evolving
+.
+.Sh SEE ALSO
+.Xr zfs 4 ,
+.Xr zpool-features 7 ,
+.Xr zpoolconcepts 7 ,
+.Xr zpoolprops 7 ,
+.Xr zed 8 ,
+.Xr zfs 8 ,
+.Xr zpool-add 8 ,
+.Xr zpool-attach 8 ,
+.Xr zpool-checkpoint 8 ,
+.Xr zpool-clear 8 ,
+.Xr zpool-create 8 ,
+.Xr zpool-ddtprune 8 ,
+.Xr zpool-destroy 8 ,
+.Xr zpool-detach 8 ,
+.Xr zpool-events 8 ,
+.Xr zpool-export 8 ,
+.Xr zpool-get 8 ,
+.Xr zpool-history 8 ,
+.Xr zpool-import 8 ,
+.Xr zpool-initialize 8 ,
+.Xr zpool-iostat 8 ,
+.Xr zpool-labelclear 8 ,
+.Xr zpool-list 8 ,
+.Xr zpool-offline 8 ,
+.Xr zpool-online 8 ,
+.Xr zpool-prefetch 8 ,
+.Xr zpool-reguid 8 ,
+.Xr zpool-remove 8 ,
+.Xr zpool-reopen 8 ,
+.Xr zpool-replace 8 ,
+.Xr zpool-resilver 8 ,
+.Xr zpool-scrub 8 ,
+.Xr zpool-set 8 ,
+.Xr zpool-split 8 ,
+.Xr zpool-status 8 ,
+.Xr zpool-sync 8 ,
+.Xr zpool-trim 8 ,
+.Xr zpool-upgrade 8 ,
+.Xr zpool-wait 8
diff --git a/share/man/man8/zstream.8 b/share/man/man8/zstream.8
@@ -0,0 +1,199 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2020 by Delphix. All rights reserved.
+.\"
+.Dd October 4, 2022
+.Dt ZSTREAM 8
+.Os
+.
+.Sh NAME
+.Nm zstream
+.Nd manipulate ZFS send streams
+.Sh SYNOPSIS
+.Nm
+.Cm dump
+.Op Fl Cvd
+.Op Ar file
+.Nm
+.Cm decompress
+.Op Fl v
+.Op Ar object Ns Sy \&, Ns Ar offset Ns Op Sy \&, Ns Ar type Ns ...
+.Nm
+.Cm redup
+.Op Fl v
+.Ar file
+.Nm
+.Cm token
+.Ar resume_token
+.Nm
+.Cm recompress
+.Op Fl l Ar level
+.Ar algorithm
+.
+.Sh DESCRIPTION
+The
+.Sy zstream
+utility manipulates ZFS send streams output by the
+.Sy zfs send
+command.
+.Bl -tag -width ""
+.It Xo
+.Nm
+.Cm dump
+.Op Fl Cvd
+.Op Ar file
+.Xc
+Print information about the specified send stream, including headers and
+record counts.
+The send stream may either be in the specified
+.Ar file ,
+or provided on standard input.
+.Bl -tag -width "-D"
+.It Fl C
+Suppress the validation of checksums.
+.It Fl v
+Verbose.
+Print metadata for each record.
+.It Fl d
+Dump data contained in each record.
+Implies verbose.
+.El
+.Pp
+The
+.Nm zstreamdump
+alias is provided for compatibility and is equivalent to running
+.Nm
+.Cm dump .
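+.Pp
+For example, to print per-record metadata for a stream generated from a
+hypothetical snapshot:
+.Dl # Nm zfs Cm send Ar tank/fs@snap | Nm zstream Cm dump Fl v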
+.It Xo
+.Nm
+.Cm token
+.Ar resume_token
+.Xc
+Dumps zfs resume token information
+.It Xo
+.Nm
+.Cm decompress
+.Op Fl v
+.Op Ar object Ns Sy \&, Ns Ar offset Ns Op Sy \&, Ns Ar type Ns ...
+.Xc
+Decompress selected records in a ZFS send stream provided on standard input,
+when the compression type recorded in ZFS metadata may be incorrect.
+Specify the object number and byte offset of each record that you wish to
+decompress.
+Optionally specify the compression type.
+Valid compression types include
+.Sy off ,
+.Sy gzip ,
+.Sy lz4 ,
+.Sy lzjb ,
+.Sy zstd ,
+and
+.Sy zle .
+The default is
+.Sy lz4 .
+Every record for that object beginning at that offset will be decompressed, if
+possible.
+It may not be possible, because the record may be corrupted in some but not
+all of the stream's snapshots.
+Specifying a compression type of
+.Sy off
+will change the stream's metadata accordingly, without attempting decompression.
+This can be useful if the record is already uncompressed but the metadata
+insists otherwise.
+The repaired stream will be written to standard output.
+.Bl -tag -width "-v"
+.It Fl v
+Verbose.
+Print summary of decompressed records.
+.El
+.It Xo
+.Nm
+.Cm redup
+.Op Fl v
+.Ar file
+.Xc
+Deduplicated send streams can be generated by using the
+.Nm zfs Cm send Fl D
+command.
+The ability to send deduplicated send streams is deprecated.
+In the future, the ability to receive a deduplicated send stream with
+.Nm zfs Cm receive
+will be removed.
+However, deduplicated send streams can still be received by utilizing
+.Nm zstream Cm redup .
+.Pp
+The
+.Nm zstream Cm redup
+command is provided a
+.Ar file
+containing a deduplicated send stream, and outputs an equivalent
+non-deduplicated send stream on standard output.
+Therefore, a deduplicated send stream can be received by running:
+.Dl # Nm zstream Cm redup Pa DEDUP_STREAM_FILE | Nm zfs Cm receive No …
+.Bl -tag -width "-D"
+.It Fl v
+Verbose.
+Print summary of converted records.
+.El
+.It Xo
+.Nm
+.Cm recompress
+.Op Fl l Ar level
+.Ar algorithm
+.Xc
+Recompresses a send stream, provided on standard input, using the provided
+algorithm and optional level, and writes the modified stream to standard output.
+All WRITE records in the send stream will be recompressed, unless they fail
+to result in size reduction compared to being left uncompressed.
+The provided algorithm can be any valid value to the
+.Nm compress
+property.
+Note that encrypted send streams cannot be recompressed.
+.Bl -tag -width "-l"
+.It Fl l Ar level
+Specifies compression level.
+Only needed for algorithms where the level is not implied as part of the name
+of the algorithm (e.g. gzip-3 does not require it, while zstd does, if a
+non-default level is desired).
+.El
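+.Pp
+For example, to recompress every eligible record with
+.Sy zstd
+.Pq dataset names are placeholders :
+.Dl # Nm zfs Cm send Ar tank/fs@snap | Nm zstream Cm recompress Ar zstd | Nm zfs Cm recv Ar tank/copy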
+.El
+.
+.Sh EXAMPLES
+Heal a dataset that was corrupted due to OpenZFS bug #12762.
+First, determine which records are corrupt.
+That cannot be done automatically; it requires information beyond ZFS's
+metadata.
+If object
+.Sy 128
+is corrupted at offset
+.Sy 0
+and is compressed using
+.Sy lz4 ,
+then run this command:
+.Bd -literal
+.No # Nm zfs Cm send Fl c Ar … | Nm zstream Cm decompress Ar 128,0,lz4 | Nm zfs Cm recv Ar …
+.Ed
+.Sh SEE ALSO
+.Xr zfs 8 ,
+.Xr zfs-receive 8 ,
+.Xr zfs-send 8 ,
+.Lk https://github.com/openzfs/zfs/issues/12762
diff --git a/share/man/man8/zstreamdump.8 b/share/man/man8/zstreamdump.8
@@ -0,0 +1,199 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2020 by Delphix. All rights reserved.
+.\"
+.Dd October 4, 2022
+.Dt ZSTREAM 8
+.Os
+.
+.Sh NAME
+.Nm zstream
+.Nd manipulate ZFS send streams
+.Sh SYNOPSIS
+.Nm
+.Cm dump
+.Op Fl Cvd
+.Op Ar file
+.Nm
+.Cm decompress
+.Op Fl v
+.Op Ar object Ns Sy \&, Ns Ar offset Ns Op Sy \&, Ns Ar type Ns ...
+.Nm
+.Cm redup
+.Op Fl v
+.Ar file
+.Nm
+.Cm token
+.Ar resume_token
+.Nm
+.Cm recompress
+.Op Fl l Ar level
+.Ar algorithm
+.
+.Sh DESCRIPTION
+The
+.Sy zstream
+utility manipulates ZFS send streams output by the
+.Sy zfs send
+command.
+.Bl -tag -width ""
+.It Xo
+.Nm
+.Cm dump
+.Op Fl Cvd
+.Op Ar file
+.Xc
+Print information about the specified send stream, including headers and
+record counts.
+The send stream may either be in the specified
+.Ar file ,
+or provided on standard input.
+.Bl -tag -width "-D"
+.It Fl C
+Suppress the validation of checksums.
+.It Fl v
+Verbose.
+Print metadata for each record.
+.It Fl d
+Dump data contained in each record.
+Implies verbose.
+.El
+.Pp
+The
+.Nm zstreamdump
+alias is provided for compatibility and is equivalent to running
+.Nm
+.Cm dump .
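+.Pp
+For example, to print per-record metadata for a stream generated from a
+hypothetical snapshot:
+.Dl # Nm zfs Cm send Ar tank/fs@snap | Nm zstream Cm dump Fl v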
+.It Xo
+.Nm
+.Cm token
+.Ar resume_token
+.Xc
+Dumps ZFS resume token information.
+.It Xo
+.Nm
+.Cm decompress
+.Op Fl v
+.Op Ar object Ns Sy \&, Ns Ar offset Ns Op Sy \&, Ns Ar type Ns ...
+.Xc
+Decompress selected records in a ZFS send stream provided on standard input,
+when the compression type recorded in ZFS metadata may be incorrect.
+Specify the object number and byte offset of each record that you wish to
+decompress.
+Optionally specify the compression type.
+Valid compression types include
+.Sy off ,
+.Sy gzip ,
+.Sy lz4 ,
+.Sy lzjb ,
+.Sy zstd ,
+and
+.Sy zle .
+The default is
+.Sy lz4 .
+Every record for that object beginning at that offset will be decompressed, if
+possible.
+It may not be possible, because the record may be corrupted in some but not
+all of the stream's snapshots.
+Specifying a compression type of
+.Sy off
+will change the stream's metadata accordingly, without attempting decompression.
+This can be useful if the record is already uncompressed but the metadata
+insists otherwise.
+The repaired stream will be written to standard output.
+.Bl -tag -width "-v"
+.It Fl v
+Verbose.
+Print summary of decompressed records.
+.El
+.It Xo
+.Nm
+.Cm redup
+.Op Fl v
+.Ar file
+.Xc
+Deduplicated send streams can be generated by using the
+.Nm zfs Cm send Fl D
+command.
+The ability to send deduplicated send streams is deprecated.
+In the future, the ability to receive a deduplicated send stream with
+.Nm zfs Cm receive
+will be removed.
+However, deduplicated send streams can still be received by utilizing
+.Nm zstream Cm redup .
+.Pp
+The
+.Nm zstream Cm redup
+command is provided a
+.Ar file
+containing a deduplicated send stream, and outputs an equivalent
+non-deduplicated send stream on standard output.
+Therefore, a deduplicated send stream can be received by running:
+.Dl # Nm zstream Cm redup Pa DEDUP_STREAM_FILE | Nm zfs Cm receive No …
+.Bl -tag -width "-D"
+.It Fl v
+Verbose.
+Print summary of converted records.
+.El
+.It Xo
+.Nm
+.Cm recompress
+.Op Fl l Ar level
+.Ar algorithm
+.Xc
+Recompresses a send stream, provided on standard input, using the provided
+algorithm and optional level, and writes the modified stream to standard output.
+All WRITE records in the send stream will be recompressed, unless they fail
+to result in size reduction compared to being left uncompressed.
+The provided algorithm can be any valid value to the
+.Nm compress
+property.
+Note that encrypted send streams cannot be recompressed.
+.Bl -tag -width "-l"
+.It Fl l Ar level
+Specifies compression level.
+Only needed for algorithms where the level is not implied as part of the name
+of the algorithm (e.g. gzip-3 does not require it, while zstd does, if a
+non-default level is desired).
+.El
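+.Pp
+For example, to recompress every eligible record with
+.Sy zstd
+.Pq dataset names are placeholders :
+.Dl # Nm zfs Cm send Ar tank/fs@snap | Nm zstream Cm recompress Ar zstd | Nm zfs Cm recv Ar tank/copy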
+.El
+.
+.Sh EXAMPLES
+Heal a dataset that was corrupted due to OpenZFS bug #12762.
+First, determine which records are corrupt.
+That cannot be done automatically; it requires information beyond ZFS's
+metadata.
+If object
+.Sy 128
+is corrupted at offset
+.Sy 0
+and is compressed using
+.Sy lz4 ,
+then run this command:
+.Bd -literal
+.No # Nm zfs Ar send Fl c Ar … | Nm zstream decompress Ar 128,0,lz4 | \
+Nm zfs recv Ar …
+.Ed
+.Sh SEE ALSO
+.Xr zfs 8 ,
+.Xr zfs-receive 8 ,
+.Xr zfs-send 8 ,
+.Lk https://github.com/openzfs/zfs/issues/12762