
oasis-root

Compiled tree of Oasis Linux, based on its own branch at <https://hacktivis.me/git/oasis/>

git clone https://anongit.hacktivis.me/git/oasis-root.git

zpoolprops.7 (18511B)


  1. .\"
  2. .\" CDDL HEADER START
  3. .\"
  4. .\" The contents of this file are subject to the terms of the
  5. .\" Common Development and Distribution License (the "License").
  6. .\" You may not use this file except in compliance with the License.
  7. .\"
  8. .\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
  9. .\" or https://opensource.org/licenses/CDDL-1.0.
  10. .\" See the License for the specific language governing permissions
  11. .\" and limitations under the License.
  12. .\"
  13. .\" When distributing Covered Code, include this CDDL HEADER in each
  14. .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
  15. .\" If applicable, add the following below this CDDL HEADER, with the
  16. .\" fields enclosed by brackets "[]" replaced with your own identifying
  17. .\" information: Portions Copyright [yyyy] [name of copyright owner]
  18. .\"
  19. .\" CDDL HEADER END
  20. .\"
  21. .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
  22. .\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
  23. .\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
  24. .\" Copyright (c) 2017 Datto Inc.
  25. .\" Copyright (c) 2018 George Melikov. All Rights Reserved.
  26. .\" Copyright 2017 Nexenta Systems, Inc.
  27. .\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
  28. .\" Copyright (c) 2021, Colm Buckley <colm@tuatha.org>
  29. .\" Copyright (c) 2023, Klara Inc.
  30. .\"
  31. .Dd November 18, 2024
  32. .Dt ZPOOLPROPS 7
  33. .Os
  34. .
  35. .Sh NAME
  36. .Nm zpoolprops
  37. .Nd properties of ZFS storage pools
  38. .
  39. .Sh DESCRIPTION
  40. Each pool has several properties associated with it.
  41. Some properties are read-only statistics while others are configurable and
  42. change the behavior of the pool.
  43. .Pp
  44. User properties have no effect on ZFS behavior.
  45. Use them to annotate pools in a way that is meaningful in your environment.
  46. For more information about user properties, see the
  47. .Sx User Properties
  48. section.
  49. .Pp
  50. The following are read-only properties:
  51. .Bl -tag -width "unsupported@guid"
  52. .It Sy allocated
  53. Amount of storage used within the pool.
  54. See
  55. .Sy fragmentation
  56. and
  57. .Sy free
  58. for more information.
  59. .It Sy bcloneratio
  60. The ratio of the total amount of storage that would be required to store all
  61. the cloned blocks without cloning to the actual storage used.
  62. The
  63. .Sy bcloneratio
  64. property is calculated as:
  65. .Pp
  66. .Sy ( ( bclonesaved + bcloneused ) * 100 ) / bcloneused
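.Pp
As a worked example (the values are illustrative): with 6 GiB of
.Sy bclonesaved
and 2 GiB of
.Sy bcloneused ,
this yields ( ( 6 + 2 ) * 100 ) / 2 = 400, i.e. a ratio of 4.00x.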
.It Sy bclonesaved
The amount of additional storage that would be required if block cloning
were not used.
.It Sy bcloneused
The amount of storage used by cloned blocks.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy dedupcached
Total size of the deduplication table currently loaded into the ARC.
See
.Xr zpool-prefetch 8 .
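.Pp
For example, the deduplication table of a hypothetical pool named
.Ar tank
can be prefetched into the ARC with:
.Dl # zpool prefetch -t ddt tank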
.It Sy dedup_table_size
Total on-disk size of the deduplication table.
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
On whole-disk vdevs, this is the space beyond the end of the GPT –
typically occurring when a LUN is dynamically expanded
or a disk replaced with a larger one.
On partition vdevs, this is the space appended to the partition after it was
added to the pool – most likely by resizing it in-place.
The space can be claimed for the pool by bringing it online with
.Sy autoexpand=on
or using
.Nm zpool Cm online Fl e .
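.Pp
For example, after expanding the LUN backing a device
.Pa sda
in a hypothetical pool named
.Ar tank ,
the new space could be claimed with:
.Dl # zpool online -e tank sda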
.It Sy fragmentation
The amount of fragmentation in the pool.
As the amount of space
.Sy allocated
increases, it becomes more difficult to locate
.Sy free
space.
This may result in lower write performance compared to pools with more
unfragmented free space.
.It Sy free
The amount of free space available in the pool.
By contrast, the
.Xr zfs 8
.Sy available
property describes how much new data can be written to ZFS filesystems/volumes.
The zpool
.Sy free
property is not generally useful for this purpose, and can be substantially more
than the zfs
.Sy available
space.
This discrepancy is due to several factors, including raidz parity;
zfs reservation, quota, refreservation, and refquota properties; and space set
aside by
.Sy spa_slop_shift
(see
.Xr zfs 4
for more information).
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy guid
A unique identifier for the pool.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy last_scrubbed_txg
Indicates the transaction group (TXG) up to which the most recent scrub
operation has checked and repaired the dataset.
This provides insight into the data integrity status of the pool at
a specific point in time.
.Xr zpool-scrub 8
can utilize this property to scan only data that has changed since the last
scrub completed, when given the
.Fl C
flag.
This property is not updated when performing an error scrub with the
.Fl e
flag.
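.Pp
For example, to scrub only data written since the last completed scrub of a
hypothetical pool named
.Ar tank :
.Dl # zpool scrub -C tank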
.It Sy leaked
Space not released while
.Sy freeing
due to corruption, now permanently leaked into the pool.
.It Sy load_guid
A unique identifier for the pool.
Unlike the
.Sy guid
property, this identifier is generated every time we load the pool (i.e. does
not persist across imports/exports) and never changes while the pool is loaded
(even if a
.Sy reguid
operation takes place).
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 7
for details.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm zpool
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical paths are not
valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
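.Pp
For example, a hypothetical pool named
.Ar tank
can be imported with its mount points relocated under
.Pa /mnt
(the
.Fl R
option of
.Nm zpool Cm import
sets
.Sy altroot ) :
.Dl # zpool import -R /mnt tank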
.El
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
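.Pp
For example, assuming a hypothetical pool named
.Ar tank :
.Dl # zpool import -o readonly=on tank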
.El
.Pp
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Ar ashift
Pool sector size exponent, as a power of
.Sy 2
(internally referred to as
.Sy ashift ) .
Values from 9 to 16, inclusive, are valid; also, the
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list.
I/O operations will be aligned to the specified size boundaries.
Additionally, the minimum (disk)
write size will be set to the specified size, so this represents a
space/performance trade-off.
For optimal performance, the pool sector size should be greater than
or equal to the sector size of the underlying disks.
The typical case for setting this property is when
performance is important and the underlying disks use 4KiB sectors but
report 512B sectors to the OS (for compatibility reasons); in that
case, set
.Sy ashift Ns = Ns Sy 12
(which is
.Sy 1<<12 No = Sy 4096 ) .
When set, this property is
used as the default hint value in subsequent vdev operations (add,
attach and replace).
Changing this value will not modify any existing
vdev, not even on disk replacement; however, it can be used, for
instance, to replace a dying 512B-sector disk with a newer 4KiB-sector
device: this will probably result in bad performance but at the
same time could prevent loss of data.
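.Pp
For example, to create a pool aligned for 4KiB-sector disks (pool and device
names are illustrative):
.Dl # zpool create -o ashift=12 tank sda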
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths set up by
vdev_id.conf.
See the
.Xr vdev_id 8
manual page for more details.
Autoreplace and autoonline require the ZFS Event Daemon be configured and
running.
See the
.Xr zed 8
manual page for more details.
.It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
When set to
.Sy on ,
space which has been recently freed, and is no longer allocated by the pool,
will be periodically trimmed.
This allows block device vdevs which support
BLKDISCARD, such as SSDs, or file vdevs on which the underlying file system
supports hole-punching, to reclaim unused blocks.
The default value for this property is
.Sy off .
.Pp
Automatic TRIM does not immediately reclaim blocks after a free.
Instead, it will optimistically delay allowing smaller ranges to be aggregated
into a few larger ones.
These can then be issued more efficiently to the storage.
TRIM on L2ARC devices is enabled by setting
.Sy l2arc_trim_ahead > 0 .
.Pp
Be aware that automatic trimming of recently freed data blocks can put
significant stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
For lower-end devices it is often possible to achieve most of the benefits
of automatic trimming by running an on-demand (manual) TRIM periodically
using the
.Nm zpool Cm trim
command.
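.Pp
For example, on a hypothetical pool named
.Ar tank :
.Bd -literal -compact -offset Ds
# zpool set autotrim=on tank
# zpool trim tank
.Ed
.Pp
The first command enables automatic trimming; the second performs the
on-demand alternative described above.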
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns Op / Ns Ar dataset
Identifies the default bootable dataset for the root pool.
This property is expected to be set mainly by the installation and upgrade
programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location,
from which it can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the value
.Sy none
creates a temporary pool that is never cached, and the
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file will be empty.
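.Pp
For example, a pool created with a custom cache file can later (e.g. after an
export or a reboot) be rediscovered from it (the path and pool name are
illustrative):
.Bd -literal -compact -offset Ds
# zpool create -o cachefile=/etc/zfs/alt.cache tank sda
# zpool import -c /etc/zfs/alt.cache tank
.Ed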
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
.It Sy compatibility Ns = Ns Sy off Ns | Ns Sy legacy Ns | Ns Ar file Ns Oo , Ns Ar file Oc Ns …
Specifies that the pool maintain compatibility with specific feature sets.
When set to
.Sy off
(or unset) compatibility is disabled (all features may be enabled); when set to
.Sy legacy
no features may be enabled.
When set to a comma-separated list of filenames
(each filename may either be an absolute path, or relative to
.Pa /etc/zfs/compatibility.d
or
.Pa /usr/share/zfs/compatibility.d )
the lists of requested features are read from those files, separated by
whitespace and/or commas.
Only features present in all files may be enabled.
.Pp
See
.Xr zpool-features 7 ,
.Xr zpool-create 8
and
.Xr zpool-upgrade 8
for more information on the operation of compatibility feature sets.
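.Pp
For example, to create a pool restricted to features that GRUB 2 can read,
assuming the
.Pa grub2
feature-set file shipped in the compatibility.d directories (pool and device
names are illustrative):
.Dl # zpool create -o compatibility=grub2 bpool sda1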
.It Sy dedup_table_quota Ns = Ns Ar number Ns | Ns Sy none Ns | Ns Sy auto
This property sets a limit on the on-disk size of the pool's dedup table.
Entries will not be added to the dedup table once this size is reached;
if a dedup table already exists, and is larger than this size, its entries
will not be removed as part of setting this property.
Existing entries will still have their reference counts updated.
.Pp
The actual size limit of the table may be above or below the quota,
depending on the actual on-disk size of the entries (which may be
approximated for purposes of calculating the quota).
That is, setting a quota size of 1M may result in the maximum size being
slightly below, or slightly above, that value.
Set to
.Sy none
to disable.
In automatic mode, which is the default, the size of a dedicated dedup vdev
is used as the quota limit.
.Pp
The
.Sy dedup_table_quota
property works for both legacy and fast dedup tables.
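.Pp
For example, to cap the on-disk dedup table of a hypothetical pool named
.Ar tank
at 10 GiB:
.Dl # zpool set dedup_table_quota=10G tank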
.It Sy dedupditto Ns = Ns Ar number
This property is deprecated and no longer has any effect.
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared with
.Nm zpool Cm clear .
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 7
for details on feature states.
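.Pp
For example, to enable the long-standing
.Sy lz4_compress
feature on a hypothetical pool named
.Ar tank :
.Dl # zpool set feature@lz4_compress=enabled tank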
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
option.
This property is intended to be used in failover configurations
where multiple hosts have access to a pool on shared storage.
.Pp
Multihost provides protection on import only.
It does not protect against an
individual device being used in multiple pools, regardless of the type of vdev.
See the discussion under
.Nm zpool Cm create .
.Pp
When this property is on, periodic writes to storage occur to show the pool is
in use.
See
.Sy zfs_multihost_interval
in the
.Xr zfs 4
manual page.
In order to enable this property, each host must set a unique hostid.
See
.Xr genhostid 1 ,
.Xr zgenhostid 8 ,
and
.Xr spl 4
for additional details.
The default value is
.Sy off .
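.Pp
For example, on each host sharing the storage (pool name is illustrative;
.Xr zgenhostid 8
creates
.Pa /etc/hostid
if it does not already exist):
.Bd -literal -compact -offset Ds
# zgenhostid
# zpool set multihost=on tank
.Ed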
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
.El
.
.Ss User Properties
In addition to the standard native properties, ZFS supports arbitrary user
properties.
User properties have no effect on ZFS behavior, but applications or
administrators can use them to annotate pools.
.Pp
User property names must contain a colon
.Pq Qq Sy \&:
character to distinguish them from native properties.
They may contain lowercase letters, numbers, and the following punctuation
characters: colon
.Pq Qq Sy \&: ,
dash
.Pq Qq Sy - ,
period
.Pq Qq Sy \&. ,
and underscore
.Pq Qq Sy _ .
The expected convention is that the property name is divided into two portions
such as
.Ar module : Ns Ar property ,
but this namespace is not enforced by ZFS.
User property names can be at most 255 characters, and cannot begin with a dash
.Pq Qq Sy - .
.Pp
When making programmatic use of user properties, it is strongly suggested to use
a reversed DNS domain name for the
.Ar module
component of property names to reduce the chance that two
independently-developed packages use the same property name for different
purposes.
.Pp
The values of user properties are arbitrary strings and
are never validated.
All of the commands that operate on properties
.Po Nm zpool Cm list ,
.Nm zpool Cm get ,
.Nm zpool Cm set ,
and so forth
.Pc
can be used to manipulate both native properties and user properties.
Use
.Nm zpool Cm set Ar name Ns =
to clear a user property.
Property values are limited to 8192 bytes.
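.Pp
For example, using a hypothetical reversed-DNS property name on a hypothetical
pool named
.Ar tank :
.Bd -literal -compact -offset Ds
# zpool set com.example:department=12345 tank
# zpool get com.example:department tank
# zpool set com.example:department= tank
.Ed
.Pp
The last command clears the property by setting it to the empty string.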