
oasis-root

Compiled tree of Oasis Linux, based on my own branch at <https://hacktivis.me/git/oasis/>.

git clone https://anongit.hacktivis.me/git/oasis-root.git

zpool-attach.8 (4833B)


.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or https://opensource.org/licenses/CDDL-1.0.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd June 28, 2023
.Dt ZPOOL-ATTACH 8
.Os
.
.Sh NAME
.Nm zpool-attach
.Nd attach new device to existing ZFS vdev
.Sh SYNOPSIS
.Nm zpool
.Cm attach
.Op Fl fsw
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.
.Sh DESCRIPTION
Attaches
.Ar new_device
to the existing
.Ar device .
The behavior differs depending on whether the existing
.Ar device
is a RAID-Z device or a mirror/plain device.
.Pp
If the existing device is a mirror or plain device
.Pq e.g. specified as Qo Li sda Qc or Qq Li mirror-7 ,
the new device will be mirrored with the existing device, a resilver will be
initiated, and the new device will contribute to additional redundancy once the
resilver completes.
If
.Ar device
is not currently part of a mirrored configuration,
.Ar device
automatically transforms into a two-way mirror of
.Ar device
and
.Ar new_device .
If
.Ar device
is part of a two-way mirror, attaching
.Ar new_device
creates a three-way mirror, and so on.
In either case,
.Ar new_device
begins to resilver immediately and any running scrub is cancelled.
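.\" Illustrative example added for clarity; the pool name "tank" and the
.\" device names "sda" and "sdb" are placeholders.
.Pp
For example, attaching
.Ar sdb
to a plain device
.Ar sda
in the pool
.Ar tank
converts it into a two-way mirror:
.Bd -literal -offset Ds
# zpool attach tank sda sdb
.Ed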
.Pp
If the existing device is a RAID-Z device
.Pq e.g. specified as Qq Ar raidz2-0 ,
the new device will become part of that RAID-Z group.
A "raidz expansion" will be initiated, and once the expansion completes,
the new device will contribute additional space to the RAID-Z group.
The expansion entails reading all allocated space from existing disks in the
RAID-Z group, and rewriting it to the new disks in the RAID-Z group (including
the newly added
.Ar device ) .
Its progress can be monitored with
.Nm zpool Cm status .
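.\" Illustrative example added for clarity; the pool name "tank", the vdev
.\" name "raidz2-0", and the device name "sdf" are placeholders.
.Pp
For example, to expand a RAID-Z group and then check its progress:
.Bd -literal -offset Ds
# zpool attach tank raidz2-0 sdf
# zpool status tank
.Ed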
.Pp
Data redundancy is maintained during and after the expansion.
If a disk fails while the expansion is in progress, the expansion pauses until
the health of the RAID-Z vdev is restored (e.g. by replacing the failed disk
and waiting for reconstruction to complete).
Expansion does not change the number of failures that can be tolerated
without data loss (e.g. a RAID-Z2 is still a RAID-Z2 even after expansion).
A RAID-Z vdev can be expanded multiple times.
.Pp
After the expansion completes, old blocks retain their old data-to-parity
ratio
.Pq e.g. 5-wide RAID-Z2 has 3 data and 2 parity
but distributed among the larger set of disks.
New blocks will be written with the new data-to-parity ratio (e.g. a 5-wide
RAID-Z2 which has been expanded once to 6-wide, has 4 data and 2 parity).
However, the vdev's assumed parity ratio does not change, so slightly less
space than is expected may be reported for newly-written blocks, according to
.Nm zfs Cm list ,
.Nm df ,
.Nm ls Fl s ,
and similar tools.
.Pp
A pool-wide scrub is initiated at the end of the expansion in order to verify
the checksums of all blocks which have been copied during the expansion.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Xr zpoolprops 7
manual page for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.It Fl s
When attaching to a mirror or plain device, the
.Ar new_device
is reconstructed sequentially to restore redundancy as quickly as possible.
Checksums are not verified during sequential reconstruction so a scrub is
started when the resilver completes.
.It Fl w
Waits until
.Ar new_device
has finished resilvering or expanding before returning.
.El
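.\" Illustrative example added for clarity; the pool name "tank" and the
.\" device names "sda" and "sdb" are placeholders.
.Pp
For example, to attach
.Ar sdb
to
.Ar sda
in the pool
.Ar tank
using sequential reconstruction, waiting until the resilver finishes before
returning:
.Bd -literal -offset Ds
# zpool attach -sw tank sda sdb
.Ed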
.
.Sh SEE ALSO
.Xr zpool-add 8 ,
.Xr zpool-detach 8 ,
.Xr zpool-import 8 ,
.Xr zpool-initialize 8 ,
.Xr zpool-online 8 ,
.Xr zpool-replace 8 ,
.Xr zpool-resilver 8