
oasis-root

Compiled tree of Oasis Linux, based on my own branch at <https://hacktivis.me/git/oasis/>. Clone with: git clone https://anongit.hacktivis.me/git/oasis-root.git

zpool-create.8 (7740B)


  1. .\"
  2. .\" CDDL HEADER START
  3. .\"
  4. .\" The contents of this file are subject to the terms of the
  5. .\" Common Development and Distribution License (the "License").
  6. .\" You may not use this file except in compliance with the License.
  7. .\"
  8. .\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
  9. .\" or https://opensource.org/licenses/CDDL-1.0.
  10. .\" See the License for the specific language governing permissions
  11. .\" and limitations under the License.
  12. .\"
  13. .\" When distributing Covered Code, include this CDDL HEADER in each
  14. .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
  15. .\" If applicable, add the following below this CDDL HEADER, with the
  16. .\" fields enclosed by brackets "[]" replaced with your own identifying
  17. .\" information: Portions Copyright [yyyy] [name of copyright owner]
  18. .\"
  19. .\" CDDL HEADER END
  20. .\"
  21. .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
  22. .\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
  23. .\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
  24. .\" Copyright (c) 2017 Datto Inc.
  25. .\" Copyright (c) 2018 George Melikov. All Rights Reserved.
  26. .\" Copyright 2017 Nexenta Systems, Inc.
  27. .\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
  28. .\" Copyright (c) 2021, Colm Buckley <colm@tuatha.org>
  29. .\"
  30. .Dd March 16, 2022
  31. .Dt ZPOOL-CREATE 8
  32. .Os
  33. .
  34. .Sh NAME
  35. .Nm zpool-create
  36. .Nd create ZFS storage pool
  37. .Sh SYNOPSIS
  38. .Nm zpool
  39. .Cm create
  40. .Op Fl dfn
  41. .Op Fl m Ar mountpoint
  42. .Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
  43. .Oo Fl o Sy feature@ Ns Ar feature Ns = Ns Ar value Oc
  44. .Op Fl o Ar compatibility Ns = Ns Sy off Ns | Ns Sy legacy Ns | Ns Ar file Ns Oo , Ns Ar file Oc Ns …
  45. .Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns …
  46. .Op Fl R Ar root
  47. .Op Fl t Ar tname
  48. .Ar pool
  49. .Ar vdev Ns …
  50. .
  51. .Sh DESCRIPTION
  52. Creates a new storage pool containing the virtual devices specified on the
  53. command line.
  54. The pool name must begin with a letter, and can only contain
  55. alphanumeric characters as well as the underscore
  56. .Pq Qq Sy _ ,
  57. dash
  58. .Pq Qq Sy \&- ,
  59. colon
  60. .Pq Qq Sy \&: ,
  61. space
  62. .Pq Qq Sy \&\ ,
  63. and period
  64. .Pq Qq Sy \&. .
  65. The pool names
  66. .Sy mirror ,
  67. .Sy raidz ,
  68. .Sy draid ,
  69. .Sy spare
  70. and
  71. .Sy log
  72. are reserved, as are names beginning with
  73. .Sy mirror ,
  74. .Sy raidz ,
  75. .Sy draid ,
  76. and
  77. .Sy spare .
  78. The
  79. .Ar vdev
  80. specification is described in the
  81. .Sx Virtual Devices
  82. section of
  83. .Xr zpoolconcepts 7 .
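.Pp
For example, the following command creates a pool whose name uses only the
permitted characters; the pool name and device names here are placeholders:
.Dl # Nm zpool Cm create Ar backup-pool.2022 Sy mirror Pa sda sdb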
.Pp
The command attempts to verify that each device specified is accessible and not
currently in use by another subsystem.
However, this check is not robust enough
to detect simultaneous attempts to use a new device in different pools, even if
.Sy multihost Ns = Sy enabled .
The administrator must ensure that simultaneous invocations of any combination
of
.Nm zpool Cm replace ,
.Nm zpool Cm create ,
.Nm zpool Cm add ,
or
.Nm zpool Cm labelclear
do not refer to the same device.
Using the same device in two pools will result in pool corruption.
.Pp
There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevent a device from ever being used by ZFS.
Other uses, such as having a preexisting UFS file system, can be overridden with
.Fl f .
.Pp
The command also checks that the replication strategy for the pool is
consistent.
An attempt to combine redundant and non-redundant storage in a single pool,
or to mix disks and files, results in an error unless
.Fl f
is specified.
The use of differently-sized devices within a single raidz or mirror group is
also flagged as an error unless
.Fl f
is specified.
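For example, assuming
.Pa sda
and
.Pa sdb
differ in size, their use within a single mirror can still be forced:
.Dl # Nm zpool Cm create Fl f Ar tank Sy mirror Pa sda sdb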
.Pp
Unless the
.Fl R
option is specified, the default mount point is
.Pa / Ns Ar pool .
The mount point must not exist or must be empty, or else the root dataset
will not be able to be mounted.
This can be overridden with the
.Fl m
option.
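For example, the following command mounts the root dataset at
.Pa /export/tank
instead of the default
.Pa /tank :
.Dl # Nm zpool Cm create Fl m Pa /export/tank Ar tank Pa sda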
.Pp
By default all supported features are enabled on the new pool.
The
.Fl d
option and the
.Fl o Ar compatibility
property
.Pq e.g Fl o Sy compatibility Ns = Ns Ar 2020
can be used to restrict the features that are enabled, so that the
pool can be imported on other releases of ZFS.
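For example, the following command restricts the new pool to the
.Sy legacy
compatibility setting described in
.Xr zpool-features 7 :
.Dl # Nm zpool Cm create Fl o Ar compatibility Ns = Ns Sy legacy Ar tank Pa sda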
.Bl -tag -width "-t tname"
.It Fl d
Do not enable any features on the new pool.
Individual features can be enabled by setting their corresponding properties to
.Sy enabled
with
.Fl o .
See
.Xr zpool-features 7
for details about feature properties.
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl m Ar mountpoint
Sets the mount point for the root dataset.
The default mount point is
.Pa /pool
or
.Pa altroot/pool
if
.Sy altroot
is specified.
The mount point must be an absolute path,
.Sy legacy ,
or
.Sy none .
For more information on dataset mount points, see
.Xr zfsprops 7 .
.It Fl n
Displays the configuration that would be used without actually creating the
pool, as illustrated in the example following this list.
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See
.Xr zpoolprops 7
for a list of valid properties that can be set.
.It Fl o Ar compatibility Ns = Ns Sy off Ns | Ns Sy legacy Ns | Ns Ar file Ns Oo , Ns Ar file Oc Ns …
Specifies compatibility feature sets.
See
.Xr zpool-features 7
for more information about compatibility feature sets.
.It Fl o Sy feature@ Ns Ar feature Ns = Ns Ar value
Sets the given pool feature.
See the
.Xr zpool-features 7
section for a list of valid features that can be set.
Value can be either disabled or enabled.
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool.
See
.Xr zfsprops 7
for a list of valid properties that can be set.
.It Fl R Ar root
Equivalent to
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
.It Fl t Ar tname
Sets the in-core pool name to
.Ar tname
while the on-disk name will be the name specified as
.Ar pool .
This will set the default of the
.Sy cachefile
property to
.Sy none .
This is intended
to handle name space collisions when creating pools for other systems,
such as virtual machines or physical machines whose pools live on network
block devices.
.El
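.Pp
For example, the following command shows the configuration that the
corresponding
.Nm zpool Cm create
invocation would use, without writing anything to the devices:
.Dl # Nm zpool Cm create Fl n Ar tank Sy raidz Pa sda sdb sdc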
.
.Sh EXAMPLES
.\" These are, respectively, examples 1, 2, 3, 4, 11, 12 from zpool.8
.\" Make sure to update them bidirectionally
.Ss Example 1 : No Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that
consists of six disks:
.Dl # Nm zpool Cm create Ar tank Sy raidz Pa sda sdb sdc sdd sde sdf
.
.Ss Example 2 : No Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror
contains two disks:
.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy mirror Pa sdc sdd
.
.Ss Example 3 : No Creating a ZFS Storage Pool by Using Partitions
The following command creates a non-redundant pool using two disk partitions:
.Dl # Nm zpool Cm create Ar tank Pa sda1 sdb2
.
.Ss Example 4 : No Creating a ZFS Storage Pool by Using Files
The following command creates a non-redundant pool using files.
While not recommended, a pool based on files can be useful for experimental
purposes.
.Dl # Nm zpool Cm create Ar tank Pa /path/to/file/a /path/to/file/b
.
.Ss Example 5 : No Managing Hot Spares
The following command creates a new pool with an available hot spare:
.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy spare Pa sdc
.
.Ss Example 6 : No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two, two-way
mirrors and mirrored log devices:
.Dl # Nm zpool Cm create Ar pool Sy mirror Pa sda sdb Sy mirror Pa sdc sdd Sy log mirror Pa sde sdf
.
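.\" Examples 7 and 8 below are illustrative additions for this page and are
.\" not taken from zpool.8; the device names are placeholders.
.Ss Example 7 : No Creating a Pool Under a Temporary Name
The following command creates a pool whose on-disk name is
.Ar pool
but which is imported under the temporary in-core name
.Ar tmppool
with an alternate root of
.Pa /mnt ,
as when preparing a pool for another system:
.Dl # Nm zpool Cm create Fl t Ar tmppool Fl R Pa /mnt Ar pool Sy mirror Pa sda sdb
.
.Ss Example 8 : No Setting a File System Property at Creation
The following command creates a pool and sets the
.Sy compression
property, documented in
.Xr zfsprops 7 ,
on its root dataset:
.Dl # Nm zpool Cm create Fl O Ar compression Ns = Ns Sy on Ar tank Sy mirror Pa sda sdb
.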
.Sh SEE ALSO
.Xr zpool-destroy 8 ,
.Xr zpool-export 8 ,
.Xr zpool-import 8