zfs.html (1262B)
- <main>
- <dl>
- <dt><code>zpool attach</code></dt><dd>attach a device to an existing vdev, i.e. mirror/parallel (see the examples after this list)</dd>
- <dt><code>zpool add</code></dt><dd>add a new vdev to the pool outside the existing ones, i.e. serial/striped (<strong>always</strong> try with <code>-n</code> first)</dd>
- <dt><code>zpool offline -f</code></dt><dd><strong>don't</strong>. It is meant to manually fault a device that ZFS has not yet detected as faulty, and it is <strong>permanent</strong>. If you want to recover a pool where you did an erroneous <code>zpool add</code> and then <code>zpool offline -f</code> like the idiot that I am, you have to raise <code>zfs_max_missing_tvds</code> to 1 (a ZFS kernel-module parameter on Linux); the pool can then be imported and mounted read-only (which I then recovered using <code>zfs send | zfs receive</code> plus rsync for the still-missing bits; a sketch follows after this list). <a href="https://github.com/openzfs/zfs/issues/10254">Cannot import pool with one device forced offline #10254</a></dd>
- </dl>
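- <p>A minimal illustration of the attach/add difference; pool and device names here are made up:</p>
- <pre><code># attach sdb to sda: the single-disk vdev becomes a two-way mirror
- zpool attach tank sda sdb
- # add a whole new mirror vdev, striped with the existing vdevs -- dry-run first, then for real
- zpool add -n tank mirror sdc sdd
- zpool add tank mirror sdc sdd
- </code></pre>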
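- <p>Sketch of the read-only recovery path described above, assuming the damaged pool is called <code>tank</code>, a healthy pool <code>rescue</code> exists, and an older snapshot is still around (the pool is read-only, so new snapshots cannot be created); the sysfs path is the usual module-parameter location on Linux:</p>
- <pre><code># allow importing a pool with one missing top-level vdev
- echo 1 > /sys/module/zfs/parameters/zfs_max_missing_tvds
- # import the damaged pool read-only
- zpool import -o readonly=on tank
- # copy what an existing snapshot covers onto the healthy pool
- zfs send -R tank@old-snap | zfs receive -F rescue/tank
- # fetch anything newer than the snapshot from the mounted (read-only) filesystem
- rsync -a /tank/ /rescue/tank/
- </code></pre>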
- <ul>
- <li><p>Bug: files at the correct size but zeroed out (happens on Gentoo with .so files)<br />
- URL: <a href="https://github.com/openzfs/zfs/issues/3125">Move to ZFS volume creates correctly sized file filled with \0 #3125</a><br />
- Workaround: disable <code>xattr</code> (or maybe use <code>sync=always</code>, but it's slow); see the commands after this list</p></li>
- </ul>
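- <p>The workaround as commands (the dataset name is a placeholder):</p>
- <pre><code>zfs set xattr=off tank/gentoo
- # or, slower:
- zfs set sync=always tank/gentoo
- </code></pre>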
- </main>