commit: acaaad445393f9f0e1e820404ffb1c789b1f94ce
parent a8d2b57dcf96ccab94a340329290bff7b85627f9
Author: Haelwenn (lanodan) Monnier <contact@hacktivis.me>
Date: Sat, 16 May 2020 18:30:40 +0200
notes/zfs.html: New file
Diffstat:
1 file changed, 10 insertions(+), 0 deletions(-)
diff --git a/notes/zfs.html b/notes/zfs.html
@@ -0,0 +1,10 @@
+<dl>
+<dt><code>zpool attach</code></dt><dd>add a device to an existing vdev (i.e. mirror/parallel)</dd>
+<dt><code>zpool add</code></dt><dd>add a new top-level vdev to the pool, outside any existing vdev (i.e. serial); always try with <code>-n</code> first</dd>
+<dt><code>zpool offline -f</code></dt><dd><strong>don't</strong>. It was made to manually fault a device that ZFS hasn't detected as faulty yet, and it is <strong>permanent</strong>. If you want to recover a pool where you did an erroneous <code>zpool add</code> followed by <code>zpool offline -f</code>, like the idiot that I am, you have to raise <code>zfs_max_missing_tvds</code> to one (a ZFS kernel-module parameter on Linux); you can then import+mount the pool read-only (which I then recovered using <code>zfs send | zfs receive</code>, plus rsync to still get the missing bits). <a href="https://github.com/openzfs/zfs/issues/10254">Cannot import pool with one device forced offline #10254</a></dd>
+</dl>
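+<p>The recovery path above, sketched as shell commands (pool and dataset names like <code>tank</code> and <code>backup</code> are placeholders; assumes Linux with the <code>zfs</code> module loaded):</p>
+<pre><code># make the damaged pool importable despite the missing top-level vdev
+echo 1 > /sys/module/zfs/parameters/zfs_max_missing_tvds
+# import read-only, without mounting, under an alternate root
+zpool import -o readonly=on -N -R /mnt/recovery tank
+# copy the surviving data off to a healthy pool
+zfs snapshot -r tank@rescue
+zfs send -R tank@rescue | zfs receive -F backup/tank
+</code></pre>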
+<ul>
+<li><p>Bug: files at the correct size but zeroed out (happens on Gentoo with .so files)<br />
+URL: <a href="https://github.com/openzfs/zfs/issues/3125">Move to ZFS volume creates correctly sized file filled with \0 #3125</a><br />
+Workaround: disable <code>xattr</code> (or maybe use <code>sync=always</code>, but it's slow)</p></li>
+</ul>
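+<p>The workaround above, as shell commands (the dataset name is a placeholder; <code>xattr=sa</code> is another value worth testing before disabling xattrs entirely):</p>
+<pre><code># check the current setting on the affected dataset
+zfs get xattr tank/gentoo
+# disable extended attributes
+zfs set xattr=off tank/gentoo
+# or, slower: force synchronous writes instead
+zfs set sync=always tank/gentoo
+</code></pre>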