add a basic explanation of how ZFS dRAID works, including links to OpenZFS for more details
add documentation for the two dRAID parameters used in the code

Signed-off-by: Stefan Hrdlicka <s.hrdli...@proxmox.com>
---
 local-zfs.adoc | 40 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 39 insertions(+), 1 deletion(-)

diff --git a/local-zfs.adoc b/local-zfs.adoc
index ab0f6ad..8eb681c 100644
--- a/local-zfs.adoc
+++ b/local-zfs.adoc
@@ -32,7 +32,8 @@ management.
 
 * Copy-on-write clone
 
-* Various raid levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3
+* Various raid levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2, RAIDZ-3,
+dRAID, dRAID2, dRAID3
 
 * Can use SSD for cache
 
@@ -244,6 +245,43 @@ them, unless your environment has specific needs and
 characteristics where RAIDZ performance characteristics are acceptable.
 
 
+ZFS dRAID
+~~~~~~~~~
+
+In a ZFS dRAID (declustered RAID) the hot spare drive(s) participate in the
+RAID. Their spare capacity is reserved and used for rebuilding when a drive
+fails. Depending on the configuration, this provides faster rebuilding than a
+RAIDZ in case of a drive failure. More information can be found in the
+official OpenZFS documentation. footnote:[OpenZFS dRAID
+https://openzfs.github.io/openzfs-docs/Basic%20Concepts/dRAID%20Howto.html]
+
+NOTE: dRAID is intended for more than 10-15 disks. A RAIDZ setup should be
+better for a smaller number of disks in most use cases.
+
+ * `dRAID1` or `dRAID`: requires at least 2 disks, one can fail before data is
+lost
+ * `dRAID2`: requires at least 3 disks, two can fail before data is lost
+ * `dRAID3`: requires at least 4 disks, three can fail before data is lost
+
+Additional information can be found in the manual page:
+
+----
+# man zpoolconcepts
+----
+
+Spares and Data
+^^^^^^^^^^^^^^^
+The number of `spares` tells the system how many disks it should keep ready in
+case of a disk failure. The default value is 0 `spares`. Without spares,
+rebuilding won't get any speed benefits.
+
+The number of `data` devices specifies how many data devices make up a
+redundancy group. The default is 8, if `disks - parity - spares >= 8`. A
+higher number of `data` and parity devices increases the allocation size
+(e.g. with 4k sectors and `data`=6, the minimum allocation size is 24k),
+which can affect compression.
+
+
 Bootloader
 ~~~~~~~~~~
 
--
2.30.2
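
To illustrate the dRAID levels documented above, a minimal creation sketch
(the pool name `tank` and the device names are placeholders, assuming six
equally sized disks):

----
# zpool create tank draid2 sda sdb sdc sdd sde sdf
----

With no further options this relies on the defaults, i.e. no distributed
spares and an automatically chosen number of data devices per redundancy
group.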
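
The `spares` and `data` parameters documented in the patch correspond to the
`draid` vdev suffixes described in `zpoolconcepts`; a sketch, again with
placeholder pool and device names, assuming seven disks:

----
# zpool create tank draid2:4d:1s sda sdb sdc sdd sde sdf sdg
----

Here `4d` sets four `data` devices per redundancy group and `1s` reserves one
distributed spare; the remaining two devices of each group hold parity. With
4k sectors and four data devices, the minimum allocation size is 16k.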