From 0ef619f3052a3530f830c3828dcfe24ce5ca988d Mon Sep 17 00:00:00 2001
From: Paul
Date: Tue, 4 Apr 2023 00:21:33 +0200
Subject: [PATCH] Grammar/sp cleanup

---
 docs/Basic Concepts/RAIDZ.rst            | 10 ++++----
 docs/Performance and Tuning/Hardware.rst | 23 ++++++++++---------
 .../Module Parameters.rst                |  4 ++--
 .../Workload Tuning.rst                  |  4 ++--
 4 files changed, 21 insertions(+), 20 deletions(-)

diff --git a/docs/Basic Concepts/RAIDZ.rst b/docs/Basic Concepts/RAIDZ.rst
index ede5df0..1b64882 100644
--- a/docs/Basic Concepts/RAIDZ.rst
+++ b/docs/Basic Concepts/RAIDZ.rst
@@ -18,7 +18,7 @@ vdev type specifies a double-parity raidz group; and the ``raidz3``
 vdev type specifies a triple-parity raidz group. The ``raidz`` vdev
 type is an alias for raidz1.
 
-A raidz group with N disks of size X with P parity disks can hold
+A raidz group of N disks of size X with P parity disks can hold
 approximately (N-P)*X bytes and can withstand P devices failing
 without losing data. The minimum number of devices in a raidz group is
 one more than the number of parity disks. The recommended number is between 3 and 9
@@ -53,7 +53,7 @@ we will allocate on disk:
 
 - one 4K padding block
 
-, and usable space ratio will be 50%, same as with double mirror.
+and the usable space ratio will be 50%, the same as with a double mirror.
 
 Another example for ``ashift=12`` and ``recordsize=128K`` for raidz1 of 3 disks:
 
@@ -64,11 +64,11 @@ Another example for ``ashift=12`` and ``recordsize=128K`` for raidz1 of 3 disks:
 
 - we will have 128K/2 = 64 stripes with 8K of data and 4K of parity each
 
-, so usable space ratio in this case will be 66%.
+so the usable space ratio in this case will be 66%.
 
-If RAIDZ will have more disks, it's stripe width will be larger, and space
-efficiency better too.
+The more disks RAIDZ has, the wider its stripes and the better its space
+efficiency.
 
 You can find actual parity cost per RAIDZ size here:
 
diff --git a/docs/Performance and Tuning/Hardware.rst b/docs/Performance and Tuning/Hardware.rst
index 03c0a99..0ca9f02 100644
--- a/docs/Performance and Tuning/Hardware.rst
+++ b/docs/Performance and Tuning/Hardware.rst
@@ -189,14 +189,15 @@ not be as reliable as it would be on its own.
   is set or the RAID array is part of a mirror/raid-z vdev within
   ZFS.
 - Sector size information is not necessarily passed correctly by
-  hardware RAID on RAID 1 and cannot be passed correctly on RAID 5/6.
+  hardware RAID on RAID 1. Sector size information cannot be passed
+  correctly on RAID 5/6.
   Hardware RAID 1 is more likely to experience read-modify-write
-  overhead from partial sector writes and Hardware RAID 5/6 will almost
+  overhead from partial sector writes, while Hardware RAID 5/6 will almost
   certainty suffer from partial stripe writes (i.e. the RAID write
-  hole). Using ZFS with the disks directly will allow it to obtain the
+  hole). Using the disks directly allows ZFS to obtain the
   sector size information reported by the disks to avoid
-  read-modify-write on sectors while ZFS avoids partial stripe writes
-  on RAID-Z by desing from using copy-on-write.
+  read-modify-write on sectors, while ZFS avoids partial stripe writes
+  on RAID-Z by design through its use of copy-on-write.
 
 - There can be sector alignment problems on ZFS when a drive
   misreports its sector size. Such drives are typically NAND-flash
@@ -209,7 +210,7 @@ not be as reliable as it would be on its own.
   actual drive, such that manual correction of sector alignment at
   vdev creation does not solve the problem.
 
-- Controller failures can require that the controller be replaced with
+- RAID controller failures can require that the controller be replaced with
   the same model, or in less extreme cases, a model from the same
   manufacturer. Using ZFS by itself allows any controller to be used.
 
@@ -231,8 +232,8 @@ not be as reliable as it would be on its own.
   data is undefined. There are reports of RAID 5 and 6 arrays being
   lost during reconstruction when the controller encounters silent
   corruption. ZFS' checksums allow it to avoid this situation by
-  determining if not enough information exists to reconstruct data. In
-  which case, the file is listed as damaged in zpool status and the
+  determining whether enough information exists to reconstruct data. If
+  not, the file is listed as damaged in zpool status and the
   system administrator has the opportunity to restore it from a backup.
 
 - IO response times will be reduced whenever the OS blocks on IO
@@ -254,10 +255,10 @@ not be as reliable as it would be on its own.
   interaction between the hardware RAID controller and the OS might
   rename arrays C and D to look like arrays B and C respectively.
   This can fault pools verbatim imported from the cachefile.
-  - Not all RAID controllers behave this way. However, this issue has
+  - Not all RAID controllers behave this way, but this issue has
     been observed on both Linux and FreeBSD when system administrators
-    used single drive RAID 0 arrays. It has also been observed with
-    controllers from different vendors.
+    used single drive RAID 0 arrays. It has also been observed
+    with controllers from different vendors.
 
 One might be inclined to try using single-drive RAID 0 arrays to try
 to use a RAID controller like a HBA, but this is not recommended for many
diff --git a/docs/Performance and Tuning/Module Parameters.rst b/docs/Performance and Tuning/Module Parameters.rst
index 38a1f3c..bd4faea 100644
--- a/docs/Performance and Tuning/Module Parameters.rst
+++ b/docs/Performance and Tuning/Module Parameters.rst
@@ -2,7 +2,7 @@ Module Parameters
 =================
 
 Most of the ZFS kernel module parameters are accessible in the SysFS
-``/sys/module/zfs/parameters`` directory. Current value can be observed
+``/sys/module/zfs/parameters`` directory. Current values can be observed
 by
 
 .. code:: shell
@@ -68,7 +68,7 @@ Tags
 ----
 
 The list of parameters is quite large and resists hierarchical
-representation. To assist in quickly finding relevant information
+representation. To assist in finding relevant information quickly,
 each module parameter has a "Tags" row with keywords for frequent
 searches.
 
diff --git a/docs/Performance and Tuning/Workload Tuning.rst b/docs/Performance and Tuning/Workload Tuning.rst
index c44f39e..2e7a8ea 100644
--- a/docs/Performance and Tuning/Workload Tuning.rst
+++ b/docs/Performance and Tuning/Workload Tuning.rst
@@ -145,7 +145,7 @@ The following compression algorithms are available:
   significantly superior to LZJB in all metrics tested. It is
   `new default compression algorithm `__ (compression=on) in OpenZFS.
-  It is available on all platforms have as of 2020.
+  It is available on all platforms as of 2020.
 
 - LZJB
 
@@ -153,7 +153,7 @@
   It was created to satisfy the desire for a compression algorithm
   suitable for use in filesystems. Specifically, that it provides
   fair compression, has a high compression speed, has a high
-  decompression speed and detects incompressible data detection
+  decompression speed and detects incompressible data
   quickly.
 
 - GZIP (1 through 9)
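
To make the space-accounting arithmetic in the RAIDZ.rst hunk above easier to follow, the same numbers can be reproduced with a small back-of-the-envelope model. This is a simplified sketch rather than OpenZFS code: the function name is invented, and it assumes 4K sectors (``ashift=12``), one parity sector per stripe row, allocations padded up to a multiple of ``nparity + 1`` sectors, and no compression.

.. code:: python

   def raidz_record_sectors(record_bytes, ndisks, nparity, ashift=12):
       """Rough sector counts for one record on a raidz vdev (simplified model)."""
       sector = 1 << ashift                         # 4K sectors when ashift=12
       data = -(-record_bytes // sector)            # ceil: data sectors needed
       rows = -(-data // (ndisks - nparity))        # stripe rows across the data disks
       parity = rows * nparity                      # parity sectors, one set per row
       padding = -(data + parity) % (nparity + 1)   # pad to a multiple of nparity + 1
       return data, parity, padding

   # recordsize=8K on raidz1: 2 data + 1 parity + 1 padding sector -> 50% usable
   print(raidz_record_sectors(8 << 10, ndisks=3, nparity=1))    # (2, 1, 1)

   # recordsize=128K on a 3-disk raidz1: 32 data + 16 parity sectors -> ~66% usable
   print(raidz_record_sectors(128 << 10, ndisks=3, nparity=1))  # (32, 16, 0)

The two calls mirror the examples in the hunk: 2/(2+1+1) gives the 50% usable-space ratio quoted for an 8K record, and 32/(32+16) gives roughly 66% for a 128K record on a 3-disk raidz1.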