Grammar/sp cleanup
@@ -18,7 +18,7 @@ vdev type specifies a double-parity raidz group; and the ``raidz3`` vdev type
 specifies a triple-parity raidz group. The ``raidz`` vdev type is an alias for
 raidz1.

-A raidz group with N disks of size X with P parity disks can hold
+A raidz group of N disks of size X with P parity disks can hold
 approximately (N-P)*X bytes and can withstand P devices failing without
 losing data. The minimum number of devices in a raidz group is one more
 than the number of parity disks. The recommended number is between 3 and 9
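As a quick check of the (N-P)*X estimate, a hypothetical 6-disk raidz2 of 4 TB drives works out as follows:

.. code:: shell

   # Approximate raidz capacity: (N-P)*X, tolerating P failed devices.
   # Hypothetical example: N=6 disks of X=4 TB each in raidz2 (P=2).
   N=6; P=2; X=4
   echo "approx. $(( (N - P) * X )) TB usable, survives $P failures"
   # prints: approx. 16 TB usable, survives 2 failures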
@@ -53,7 +53,7 @@ we will allocate on disk:

 - one 4K padding block

-, and usable space ratio will be 50%, same as with double mirror.
+and usable space ratio will be 50%, same as with double mirror.


 Another example for ``ashift=12`` and ``recordsize=128K`` for raidz1 of 3 disks:
@@ -64,11 +64,11 @@ Another example for ``ashift=12`` and ``recordsize=128K`` for raidz1 of 3 disks:

 - we will have 128K/2 = 64 stripes with 8K of data and 4K of parity each

-, so usable space ratio in this case will be 66%.
+so usable space ratio in this case will be 66%.


-If RAIDZ will have more disks, it's stripe width will be larger, and space
-efficiency better too.
+The more disks RAIDZ has, the wider the stripe, the greater the space
+efficiency.

 You can find actual parity cost per RAIDZ size here:

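The 66% figure is just the per-stripe ratio of data to data-plus-parity; a minimal arithmetic check:

.. code:: shell

   # raidz1 of 3 disks, ashift=12: each stripe holds two 4K data
   # sectors (8K) plus one 4K parity sector.
   DATA_KB=8; PARITY_KB=4
   echo "usable ratio: $(( 100 * DATA_KB / (DATA_KB + PARITY_KB) ))%"
   # prints: usable ratio: 66%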
@@ -189,14 +189,15 @@ not be as reliable as it would be on its own.
   is set or the RAID array is part of a mirror/raid-z vdev within ZFS.

 - Sector size information is not necessarily passed correctly by
-  hardware RAID on RAID 1 and cannot be passed correctly on RAID 5/6.
+  hardware RAID on RAID 1. Sector size information cannot be passed
+  correctly on RAID 5/6.
   Hardware RAID 1 is more likely to experience read-modify-write
-  overhead from partial sector writes and Hardware RAID 5/6 will almost
+  overhead from partial sector writes while Hardware RAID 5/6 will almost
   certainty suffer from partial stripe writes (i.e. the RAID write
-  hole). Using ZFS with the disks directly will allow it to obtain the
+  hole). ZFS using the disks natively allows it to obtain the
   sector size information reported by the disks to avoid
-  read-modify-write on sectors while ZFS avoids partial stripe writes
-  on RAID-Z by desing from using copy-on-write.
+  read-modify-write on sectors, while ZFS avoids partial stripe writes
+  on RAID-Z by design from using copy-on-write.

 - There can be sector alignment problems on ZFS when a drive
   misreports its sector size. Such drives are typically NAND-flash
@@ -209,7 +210,7 @@ not be as reliable as it would be on its own.
   actual drive, such that manual correction of sector alignment at
   vdev creation does not solve the problem.

-- Controller failures can require that the controller be replaced with
+- RAID controller failures can require that the controller be replaced with
   the same model, or in less extreme cases, a model from the same
   manufacturer. Using ZFS by itself allows any controller to be used.

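For context, "manual correction of sector alignment at vdev creation" means forcing ``ashift``; a sketch, with hypothetical pool and device names:

.. code:: shell

   # Force 4K sectors (2^12) regardless of what the drive reports.
   zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb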
@@ -231,8 +232,8 @@ not be as reliable as it would be on its own.
   data is undefined. There are reports of RAID 5 and 6 arrays being
   lost during reconstruction when the controller encounters silent
   corruption. ZFS' checksums allow it to avoid this situation by
-  determining if not enough information exists to reconstruct data. In
-  which case, the file is listed as damaged in zpool status and the
+  determining whether enough information exists to reconstruct data. If
+  not, the file is listed as damaged in zpool status and the
   system administrator has the opportunity to restore it from a backup.

 - IO response times will be reduced whenever the OS blocks on IO
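Such damaged files surface in the verbose pool status; a sketch with ``tank`` as a hypothetical pool name:

.. code:: shell

   # -v lists files with permanent (unrecoverable) errors, if any.
   zpool status -v tank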
@@ -254,10 +255,10 @@ not be as reliable as it would be on its own.
   interaction between the hardware RAID controller and the OS might
   rename arrays C and D to look like arrays B and C respectively.
   This can fault pools verbatim imported from the cachefile.
-  Not all RAID controllers behave this way. However, this issue has
+  Not all RAID controllers behave this way. This issue has
   been observed on both Linux and FreeBSD when system administrators
-  used single drive RAID 0 arrays. It has also been observed with
-  controllers from different vendors.
+  used single drive RAID 0 arrays, however. It has also been observed
+  with controllers from different vendors.

 One might be inclined to try using single-drive RAID 0 arrays to try to
 use a RAID controller like a HBA, but this is not recommended for many
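When device renaming faults a cachefile-based import, re-importing by scanning stable device links is the usual recovery; a sketch, with ``tank`` as a hypothetical pool name:

.. code:: shell

   # Re-import by scanning stable by-id links instead of the cached
   # device paths recorded in the cachefile.
   zpool import -d /dev/disk/by-id tank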
@@ -2,7 +2,7 @@ Module Parameters
 =================

 Most of the ZFS kernel module parameters are accessible in the SysFS
-``/sys/module/zfs/parameters`` directory. Current value can be observed
+``/sys/module/zfs/parameters`` directory. Current values can be observed
 by

 .. code:: shell
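Reading a single parameter's current value might look like this (``zfs_arc_max`` is shown purely as an example):

.. code:: shell

   # Read one module parameter's current value from SysFS.
   cat /sys/module/zfs/parameters/zfs_arc_max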
@@ -68,7 +68,7 @@ Tags
 ----

 The list of parameters is quite large and resists hierarchical
-representation. To assist in quickly finding relevant information
+representation. To assist in finding relevant information
 quickly, each module parameter has a "Tags" row with keywords for
 frequent searches.

@@ -145,7 +145,7 @@ The following compression algorithms are available:
   significantly superior to LZJB in all metrics tested. It is `new
   default compression algorithm <https://github.com/illumos/illumos-gate/commit/db1741f555ec79def5e9846e6bfd132248514ffe>`__
   (compression=on) in OpenZFS.
-  It is available on all platforms have as of 2020.
+  It is available on all platforms as of 2020.

 - LZJB

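Because ``compression=on`` selects LZ4, enabling it per dataset is a one-liner; ``tank/data`` is a hypothetical dataset:

.. code:: shell

   # Enable default (LZ4) compression and confirm the effective value.
   zfs set compression=on tank/data
   zfs get compression tank/data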
@@ -153,7 +153,7 @@ The following compression algorithms are available:
   It was created to satisfy the desire for a compression algorithm
   suitable for use in filesystems. Specifically, that it provides
   fair compression, has a high compression speed, has a high
-  decompression speed and detects incompressible data detection
+  decompression speed and detects incompressible data
   quickly.

 - GZIP (1 through 9)